id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2308.09870
|
Enhancing State Estimation in Robots: A Data-Driven Approach with
Differentiable Ensemble Kalman Filters
|
This paper introduces a novel state estimation framework for robots using
differentiable ensemble Kalman filters (DEnKF). DEnKF is a reformulation of the
traditional ensemble Kalman filter that employs stochastic neural networks to
model the process noise implicitly. Our work is an extension of previous
research on differentiable filters, which has provided a strong foundation for
our modular and end-to-end differentiable framework. This framework enables
each component of the system to function independently, leading to improved
flexibility and versatility in implementation. Through a series of experiments,
we demonstrate the flexibility of this model across a diverse set of real-world
tracking tasks, including visual odometry and robot manipulation. Moreover, we
show that our model effectively handles noisy observations, is robust in the
absence of observations, and outperforms state-of-the-art differentiable
filters in terms of error metrics. Specifically, we observe a significant
improvement of at least 59% in translational error when using DEnKF with noisy
observations. Our results underscore the potential of DEnKF in advancing state
estimation for robotics. Code for DEnKF is available at
https://github.com/ir-lab/DEnKF
|
Xiao Liu, Geoffrey Clark, Joseph Campbell, Yifan Zhou, Heni Ben Amor
|
2023-08-19T01:12:22Z
|
http://arxiv.org/abs/2308.09870v1
|
Enhancing State Estimation in Robots: A Data-Driven Approach with Differentiable Ensemble Kalman Filters
###### Abstract
This paper introduces a novel state estimation framework for robots using differentiable ensemble Kalman filters (DEnKF). DEnKF is a reformulation of the traditional ensemble Kalman filter that employs stochastic neural networks to model the process noise implicitly. Our work is an extension of previous research on differentiable filters, which has provided a strong foundation for our modular and end-to-end differentiable framework. This framework enables each component of the system to function independently, leading to improved flexibility and versatility in implementation. Through a series of experiments, we demonstrate the flexibility of this model across a diverse set of real-world tracking tasks, including visual odometry and robot manipulation. Moreover, we show that our model effectively handles noisy observations, is robust in the absence of observations, and outperforms state-of-the-art differentiable filters in terms of error metrics. Specifically, we observe a significant improvement of at least 59% in translational error when using DEnKF with noisy observations. Our results underscore the potential of DEnKF in advancing state estimation for robotics. Code for DEnKF is available at [https://github.com/ir-lab/DEnKF](https://github.com/ir-lab/DEnKF)
## I Introduction
In robotics, Recursive Bayesian filters, especially Kalman filters, play a crucial role in accurately localizing robots in their surroundings [1], predicting the future movements of human interaction partners [2], tracking objects over time [3], and ensuring stability during robot locomotion [4]. Typically, the success of these filters depends on having an accurate model of the dynamics of the system being observed, as well as a model of the observation process itself. However, modeling complex systems and their noise profiles can be a challenging task, often requiring additional steps like statistical modeling or system identification. Despite advancements in this field, such as particle filters [5], scalability remains an issue when working with high-dimensional systems.
The limitations of traditional Bayesian filters have inspired the development of Deep State-Space Models (DSSM) [6, 7, 8]. DSSM leverages deep learning techniques to learn approximate, nonlinear models of the underlying states and measurements from recorded data. This approach aims to overcome the need for explicit modeling of the processes, which can be challenging for complex dynamical systems. However, incorporating the nonlinear capabilities of neural networks into recursive filtering may come with additional linearization steps or limitations [1], which can impact the quality of the inference. In this paper, we introduce a new approach to robot state estimation by extending prior research on differentiable recursive filters. Specifically, we propose the Differentiable Ensemble Kalman Filter (DEnKF), which employs high-dimensional camera images to estimate and correct the state of a robot arm, as demonstrated in Fig. 1. Our method builds upon the solid foundation established by prior works on differentiable filters, including the contributions made in [7, 8, 9]. Our approach addresses the challenges encountered by differentiable filtering through both theoretical and practical innovations. One such innovation involves the sampling of states from the posterior distribution of a neural network, which eliminates the need to estimate noise parameters for the recursive filter. Another innovation involves the ensemble formulation of the filtering process, which eliminates the need for linearization. Notably, unlike many other Deep State-Space Models (DSSM) in the literature, our approach avoids the use of Recurrent Neural Networks (RNNs), which have been shown to limit the accuracy of learned models and may lead to non-Markovian state-spaces [7].
In this paper, we present an end-to-end learning approach for recursive filtering that simultaneously learns the observation, dynamics, and noise characteristics of a robotic system. The key contributions of our work can be summarized as follows:
Fig. 1: The Differentiable Ensemble Kalman Filter (DEnKF) employs an ensemble of states to represent the probability density and integrates a stochastic neural network to generate state transitions. A sensor model projects high-dimensional visual inputs into the observation space. By combining an ensemble Kalman filter with a learned observation model, DEnKF achieves precise posterior state estimates.
* A stochastic state transition model that uses samples from the posterior of a neural network to implicitly model the process noise, avoiding the need for a parametric representation of the posterior.
* An ensemble formulation that allows for the efficient inference of both linear and nonlinear systems, without the need for an explicit covariance matrix, making it suitable for high-dimensional inputs and noisy observations.
* Empirical evaluations on the autonomous driving task show that DEnKF effectively reduces translational and rotational errors compared to state-of-the-art methods, by up to 59% and 36% when dealing with noisy observations, and handles missing-observation scenarios with 2-fold and 3-fold error reductions.
## II Related work
Kalman filters (KFs) are well-studied and widely-used state-space models with many applications in robotics [1]. KFs are designed for systems with linear process and observation models and assume normally distributed noise. To overcome some of these limitations and extend the inference capabilities to nonlinear systems, several variants have been proposed, e.g., the Extended Kalman Filter (EKF) [10] and the Unscented Kalman Filter (UKF) [11]. Still, even the EKF and UKF face theoretical and computational challenges when dealing with high-dimensional observations. Among the many reasons for this limitation is the need for explicitly calculating an error covariance during the filtering process.
**Differentiable filters**: Differentiable filters (DFs) aim to adapt the recursive filtering techniques to handle high-dimensional inputs. For instance, BackpropKF [12] proposed a way to train Kalman Filters as recurrent neural networks using backpropagation with the integration of feed-forward networks and convolutional neural networks. Similarly, Differentiable algorithm networks [13] introduced neural network components that encode differentiable robotic algorithms. This methodology is similar to that of Differentiable Particle Filters (DPFs) [9, 14, 15], which employ algorithmic priors to increase learning efficiency. Variations of DPFs were explored in [16] and [17] using adversarial methods for posterior estimation or partial ground truth particles for semi-supervised learning. The training of DFs and modeling of uncertainty along with noise profiles were analyzed in [8]. The authors implemented the components of the DFs as multi-layer perceptrons and enveloped the sub-modules in an RNN layer. The DFs in [8, 18] were tested on real-world tasks, indicating that end-to-end learning is crucial for learning accurate noise models. Similarly, [19] developed a self-supervised visual-inertial odometry model using the differentiable Extended Kalman Filter (dEKF) based on the work in [8]. However, as noted in [7], RNNs can often be a limiting factor in learning accurate models of the system dynamics. The current lack of DFs that handle missing observations is attributed to the use of RNN frameworks to model system dynamics.
**Ensemble Kalman Filters**: A modern variant of KFs that is particularly successful on high-dimensional, nonlinear tasks is Ensemble Kalman Filters (EnKFs) [20]. EnKFs have been shown to enable accurate estimation of state-space dynamics in data assimilation tasks [21] without linearity assumptions. They have found popularity in modeling and forecasting complex weather phenomena that may include millions of state dimensions [22]. Rather than assuming a certain parametric form of the underlying distribution, EnKFs approximate the posterior distribution through an ensemble (or collection) of state vectors. They are computationally efficient since they do not require the explicit calculation (and inversion) of error covariance matrices. In addition, EnKFs do not require explicit parametric characterizations of the process and observation noise. Instead, it only requires the ability to generate samples from the underlying distributions.
In this paper, we leverage this **key insight** in order to create a theoretical and practical connection between EnKFs and stochastic Bayesian neural networks (SNNs) [23, 24]. As stated in [25], SNNs can model two types of uncertainty: 1) aleatoric uncertainty, which arises from inherent stochasticities of a system, i.e., process noise and observation noise; 2) epistemic uncertainty, which is caused by a lack of sufficient data to determine the underlying system uniquely. The integration between EnKFs and SNNs results in a probabilistic filtering process that leverages the advantages of modern neural network techniques. First attempts at differentiable EnKFs were proposed in [26] which searches for optimal parameters utilizing gradient information from EnKFs. This differs substantially from our approach, which is a fully differentiable end-to-end framework. Note that EnKF is theoretically related to Particle Filters (PFs) - both are Monte-Carlo filtering techniques based on similar principles. However, in contrast to PFs, EnKFs provide equal weight to each ensemble member thereby eschewing the well-known sample degeneracy problem. In addition, EnKFs have also been shown to efficiently model complex phenomena using relatively small ensemble sizes [27]. A study of the approximation error of these filters [28] also indicates that with increasing size of state dimensions, EnKFs show a much slower rate of degradation than the PFs.
## III Differentiable EnKF
Recursive Bayesian filtering addresses the general challenge of estimating the state \(\mathbf{x}_{t}\) of a discrete-time dynamical system given a sequence of noisy observations \(\mathbf{y}_{1:t}\). The posterior distribution of the state can be represented as:
\[p(\mathbf{x}_{t}|\mathbf{y}_{1:t},\mathbf{x}_{1:t-1})\propto p(\mathbf{y}_{t} |\mathbf{x}_{t})\ p(\mathbf{x}_{t}|\mathbf{y}_{1:t-1},\mathbf{x}_{1:t-1}). \tag{1}\]
Let \(\text{bel}(\mathbf{x}_{t})=p(\mathbf{x}_{t}|\mathbf{y}_{1:t},\mathbf{x}_{1:t-1})\), applying the Markov property, i.e., the assumption that the next state is dependent only upon the current state, yields:
\[\text{bel}(\mathbf{x}_{t})\propto\underbrace{p(\mathbf{y}_{t}|\mathbf{x}_{t})}_{\text{observation model}}\int\overbrace{p(\mathbf{x}_{t}|\mathbf{x}_{t-1})}^{\text{state transition model}}\ \text{bel}(\mathbf{x}_{t-1})\,d\mathbf{x}_{t-1}, \tag{2}\]
where \(p(\mathbf{y}_{t}|\mathbf{x}_{t})\) is the observation model and \(p(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) is the transition model. The transition model describes the laws that govern the evolution of the system state. By contrast, the observation model identifies the relationship between the hidden, internal state of the system and observed, noisy measurements. An alternative to KFs and their variants is to leverage modern deep learning techniques in order to extract complex, nonlinear transition and observation models. Starting with the state transition model in linear KFs:
\[\mathbf{x}_{t}=\mathbf{A}\mathbf{x}_{t-1}+\mathbf{q}_{t}\quad\mathbf{q}_{t} \thicksim\mathcal{N}(0,\mathbf{Q}_{t}). \tag{3}\]
the work in [8] replaces the transition matrix \(\mathbf{A}\) and the process noise \(\mathbf{Q}_{t}\) with trained neural networks \(f_{\boldsymbol{\theta}}\) and \(q_{\boldsymbol{\phi}}\) respectively:
\[\mathbf{x}_{t}=f_{\boldsymbol{\theta}}(\mathbf{x}_{t-1})+\mathcal{N}\left(0, q_{\boldsymbol{\phi}}(\mathbf{x}_{t-1})\right), \tag{4}\]
where \(\boldsymbol{\theta}\) and \(\boldsymbol{\phi}\) denote the neural network weights. Note that the network \(q_{\boldsymbol{\phi}}(\cdot)\) produces the entries of the covariance matrix \(\mathbf{Q}_{t}\) representing a Gaussian distribution. As shown in Eq. 4, the state estimate \(\mathbf{x}_{t}\) is calculated by generating a sample from a normal distribution with covariance \(\mathbf{Q}_{t}\), which is then added to the neural network prediction \(f_{\boldsymbol{\theta}}(\mathbf{x}_{t-1})\). Hence, the process model and the process noise are calculated using two separate neural networks, which may not produce outputs consistent with each other.
### _Stochastic Neural Models of Dynamics_
In this paper, we avoid this separation by using recent insights in stochastic neural networks (SNNs) [23]. More specifically, the work in [29] has established a theoretical link between the Dropout training algorithm and Bayesian inference in deep Gaussian processes. Accordingly, after training a neural network with Dropout, it is possible to generate empirical samples from the predictive posterior via _stochastic forward passes_. Hence, for the purposes of filtering, we can **implicitly model the process noise** by sampling state from a neural network trained on the training dynamics, i.e., \(\mathbf{x}_{t}\thicksim f_{\boldsymbol{\theta}}(\mathbf{x}_{t-1})\). In contrast to previous approaches, the transition network \(f_{\boldsymbol{\theta}}(\cdot)\) models the system dynamics, as well as the inherent noise model in a consistent fashion without imposing diagonality.
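To make the stochastic forward passes concrete, here is a minimal NumPy sketch of a toy dropout MLP whose mask is resampled on every call, so repeated calls on the same \(\mathbf{x}_{t-1}\) yield different samples \(\mathbf{x}_{t}\thicksim f_{\boldsymbol{\theta}}(\mathbf{x}_{t-1})\). The layer sizes, dropout rate, and residual update below are illustrative assumptions and do not reproduce the architecture listed in Table I.

```python
import numpy as np

rng = np.random.default_rng(0)

class StochasticTransitionModel:
    """Toy f_theta: an MLP whose dropout stays active at inference, so each
    forward pass is a sample from the (implicit) process-noise distribution."""

    def __init__(self, state_dim, hidden=64, p_drop=0.2):
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, state_dim))
        self.b2 = np.zeros(state_dim)
        self.p_drop = p_drop

    def sample(self, x_prev):
        h = np.maximum(0.0, x_prev @ self.W1 + self.b1)   # ReLU hidden layer
        mask = rng.random(h.shape) > self.p_drop          # fresh dropout mask per call
        h = h * mask / (1.0 - self.p_drop)                # inverted-dropout scaling
        return x_prev + h @ self.W2 + self.b2             # stochastic state update

# Ensemble-style usage: propagate each member with its own stochastic forward pass.
E, state_dim = 32, 5
f = StochasticTransitionModel(state_dim)
ensemble_prev = np.zeros((E, state_dim))                          # X_{t-1|t-1}
ensemble_pred = np.stack([f.sample(x) for x in ensemble_prev])    # X_{t|t-1}
```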
### _Nonlinear Filtering with Differentiable Ensembles_
Introducing non-linearities through neural network realizations of the transition and observation function invalidates the linearity assumptions that are the backbone of many recursive Bayesian filters. To overcome this challenge, we embed our methodology within an EnKF framework. Throughout the filtering process, each ensemble member is propagated forward in time to yield a new approximate posterior distribution. EnKF does not require an explicit representation of the process and observation noise - instead we only need to be able to sample from the noise distribution. We formulate DEnKF as an extension of the EnKF while keeping the core algorithmic steps intact. In particular, we use an initial ensemble of \(E\) members to represent the initial state distribution \(\mathbf{X}_{0}=[\mathbf{x}_{0}^{1},\dots,\mathbf{x}_{0}^{E}]\), \(E\in\mathbb{Z}^{+}\).
**Prediction Step**: We leverage the stochastic forward passes from a trained state transition model to update each ensemble member:
\[\mathbf{x}_{t|t-1}^{i}\thicksim f_{\boldsymbol{\theta}}(\mathbf{x}_{t|t-1}^{i}| \mathbf{x}_{t-1|t-1}^{i}),\ \forall i\in E. \tag{5}\]
Matrix \(\mathbf{X}_{t|t-1}=[\mathbf{x}_{t|t-1}^{1},\cdots,\mathbf{x}_{t|t-1}^{E}]\) holds the updated ensemble members, which are propagated one step forward through the state space. Note that sampling from the transition model \(f_{\boldsymbol{\theta}}(\cdot)\) (using the SNN methodology described above) implicitly introduces process noise.
**Update Step**: Given the updated ensemble members \(\mathbf{X}_{t|t-1}\), a nonlinear observation model \(h_{\boldsymbol{\psi}}(\cdot)\) is applied to transform the ensemble members from the state space to the observation space. Following our main rationale, the observation model is realized via a neural network with weights \(\boldsymbol{\psi}\). Accordingly, the update equations for the EnKF become:
\[\mathbf{H}_{t}\mathbf{X}_{t|t-1}=\left[h_{\boldsymbol{\psi}}(\mathbf{x}_{t|t-1}^{1}),\cdots,h_{\boldsymbol{\psi}}(\mathbf{x}_{t|t-1}^{E})\right], \tag{6}\]
\[\mathbf{H}_{t}\mathbf{A}_{t}=\mathbf{H}_{t}\mathbf{X}_{t|t-1}-\left[\frac{1}{E}\sum_{i=1}^{E}h_{\boldsymbol{\psi}}(\mathbf{x}_{t|t-1}^{i}),\cdots,\frac{1}{E}\sum_{i=1}^{E}h_{\boldsymbol{\psi}}(\mathbf{x}_{t|t-1}^{i})\right]. \tag{7}\]
\(\mathbf{H}_{t}\mathbf{X}_{t|t-1}\) is the predicted observation, and \(\mathbf{H}_{t}\mathbf{A}_{t}\) is the deviation of the predicted observations from their sample mean at time \(t\). The EnKF treats observations as random variables. Hence, the ensemble can incorporate a measurement perturbed by a small stochastic noise, thereby accurately reflecting the error covariance of the best state estimate [20]. In our differentiable version of the EnKF, we also incorporate a sensor model which can learn projections between a latent space and higher-dimensional observation spaces, e.g., images. To this end, we leverage the methodology from Sec. III-A to train a stochastic sensor model \(s_{\boldsymbol{\xi}}(\cdot)\):
\[\tilde{\mathbf{y}}_{t}^{i}\thicksim s_{\boldsymbol{\xi}}(\tilde{\mathbf{y}}_{t}^{i}| \mathbf{y}_{t}),\ \forall i\in E. \tag{8}\]
where \(\mathbf{y}_{t}\) represents the noisy observation. Sampling yields observations \(\tilde{\mathbf{Y}}_{t}=[\tilde{\mathbf{y}}_{t}^{1},\cdots,\tilde{\mathbf{y}}_{t}^{E}]\) and sample mean \(\tilde{\mathbf{y}}_{t}=\frac{1}{E}\sum_{i=1}^{E}\tilde{\mathbf{y}}_{t}^{i}\). The innovation covariance \(\mathbf{S}_{t}\) can then be calculated as:
\[\mathbf{S}_{t}=\frac{1}{E-1}(\mathbf{H}_{t}\mathbf{A}_{t})(\mathbf{H}_{t} \mathbf{A}_{t})^{T}+r_{\boldsymbol{\zeta}}(\tilde{\mathbf{y}}_{t}). \tag{9}\]
where \(r_{\boldsymbol{\zeta}}(\cdot)\) is the measurement noise model implemented as a multi-layer perceptron (MLP). We model the observation noise in the same way as [8]: \(r_{\boldsymbol{\zeta}}(\cdot)\) takes the learned observation \(\tilde{\mathbf{y}}_{t}\) at time \(t\) and provides stochastic noise in the observation space by constructing the diagonal of the noise covariance matrix. The final estimate of the ensemble \(\mathbf{X}_{t|t}\) is obtained by performing the measurement update step:
\[\mathbf{A}_{t}=\mathbf{X}_{t|t-1}-\frac{1}{E}\sum_{i=1}^{E}\mathbf{x}_{t|t-1}^{i}, \tag{10}\]
\[\mathbf{K}_{t}=\frac{1}{E-1}\mathbf{A}_{t}(\mathbf{H}_{t}\mathbf{A}_{t})^{T}\mathbf{S}_{t}^{-1}, \tag{11}\]
\[\mathbf{X}_{t|t}=\mathbf{X}_{t|t-1}+\mathbf{K}_{t}(\tilde{\mathbf{Y}}_{t}-\mathbf{H}_{t}\mathbf{X}_{t|t-1}), \tag{12}\]
where \(\mathbf{K}_{t}\) is the Kalman gain. In inference, the ensemble mean \(\mathbf{\bar{x}}_{t|t}=\frac{1}{E}\sum_{i=1}^{E}\mathbf{x}_{t|t}^{i}\) is used as the updated state.
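A minimal NumPy sketch of the update equations (Eqs. 6-12) is given below; it is not the released implementation. The observation model, sensor samples, and noise diagonal are toy stand-ins for \(h_{\boldsymbol{\psi}}\), \(s_{\boldsymbol{\xi}}\), and \(r_{\boldsymbol{\zeta}}\), and the ensemble is stored row-wise, so transposes differ slightly from the column-wise matrix notation above.

```python
import numpy as np

def enkf_update(X_pred, h, Y_tilde, R_diag):
    """One DEnKF measurement update (Eqs. 6-12), NumPy sketch.

    X_pred : (E, S) predicted ensemble X_{t|t-1}
    h      : callable state -> observation, stand-in for h_psi
    Y_tilde: (E, O) observation samples from the sensor model s_xi
    R_diag : (O,)   diagonal observation noise from r_zeta
    """
    E = X_pred.shape[0]
    HX = np.stack([h(x) for x in X_pred])            # Eq. 6, (E, O)
    HA = HX - HX.mean(axis=0, keepdims=True)         # Eq. 7, deviations from the sample mean
    A = X_pred - X_pred.mean(axis=0, keepdims=True)  # Eq. 10
    S = HA.T @ HA / (E - 1) + np.diag(R_diag)        # Eq. 9, innovation covariance (O, O)
    K = A.T @ HA / (E - 1) @ np.linalg.inv(S)        # Eq. 11, Kalman gain (S, O)
    X_post = X_pred + (Y_tilde - HX) @ K.T           # Eq. 12, posterior ensemble
    return X_post, X_post.mean(axis=0)               # ensemble and its mean state

# Example with a toy linear observation model that picks the first two state entries.
rng = np.random.default_rng(1)
E, S_dim, O_dim = 32, 5, 2
X_pred = rng.normal(size=(E, S_dim))
h = lambda x: x[:O_dim]
Y_tilde = rng.normal(size=(E, O_dim))
X_post, x_mean = enkf_update(X_pred, h, Y_tilde, R_diag=0.1 * np.ones(O_dim))
```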
The neural network structures for all learnable modules are described in Table I. Furthermore, we highlight a couple of theoretical properties of the EnKF and its relation to DEnKF in the Appendix.
## IV Experiments
We evaluate the DEnKF framework on two common robotics tasks: a) a visual odometry task for autonomous driving and b) a robot manipulation task in both simulation and the real world. We compare our results to a number of state-of-the-art differentiable filtering methods [8, 9, 12].
**Training:** DEnKF contains four sub-modules: a state transition model, an observation model, an observation noise model, and a sensor model. The entire framework is trained in an end-to-end manner via a mean squared error (MSE) loss between the ground truth state \(\hat{\mathbf{x}}_{t|t}\) and the estimated state \(\bar{\mathbf{x}}_{t|t}\) at every timestep. We also supervise the intermediate modules via loss gradients \(\mathcal{L}_{f_{\boldsymbol{\theta}}}\) and \(\mathcal{L}_{s_{\boldsymbol{\xi}}}\). Given ground truth at time \(t\), we apply the MSE loss gradient calculated between \(\hat{\mathbf{x}}_{t|t}\) and the output of the state transition model to \(f_{\boldsymbol{\theta}}\) as in Eq. 13. We apply the intermediate loss gradients computed based on the ground truth observation \(\hat{\mathbf{y}}_{t}\) and the output of the stochastic sensor model \(\tilde{\mathbf{y}}_{t}\):
\[\mathcal{L}_{f_{\boldsymbol{\theta}}}=\|\bar{\mathbf{x}}_{t|t-1}-\hat{\mathbf{x}}_{t|t}\|_{2}^{2},\ \ \mathcal{L}_{s_{\boldsymbol{\xi}}}=\|\tilde{\mathbf{y}}_{t}-\hat{\mathbf{y}}_{t}\|_{2}^{2}. \tag{13}\]
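As a rough illustration (not the authors' training code), the per-timestep losses can be assembled as below; how the end-to-end and intermediate terms are weighted and routed to specific sub-modules is our assumption here.

```python
import numpy as np

def denkf_losses(x_post_mean, x_pred_mean, y_sensor_mean, x_gt, y_gt):
    """Per-timestep training signals, NumPy sketch of Eq. 13 and the end-to-end loss.

    x_post_mean   : posterior ensemble mean  x_bar_{t|t}   (end-to-end MSE target)
    x_pred_mean   : prior ensemble mean      x_bar_{t|t-1} (supervises f_theta)
    y_sensor_mean : sensor-model output      y_tilde_t     (supervises s_xi)
    x_gt, y_gt    : ground-truth state and observation
    """
    loss_e2e = np.mean((x_post_mean - x_gt) ** 2)    # end-to-end MSE on the estimate
    loss_f = np.mean((x_pred_mean - x_gt) ** 2)      # Eq. 13, transition-model loss
    loss_s = np.mean((y_sensor_mean - y_gt) ** 2)    # Eq. 13, sensor-model loss
    # Equal weighting of the three terms is an illustrative assumption.
    return loss_e2e + loss_f + loss_s, (loss_e2e, loss_f, loss_s)
```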
All models in the experiments were trained for 50 epochs with batch size 64, and a learning rate of \(\eta=10^{-5}\). We chose the model with the best performance on a validation set for testing. The ensemble size of the DEnKF was set to **32 ensemble members.**
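For reference, the hyperparameters stated above can be collected in a single configuration; the optimizer is not named in the text, so the Adam entry below is an assumption.

```python
# Hyperparameters from the text; the optimizer choice is an assumption.
train_config = {
    "epochs": 50,
    "batch_size": 64,
    "learning_rate": 1e-5,
    "ensemble_size": 32,
    "optimizer": "adam",                   # assumed; not specified in the text
    "model_selection": "best_validation",  # keep the best checkpoint on a validation set
}
```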
### _Visual Odometry Task_
In this experiment, we investigate performance on the popular KITTI Visual Odometry dataset [30]. Following the same evaluation procedure as our baselines [8, 9, 12], we define the state of the moving vehicle as a 5-dimensional vector \(\mathbf{x}=[x,y,\theta,v,\dot{\theta}]^{T}\), including the position and orientation of the vehicle, and the linear and angular velocity w.r.t. the current heading direction \(\theta\). The raw observation \(\mathbf{y}\) corresponds to the RGB camera image of the current frame and a difference image between the current frame and the previous frame, where \(\mathbf{y}\in\mathbb{R}^{150\times 50\times 6}\). The learned observation \(\tilde{\mathbf{y}}\) is defined as \(\tilde{\mathbf{y}}=[v,\dot{\theta}]^{T}\), since only the relative changes of position and orientation can be captured between two frames.
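A small sketch of how the 6-channel observation could be assembled from consecutive frames; resizing to \(150\times 50\) and scaling to \([0,1]\) are assumed preprocessing steps, not details taken from the paper.

```python
import numpy as np

def build_observation(frame_t, frame_prev):
    """Stack the current RGB frame and its difference with the previous frame
    into the 6-channel input y_t described above (frames already 150 x 50 x 3)."""
    diff = frame_t - frame_prev
    return np.concatenate([frame_t, diff], axis=-1)   # shape (150, 50, 6)

frame_prev = np.zeros((150, 50, 3), dtype=np.float32)
frame_t = 0.5 * np.ones((150, 50, 3), dtype=np.float32)
y_t = build_observation(frame_t, frame_prev)
assert y_t.shape == (150, 50, 6)
```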
**Data:** The KITTI Visual Odometry dataset consists of 11 trajectories with ground truth pose (translation and rotation matrices) of a vehicle driving in urban areas with a data collection rate around 10Hz. To facilitate learning, we standardize the data on every dimension to have a 0 mean and a standard deviation of 1 during training.
**Results:** We assess the performance of state estimation using an 11-fold cross-validation, withholding one trajectory each time. We report the root mean squared error (RMSE), the mean absolute error (MAE), and the standard KITTI benchmark metrics, namely the translational error (m/m) and the rotational error (deg/m), in Table II. The error metrics are computed on the test trajectory over all subsequences of 100 timesteps, and over all subsequences of 100, 200, 400, and 800 timesteps (Test 100/200/400/800). Figure 2 shows the performance of DEnKF and other differentiable filtering techniques. Note that lower error metrics can be obtained by imposing domain- and data-specific information, e.g., using stereo images [31], incorporating LiDAR [32, 33], or applying SLAM and loop-closure related assumptions [31, 34]. However, we opt for the most commonly used setup when comparing filtering techniques in a task-agnostic fashion (as performed in [8, 9, 12]) to ensure fair and comparable evaluations.
**Comparison:** Table II presents the outcomes of our proposed method in comparison with existing state-of-the-art differentiable filters, including the differentiable Extended Kalman filter (dEKF) [8], the differentiable particle filter (DPF) [9], and the modified differentiable particle filter with learned process and process noise models (dPF-M-lrn) [8]. To provide a fair comparison, we do not include unstructured LSTM models as baselines since prior works [8, 12] have shown that LSTM models do not achieve comparable results. In our comparison, we use the same pre-trained sensor model \(s_{\boldsymbol{\xi}}\) with the same visual inputs and integrate it into all the DF frameworks evaluated here. In this experiment, the motion model of the vehicle is known; the only unknown parts of the state are the velocities. Therefore, we use the learnable process model to update those state variables and use the known motion model to update (\(x,y,\theta\)). For dEKF, we supply the computed Jacobian matrix in training and testing since the motion model is known. For DPF, we use 100 particles to train and test. DPF contains an additional learnable module, an observation likelihood estimation model \(l\), which takes an image embedding and outputs a likelihood for updating each particle's weight.
Table II shows that DEnKF achieves a RMSE of 1.33,
\begin{table}
\begin{tabular}{l l} \hline \(f_{\boldsymbol{\theta}}\): & 2\(\times\)fc(32, ReLU), 2\(\times\)SNN(64, ReLU), 1\(\times\)SNN(S, -) \\ \(h_{\boldsymbol{\psi}}\): & 2\(\times\)fc(32, ReLU), 2\(\times\)fc(64, ReLU), 1\(\times\)fc(O, -) \\ \(r_{\boldsymbol{\zeta}}\): & 2\(\times\)fc(16, ReLU), 1\(\times\)fc(O, -) \\ \(s_{\boldsymbol{\xi}}\): & conv(7\(\times\)7, 64, stride 2, ReLU), conv(3\(\times\)3, 32, stride 2, ReLU), conv(3\(\times\)3, 16, stride 2, ReLU), flatten, 2\(\times\)SNN(64, ReLU), 2\(\times\)SNN(32, ReLU), 1\(\times\)SNN(O, -) \\ \hline \multicolumn{2}{l}{fc: fully connected, conv: convolution, S, O: state and observation dimension.} \\ \end{tabular}
\end{table} TABLE I: Differentiable EnKF learnable sub-modules.
Fig. 2: Visual Odometry results with different differentiable filters: the error rate for LSTM and BKF are reported from [12], dEKF, DPF, and dPF-M are reproduced.
which is \(\sim\)54%, \(\sim\)41%, and \(\sim\)37% lower than that of dEKF, DPF, and dPF-M-lrn, respectively. Specifically, DEnKF reduces the translational error by \(\sim\)85%, \(\sim\)79%, and \(\sim\)75% for Test 100/200/400/800 compared to dEKF, DPF, and dPF-M-lrn, and reduces the rotational error by \(\sim\)62%, \(\sim\)51%, and \(\sim\)42% for the same baselines. Notably, dPF-M-lrn achieves the best performance among all the baselines: it implements a learnable process noise model as described in Eq. 4 and uses a Gaussian Mixture Model to compute the likelihood of all particles. Even so, DEnKF shows higher tracking accuracy than dPF-M-lrn and runs 0.009 s faster. It is worth noting that the inference time of dPF-M-lrn is higher than that of any other DF in Table II.
**Noisy and missing observation:** According to [35], failures of vehicle cameras may compromise autonomous driving performance, potentially even leading to injuries and death. Common failures listed in [35] include brightness, blurred vision, and brackish. We evaluated the performance of DEnKF and other DFs in the presence of noisy observations during inference. As shown in Fig. 3, we added salt-and-pepper and blurring effects to the test images and report the performance of the DFs under these conditions in Table III (top and mid). Our findings show that DEnKF with noisy observations performs worse than DEnKF without noise, with 17% and 29% increases in translational and rotational error, respectively, compared to the metrics in Table II. However, DEnKF remains more robust against noise perturbations than dEKF, DPF, and dPF-M-lrn, achieving \(\sim\)80%, \(\sim\)66%, and \(\sim\)59% improvements in translational error with salt-and-pepper noise for Test 100/200/400/800. We also performed an experiment on missing observations by providing no visual input with a 30% chance at every timestep. In this case, DEnKF's modularity allows the state transition model \(f_{\mathbf{\theta}}\) to propagate the state forward through the state space, whereas other RNN-based filters remain in the same hidden state until an observation is processed. Error metrics for this scenario are reported in Table III (bottom), where we re-built the process model for each DF to account for such problems. By incorporating a stochastic neural network in the forward model, DEnKF handles missing-observation scenarios better than the other DFs, reducing translational and rotational error by \(\sim\)54% and \(\sim\)36%, respectively, compared to dPF-M-lrn.
### _Robot Manipulation Task_
In the second experiment, we assess the efficacy of DEnKF in a challenging robot manipulation setting. Specifically, we train and employ DEnKF to monitor the state of a UR5 robot while performing tabletop arrangement tasks. Similar to behavioral cloning from observation tasks [36], actions are not provided in this experiment, and the DEnKF is trained to learn to propagate the state over time. The robot state is defined as \(\mathbf{x}=[J_{1},\cdots,J_{7},x,y,z]^{T}\), where \(J_{1}\)-\(J_{7}\) denote the seven joint angles of the UR5 robot, and \((x,y,z)\) represents the 3D robot end-effector (EE) position w.r.t. \((0,0,0)\), the center of the manipulation platform. As shown in Fig. 4 (top), raw observations \(\mathbf{y}\in\mathbb{R}^{224\times 224\times 3}\) are images captured from a camera placed in front of the table. The learned observation \(\tilde{\mathbf{y}}\) is defined to have the same dimension as the robot state, i.e., \(\tilde{\mathbf{y}}=[J_{1},\cdots,J_{7},x,y,z]^{T}\).
**Data:** The data collection process is conducted both in the MuJoCo [37] simulator and in the real world. We record the UR5 robot operating on a random object by performing one of the "pick", "push", or "put down" actions. We collect 2,000 demonstrations in simulation and 100 on the real robot, changing the location of each object for each demonstration. We use ABR control and robosuite [38] in addition to MuJoCo to ensure rigorous dynamics in the simulator. Each sequence is around 350 steps long with a timestep of 0.08 s. We use an 80/20 data split for training and testing.
**Results:** We conduct a performance evaluation of DEnKF in three different domains, namely real-world, simulation, and sim-to-real. We train two separate DEnKFs on simulation and real-world datasets, respectively, and then perform sim-to-real transfer by fine-tuning the simulation-trained DEnKF on real-world data. State estimation with uncertainty measurement using distributed ensemble members in simulation is illustrated in Fig. 4. Following the same comparison protocol as in Sec. IV-A, we supply all DFs with the same pre-trained sensor model \(s_{\mathbf{\xi}}\) in each domain, but no known motion model is used this time. We train all learnable modules except \(s_{\mathbf{\xi}}\) and report the mean absolute error (MAE) in the joint angle space (deg) and end-effector positions (cm) for DEnKF and the other DF baselines. The experimental results indicate that DEnKF and the other baseline DFs are capable of achieving domain adaptation by fine-tuning the simulation framework for real-world scenarios. Notably, the DEnKF with sim-to-real transfer achieves accurate state estimation, resulting in a reduction of 29% in MAE for joint angle space and 33% for end-effector (EE) positions when compared to dPF-M-lrn. In Table IV, the DEnKF with sim-to-real transfer exhibits an average 2.6 cm offset (MAE) from the ground truth for EE positions across testing sequences. We further analyze the state tracking in EE space by visualizing the EE trajectories in 3D, as depicted in Fig. 5, where the fine-tuned DEnKF is utilized to estimate the state on two real-robot test examples of the action sequences "pick up" and "put down".
Moreover, an additional experiment is conducted to test the trade-off between the accuracy and computational performance of the proposed DEnKF framework, as shown in Fig. 6. Tellingly, increasing the number of ensemble members can have a substantial positive effect on performance (\(+9.6\%\)) while the computational overhead is only marginally affected (an increase from 0.075 s to 0.134 s).
## V Conclusions
In this paper, we present the Differentiable Ensemble Kalman Filters (DEnKF) as an extended version of Ensemble Kalman Filters, and demonstrate their application to state estimation tasks. We show that our framework is applicable in both simulation and the real world, and that it is capable of performing state estimation on complex tasks, e.g., KITTI visual odometry and robot manipulation. We also
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Real-world} & \multicolumn{2}{c}{Simulation} & \multicolumn{2}{c}{Sim-to-real} & \multirow{2}{*}{
\begin{tabular}{c} Wall clock \\ time (s) \\ \end{tabular} } \\ & Joint (deg) & EE (cm) & Joint (deg) & EE (cm) & Joint (deg) & EE (cm) & \\ \hline dEKF [8] & 16.0862\(\pm\)0.063 & 5.6680\(\pm\)0.060 & 4.9357\(\pm\)0.224 & 1.9112\(\pm\)0.148 & 8.3041\(\pm\)0.525 & 4.3645\(\pm\)0.072 & **0.0469\(\pm\)0.003** \\ DPF [9] & 15.9302\(\pm\)0.080 & 5.0834\(\pm\)0.301 & 4.4623\(\pm\)0.220 & 1.5135\(\pm\)0.191 & 5.9531\(\pm\)0.031 & 3.9695\(\pm\)0.006 & 0.0515\(\pm\)0.002 \\ dPF-M-lrn [8] & 12.8366\(\pm\)0.086 & 3.9521\(\pm\)0.436 & 3.8233\(\pm\)0.230 & 1.2639\(\pm\)0.081 & 5.4389\(\pm\)0.011 & 3.9405\(\pm\)0.014 & 0.0854\(\pm\)0.001 \\ DEnKF & **11.4222\(\pm\)0.005** & **3.4260\(\pm\)0.002** & **2.5587\(\pm\)0.093** & **0.8241\(\pm\)0.019** & **3.9531\(\pm\)0.034** & **2.6368\(\pm\)0.002** & 0.0712\(\pm\)0.002 \\ \hline \multicolumn{7}{l}{Means\(\pm\)standard errors.} \\ \end{tabular}
\end{table} TABLE IV: Result evaluations on UR5 manipulation task measured in MAE from 3 different domains – real-world, simulation, and sim-to-real. Results for dEKF, dPF, and dPF-M-lrn are reproduced for detailed comparisons.
Fig. 5: EE positions visualization. Top: UR5 executes “pick up” action; Bottom: UR5 executes “put down” action.
Fig. 6: Computational time vs. MAE (EE position) with increasing number of ensemble members.
discuss state tracking in high-dimensional observation spaces under varied noise conditions and missing observations. In particular, DEnKF manages to decrease the error metrics on translation and rotation by at least 59% and 36% with noisy observations versus the state-of-the-art approaches. These experiments demonstrate that DEnKF significantly improves tracking accuracy and uncertainty estimates and thus has great potential in many robotic applications.
The proposed framework is modular, which allows for flexibility in using individual components separately. However, it should be noted that various learning tasks may require distinct curricula. For example, challenging visual tasks may necessitate an extended training period for the sensor model before it can be incorporated into end-to-end learning. Therefore, a universal curriculum that guarantees optimal performance of all sub-modules in every situation does not currently exist. In future research, we plan to explore the potential of the DEnKF framework in detecting perturbations as a downstream application. Specifically, leveraging the learned system dynamics from the state transition model and the mapping from observation to state space learned by the sensor model and the Kalman update step, we aim to use the distance between the outputs of the two steps to detect perturbations in the system. Overall, the results of this study suggest that DEnKF has great potential in many robotic applications.
## Acknowledgment
The authors gratefully acknowledge support of this work through a grant by "The Global KAITEKI Center" (TGKC) of the Global Futures Laboratory at Arizona State University. TGKC is a research alliance between Arizona State University and The KAITEKI Institute, an affiliate of the Mitsubishi Chemical Group.
|
2305.09654
|
Photochemical hazes dramatically alter temperature structure and
atmospheric circulation in 3D simulations of hot Jupiters
|
Photochemical hazes are expected to form in hot Jupiter atmospheres and may
explain the strong scattering slopes and muted spectral features observed in
the transmission spectra of many hot Jupiters. Absorption and scattering by
photochemical hazes have the potential to drastically alter temperature
structure and atmospheric circulation of these planets but have previously been
neglected in general circulation models (GCMs). We present GCM simulations of
hot Jupiter HD 189733b that include photochemical hazes as a radiatively active
tracer fully coupled to atmospheric dynamics. The influence of haze radiative
feedback strongly depends on the assumed haze optical properties. For soot
hazes, two distinct thermal inversions form, separated by a local temperature
minimum around 10$^{-5}$ bar caused by upwelling on the dayside mixing air with
low haze abundance upwards. The equatorial jet broadens and slows down. The
horizontal distribution of hazes remains relatively similar to simulations with
radiatively passive tracers. For Titan-type hazes, the equatorial jet
accelerates and extends to much lower pressures, resulting in a dramatically
different 3D distribution of hazes compared to radiatively passive or soot
hazes. Further experimental and observational studies to constrain the optical
properties of photochemical hazes will therefore be crucial for understanding
the role of hazes in exoplanet atmospheres. In the dayside emission spectrum,
for both types of hazes the amplitude of near-infrared features is reduced,
while the emitted flux at longer wavelengths ($>$4 $\mu$m) increases. Haze
radiative feedback leads to increased phase curve amplitudes in many infrared
wavelength regions, mostly due to stronger dayside emission.
|
Maria E. Steinrueck, Tommi Koskinen, Panayotis Lavvas, Vivien Parmentier, Sebastian Zieba, Xianyu Tan, Xi Zhang, Laura Kreidberg
|
2023-05-16T17:53:13Z
|
http://arxiv.org/abs/2305.09654v1
|
Photochemical hazes dramatically alter temperature structure and atmospheric circulation in 3D simulations of hot Jupiters
###### Abstract
Photochemical hazes are expected to form in hot Jupiter atmospheres and may explain the strong scattering slopes and muted spectral features observed in the transmission spectra of many hot Jupiters. Absorption and scattering by photochemical hazes have the potential to drastically alter temperature structure and atmospheric circulation of these planets but have previously been neglected in general circulation models (GCMs). We present GCM simulations of hot Jupiter HD 189733b that include photochemical hazes as a radiatively active tracer fully coupled to atmospheric dynamics. The influence of haze radiative feedback strongly depends on the assumed haze optical properties. For soot hazes, two distinct thermal inversions form, separated by a local temperature minimum around \(10^{-5}\) bar caused by upwelling on the dayside mixing air with low haze abundance upwards. The equatorial jet broadens and slows down. The horizontal distribution of hazes remains relatively similar to simulations with radiatively passive tracers. For Titan-type hazes, the equatorial jet accelerates and extends to much lower pressures, resulting in a dramatically different 3D distribution of hazes compared to radiatively passive or soot hazes. Further experimental and observational studies to constrain the optical properties of photochemical hazes will therefore be crucial for understanding the role of hazes in exoplanet atmospheres. In the dayside emission spectrum, for both types of hazes the amplitude of near-infrared features is reduced, while the emitted flux at longer wavelengths (\(>\)4 \(\mu\)m) increases. Haze radiative feedback leads to increased phase curve amplitudes in many infrared wavelength regions, mostly due to stronger dayside emission.
Exoplanet atmospheres (487) -- Exoplanet atmospheric dynamics (2307) -- Exoplanet atmospheric structure (2310)
## 1 Introduction
Transit observations of many short-period giant planets reveal the presence of aerosols at low pressures (Sing et al., 2016; Crossfield & Kreidberg, 2017; Gao et al., 2021). Among the observed spectral signatures of aerosols are short-wavelength scattering slopes (e.g., Pont et al., 2008, 2013; Nikolov et al., 2015; Alam et al., 2020), muted wings of the sodium and potassium lines (e.g., Huitson et al., 2012; Gibson et al., 2013; Mallonn & Strassmeier, 2016) and the low amplitude of the near-infrared water feature near 1.4 \(\mu\)m (e.g., Line et al., 2013; Deming et al., 2013; McCullough et al., 2014; Iyer et al., 2016; Wakeford et al., 2017). In some cases, there is evidence for an aerosol layer spanning many pressure scale heights (Pont et al., 2013; Estrela et al., 2021), requiring aerosols to be present at pressures as low as 1 \(\mu\)bar (Estrela et al., 2021).
Two fundamentally different formation mechanisms for these aerosols have been proposed: particles forming through condensation of gases as they are transported towards cooler regions of the atmosphere (condensate clouds) and particles forming through a complex chain of photochemical reactions initiated by UV light at high altitudes (photochemical hazes). While condensate clouds are the most likely type of aerosol in most of the hotter hot Jupiters (e.g., Sudarsky et al., 2000; Wakeford and Sing, 2015; Powell et al., 2018; Gao et al., 2020), photochemical hazes are thought to dominate over condensate clouds for cooler planets, especially at high altitudes. The exact temperature of the transition is model-dependent. While Gao et al. (2020) find photochemical hazes to be the dominant source of opacity for equilibrium temperatures \(<950\) K, Lavvas and Koskinen (2017) predict that photochemical hazes could explain the transmission spectrum of HD 189733b (\(T_{\rm eq}\approx 1,200\) K). Arfaux and Lavvas (2022) found that the upper temperature limit based on the observations is between 1400 and 1700 K. In laboratory experiments, Fleury et al. (2019) observed that photochemical hazes could form in hydrogen-dominated atmospheres as hot as 1500 K if the C/O ratio is supersolar.
In addition, it has been proposed that photochemical hazes could explain "super-Rayleigh slopes" (scattering slopes that are steeper than what would be expected for Rayleigh scattering with a constant abundance of scatterers) more naturally than condensate clouds (Pont et al., 2013; Ohno and Kawashima, 2020). Thus, photochemical hazes can be important for explaining the optical and UV spectrum even for planets in which condensate clouds dominate the infrared opacity.
It has been well-established that aerosols have a strong potential to alter the atmospheric temperature and circulation, as known from examples in the Solar System: On Titan, absorption and scattering by hazes are important contributions to the energy budget of the atmosphere. Hazes create a thermal inversion at low pressures and have a cooling effect on deeper atmospheric regions and the surface, called the anti-greenhouse effect (McKay et al., 1991). Coupling a haze microphysics model with a general circulation model has been crucial for explaining the observed haze structure and circulation of Titan (Rannou et al., 2002; Lebonnois et al., 2012).
For extrasolar giant planets, multiple studies on the radiative effects of condensate clouds in GCMs of hot Jupiters establish that radiative feedback from aerosols is significant. Here, we briefly review these papers, sorted roughly from the least complex model to the most complex one. Oreshenko et al. (2016) examined the effect of including scattering in a double-gray model of a hot Jupiter assuming uniform scattering properties throughout the atmosphere. Roman and Rauscher (2017) took a similar approach but increased the complexity by prescribing different static, horizontally inhomogeneous cloud coverages motivated by optical phase curve observations of Kepler-7b. For their vertical cloud coverage, they assumed that the cloud would extend from a chosen cloud base to the top of the model, with a constant mixing ratio. They found that inhomogeneous clouds significantly impacted the temperature structure as well as the equatorial jet, but that the prescribed static cloud coverages resulted in a simulation that was not energy-balanced. In a follow-up study, Roman and Rauscher (2019) updated their model to include a physically motivated cloud location, such that clouds form in any atmospheric column in which the temperature profile crosses the condensation curve of a relevant cloud species. Based on these models, Harada et al. (2021) found that radiative feedback from clouds significantly affected high-resolution spectra of hot Jupiters. Parmentier et al. (2016) used an approach similar to Roman and Rauscher (2019) but using wavelength-dependent radiative transfer, though the results of their simulations including haze feedback are only briefly discussed in their publication. In Roman et al. (2021) and Parmentier et al. (2021), their respective models were applied to a much larger range of equilibrium temperatures. Lines et al. (2019) and Christie et al. (2021) also employed a similar though somewhat more complex approach, calculating cloud properties such as the vertical distribution and particle size based on 1D cloud model EDDYSED (Ackerman and Marley, 2001).
A different and dynamically more self-consistent approach is to include one or several cloud species as a tracer in the model, thus simulating how clouds are transported within the atmosphere. After first studies modeling clouds as passive tracers (Parmentier et al., 2013; Charnay et al., 2015), neglecting radiative feedback, Charnay et al. (2015) were the first to model radiatively active tracers representing clouds on a short-period extrasolar giant planet (in their case, a mini-Neptune). In their model, all material exceeding the vapor pressure condensed into particles with a prescribed, fixed size. Heating by clouds produced a dayside thermal inversion, the strength of which was limited by the evaporation of the cloud (an effect also observed in the local equilibrium cloud model of Roman and Rauscher
2019). Cloud radiative feedback in addition led to a more severe depletion of clouds in the equatorial zone compared to higher latitudes. More recently, Komacek et al. (2022) simulated radiatively active tracer clouds using a comparable model in the atmospheres of ultra-hot Jupiters, showing that cloud patchiness might lead to a higher thermal emission on the nightside. Finally, both Lee et al. (2016) and Lines et al. (2018) coupled a full microphysics model to a GCM of a hot Jupiter. Their model traces the abundances of multiple gas species and captures the key processes of nucleation, particle growth of mixed-species grains through surface reactions, and evaporation in addition to transport and gravitational settling of cloud particles. It is by far the most complex cloud model that has been applied to extrasolar giant planets. In both studies, heating and cooling by clouds had a significant effect on temperature structure, cloud abundance and atmospheric circulation. Comparing the results between both studies, Lines et al. (2018) further concluded that explicit treatment of scattering (as opposed to adding the scattering cross section to the absorption cross section, as done in Lee et al., 2016) is important.
Given the established significance of radiative feedback for condensate clouds on short-period giant planets, the role of photochemical hazes on these atmospheres is less well-studied. Studies using one-dimensional radiative transfer models have found that absorption and scattering by photochemical hazes can lead to stark changes in the temperature profile, similar to the anti-greenhouse effect on Titan. For mini-Neptunes, Morley et al. (2015) found that soot-based photochemical hazes created a thermal inversion of up to 200 K at low pressures, while simultaneously cooling deeper layers of the atmosphere by several hundred Kelvin. This temperature inversion led to emission spectra that substantially differed from models of a clear atmosphere or an atmosphere with condensate clouds. More recently, Lavvas and Arfaux (2021) and Arfaux and Lavvas (2022) examined haze radiative feedback in hot Jupiter atmospheres. They confirmed that in this case, a thermal inversion also formed at low pressures, with the detailed temperature structure depending on the refractive index of the hazes. However, the feedback of photochemical hazes on the atmospheric circulation has not yet been studied with general circulation models.
The goal of this work is to investigate the role of radiative feedback of photochemical hazes in the atmospheres of hot Jupiter exoplanets. Mass mixing ratios of photochemical hazes generally peak at lower pressures than condensate clouds, suggesting that hazes could affect atmospheric dynamics differently from clouds. First simulations of the 3D distribution of photochemical hazes, modeling hazes as a radiatively passive tracer, demonstrate that a complex and highly inhomogeneous global distribution can be expected (Steinrueck et al., 2021). Motivated by these findings, we add complexity to the haze model of Steinrueck et al. (2021) by coupling the haze model to the radiative transfer, thus adding heating and cooling by hazes to the dynamics. In addition, we switch to using wavelength-dependent radiative transfer (Showman et al., 2009; Kataria et al., 2013) rather than the previously used double-gray radiative transfer to model more realistic heating and cooling rates. In terms of the level of complexity and modeling approach, our model can thus be viewed as a photochemical haze equivalent to the Charnay et al. (2015) model. As in Steinrueck et al. (2021), we focus on HD 189733b, which is one of the best-characterized exoplanets to date and which shows evidence of aerosols in its transmission spectrum (e.g., Tinetti et al., 2007; Knutson et al., 2007; Pont et al., 2008; Gibson et al., 2012; Knutson et al., 2012; Majeau et al., 2012; McCullough et al., 2014; Louden and Wheatley, 2015; Angerhausen et al., 2015; Brogi et al., 2016; Flowers et al., 2019; Seidel et al., 2020; Sanchez-Lopez et al., 2020; King et al., 2021).
While this paper focuses on hot Jupiters, for which higher quality observations are available, we anticipate that our work will also lay the groundwork for later studies of haze radiative feedback on cooler and smaller tidally locked giant planets, for which photochemical hazes are predicted to form efficiently (Morley et al., 2015; Horst et al., 2018; He et al., 2018; Kawashima and Ikoma, 2019; Adams et al., 2019; Lavvas et al., 2019) and for which there is ample observational evidence of aerosols in transmission spectra (Crossfield and Kreidberg, 2017).
The remainder of the paper is structured as follows: Section 2 describes our model. In Section 3, we compare simulation results from the double-gray model used in Steinrueck et al. (2021) to results using a wavelength-dependent model using the correlated-k method. Section 4 describes simulations with haze radiative feedback, with Section 4.1 focusing on a refractive index of soot and Section 4.2 focusing on a refractive index similar to Titan-type hazes. In Section 5, we explore the impact on model-predicted transmission and emission spectra as well as phase curves. Finally, we discuss caveats and directions for future work in Section 6 and summarize our findings in Section 7.
## 2 Methods
We use SPARC/MITgcm to simulate the atmosphere of hot Jupiter HD 189733b. SPARC/MITgcm couples the plane-parallel, wavelength-dependent radiative transfer code of Marley and McKay (1999) to the general circulation model of Adcroft et al. (2004). It has been applied to a wide range of hot Jupiters and other exoplanets (e.g., Showman et al., 2009, 2013; Lewis et al., 2014; Kataria et al., 2015, 2016; Steinrueck et al., 2019; Parmentier et al., 2018, 2021). In addition, to facilitate comparison with Steinrueck et al. (2021), we also include one simulation using MITgcm with double-gray radiative transfer.
### Atmospheric Dynamics
We solve the global primitive equations on a cubed-sphere grid using the MITgcm in its atmosphere configuration. The primitive equations are an approximation of the fluid dynamics equations that is valid for stably-stratified shallow atmospheres. It has been demonstrated that they are a good approximation when simulating the atmospheres of hot Jupiters (Showman and Guillot, 2002; Mayne et al., 2014). The simulation parameters are summarized in Table 1.
We use a fourth-order Shapiro filter to suppress small numerical fluctuations at the grid scale that otherwise could grow and cause instabilities. Similar to Liu and Showman (2013), we include a drag in the deep atmosphere. This both stabilizes the simulation and ensures independence of the initial condition. The form of the drag is given by \(k_{v}=k_{F}(p-p_{\rm drag,top})/(p_{\rm bottom}-p_{\rm drag,top})\), where \(p_{\rm bottom}\) is the bottom boundary of the simulation domain (200 bar). Carone et al. (2020) found that a bottom boundary of 200 bar is sufficiently deep for planets with a rotation period \(\gtrapprox 1.5\) days, well-fulfilled by HD 189733b. We choose \(k_{F}=10^{-4}\) s\({}^{-1}\) and \(p_{\rm drag,top}=10\) bar.
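For illustration, the drag profile can be evaluated as in the NumPy sketch below, using the values quoted above; the assumptions that the coefficient is zero above \(p_{\rm drag,top}\) and that it enters the momentum equation as a Rayleigh drag term reflect our reading of the setup rather than statements from the text.

```python
import numpy as np

def deep_drag_coefficient(p_bar, k_F=1e-4, p_drag_top=10.0, p_bottom=200.0):
    """Rayleigh-drag coefficient k_v(p) [s^-1], linear in pressure between
    p_drag_top and p_bottom; assumed zero above p_drag_top."""
    p_bar = np.asarray(p_bar, dtype=float)
    k_v = k_F * (p_bar - p_drag_top) / (p_bottom - p_drag_top)
    return np.clip(k_v, 0.0, k_F)

p_levels = np.array([1.0, 10.0, 50.0, 100.0, 200.0])   # pressure levels in bar
print(deep_drag_coefficient(p_levels))                  # [0., 0., ~2.1e-5, ~4.7e-5, 1e-4]
# The drag would enter the horizontal momentum equation as du/dt = ... - k_v * u.
```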
Thorngren et al. (2019) suggested that based on the observed distribution of hot Jupiter radii, the internal heat flux in most hot Jupiters likely is significantly higher than frequently assumed in GCMs. As a consequence, the radiative-convective boundary also is shallower, reaching into typical simulation domains of GCMs. We therefore changed the treatment of the bottom boundary condition compared to our previous model (Steinrueck et al., 2021), where a uniform net flux was prescribed at the bottom. Instead, we assume that the deepest model layers have reached the convective zone and relax the temperature at the bottommost layer towards a prescribed value, with a relaxation timescale of \(10^{5}\) s. This treatment, similar to May et al. (2021), crudely mimics the effect of convective mixing in controlling the deep temperature structure in our model. The temperature of the bottom-most layer at 170 bars, 2891 K, was chosen by interpolating temperature profiles from the grid of models presented in Thorngren et al. (2019) to the gravity and incident flux of HD 189733b. The intrinsic temperature corresponding to this temperature profile is \(\approx 375\) K. Further, we include a convective adjustment scheme based on the dry adiabatic adjustment scheme used in the Community Atmosphere Model (CAM, Collins et al., 2004, p. 100) in our simulations. We found that in addition to being physically motivated, these changes lead to improved numerical stability at long simulation runtimes.
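A minimal sketch of the bottom-boundary treatment described above: Newtonian relaxation of the bottom-most layer toward the prescribed deep temperature with a \(10^{5}\) s timescale. The explicit Euler form and the use of the 25 s dynamical time step are illustrative assumptions.

```python
def relax_bottom_temperature(T_bottom, T_target=2891.0, tau=1.0e5, dt=25.0):
    """One explicit relaxation step of the deepest layer's temperature [K]
    toward the prescribed value, dT/dt = (T_target - T) / tau."""
    return T_bottom + (dt / tau) * (T_target - T_bottom)

print(relax_bottom_temperature(2800.0))   # -> 2800.02275, a very gentle nudge per step
```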
### Radiative Transfer
#### 2.2.1 Wavelength-dependent radiative transfer
The radiative transfer used in SPARC/MITgcm is based on the plane-parallel, two-stream radiative transfer code by Marley and McKay (1999), that was originally developed for Titan (McKay et al., 1989) and later adapted for brown dwarfs and exoplanets (e.g., Marley et al., 1996; Fortney et al., 2005, 2008; Morley et al.,
\begin{table}
\begin{tabular}{l r r} \hline \hline \multicolumn{1}{c}{ Parameter} & \multicolumn{1}{c}{Value} & Units \\ \hline Radius\({}^{I}\),\({}^{2}\) & 1.13 & \(R_{\rm Jup}\) \\ Gravity\({}^{I}\) & 21.93 & m s\({}^{-2}\) \\ Rotation period\({}^{I}\) & 2.21857567 & d \\ Semimajor axis\({}^{3}\) & 0.03142 & AU \\ Specific heat capacity & \(1.3\cdot 10^{4}\) & J kg\({}^{-1}\) K\({}^{-1}\) \\ Specific gas constant & 3714 & J kg\({}^{-1}\) K\({}^{-1}\) \\ Horizontal resolution & C32\({}^{d}\) & \\ Vertical resolution & 60 & layers \\ Lower pressure boundary & \(1.75\cdot 10^{-7}\) & bar \\ Upper pressure boundary & 200 & bar \\ Temperature of bottom-most layer\({}^{d}\) & 2891 & K \\ Hydrodynamic time step & 25 & s \\ Radiative time step & 50 & s \\ \hline \end{tabular} \({}^{1}\)Stassun et al. (2017)
\({}^{2}\)R\({}_{\rm Jup}\) here denotes the nominal equatorial radius of Jupiter, with a value of \(7.1492\cdot 10^{7}\) m, as defined by IAU 2015 Resolution B3.
\({}^{3}\)Southworth (2010)
\end{table}
Table 1: Model Parameters
2012). It was first coupled to the MITgcm by Showman et al. (2009). We use the version with 11 wavelength bins introduced by Kataria et al. (2013), which is optimized for computational speed while maintaining accuracy. The code uses the correlated-k method (e.g., Goody and Yung, 1989) to describe molecular opacities within each wavelength bin. Correlated-k coefficients are calculated using abundances from the equilibrium chemistry calculations of Lodders and Fegley (2002) and Visscher et al. (2006), assuming solar elemental abundances. Molecular opacities are taken from Freedman et al. (2008), including the updates from Freedman et al. (2014). We note that our previous work (Steinrueck et al., 2019) as well as Drummond et al. (2018) and Drummond et al. (2020) found that on HD 189733b, transport-induced disequilibrium abundances of CH\({}_{4}\) and H\({}_{2}\)O can alter temperatures in the lower atmosphere by up to 10% compared to equilibrium chemistry. However, the method of Steinrueck et al. (2019), who assumed homogeneous CH\({}_{4}\), CO and H\({}_{2}\)O abundances throughout the atmosphere, is only a valid approximation for pressures above \(\approx 10^{-4}\) bar. At lower pressures, photochemistry destroys CH\({}_{4}\), leading to a rapid drop in the CH\({}_{4}\) abundance with decreasing pressure (Moses et al., 2011). Assuming a constant CH\({}_{4}\) abundance as in Steinrueck et al. (2019) thus considerably overestimates the CH\({}_{4}\) abundance at low pressures. Incidentally, at these low pressures, equilibrium chemistry predicts low CH\({}_{4}\) abundances that rapidly decline with decreasing pressure, closer to what is observed in 1D models that include photochemistry (Moses et al., 2011), despite underpredicting the CH\({}_{4}\) abundance at pressures between \(\approx 10^{-4}\) and 1 bar. Because this paper focuses on the temperature, circulation and haze distribution at these low pressures, we assume equilibrium abundances for all species for simplicity. We note that this assumption does not affect the haze production rate in our model, which we treat as a free parameter.
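For readers unfamiliar with the correlated-k method, the sketch below illustrates the basic k-distribution construction for a single spectral bin: monochromatic opacities are sorted into a cumulative distribution \(k(g)\), which is then integrated with a small number of quadrature points instead of resolving every line. This is a generic, simplified illustration with mock opacities; the actual model uses the pre-computed correlated-k coefficients described above.

```python
import numpy as np

def k_distribution(kappa_nu, n_gauss=8):
    """Build quadrature weights and k(g) for one spectral bin from
    line-by-line opacities kappa_nu sampled within that bin."""
    kappa_sorted = np.sort(kappa_nu)                      # opacity as a function of g
    g_grid = (np.arange(kappa_sorted.size) + 0.5) / kappa_sorted.size
    x, w = np.polynomial.legendre.leggauss(n_gauss)       # nodes/weights on [-1, 1]
    g_points, weights = 0.5 * (x + 1.0), 0.5 * w          # mapped to [0, 1]
    return weights, np.interp(g_points, g_grid, kappa_sorted)

# Band-averaged transmission of a homogeneous layer with column mass u:
# instead of integrating exp(-kappa_nu * u) over thousands of frequencies,
# sum over a handful of quadrature points.
weights, k_vals = k_distribution(10 ** np.random.uniform(-4, 2, 10000))  # mock opacities, cm^2 g^-1
u = 1.0                                                    # g cm^-2
transmission = np.sum(weights * np.exp(-k_vals * u))
```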
Absorption and scattering by hazes are calculated based on Mie theory (Mie, 1908). In order to smooth the Mie oscillations that would be observed for a single particle size, we use a narrow log-normal distribution with a geometric standard deviation of 1.05 for the particle size distribution within the radiative transfer code (similar to Parmentier et al., 2016, 2021). The haze opacity is linearly related to the local haze abundance. The haze abundance used in the radiative transfer calculation is directly coupled to the time- and location-dependent tracer describing the haze mass mixing ratio (see below). The refractive index of the haze particles, an important input quantity for our calculations, is poorly constrained. In the absence of laboratory measurements specifically conducted with exoplanets in mind, soots frequently have been used as an analog for high-temperature hazes (e.g., Morley et al., 2013, 2015; Lavvas and Koskinen, 2017; Ohno and Kawashima, 2020; Lavvas and Arfaux, 2021; Steinrueck et al., 2021). We therefore assume refractive indices of haze particles based on measurements of soot particles formed in combustion experiments for our nominal simulations. Specifically, we use the refractive indices from Lavvas and Koskinen (2017), who combine the measurements from several different groups (Lee and Tien, 1981; Chang and Charalampopoulos, 1990; Gavilan et al., 2016) in order to cover a broad wavelength range. While the detailed complex refractive index can vary between different soot experiments (e.g., Jager et al., 1999), soots are in general known to be highly absorbing over a broad wavelength range. To explore the effect of a different composition of the hazes, we also ran simulations using refractive indices typical of Titan hazes. Here, we use the refractive index from Lavvas et al. (2010), who base the real part of the refractive index on laboratory experiments simulating haze formation on Titan (Khare et al., 1984) and retrieve the imaginary part from observations with the _Descent Imager/Spectral Radiometer_ (DISR) of the _Huygens_ probe. We note that both Titan's haze (collected by the _Huygens_ probe) and laboratory Titan haze analogs (tholins) pyrolyze at temperatures above \(\approx 600\) K (Israel et al., 2005; Morisson et al., 2016) and thus are unlikely candidates for hazes in hot Jupiter atmospheres. However, given the lack of knowledge of the optical properties of hazes in hot Jupiter atmospheres, it is useful to consider Titan-type hazes as an example of hazes that are more reflective and have a stronger wavelength-dependence of the extinction cross-sections than soots. Titan-type haze refractive indices have been used in this sense in multiple other studies of hot Jupiters (Ohno and Kawashima, 2020; Lavvas and Arfaux, 2021). We discuss the limitations of the choice of the refractive indices in more detail in Section 6. The refractive indices for soots and Titan-type hazes used in this work are identical to the ones used by Lavvas and Arfaux (2021) and are shown in Fig. 1.
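The following sketch shows how such a narrow log-normal size distribution can be discretized to average Mie cross-sections and smooth out the single-size oscillations. The grid resolution and integration range are illustrative choices, and sigma_ext stands in for a Mie solver that is not shown here.

```python
import numpy as np

def lognormal_size_weights(r_med, sigma_g=1.05, n_sizes=41, n_widths=4):
    """Discretize a log-normal size distribution with median radius r_med and
    geometric standard deviation sigma_g; returns radii and normalized weights."""
    ln_sg = np.log(sigma_g)
    r = r_med * np.exp(np.linspace(-n_widths * ln_sg, n_widths * ln_sg, n_sizes))
    pdf = np.exp(-0.5 * (np.log(r / r_med) / ln_sg) ** 2) / (r * ln_sg * np.sqrt(2.0 * np.pi))
    w = pdf * np.gradient(r)          # pdf times the local radius-bin width
    return r, w / w.sum()

# Distribution-averaged extinction cross-section at one wavelength, given some
# Mie solver sigma_ext(radius, wavelength) (hypothetical placeholder):
# r, w = lognormal_size_weights(3e-9)                # 3 nm median radius
# sigma_avg = np.sum(w * sigma_ext(r, wavelength))
```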
#### 2.2.2 Double-gray radiative transfer
In addition to the simulations using SPARC, we also include one simulation using the double-gray radiative transfer of Steinrueck et al. (2021) for comparison. In this simulation, the TWOSTR package (Kylling et al., 1995), which is based on the multistream discrete ordinate algorithm DISORT (Stamnes et al., 1988), is used to solve the radiative transfer equations for a plane-parallel atmosphere in the two-stream approximation. For the opacities, we choose \(\kappa_{v}=6\cdot 10^{-4}\sqrt{T_{\rm irr}/2000\,{\rm K}}\;{\rm m^{2}\,kg^{-1}}=5.5\cdot 10^{-4}\) m\({}^{2}\) kg\({}^{-1}\) in the visible band and \(\kappa_{\rm th}=10^{-3}\) m\({}^{2}\) kg\({}^{-1}\) in the thermal band (Parmentier and Guillot, 2014). Except for the bottom boundary condition and the initial condition, which are chosen to be identical to the other simulations in this work, the simulation setup is identical to Steinrueck et al. (2021).
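As a quick consistency check of the quoted numbers (the irradiation temperature below is our inference from the two values of \(\kappa_{v}\), not a parameter stated in this section):

```python
import numpy as np

T_irr = 1680.0                               # K, assumed irradiation temperature of HD 189733b
kappa_v = 6e-4 * np.sqrt(T_irr / 2000.0)     # -> ~5.5e-4 m^2 kg^-1, matching the quoted value
kappa_th = 1e-3                              # m^2 kg^-1, thermal band
gamma = kappa_v / kappa_th                   # visible-to-thermal opacity ratio, ~0.55
print(kappa_v, gamma)
```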
### Haze model
Photochemical hazes are included in the GCM as a tracer. The haze mass mixing ratio \(\chi\) obeys
\[\frac{D\chi}{Dt}=-g\frac{\partial(\rho\chi V_{s})}{\partial p}+P+L, \tag{1}\]
where \(D/Dt\) is the material derivative \(\partial/\partial t+\mathbf{v_{H}}\cdot\nabla_{\mathbf{H}}+\omega\,\partial/\partial p\), with \(\mathbf{v_{H}}\) being the horizontal velocity, \(\nabla_{\mathbf{H}}\) the horizontal gradient operator on a sphere in pressure coordinates and \(\omega\) the vertical velocity in pressure coordinates. Furthermore, \(g\) is the gravitational acceleration, \(\rho\) is the gas density and \(V_{s}\) is the settling velocity of the haze particles (in m s\({}^{-1}\)).
\[P=F_{0}\,g\cos\theta\cdot\frac{1}{\sqrt{2\pi}p\sigma}\,\exp\left(-\frac{\ln^{ 2}(p/m)}{2\sigma^{2}}\right), \tag{2}\]
with a median \(m=2\)\(\mu\)bar and a standard deviation \(\sigma=0.25\ln(10)\approx 0.576\). Here, \(F_{0}\) is the column-integrated haze mass production rate at the substellar point (given in Table 2) and \(\theta\) is the angle of incidence of the starlight. The parameters of the distribution were chosen such that haze production is negligible in the two top-most layers. We note that except for the value of \(F_{0}\), this production term is identical to the production term used in Steinrueck et al. (2021), though we here choose to write it directly as a function of \(p\) for improved clarity.
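A minimal sketch of this production term follows, evaluated as a tendency of the mass mixing ratio. Setting the production to zero on the nightside, where \(\cos\theta<0\), is our assumption for the sketch rather than something spelled out in the equation above.

```python
import numpy as np

F0    = 2.5e-12              # kg m^-2 s^-1, column production rate at the substellar point
g     = 21.93                # m s^-2
m_bar = 2e-6                 # bar, median of the log-normal production profile (2 microbar)
sigma = 0.25 * np.log(10.0)  # ~0.576

def haze_production(p_bar, theta):
    """Mass-mixing-ratio source P (s^-1) at pressure p [bar] and stellar zenith angle theta."""
    mu = np.cos(theta)
    if mu <= 0.0:            # assumed: no haze production on the nightside
        return 0.0
    p_pa = p_bar * 1e5       # the 1/p prefactor must be in SI units for P to come out in s^-1
    lognorm = np.exp(-np.log(p_bar / m_bar) ** 2 / (2.0 * sigma ** 2)) \
              / (np.sqrt(2.0 * np.pi) * p_pa * sigma)
    return F0 * g * mu * lognorm
```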
The loss term \(L\) is given by
\[L=\begin{cases}0&\text{for $p<p_{\rm deep}$},\\ -\chi/\tau_{loss}&\text{for $p>p_{\rm deep}$},\end{cases} \tag{3}\]
with the loss timescale \(\tau_{\rm loss}=10^{3}\) s and \(p_{\rm deep}=100\) mbar. This term is an idealized representation of the condensation of cloud species on top of the haze particles, thus removing them from the distribution of pure hazes, as well as the thermal destruction of hazes in the deep atmosphere. The particular value of \(p_{\rm deep}\) was chosen for numerical reasons: in tests exploring the sensitivity of the 3D haze distribution to the choice of \(p_{\rm deep}\), larger values led to longer convergence times for the haze distribution. However, the final haze distribution for \(p\ll p_{\rm deep}\) did not substantially depend on \(p_{\rm deep}\). A more detailed description of the model can be found in Steinrueck et al. (2021). For the simulations presented here, we fix the particle size to 3 nm, close to the particle size found in 1D microphysics models of photochemical hazes in the atmosphere of HD 189733b (Lavvas and Koskinen, 2017), and assume a particle density of 1,000 kg m\({}^{-3}\).
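The remaining terms of Eq. (1) are equally simple to write down. The finite-difference form of the settling term below is a schematic discretization on a 1D pressure column, not the exact MITgcm advection operator.

```python
import numpy as np

tau_loss = 1e3    # s, loss timescale in the deep atmosphere
p_deep   = 0.1    # bar (100 mbar)

def haze_loss(chi, p_bar):
    """Deep-atmosphere sink L of Eq. (3): rapid removal below p_deep."""
    return -chi / tau_loss if p_bar > p_deep else 0.0

def settling_tendency(rho, chi, V_s, p_pa, g=21.93):
    """-g d(rho chi V_s)/dp on a 1D pressure grid (Pa); arrays ordered by pressure."""
    flux = rho * chi * V_s                   # kg m^-2 s^-1, downward haze mass flux
    return -g * np.gradient(flux, p_pa)      # tendency of chi in s^-1
```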
### Simulation runtime and overview of the simulations
Table 2 provides an overview of the simulations. All simulations were initiated from a state of rest with an initial temperature profile interpolated from the grid of Thorngren et al. (2019) and run for 4,500 Earth days of simulation time. The simulation time necessary for convergence of the haze distribution depends on two factors: how fast hazes are transported downward and how long it takes to produce the amount of hazes present in the equilibrium state. The former is determined by the smaller of the vertical mixing timescale and the gravitational settling timescale. For small particle sizes, the vertical mixing timescale is shorter except at very low pressures. The vertical mixing timescale \(\tau_{\rm mix}=H^{2}/K_{zz}\) (estimated using Eq. 9 in Steinrueck et al., 2021; see the illustrative estimate below) varies from less than an hour at 1 \(\mu\)bar to \(\approx 900\) days at 100 mbar, and thus is not the limiting factor for convergence. The simulation runtime was therefore chosen by monitoring the total mass of hazes
Figure 1: Real (top panel) and imaginary (bottom panel) parts of the complex refractive indices used in this work, shown at the 196-wavelength-bin resolution used for post-processing.
in the simulation until it reached a quasi-steady state. We note that in the quasi-steady state, the total mass of hazes still fluctuated by up to 10% over timescales of a few hundred days. Unless stated otherwise, our simulation results stated below have been averaged over the last 100 days of simulation time.
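For orientation, the scale-height-based estimate of the mixing timescale can be reproduced with back-of-the-envelope numbers. The temperature and \(K_{zz}\) values below are illustrative placeholders, not the profiles actually used in this work (which follow Eq. 9 of Steinrueck et al. 2021).

```python
k_B, m_H = 1.381e-23, 1.661e-27        # SI constants
T, mu, g = 1100.0, 2.3, 21.93          # K, mean molecular weight (amu), m s^-2 (illustrative)
K_zz     = 4e2                         # m^2 s^-1, placeholder eddy diffusivity near 100 mbar

H = k_B * T / (mu * m_H * g)           # pressure scale height, ~1.8e5 m
tau_mix = H ** 2 / K_zz                # ~8e7 s, i.e. on the order of a thousand days
print(H, tau_mix / 86400.0)
```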
#### 2.4.1 Transit spectra
To obtain transit spectra, we use a one-dimensional line-by-line radiative transfer code. To account for inhomogeneities at the terminator, we calculate the transmission spectrum separately for the morning and evening terminator as well as for the combined effect of the two limbs. Molecular and atomic species included are H\({}_{2}\)O, CH\({}_{4}\), CO, CO\({}_{2}\), Na and K. The code further includes Rayleigh scattering by H\({}_{2}\) and collision-induced absorption by H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He pairs. We treat the haze particles using Mie scattering with the same complex refractive indices as in the GCM. We choose the reference pressure such that the planet radius in the Spitzer 3.6 \(\mu\)m band matches the observations at that wavelength. Detailed descriptions of the code and opacities used can be found in Lavvas and Koskinen (2017) and Lavvas and Arfaux (2021).
#### 2.4.2 Reflection spectra, emission spectra, and phase curves
We calculate reflection spectra, emission spectra and phase curves using the same radiative transfer code and opacity sources as in the GCM with wavelength-dependent radiative transfer (Section 2.2.1), but with 196 frequency bins. At each orbital phase, the radiative transfer equation is solved along the line of sight for each atmospheric column. The outgoing fluxes are then combined by performing a weighted average across the disk that is visible from Earth at the given phase. For the star, we use a NextGen spectrum (Hauschildt et al., 1999) and a stellar radius of 0.805 \(R_{\odot}\) (Boyajian et al., 2015).
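Schematically, the disk averaging at a given orbital phase reduces to a projected-area-weighted mean of the outgoing intensities of the visible columns. The sketch below assumes columns of equal surface area and ignores the detailed limb geometry.

```python
import numpy as np

def disk_averaged_flux(I_toward_observer, mu_obs):
    """Average outgoing intensity over the visible disk.

    I_toward_observer : emergent intensity of each column toward the observer
    mu_obs            : cosine of the angle between each column's local vertical
                        and the line of sight (<= 0 for the hidden hemisphere)
    """
    w = np.clip(mu_obs, 0.0, None)       # projected-area weight; hidden columns get zero
    return np.sum(I_toward_observer * w) / np.sum(w)
```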
## 3 Passive tracer simulations: double-gray vs correlated-k
Before looking at the effects of radiative feedback, we first examine how the simulation results obtained from the model with wavelength-dependent, correlated-k radiative transfer (without haze radiative feedback) compare to the double-gray model used in Steinrueck et al. (2021). The temperature structure differs substantially
\begin{table}
\begin{tabular}{l l r l} \hline \hline \multicolumn{1}{c}{ Radiative transfer} & \multicolumn{1}{c}{Haze feedback} & \multicolumn{1}{c}{Haze production rate\({}^{*}\)} & \multicolumn{1}{c}{Refractive index} \\ & & \multicolumn{1}{c}{(kg m\({}^{-2}\) s\({}^{-1}\))} & \\ \hline double-gray & off & \(2.5\cdot 10^{-12}\) & N/A \\ correlated-k & off & \(2.5\cdot 10^{-12}\) & N/A \\ correlated-k & on & \(2.5\cdot 10^{-12}\) & soot \\ correlated-k & on & \(5\cdot 10^{-12}\) & soot \\ correlated-k & on & \(1\cdot 10^{-11}\) & soot \\ correlated-k & on & \(2.5\cdot 10^{-11}\) & soot \\ correlated-k & on & \(2.5\cdot 10^{-11}\) & Titan-type \\ correlated-k & on & \(1\cdot 10^{-10}\) & Titan-type \\ \hline \hline \end{tabular}
\({}^{*}\)Column-integrated haze mass production rate at the substellar point.
\end{table}
Table 2: List of simulations
Figure 2: Dayside temperature profiles, calculated using an average weighted by the cosine of the angle of incidence.
between the simulations (Fig. 2 and 3). The gray simulation is almost isothermal for pressures \(\lessapprox\)10 mbar. In contrast, in the correlated-k simulation, the temperature declines steadily with decreasing pressure up until \(\approx 10^{-5}\) bar. Below that pressure, the temperature profile becomes isothermal. Only for 100 mbar \(<p<\) 10 bar are temperatures similar. In this region, the nightside average temperatures are nearly identical. The dayside average of the correlated-k model is somewhat cooler for \(p\lessapprox 1\) bar and somewhat hotter for \(p\gtrapprox 1\) bar. The double-gray model further significantly underestimates the day-to-night temperature contrast for \(p\lessapprox 50\) mbar. It is well known that gray models overestimate temperatures at low pressures, both in 1D (e.g., Guillot, 2010) and 3D models (Lee et al., 2021). This effect is particularly strong when choosing a constant-with-pressure opacity, as is the case in our double-gray model.
Qualitatively, there are many similarities in the atmospheric circulation, including that both models produce predominantly day-to-night flow at low pressures and a strong super-rotating equatorial jet at higher pressures, typical for 3D simulations of hot Jupiters. Looking at the more detailed picture, however, there are significant differences. A comparison of the zonal-mean zonal velocity is shown in Fig. 4. In the correlated-k simulation, the core region of the equatorial jet is more narrow in latitude than in the double-gray simulation. Further, in the double-gray simulation, the jet broadens with increasing altitude. In the correlated-k simulation, there is less broadening with altitude. Furthermore, the peak velocity drops from \(\approx 4,800\) m s\({}^{-1}\) in the gray simulation to \(\approx 4,200\) m s\({}^{-1}\) in the correlated-k simulation. Lee et al. (2021) also compared the changes in atmospheric circulation and temperature structure between a correlated-k and a double-gray model in a simulation of HD 209458b. Their findings are very similar to ours. The only exception to this is the peak strength of the equatorial jet, which in their model increases with the correlated-k approach, while it decreases in our model.
Looking at the horizontal velocities on isobars (shown in Fig. 5 as arrows), perhaps the most striking change is that the location of the mid-latitude nightside vortices moves poleward and closer to the antistellar longitude in the correlated-k simulation. The shape of the vortices also becomes more asymmetrical. Further, there are significant changes in the vertical velocities. In the double-gray simulation, the largest vertical velocities are at the chevron-shaped morning terminator downwelling feature (which previously has been identified as a hydraulic jump, Showman et al., 2009; Steinrueck et al., 2021) and at mid-latitudes between the evening terminator and the substellar point. While there still is strong downwelling in these regions in the correlated-k simulation, the vertical velocities are somewhat lower than in the double-gray simulation. Instead, the largest downward vertical velocities are found on the nightside near the pole, at \(\approx 75^{\circ}\) latitude, near the antistellar longitude. At this location, downward velocities reach a value of 120 m s\({}^{-1}\) at a pressure of 1 \(\mu\)bar. This is more than 1.5 times the peak vertical velocity at the same pressure level in the double-gray simulation. In both the double-gray and the correlated-k simulation, the regions of strong up- and downwelling remain vertically coherent for over three orders of magnitude in pressure (between 1 mbar and 1 \(\mu\)bar). We further note that in the double-gray simulation, there is a narrow band of strong upwelling at the evening terminator. In the correlated-k simulation, there are subtle hints of such a band, but it is far less prominent.
The differences in atmospheric circulation result in substantial changes in the three-dimensional haze distribution (Fig 6). In the double-gray simulation, as described in detail in Steinrueck et al. (2021), hazes accumulate in the mid-latitude nightside vortices between 3 \(\mu\)bar and 0.1 mbar. In the correlated-k simulation, instead, the haze mass mixing ratio remains low in the center of the vortices. However, there is a band of enhanced haze mass mixing ratio circling the center of the nightside vortices, following the horizontal projection of the streamlines. This band intersects with all three major downwelling regions (pole, west of antistellar point, near
Figure 3: Nightside average temperature profiles.
morning terminator). The haze mixing ratio clearly is further enhanced near these intersections. As the band almost reaches down to the equator, the equatorial region on the nightside also has enhanced haze mixing ratios. The equatorial region on the dayside and near the evening terminator, which is dominated by upwelling, is strongly depleted of hazes (especially east of the sub-stellar point). At high latitudes on the dayside, there are intermediate mixing ratios. As pressure increases, the mixing ratio on most of the dayside decreases only slowly, while the mixing ratio in the enhanced regions decreases much faster. Thus, the circular bands with enhanced mixing ratio surrounding the nightside vortices lose their prominence with increasing pressure. The horizontal haze distribution thus gradually morphs into a pattern that resembles two broad bands of enhanced haze mixing ratio spanning around the planet, broadening and moving to higher latitudes on the dayside. This pattern qualitatively resembles the banded pattern at pressures above 0.1 mbar in the double-gray simulation (Steinrueck et al., 2021). The bands, however, are closer to the equator in the correlated-k simulation and both bands connect at the equator near the morning terminator.
Comparing the globally-averaged vertical profiles of the haze mass mixing ratio (Fig. 7), the mixing ratio drops off much faster with increasing pressure in the correlated-k simulation. Presumably, this can be attributed to the stronger downwelling velocities. In addition, the mass mixing ratio gradient remains more constant with pressure in the correlated-k simulation. These changes have implications for the transmission spectrum when comparing simulations with the same haze production rates, as discussed in Section 5.1.
## 4 Simulations with haze radiative feedback
### Soot-like refractive index
In the simulations with haze radiative feedback and soot-like refractive index, the dayside temperature increases dramatically at low pressures compared to the simulation without haze feedback (Fig. 2). At the 1 \(\mu\)bar level, near the center of the haze production region, the change is as high as 700 K in the simulation with the highest haze production rate (\(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\)) and 400 K in the simulation with the lowest haze production rate (\(2.5\cdot 10^{-12}\) kg m\({}^{-2}\) s\({}^{-1}\)). On the nightside (Fig. 3), in contrast, the temperature increase is quite moderate. This means that the day-to-night temperature contrast increases significantly at pressures \(<10\) mbar, from about 200 K to 400 K in the simulation with the lowest haze production rate and to 500-700 K (depending on pressure) in the simulation with the highest haze production rate.
In the dayside-averaged temperature profile (Fig. 2), two distinct thermal inversions are present that are separated by a temperature minimum near 10 \(\mu\)bar, just below the haze production region. This temperature minimum is not observed in 1D simulations and is a result of the interaction of hazes with atmospheric dynamics (as further explained towards the end of this section).
The haze radiative feedback significantly alters atmospheric circulation. Looking at the zonal-mean zonal velocity, the equatorial jet broadens significantly in lat
Figure 4: Comparison of the zonal-mean zonal velocity in the double-gray (left panel) and correlated-k (right panel) simulations without haze radiative feedback. The contours outline the regions in which the zonal-mean zonal velocity is larger than 50% and 75% of its peak value within the simulation.
itude while its overall strength decreases (Fig. 8). The strength of upwelling on the dayside increases substantially (Fig. 9). In particular, the narrow upwelling region at the evening terminator that appeared in the double-gray simulation but was barely visible in the correlated-k simulation without haze radiative feedback appears again and becomes much stronger for increased haze production rates. The chevron-shaped downwelling and adjacent upwelling feature at the morning terminator associated with the hydraulic jump significantly changes its shape as well. While a chevron-shape is retained close to the equator, additional upwelling parallel to the terminator appears at higher latitudes. The downwelling regions on the nightside become less localized. Their peak velocity is reduced significantly, but downwelling is distributed over a much larger region. In a very rough sense, one could say that the atmospheric circulation with soot-based haze radiative feed
Figure 5: Horizontal (arrows) and vertical (colorscale) velocities on isobars in the double-gray (left column) and correlated-k (right column) simulations without haze radiative feedback. Positive vertical velocities correspond to upwelling. The substellar point is located at the center of each panel.
back changes in a way that makes it more similar to the double-gray simulation, especially for the cases with low-to-intermediate haze production rates. This is likely because the absorption cross section of soot has a relatively weak and smooth wavelength dependence. Therefore, adding soot opacity at low pressures somewhat resembles adding a gray opacity in these regions.
In general, the horizontal distribution of the hazes (Fig. 10) remains qualitatively similar to the distribution in the passive correlated-k simulation. The center of the nightside vortices remains depleted of hazes. Again, below the haze production region, there are bands of enhanced haze mixing ratio surrounding the center of the vortices, with localized higher haze mixing ratios where the bands intersect with the downwelling areas. At somewhat higher pressures (\(p\gtrapprox 0.1\) mbar), the dayside haze mixing ratio becomes more uniform compared to the simulation without haze radiative feedback and
Figure 6: Haze mass mixing ratio \(\chi\) (on a logarithmic scale) on isobars in the double-gray (left column) and correlated-k (right column) simulations without haze radiative feedback. The colorscale range has been chosen to be identical in both columns.
the equatorial region is no longer depleted. Thus, rather than having one narrower circumplanetary band with increased haze mixing ratio in each hemisphere, there is one broader band that includes the equatorial region.
The globally-averaged vertical mixing ratio profiles (Fig. 7) qualitatively change compared to the wavelength-dependent, passive simulation: The mass mixing ratio declines less evenly (like in the passive, gray case). In addition, it is insightful to also examine the dayside-averaged haze mixing ratio profiles (Fig. 11). One can see that for the soot haze radiative feedback simulations, the mixing ratio profile on the dayside is close to constant for a significant pressure region. This is because of the stronger upwelling on large portions of the dayside. The extent of the pressure region with almost-constant mixing ratio increases with a higher haze production rate. Likely, this is partially caused by the increased upward velocities on the dayside. However, the fact that the equatorial jet further weakens in the \(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\) simulation could also contribute, as the jet acts to homogenize the mixing ratio between day- and nightside.
The vertically almost homogeneous haze mixing ratio on the dayside below the haze production region is also directly tied to the temperature minimum below the haze production region near 10 \(\mu\)bar. Upwelling on the dayside transports air with relatively low haze mixing ratio upwards from deeper layers, causing relatively low mass mixing ratios and thus low rates of stellar heating (Fig. 12, top panel) just below the haze production region. In the haze production region, the mass mixing ratio (and thus also the heating rate) then increases much faster with height than seen in the global average or in one-dimensional models which assume mixing to act only in a diffusive way.
### Titan-type refractive index
Compared to any of the other simulations, the atmospheric circulation changes dramatically in the simulations with Titan-type hazes (Fig. 13). The strength of the equatorial jet increases drastically, especially at low pressures (Fig. 8). While in all other simulations, there is westward flow on at least parts of the dayside, especially west of the substellar point, close to the peak of the haze production profile (2 \(\mu\)bar), in these two simulations, there is eastward flow throughout the entire dayside. This substantially changes the 3D distribution of the hazes (Fig. 13). In the haze production region, hazes are now advected eastward from the dayside, resulting in a higher haze mixing ratio at the evening terminator than at the morning terminator. This is the opposite of what was observed in all of the simulations with passive tracers (both in the double-gray and the correlated-k case) and radiative feedback with soot refractive indices. Below the haze production region, the equatorial jet (which widens substantially on the dayside) homogenizes haze abundances across the equatorial region and most of the dayside. The only regions that remain depleted of hazes are the nightside vortices. Even deeper in the atmosphere (\(\approx 1\) mbar), the hazes remain mostly in the equatorial region.
We further note that the dayside-average temperature (Fig. 2) and mass mixing ratio profiles (Fig. 11) in the Titan-type case also are qualitatively distinct from the soot-like case. In the case with a haze production rate of \(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\), the temperature profile is isothermal below the haze production region (compared to the double-peaked thermal inversion and the local temperature minimum below the haze production region in the soot-like case). For the haze production rate of \(1\cdot 10^{-10}\) kg m\({}^{-2}\) s\({}^{-1}\), the temperature profile decreases with increasing pressure below the haze production region. The dayside-average haze mass mixing ratio steadily decreases with increasing pressure, closely resembling the globally-averaged haze mass mixing ratio profile (Fig. 7). There is no region of almost-constant haze mass mixing ratio below the haze production region (as seen in the soot simulations). Presumably, these changes are directly linked to the fact that the hazes are now efficiently transported between day- and nightside in the equatorial region. Finally, the globally-averaged
Figure 7: Global average profiles of the haze mass mixing ratio.
haze mass mixing ratio is noticeably larger than in the soot-like case with the same haze production rate.
To examine possible causes for the differences in atmospheric circulation between soot-like and Titan-type hazes, we calculated the instantaneous heating rates at 4,500 days of simulation time. Figure 12 shows the dayside average of the radiative heating rates for the two simulations with the same haze production rate (\(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\)), as well as for the passive simulation. The stellar heating profiles (top panel) differ dramatically between soot and Titan-type hazes. In the soot simulation, the stellar heating is highly concentrated near the peak of the haze production region, drops off rapidly just below the haze production region, and then remains relatively constant between 10 \(\mu\)bar and 1 mbar. In contrast, in the Titan-type simulation, the heating rate peaks at a much lower value in the haze production region and declines more gradually with increasing pressure.
For atmospheric dynamics, the most relevant quantity is the net radiative heating rate (bottom panel). Both simulations with haze feedback exhibit overall higher net heating rates at pressures below 100 mbar than the passive simulation, with a large peak in the haze production region. However, in the simulation with Titan-type haze, the peak value is only 2/3 of the peak value in the soot simulation. At the same time, there is more heating in the pressure region between 0.1 mbar and 30 mbar in the Titan-type simulation. Thus, radiative heating is spread out over a larger pressure range in the Titan-type simulation, while it is concentrated at low pressures in the soot case. This is expected, as the extinction cross-section of Titan-type hazes has a much larger wavelength dependence, meaning that in some wavelength regions, the radiation can penetrate much deeper into the atmosphere than at other wavelengths. We suggest that the additional energy deposition between 0.1 mbar and 30 mbar drives the stronger and vertically more extended equatorial jet in the Titan-type case. In contrast, in the soot-like case, the additional energy deposited directly in the haze production region likely cannot drive the equatorial jet because at pressures this low, the radiative timescale is much shorter than the wave propagation timescale and thus the dynamic mechanism for driving the equatorial jet is inhibited (Perez-Becker & Showman, 2013; Komacek & Showman, 2016).
## 5 Predicted Observations
### Transmission spectra
First, we compare the wavelength-dependent models without and with haze radiative feedback to the double-gray model with the best-fit haze production rate while keeping the haze production rate constant (Fig. 14, panel (a)). At the same haze production rate, the wavelength-dependent models (with and without haze radiative feedback) show stronger near-infrared features as well as a steeper short-wavelength slope. The short-wavelength slope is almost parallel to the observed slope. However, there is a large offset, with \(R_{p}/R_{s}\) being about 0.002 lower in the models compared to the observations. The steeper slope is consistent with the haze mass mixing ratio declining faster with increasing pressure in the simulations with wavelength-dependent radiative transfer. This decline results in a lower haze mass mixing ratio below the haze production region (i.e. at pressures higher than \(10^{-5}\) bar), which explains the larger near-infrared features.
Figure 8: Comparison of the zonal-mean zonal velocity in simulations with haze radiative feedback. The contours outline the regions in which the zonal-mean zonal velocity is larger than 50% and 75% of its peak value within the simulation.
Compared to the simulation with wavelength-dependent radiative transfer and passive hazes, the simulation with haze radiative feedback has an even steeper short-wavelength slope. This can largely be attributed to a stronger mixing ratio gradient in the pressure region probed in the simulation with haze feedback (panel (b)). The higher temperature in the haze feedback simulation also may contribute, however, due to the low haze mass mixing ratio in both simulations, the transit spectrum is probing relatively deep in the atmosphere in the near-infrared (ca. 1-100 mbar). The temperature difference between both simulations is much smaller at these pressures compared to higher altitudes.
Given the stronger near-infrared features when keeping the haze production rate constant, it is necessary to look at simulations with increased haze production
Figure 9: Horizontal (arrows) and vertical (colorscale) velocities on isobars in simulations with haze radiative feedback using soot refractive indices with two different haze production rates (left column: \(2.5\cdot 10^{-12}\) kg m\({}^{-2}\) s\({}^{-1}\), right column: \(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\)). Positive vertical velocities correspond to upwelling. The substellar point is located at the center of each panel.
rates to assess whether radiative feedback can improve the match to observations. Panel (c) of Fig. 14 therefore shows transmission spectra from simulations with radiative feedback of soot-like hazes for different haze production rates. The best match to the WFC3 data is produced by the simulation with a haze production rate of \(5\cdot 10^{-12}\) kg m\({}^{-2}\) s\({}^{-1}\), twice as large as the haze production rate of the best-fit double-gray model. As the haze production rate increases, the short-wavelength slope in general becomes shallower. The reason is that with increased haze opacity, lower pressures with a weaker mass mixing ratio gradient are probed (panel (d)). In addition, the shape of the vertical mixing ratio profile at the terminator changes with increasing haze production rate, especially between 1 and 100 mbar, such that the mixing ratio gradient is less constant with pressure.
Figure 10: Haze mass mixing ratio \(\chi\) (on a logarithmic scale) on isobars in simulations with haze radiative feedback using soot refractive indices with two different haze production rates (left column: \(2.5\cdot 10^{-12}\) kg m\({}^{-2}\) s\({}^{-1}\), right column: \(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\)). The colorscale range has been chosen to be identical to the one in Fig. 6 in the left column, while it has been offset by a factor of 10 in the right column to facilitate the comparison between different haze production rates.
None of the models with soot hazes thus match the observed transmission spectrum at short wavelengths.
Titan-type hazes absorb much less in the infrared; therefore, higher haze production rates are needed to match near-infrared spectra. For both haze production rates simulated, the short-wavelength slope is steeper than in all soot-like models and is roughly parallel to the observed slope. The reason for the steeper slope is the extinction coefficient dropping off by about two orders of magnitude between the UV and the near-infrared. This is well known and has also been noted in previous work (e.g., Ohno and Kawashima, 2020; Steinrueck et al., 2021; Lavvas and Arfaux, 2021). Overall, the Titan-type spectra match the transit observations better. However, a smaller but still substantial offset between the short-wavelength observations and the models remains.
Overall, the Titan-type hazes can reproduce two of the main features of the measured transmission spectrum fairly well: the optical slope and the strengths of the near-infrared H\({}_{2}\)O absorption feature. The main disagreement is the absolute transit depth in the optical, which differs by a few hundred parts per million. However, such a discrepancy could potentially arise from stellar activity; HD 189733 is known to be active (Boisse et al., 2009). Another possibility is that the absolute level of the measured spectrum is biased by visit-long time-dependent systematic correction (Stevenson et al., 2014). As Arfaux and Lavvas (2022) note, an offset between the optical and near-infrared data due to either of these effects could also bring the _HST_ optical spectrum into agreement with the transit depth observed by _SOFIA_(Angerhausen et al., 2015). Given these possible systematic uncertainties in the transit depth measured with different instruments, the GCM results with Titan-type hazes agree fairly well with the overall morphology of the spectrum.
Figure 11: Dayside average profiles of the haze mass mixing ratio, calculated using an average weighted by the cosine of the angle of incidence.
Figure 12: Dayside-average profiles of the radiative heating rates in the passive simulation as well as two simulations with identical haze production rates, calculated using an average weighted by the cosine of the angle of incidence. The top panel shows the stellar heating and thermal cooling rates on a logarithmic scale, while the bottom panel shows the net difference between stellar heating and thermal cooling on a linear scale. The heating rates shown are instantaneous from simulation snapshots at 4,500 days, i.e., unlike other quantities shown in this work, they are not time-averaged. The top three layers of the model have been omitted due to boundary effects.
Figure 13: Left column: Horizontal (arrows) and vertical (colorscale) velocities on isobars in simulations with haze radiative feedback using Titan-type refractive indices. Positive vertical velocities correspond to upwelling. The substellar point is located at the center of each panel. Right column: Haze mass mixing ratio \(\chi\) (on a logarithmic scale) on isobars in the same Titan-type simulations (haze production rates of \(2.5\cdot 10^{-11}\) and \(1\cdot 10^{-10}\) kg m\({}^{-2}\) s\({}^{-1}\)). Compared to Fig. 6, the colorscale has been offset by a factor of 10 in the right column to facilitate the comparison between different haze production rates.
Figure 14: Left column: Model-predicted transmission spectra for different simulations. Black crosses represent observational data (Pont et al., 2013; McCullough et al., 2014) using the analysis of Sing et al. (2016). The gray vertical lines indicate wavelengths of 350 and 700 nm, corresponding to the pressure regions highlighted in the right column. Right column: Mass mixing ratio averaged across the terminator region for the simulations shown in the left column. For each simulation, the pressure region probed by continuum extinction between 350 and 700 nm (i.e. not including pressures probed by the core of the sodium line) is highlighted as a thick line.
### Geometric albedo
Measurements of the geometric albedo of HD 189733b (Evans et al., 2013; Krenn et al., 2023) could also provide constraints on the haze production rate and optical properties. In Fig. 15, we compare the geometric albedo predicted from our simulations to observations. At short wavelengths (\(<0.5\,\mu\)m), both soots and Titan-type hazes efficiently absorb incoming starlight, leading to a stark decrease in the albedo compared to the clear-atmosphere model. Soots are more absorbing than Titan-type hazes, resulting in a geometric albedo that is lower by a factor of about two for the same haze production rate (\(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\)) for these short wavelengths. For wavelengths \(>0.5\,\mu\)m, sodium absorption dominates. Thus, the geometric albedo is very low for all simulations, including the clear and all hazy cases.
Overall, observational constraints from albedo spectra prefer a clear atmosphere or low haze production rates. Both the clear-atmosphere model and the soot model with the lowest haze production rate match the observations with the HST STIS G430L grating (Evans et al., 2013) reasonably well, while all simulations with higher haze production rates (soots and Titan-type hazes) result in an albedo that is too low at short wavelengths and in a less pronounced drop from short to long wavelengths. All simulations underpredict the albedo in the CHEOPS bandpass, though again, the clear-atmosphere and low-haze-production models are closer to the observations.
### Emission spectra
Haze radiative feedback leads to changes in the emission spectrum, in most cases reducing the amplitude of spectral features in the near-infrared water bands and increasing the flux at long wavelengths (\(>4\) \(\mu\)m) (Fig. 16). These changes are mostly driven by the changes in the temperature structure of the atmosphere due to haze radiative feedback rather than by the addition of haze opacity when calculating the emission spectrum (bottom panel). Between 1 and 2 \(\mu\)m, the emission is probing relatively deep layers of the atmosphere (up to 1 bar outside the water bands, \(\approx\)50-100 mbar inside the water bands), below the thermal inversions. Water is therefore seen in absorption. In this pressure region, simulations with haze radiative feedback exhibit a much smaller temperature gradient than the clear atmosphere simulation, leading to a reduced amplitude of the water feature. Especially in the soot-like simulation with the highest haze production rate (\(2.5\cdot 10^{-11}\) kg m\({}^{-2}\) s\({}^{-1}\)), it appears that the emission is probing the region in which the temperature profile transitions from decreasing with altitude to a thermal inversion. In this transition region, the temperature profile is close to isothermal, leading to a particularly low feature amplitude. Between 2 and 3 \(\mu\)m, the soot-like simulations with intermediate haze production rates and both Titan-type simulations show a spectrum close to that of a blackbody. The soot-like simulation with the highest haze production rate exhibits an emission feature, while the same feature is seen in absorption in the simulation with the lowest haze production rate. At wavelengths beyond 4 \(\mu\)m, all models with haze feedback emit more radiation than the haze-free simulation.
A comparison to existing observations (Fig. 17) remains somewhat inconclusive. The _HST WFC3_ data from Crouzet et al. (2014) cannot distinguish between models with or without a thermal inversion. All models are consistent with this observation. None of the models match the longer-wavelength observations well, neither the haze-free one nor the ones with haze feedback. In particular, the IRAC 3.6, 5.8 and 8 \(\mu\)m points (Knutson et al., 2007; Charbonneau et al., 2008) deviate from the models by much more than their one-sigma error bars. All Spitzer observations show more emitted flux at secondary eclipse than our model spectra. Including haze radiative feedback increases emission at these wavelengths and thus moves the models somewhat closer to the observed flux. However, at the same time, the water absorption feature between 6 and 8 micron observed
Figure 15: Geometric albedo predicted from our models, together with observations of HD 189733b. Data points from _HST STIS_ with the G430L grating (Evans et al., 2013) are shown in light gray. The measurement in the CHEOPS bandpass (Krenn et al., 2023) is shown in dark gray, along with the model spectra integrated across the CHEOPS bandpass as filled circles. Note that the CHEOPS bandpass extends to 1.1 \(\mu\)m, beyond the wavelength range shown.
Figure 16: Model-predicted emission spectra. For comparison, blackbody emission spectra at several temperatures are shown as thin gray lines. In the bottom panel, dashed lines show spectra in which the haze opacity was neglected during post-processing, thus isolating the effect of the changed thermal structure.
Figure 17: Comparison of selected model-predicted emission spectra to a range of observations. Upper left: HST WFC3 G141 grism observations (Crouzet et al., 2014). The error bars include the combined uncertainties of the differential spectrum and the white light curve. Upper right: Spitzer IRS observations (Grillmair et al., 2008). Bottom: Spitzer IRAC (Charbonneau et al., 2008; Knutson et al., 2007), IRS photometry (Charbonneau et al., 2008) and MIPS (Charbonneau et al., 2008; Knutson et al., 2007) observations.
with _Spitzer IRS_ (Grillmair et al., 2008) disappears in most models with haze radiative feedback, with the exception of the model with the lowest haze production rate. With the current quality of observations, the latter is indistinguishable from the haze-free model in emission despite exhibiting two substantial thermal inversions.
We note that the models presented in this study are pure forward models and we did not actively attempt to improve the match to secondary eclipse observations. To improve the match, exploring additional parameters such as metallicity and atmospheric drag would be necessary.
### Phase Curves
As demonstrated in Fig. 18, haze radiative feedback can substantially affect thermal phase curves over a broad range of wavelengths. At most infrared wavelengths, including all wavelengths larger than 4.2 \(\mu\)m, the dayside flux increases drastically, while the nightside flux remains relatively unchanged. This leads to an overall increase in the phase curve amplitude. At the same time, the eastward offset of the phase curve decreases due to the changes in the temperature structure. In the near-infrared, however, there are also multiple wavelength regions in which the planetary flux decreases in the simulations with haze feedback at all phases (0.79-1.32 \(\mu\)m, 1.51-1.74 \(\mu\)m, 2.03-2.28 \(\mu\)m, 3.7-4.1 \(\mu\)m). These wavelength regions are atmospheric windows, in which the emission emerges from particularly deep regions. Thus, the emission is probing the regions in which hazes have a cooling effect. Notably, in the Spitzer 3.6 \(\mu\)m band, haze feedback has little effect on the phase curve. The reason is that for some wavelengths within the bandpass (3.1-3.5 \(\mu\)m), haze feedback leads to more emission, while for other wavelengths (3.7-4.0 \(\mu\)m), haze feedback decreases the amount of emitted flux. Near the center of the bandpass, there is little difference between the phase curves. As a result, the band-averaged phase curves look similar for all simulations.
To motivate future phase curve observations with JWST, we calculated JWST errorbars using the open-source software PandExo (Batalha et al., 2017) for the generated phase curves of HD 189733 b for the wavelengths shown in Fig. 18 (1.28, 1.42, 2.79 and 6.66 \(\mu\)m). Due to the relatively high brightness of the host star (J = 6.07 mag), the NIRSpec Grisms, NIRSpec PRISM and NIRISS/SOSS oversaturate. Only the NIRCam Grisms and MIRI/LRS stay below the saturation limit and can be used to perform observations at the redder wavelengths. The only available spectroscopic modes to observe the planet at 2.79 and 6.66 micron are the NIRCam F322W2 and MIRI/LRS instruments, respectively. The resulting errorbars for these modes are included in Figure 18.
For the two shorter wavelengths, we simulated JWST observations for a planet similar to HD 189733 b with a fainter host star to prevent oversaturation of the detectors. We searched the NASA Exoplanet Archive\({}^{1}\) for a planet with a similar orbital period, equilibrium temperature, mass and radius to HD 189733 b orbiting a fainter host star. The hot Jupiter WASP-85A b (J = 9.28 mag) compares well with HD 189733 b, and we summarize fundamental parameters of the two systems in Table 3. Figure 18 shows the simulated JWST errorbars of WASP-85A b at all four wavelengths of interest. The highest precision for the 1.28 and 1.42 micron wavelengths is reached with NIRSpec G140H/F100LP. NIRSpec G235H/F170LP provides the highest precision for the 2.79 micron bin. The 6.66 micron phase curve can only be observed with MIRI/LRS.
Footnote 1: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/)
Figure 18: Model-predicted phase curves at different wavelengths and in multiple Spitzer bandpasses. The observational data in the Spitzer bandpasses (light blue circles) are the same as in Fig. 12 in Knutson et al. (2012) and are taken from Knutson et al. (2012) (3.6 and 4.5 \(\mu\)m), Knutson et al. (2007, 2009); Agol et al. (2010) (8 \(\mu\)m) and Knutson et al. (2009) (24 \(\mu\)m). In addition, estimated JWST errorbars are shown for both HD 189733b and WASP-85Ab, a planet with comparable properties orbiting a fainter star (see text). The errorbars were calculated for bin widths of 720 s, chosen to be identical to the bins in the Spitzer 3.6 \(\mu\)m and 4.5 \(\mu\)m observations panels.
## 6 Discussion

The secondary eclipse observations, in particular the IRS data, support a non-inverted temperature profile in the pressure regions probed by these observations, most consistent with a low haze production rate or a clear atmosphere. The geometric albedo spectrum similarly precludes large haze production rates with soot-like or Titan-type hazes, as both are too absorbing at short wavelengths. A clear atmosphere or a low haze production rate still results in a somewhat lower geometric albedo than observed but provides a much better match than large haze production rates.
However, the short-wavelength slope requires aerosols (even when contamination from star spots is considered, e.g., Zhang et al., 2020; Arfaux and Lavvas, 2022). Microphysics models of condensate clouds tend to form large particle sizes and thus struggle to reproduce the short-wavelength slope (Powell et al., 2018; Lines et al., 2018). Photochemical hazes thus remain the most likely candidate for explaining the short-wavelength slope. In our simulations, Titan-type hazes produce a better match to the slope than soots, but require an offset between the optical and NIR data. In addition, the _WFC3_ transmission spectrum requires a source of near-infrared opacity to mute the water feature, either hazes or condensate clouds. For a solar metallicity, the haze production rates necessary to explain the low amplitude of the water feature are in conflict with the constraints from the geometric albedo. While we did not simulate super-solar metallicities, we expect that a higher metallicity would exacerbate this problem, because even more hazes would be needed to reduce the amplitude of the larger water feature.
Finally, observations using high-resolution cross-correlation have detected carbon monoxide (de Kok et al., 2013; Rodler et al., 2013) and water (Birkby et al., 2013) in absorption in the dayside spectrum of HD 189733b, implying a temperature profile decreasing with height. This observation has seemingly caused tension with photochemical hazes producing a thermal inversion at high altitudes. The pressure probed in these observations cannot be directly constrained because the information of the continuum emission is lost in the process of removing telluric and stellar lines. However, using forward models, de Kok et al. (2013) estimate that the pressure probed by the CO lines is between \(10^{-5}\) and \(10^{-3}\) bar for a CO volume mixing ratio of \(10^{-4}\). Our soot-like models naturally exhibit a decreasing temperature profile with height in this pressure region due to the low haze mass mixing ratio below the haze production region caused by upwelling on the day side. The high-resolution observations thus do not necessarily rule out a temperature inversion by soot-like hazes.
The remaining tension between different types of observations thus is that low-resolution secondary eclipse measurements (reflected light and thermal emission) support a low haze production rate, while the observed transmission spectrum requires models with substantial haze opacity in the near-infrared water bands and thus higher haze production rates. A potential way to reconcile this tension could be models combining photochemical hazes to explain the short-wavelength slope with larger condensate clouds to explain the muted amplitude of the near-infrared water feature (as already pointed out in Pont et al., 2013). The potential impact of such condensate clouds on the simulation results is discussed further below. Another possibility is that the photochemical hazes in HD 189733b's atmosphere could have a lower absorption cross-section in the UV than the soot and Titan-type hazes we considered. Such hazes, possibly in combination with reflecting condensate clouds deeper in the atmosphere, could increase the geometric albedo and thus bring models into better agreement with the optical secondary eclipse measurements. Recently, new refractive indices derived from laboratory experiments simulating haze formation in conditions relevant to super-Earths showed substantially less UV absorption than Titan-type hazes (He et al., 2023; Corrales et al., 2023). However, less absorption in the UV may come at the cost of a worse match to the short-wavelength slope seen in transmission.
### Importance of wavelength-dependent radiative transfer
The comparison of double-gray and correlated-k radiative transfer in Section 3 highlights the importance of the choice of radiative transfer. For the purpose of studying photochemical hazes, it appears necessary to use the more computationally expensive correlated-k radiative transfer. This represents a major challenge for future larger parameter studies. It may be worth evaluating how well radiative transfer schemes with a complexity level between double-gray and correlated-k, for example the picket-fence scheme in Lee et al. (2021), can reproduce the haze distribution from the correlated-k approach.
### Choice of haze optical properties
Our results further demonstrate that the assumed optical properties of hazes strongly influence atmospheric dynamics. The strength and shape of the equatorial jet strongly differs between the two different assumed haze refractive indices. The resulting 3D distribution of the haze mass mixing ratio also looks dramatically different. Hazes with a soot-like refractive index are more concentrated at the nightside and morning terminator than at
the evening terminator, while hazes with a refractive index similar to Titan-type hazes are more concentrated at the evening terminator.
Currently, there are few experimental and theoretical constraints on the optical properties of photochemical hazes in hot Jupiter atmospheres. Measured refractive indices are either derived from soots produced in hydrocarbon flames or from experiments simulating haze formation on Titan, conducted in a nitrogen-dominated atmosphere either at room temperature or at Titan-like temperatures (\(\approx\) 100 K). In recent years, several research groups have produced haze analogs intended to simulate conditions on exoplanets (Horst et al., 2018; Fleury et al., 2019; Gavilan et al., 2018). Out of these experiments, refractive indices have only been published for temperate water-dominated (He et al., 2023) and nitrogen-dominated (Corrales et al., 2023) atmospheres. Both sets of refractive indices show substantial deviations from the Khare et al. (1984) tholins. The differences include the absolute value of the complex refractive index, additional spectral features due to incorporated oxygen, and the location of the "spectral window" with a low imaginary refractive index in the optical-to-near-IR.
Refractive indices for hydrogen-dominated, high-temperature haze analogs have not been published so far. However, the color of haze analogs produced in the experiments by Horst et al. (2018) strongly depends on the temperature and gas composition of the atmosphere (He et al., 2018). In addition, these haze analogs have incorporated more oxygen than Titan haze analogs or soots (Moran et al., 2020). Note that their initial experiments only cover temperatures up to 600 K, with a more recent update including 800 K (He et al., 2020), not hot enough to match the temperatures in the haze production region of HD 189733b. Fleury et al. (2019) report the formation of solid photochemical products in an experiment at 1,473 K in a hydrogen-dominated gas mixture with a C/O ratio of 1. Their haze analogs show infrared spectral signatures of carbonyl and aldehyde groups, indicating a solid composition based on carbon, oxygen and hydrogen compounds. However, while the detection of carbonyl and aldehyde spectral signatures is evidence of the incorporation of oxygen into the high-temperature haze analogs, no statement about the relative oxygen content can be made from these measurements alone.
The optical properties of photochemical hazes in the atmospheres of hot Jupiters and hot Neptunes thus could substantially deviate from both the soot-like and the Titan-type refractive indices used in our simulations. It is also possible that these optical properties vary substantially between individual planets with different temperatures, around different stellar types or with varying atmospheric composition. Given the dramatic effect of the haze refractive index on atmospheric dynamics and the 3D distribution of hazes, we stress the need for measurements of the refractive indices of laboratory haze analogs specific to hot hydrogen-dominated atmospheres.
### Changes to chemistry and haze production rate due to haze feedback
We note that all the presented simulations have a fixed haze production rate. In a real atmosphere, however, the changed temperature structure and circulation will affect chemical processes in the atmosphere and thus the haze production rate. This effect is non-local and therefore difficult to model: On one hand, the hotter temperatures at low pressures directly affect the photochemical reactions producing haze precursor molecules. On the other hand, the cooler temperatures in the region where methane is quenched between 100 mbar and 10 bar change the amount of methane that is mixed upwards to the photochemically active regions. This will affect the abundance of all methane-derived photochemical products and thus many haze precursor species. Arfaux & Lavvas (2022) found in 1D simulations that the changed temperature structure due to haze radiative feedback can alter haze production rates by a factor of a few. If the hazes are as refractory as soot, evaporation or thermal decomposition of the hazes at low pressures due to the haze-induced thermal inversion is not anticipated, as the temperature would have to exceed 1,800 K (see e.g., Fig. 2 in Lavvas & Koskinen, 2017).
### Impact of condensate clouds
Finally, it is likely that in the atmospheres of many hot Jupiters, including HD 189733b, both photochemical hazes and condensate clouds are present. Condensate clouds are likely to further alter temperature structure and atmospheric circulation (e.g., Lee et al., 2016; Lines et al., 2018; Roman & Rauscher, 2019). They can also substantially influence infrared phase curves. Notably, partial cloud coverage is known to decrease the phase curve offset at infrared wavelengths by blocking emission from deeper layers of the atmosphere near the evening terminator (Parmentier et al., 2021; Roman et al., 2021). Photochemical hazes do not produce this effect due to their much smaller particle sizes, which make them more transparent in the infrared. Including condensate clouds therefore could further improve the agreement with phase curve observations. We also note that in our simulations, photochemical hazes did not
substantially lower the night side fluxes. Condensate clouds are therefore still the favored explanation for the observed uniformly low nightside temperatures of hot Jupiters (Beatty et al., 2019; Keating et al., 2019). Many real atmospheres likely include both types of aerosols and future models including both are encouraged.
It is also likely that condensate clouds interact with photochemical hazes, for example by condensing onto photochemical haze particles. While we consider the removal of haze particles from the distribution of "purely photochemical" hazes in a highly idealized fashion (through the sink term), detailed microphysical and laboratory studies of the interactions between photochemical hazes and condensate clouds, similar to Yu et al. (2021), in conditions expected in hot Jupiter atmospheres are desirable.
## 7 Conclusion
In this work, we examined the effect of radiative feedback of photochemical hazes on temperature structure, atmospheric circulation and haze distribution in a 3D general circulation model (GCM) of hot Jupiter HD 189733b using a state-of-the-art GCM with wavelength-dependent (correlated-k) radiative transfer. First, we performed a detailed comparison of temperature structure, circulation and the distribution of radiatively passive hazes between double-gray and correlated-k radiative transfer. Compared to the double-gray model, the correlated-k simulation has lower temperatures and a stronger day-night temperature contrast at low pressures. There are further changes to the structure of the equatorial jet, which is narrower and broadens less with height in the correlated-k simulation, the location of the mid-latitude nightside vortices and the regions of strongest downwelling. These changes lead to the haze mass mixing ratio peaking along a ring surrounding the center of the nightside vortices in the correlated-k simulation rather than in the center of the vortices. The mass mixing ratio also drops off faster with increasing pressure in the correlated-k simulation.
Then, we performed simulations with correlated-k radiative transfer that included heating and cooling by photochemical hazes. Hazes in our model are an active tracer that dynamically interacts with atmospheric circulation. The majority of simulations assumed a soot-like complex refractive index, but we also explored a refractive index resembling Titan-type hazes. In both cases, a strong temperature inversion forms at low pressures on the dayside. Therefore, the day-to-night temperature contrast increases dramatically for p\(<\)10 mbar (400-700 K instead of 200 K in the passive-haze simulation). The detailed structure of the dayside temperature profile differs between soot-like and Titan-type hazes: For soot-like hazes, there are two separate temperature maxima (near 1 mbar and 1 mbar, respectively), separated by a temperature minimum at 10 mbar (just below the haze production region). The temperature minimum is caused by upwelling on the dayside mixing haze-poor air upwards, an effect not captured in 1D simulations. For Titan-type hazes, the dayside-average temperature profile is more uniform with a large isothermal region at low pressures.
The response of atmospheric circulation to heating and cooling from photochemical hazes strongly depends on the choice of the complex refractive index of the haze particles. For soot-like hazes, which are highly absorptive and exhibit a weak wavelength-dependence of the absorption cross section, the equatorial jet slows down and broadens at low pressures. Vertical velocities increase, especially upwelling near the terminator and on the dayside. The higher the haze production rate, the stronger are these changes. There are only moderate changes to the haze distribution compared to radiatively passive hazes.
For hazes with a refractive index similar to Titan-type hazes, the equatorial jet accelerates substantially, especially at low pressures. This results in eastward velocities throughout the entire dayside in the haze production region (in contrast to polewards and eastward velocities in the radiatively passive and soot-like simulations). Hazes thus are effectively homogenized across most of the globe with the exception of the nightside mid-latitude vortices, which remain depleted of hazes. The distribution of hazes thus strongly contrasts from that of radiatively passive or soot-like hazes, with overall more hazes at the evening terminator than at the morning terminator.
We suggest that the difference between soot-like and Titan-type-haze simulations is caused by the different vertical heating profiles. For Titan-type hazes, the stellar heating is more spread out vertically, leading to a stronger net heating at higher pressures, that then drives a stronger equatorial jet. Due to this unexpectedly strong dependence of the 3D haze distribution and atmospheric circulation on the assumed haze optical properties, we emphasize the need for better constraints on the refractive indices of photochemical hazes under conditions relevant to hot Jupiters, for example by measurements of the refractive indices of laboratory haze analogs formed at high temperatures in hydrogen-dominated atmospheres.
Including haze radiative feedback does not improve the match to transmission spectra. Alternative explanations for the steep short-wavelength slope, such as star
spots, an additional offset between the STIS and WFC3 data (as discussed in Arfaux and Lavvas, 2022), different haze optical properties, or strong sub-grid-scale mixing (Steinruck et al., 2021) are required to explain observations.
In emission, haze radiative feedback leads to a decreased amplitude of the near-infrared water features and increased emission at wavelengths past \(\approx\) 4 \(\mu\)m. In most wavelength regions, the phase curve amplitude increases substantially, while the eastward phase curve offset is reduced. Notably, the soot-like simulation with the lowest haze production rate is almost indistinguishable from the haze-free model in emission despite exhibiting a moderate double-peaked thermal inversion.
We point out that current models, whether 1D or 3D, and whether they assume a clear atmosphere, photochemical hazes or condensate clouds, still do not fully explain the set of existing observations of this benchmark hot Jupiter. While Titan-type hazes provide a better (though imperfect) match to transit observations, the detection of CO in absorption in high-resolution observations favors soot-like hazes because the double-peaked temperature structure in soot-like simulations is compatible with a declining temperature profile with height in the pressure range likely probed in these observations. In addition, geometric albedo constraints from optical secondary eclipse measurements prefer low haze production rates that would require an additional opacity source such as condensate clouds to explain the low amplitude of the near-infrared water band in transmission.
M.S. was supported by NASA Headquarters under the NASA Earth and Space Science Fellowship Program - Grant 80NSSC18K1248 while working as a graduate student at the University of Arizona. X.Z. acknowledges support from NASA Interdisciplinary Consortia for Astrobiology Research (ICAR) grant 80NSSC21K0597 and the NASA Exoplanet Research Grant 80NSSC22K0236. We thank Thaddeus Komacek for conversations on the bottom boundary condition and Peter Gao for sharing the model grid used in Thorngren et al. (2019). M.S. further thanks Elspeth Lee for discussions on the comparison between double-gray and correlated-k radiative transfer, Maria Zamyatina and Kazumasa Ohno for discussions on the haze optical properties, and Gilda Ballester for helpful comments on an early draft. We also thank the referee for helpful suggestions that improved the manuscript substantially. This research made use of NASA's Astrophysics Data System. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
Numpy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), Cartopy (Met Office, 2010 - 2015), Bibmanager (Cubillos, 2020), Pandexo (Batalha et al., 2017), Astropy (Astropy Collaboration et al., 2022)
An Infinite Product of the Incomplete Beta Function-type Hypergeometric Function and its Probabilistic Origins
###### Abstract.
Recently it has been shown that the \(\alpha\)-Sun density \(h(x)\) [_J. Math. Anal. Appl._, **527** (2023), p. 127371] which interpolates between the Frechet density and that of the positive, stable distributions whose density is given by a Fox \(H\)-function, has a Mellin transform involving an infinite product of ratios of Incomplete Beta functions. We develop systematic, but asymptotic, approximations for such products and consequently for the behaviour of the density as \(x\to 0+\) which complement the recent exact form for this by Simon [_Electron. Commun. Probab._, **28** (2023) p. 1 - 13]. The systematic expansion is an example of a Power Product Expansion, and in our case we derive bounds and estimates which show that it is not convergent and thus only yields an asymptotic expansion.
2010 Mathematics Subject Classification: 60E07 ; 60G70 ; 30A80 ; 33.20 ; 33A35
## 1. The \(\alpha\)-Sun density
In [28] the analytic properties of a density function \(h(x;\alpha,\gamma)\), \(x\in(0,\infty)\), \(\gamma>0\), \(0<\alpha<1\) which arises from the domain of attraction problem for a statistic interpolating between the supremum and sum of random variables were investigated. The parameter \(\alpha\) controls the interpolation between these two cases, while \(\gamma\) parametrises the type of extreme value distribution from which the underlying random variables are drawn. For \(\alpha=0\), \(\gamma>0\) the density reduces to the Frechet density, whereas for \(\alpha=1\), \(0<\gamma<1\) it is a particular Fox \(H\)-function appropriate to positive, stable distributions with index \(\gamma\). However for intermediate \(\alpha\) an entirely new function appears, which is not one of the extensions to the hypergeometric function considered to date.
In their study of this model - the \(\alpha\)-Sun process - Greenwood and Hooghiemstra [16, Theorem 2, Eq. (2.4)] have shown this density \(h(x;\alpha,\gamma)\) satisfies a linear, homogeneous integral equation
\[h(x)=\frac{\gamma}{x}\int_{0}^{x}du\;\frac{h(u)}{(x-\alpha u)^{\gamma}}, \tag{1.1}\]
subject to the conditions: \(\alpha\in(0,1)\), \(\gamma\in(0,\infty)\) identifies the domain of attraction as above, and \(h(x)\) is a real, normalised probability density on \(x\in(0,\infty)\). It was observed that the \(\alpha\)-Sun process, for all \(\alpha\) in \((0,1)\), behaves in a way that is much more like the supremum of the input variables than like the sum. Furthermore there is non-trivial limiting behaviour as \(\alpha\to 1^{-}\).
The central result of [28] gives a Mellin-Barnes integral representation for \(h(x)\), Eq.(3.41) in Prop. 3.9, which is a standard way of defining many special functions, such as the Meijer-\(G\) and Fox-\(H\) functions. Under the conditions \(0\leq\alpha<1\), \(\gamma>0\), \(0<x<\infty\) the density \(h(x;\alpha,\gamma)\) has the representation for any \(c<1\)
\[h(x)=\frac{\gamma}{2\pi ix}\int_{c-i\infty}^{c+i\infty}dt\;x^{-\gamma t}\Gamma (1-t)(1-\alpha)^{-\gamma t}\prod_{j=1}^{\infty}\frac{{}_{2}F_{1}(\gamma,(j-t) \gamma;1+(j-t)\gamma;\alpha)}{{}_{2}F_{1}(\gamma,j\gamma;1+j\gamma;\alpha)}, \tag{1.2}\]
where \({}_{2}F_{1}(a,b;c;z)\) is the Gauss hypergeometric function [23, §15]. In the general case the Mellin transform of the density \(h(x)\) is defined by
\[H(s):=\int_{0}^{\infty}dx\;x^{s-1}h(x), \tag{1.3}\]
for the vertical strip in the \(s\)-plane, \(1-\gamma<\operatorname{Re}(s)<1+\gamma\). Prop. 3.1 of [28] states that for \(\Re(s)<1+\gamma\) the retarded Mellin transform \(H(s;\alpha,\gamma)\) satisfies the linear, homogeneous functional equation
\[H(s)=\frac{\gamma}{1+\gamma-s}{}_{2}F_{1}(\gamma,1+\gamma-s;2+\gamma-s; \alpha)H(s-\gamma). \tag{1.4}\]
Firstly with regard to the analytical character of \({}_{2}F_{1}(\gamma,1+\gamma-s;2+\gamma-s;\alpha)\) with respect to \(s\) we can make the following immediate observations: For \(0<|\alpha|<1\) the \({}_{2}F_{1}\) is convergent for all \(\Re(s)<1+\gamma\) whereas it has simple poles on the right at \(s=1+\gamma+k\) for \(k\in\mathbb{Z}_{\geq 0}\). For \(|\alpha|=1\) we require in addition that \(\Re(\gamma)<1\) and the poles are the same as for \(|\alpha|<1\). This \({}_{2}F_{1}\) function is one of a pair of solutions to the hypergeometric differential equation about \(\alpha=0\) that is bounded as \(s\to-\infty\) (by \((1-\alpha)^{-\gamma}\)). This distinguishes itself from the other member of the pair which has the simple algebraic form \(\alpha^{s-\gamma-1}\), and which diverges as \(s\to-\infty\). This hypergeometric function can be linearly mapped into other forms via the Kummer or linear fractional transformations of the \(\alpha\)-plane and we record one version [23, 15.8.1] of these
\[{}_{2}F_{1}(\gamma,1+\gamma-s;2+\gamma-s;\alpha)=(1-\alpha)^{-\gamma}{}_{2}F_ {1}(\gamma,1;2+\gamma-s;\tfrac{\alpha}{\alpha-1}).\]
A useful alternative integral representation of the \({}_{2}F_{1}\) is via
\[{}_{2}F_{1}(\gamma,1;2+\gamma-s;\tfrac{\alpha}{\alpha-1})=1-\alpha\gamma(1- \alpha)^{\gamma}\int_{0}^{1}dt\;t^{1+\gamma-s}(1-\alpha t)^{-\gamma-1}. \tag{1.5}\]
The hypergeometric function can be identified in a number of ways. In one instance it is an incomplete Beta function, see [23, 8.17.7],
\[{}_{2}F_{1}(\gamma,1+\gamma-s;2+\gamma-s;\alpha)=(1+\gamma-s)\alpha^{s-\gamma -1}B_{\alpha}(1+\gamma-s,1-\gamma).\]
In particular a sequence of these \({}_{2}F_{1}\) will feature prominently.
**Definition 1.1**.: Let \(F_{j}:={}_{2}F_{1}(\gamma,j\gamma;1+j\gamma;\alpha)\) for \(j\in\mathbb{N}_{0}\). The series expansion in \(\alpha\) also generates a partial fraction form with respect to \(j\) which will be useful in the sequel
\[F_{j}=\sum_{l=0}^{\infty}\alpha^{l}\frac{(\gamma)_{l}}{l!}\frac{j}{j+l\gamma^{ -1}}. \tag{1.6}\]
Some basic properties of this sequence are the following: Let \(0<\alpha<1\) and \(\gamma>0\). The hypergeometric factors \(F_{j}\) satisfy \(F_{0}=1\), are monotonically increasing, \(F_{j+1}>F_{j}\), and are bounded above, \(F_{j}<(1-\alpha)^{-\gamma}\).
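For readers who wish to experiment numerically, the following short Python sketch (an illustration only; the truncation depth and the sample values of \(\alpha\) and \(\gamma\) are arbitrary choices, not taken from the analysis above) evaluates \(F_{j}\) from the truncated series (1.6) and checks the monotonicity and the upper bound just stated.

```python
# An illustrative evaluation of F_j from the truncated series (1.6); the number
# of retained terms and the sample values of alpha and gamma are arbitrary.
def F(j, alpha, gamma, terms=2000):
    """Truncated series (1.6) for F_j = 2F1(gamma, j*gamma; 1 + j*gamma; alpha)."""
    total, poch_over_fact = 0.0, 1.0            # poch_over_fact = (gamma)_l / l!
    for l in range(terms):
        total += alpha ** l * poch_over_fact * j / (j + l / gamma)
        poch_over_fact *= (gamma + l) / (l + 1)
    return total

alpha, gamma = 0.4, 0.7
vals = [F(j, alpha, gamma) for j in range(1, 9)]
bound = (1.0 - alpha) ** (-gamma)
print(vals)
print("monotone increasing:", all(a < b for a, b in zip(vals, vals[1:])))
print("bounded by (1-alpha)^(-gamma) =", bound, ":", all(v < bound for v in vals))
```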
## 2. The Product Formula
From Prop. 3.5 of [28] the consecutive product of hypergeometric functions \(F_{k}\) has the leading order behaviour as \(N\to\infty\)
\[\prod_{k=1}^{N}{}_{2}F_{1}(\gamma,k\gamma;1+k\gamma;\alpha)\underset{N\to\infty }{\sim}C(1-\alpha)^{-N\gamma}\left[N+1+\frac{1}{\gamma}\right]^{-\alpha/(1- \alpha)}, \tag{2.1}\]
where \(C\) is a constant independent of \(N\). Another useful function turns out to be
\[G(t):=\frac{1}{\Gamma\left(\frac{1+\gamma-s}{\gamma}\right)}H(s),\quad t:= \frac{1-s}{\gamma}, \tag{2.2}\]
which eliminates the simple poles at \(s=1+k\gamma\), \(k\in\mathbb{N}\), so that in the finite-\(t\) plane \(|G(t)|\leq(1-\alpha)^{-\gamma}\left[\cosh(\pi\Im(t))\right]^{1/2}\). At the prescribed values \(s=1-k\gamma\), \(k\in\mathbb{Z}_{\geq 0}\), we deduce
\[G_{k}:=G(t=k)=\frac{1}{\prod_{l=1}^{k}{}_{2}F_{1}(\gamma,l\gamma;1+l\gamma; \alpha)}, \tag{2.3}\]
which have the growth properties given above.
Furthermore the finite product \(G_{l}\), defined by (2.3), was shown in [28, Lemma 3.3] to extend to an analytic function of complex \(l\) with \(\Re(l)>1+\gamma^{-1}\), given in infinite product form,
\[G_{l}=(1-\alpha)^{\gamma l}\prod_{j=1}^{\infty}\frac{F_{j+l}}{F_{j}}, \tag{2.4}\]
and has an interpolating function
\[G(-t)=(1-\alpha)^{-\gamma t}\prod_{j=1}^{\infty}\frac{F_{j-t}}{F_{j}}. \tag{2.5}\]
Our new result is essentially a large-\(j\) expansion for \(F_{j}\), not in the usual asymptotic sense of Poincare as a sum of descending monomials, but rather in the sense of a Pade approximant with only denominator factors.
**Proposition 2.1**.: _Let \(0\leq\alpha<1\), \(\gamma>0\). For all \(m\in\mathbb{N}_{0}\) there exists a sequence \(\{f_{m}(\alpha,\gamma)\}_{m=1}^{\infty}\) such that_
\[(1-\alpha)^{\gamma}\left(1+\frac{1}{j}f_{1}\right)\left(1+\frac{1} {j^{2}}f_{2}\right)\cdots\left(1+\frac{1}{j^{m}}f_{m}\right)F_{j}\\ =1+\operatorname{O}(j^{-m-1}),\quad\text{as $j\to\infty$}. \tag{2.6}\]
Proof.: The initial values of \(m=1,2,3,\dots\) can be generated by hand or otherwise by expanding successive products with \(F_{j}\) as a series expansion in large \(j\), see (1.6). For example, to begin with one observes that the leading term \(\operatorname{O}(j^{-1})\) is
\[\log\left[(1-\alpha)^{\gamma}{}_{2}F_{1}(\gamma,j\gamma;j\gamma+1;\alpha) \right]=-\frac{\alpha}{1-\alpha}j^{-1}+\operatorname{O}(j^{-2}),\]
and that to eliminate this requires multiplication by an additional factor \((1+\frac{\alpha}{1-\alpha}j^{-1})\). Re-expanding again one observes a new leading term of order \(\operatorname{O}(j^{-2})\)
\[\log\left[(1-\alpha)^{\gamma}\left(1+\frac{\alpha}{(1-\alpha)}j^{-1}\right) \,{}_{2}F_{1}(\gamma,j\gamma;j\gamma+1;\alpha)\right]=\frac{\alpha}{\gamma(1- \alpha)^{2}}j^{-2}+\operatorname{O}(j^{-3}).\]
And so on. We will now show that this holds in general for all \(m>1\) by induction. Having assumed its veracity for \(m\), i.e.
\[(1-\alpha)^{\gamma}\left(1+\frac{1}{j}f_{1}\right)\left(1+\frac{ 1}{j^{2}}f_{2}\right)\cdots\left(1+\frac{1}{j^{m}}f_{m}\right)F_{j}\\ =1+\operatorname{O}(j^{-m-1})=1+C_{m+1}j^{-m-1}+\operatorname{O} (j^{-m-2}),\]
where the sub-leading terms indicated above contain decreasing powers of \(j\) which drop by one unit each term, and \(C_{m+1}\) is independent of \(j\), depending only on \(f_{1},\dots,f_{m}\). This follows from the fact that \(F_{j}\), by (1.6), has an analytic expansion about \(j=\infty\). Now we compute
\[(1-\alpha)^{\gamma}\left(1+\frac{1}{j}f_{1}\right)\left(1+\frac{ 1}{j^{2}}f_{2}\right)\cdots\left(1+\frac{1}{j^{m}}f_{m}\right)\left(1+\frac{1 }{j^{m+1}}f_{m+1}\right)F_{j}\\ =1+\left[f_{m+1}+C_{m+1}\right]j^{-m-1}+\operatorname{O}(j^{-m-2 }).\]
Now, as before, we choose \(f_{m+1}=-C_{m+1}\), and the leading correction is now \(\operatorname{O}(j^{-m-2})\) thus proving our claim.
**Remark 2.1**.: We will find subsequently in Prop.4.7 that this product does not converge as \(m\to\infty\) because the growth of \(f_{m}\) is super-exponential (see (4.86)) so the representation is asymptotic. There is a parallel expansion for large \(\gamma\) but in each factor both the numerator and denominator contain \(j\) and this has disadvantages for our subsequent applications.
**Remark 2.2**.: The leading and trailing coefficients of \(f_{m}\) with respect to \(\alpha\) are given by
\[(1-\alpha)^{m}\gamma^{m-1}f_{m} =(-1)^{m-1}\alpha\Big{[}\left((1+\gamma)^{m-1}-\gamma^{m-1}\right) \alpha^{m-2}+\ldots\] \[+\left(\tfrac{1}{2}(2^{m}-2m)+\tfrac{1}{4}(2^{m+1}-2m-3-(-1)^{m}) \gamma\right)\alpha+1\Big{]}. \tag{2.7}\]
The first six \(f\)-coefficients are:
\[(1-\alpha)f_{1}=\alpha, \tag{2.8}\] \[(1-\alpha)^{2}\gamma f_{2}=-\alpha, \tag{2.9}\] \[(1-\alpha)^{3}\gamma^{2}f_{3}=\alpha[\alpha(2\gamma+1)+1], \tag{2.10}\] \[(1-\alpha)^{4}\gamma^{3}f_{4}=-\alpha\left[\alpha^{2}\left(3\gamma^{2}+3\gamma+1\right)+\alpha(5\gamma+4)+1\right], \tag{2.11}\]
\[(1-\alpha)^{5}\gamma^{4}f_{5} =\] \[\alpha\left[\alpha^{3}\left(4\gamma^{3}+6\gamma^{2}+4\gamma+1 \right)+\alpha^{2}\left(16\gamma^{2}+25\gamma+11\right)+\alpha(13\gamma+11)+1 \right], \tag{2.12}\]
\[(1-\alpha)^{6}\gamma^{5}f_{6} =\] \[-\alpha\left[\alpha^{4}\left(5\gamma^{4}+10\gamma^{3}+10\gamma^{2} +5\gamma+1\right)+\alpha^{3}\left(33\gamma^{3}+83\gamma^{2}+78\gamma+26\right)\right.\] \[\left.+\alpha^{2}\left(65\gamma^{2}+129\gamma+66\right)+2\alpha(14 \gamma+13)+1\right]. \tag{2.13}\]
Prior to continuing we require a preliminary identity for an infinite sum to be recast as a finite sum.
Lemma 2.1.: _Let \(m\in\mathbb{Z}_{\geq 0}\), \(0<\alpha<1\) and \(\gamma>0\). Then the following identity holds_
\[\sum_{k=0}^{\infty}\alpha^{k}\frac{(\gamma)_{k}}{k!}k^{m}=(1-\alpha)^{-\gamma} \sum_{l=0}^{m}S(m,l)(\gamma)_{l}\left(\frac{\alpha}{1-\alpha}\right)^{l}. \tag{2.14}\]
_where \(S(m,i)=\left\{\begin{smallmatrix}m\\ i\end{smallmatrix}\right\}\) is the Stirling number of the second kind or the Stirling "partition" number in the notation of [23, SS26.8(i)], i.e. the number of partitions of \(1,2,\ldots,m\) into exactly \(i\) non-empty subsets._
Proof.: The Stirling numbers of the second kind are the matrix elements of the transformation matrix giving the monomial basis in terms of the descending factorial basis, see [23, SS26.8(i)], which is expressed as
\[k^{m}=\sum_{l=0}^{m}S(m,l)(k-l+1)_{l}=\sum_{l=0}^{m}S(m,l)\frac{k!}{(k-l)!}.\]
Employing this in the left-hand side and interchanging the summations as the \(k\)-sum is uniformly and absolutely convergent
\[\sum_{k=0}^{\infty}\alpha^{k}\frac{(\gamma)_{k}}{k!}k^{m} =\sum_{l=0}^{m}S(m,l)\sum_{k=l}^{\infty}\alpha^{k}\frac{(\gamma)_{k }}{(k-l)!},\] \[=\sum_{l=0}^{m}S(m,l)\sum_{i=0}^{\infty}\alpha^{i+l}\frac{(\gamma) _{i+l}}{i!},\] \[=\sum_{l=0}^{m}S(m,l)\alpha^{l}(\gamma)_{l}\sum_{i=0}^{\infty} \alpha^{i}\frac{(\gamma+l)_{i}}{i!},\quad\text{using }(\gamma)_{i+l}=(\gamma)_{l}(\gamma+l)_{i},\] \[=\sum_{l=0}^{m}S(m,l)\alpha^{l}(\gamma)_{l}(1-\alpha)^{-\gamma-l},\quad\text{by the binomial theorem},\] \[=(1-\alpha)^{-\gamma}\sum_{l=0}^{m}S(m,l)(\gamma)_{l}\left(\frac {\alpha}{1-\alpha}\right)^{l}.\]
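As a sanity check, the identity (2.14) is easy to verify numerically; the short sketch below (with arbitrary sample values of \(\alpha\), \(\gamma\) and a finite truncation of the \(k\)-sum, neither taken from the text) does so for small \(m\).

```python
# A numerical sanity check of the identity (2.14): the infinite k-sum on the
# left is truncated, and the sample values of alpha and gamma are arbitrary.
def stirling2(n, k):
    """Stirling numbers of the second kind via the usual recurrence."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def lhs(m, alpha, gamma, terms=500):
    total, poch_over_fact = 0.0, 1.0            # poch_over_fact = (gamma)_k / k!
    for k in range(terms):
        total += alpha ** k * poch_over_fact * k ** m
        poch_over_fact *= (gamma + k) / (k + 1)
    return total

def rhs(m, alpha, gamma):
    beta = alpha / (1.0 - alpha)
    total, poch = 0.0, 1.0                      # poch = (gamma)_l
    for l in range(m + 1):
        total += stirling2(m, l) * poch * beta ** l
        poch *= gamma + l
    return (1.0 - alpha) ** (-gamma) * total

alpha, gamma = 0.3, 0.8
for m in range(5):
    print(m, lhs(m, alpha, gamma), rhs(m, alpha, gamma))
```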
In the proof of Prop. 2.1 we only indicated how successive \(f\)-coefficients could be determined in principle but did not give an explicit formula; our next result gives an explicit recursive means to compute the coefficients \(f_{m}\).
**Proposition 2.2**.: _Let \(0\leq\alpha<1\), \(\gamma>0\). The coefficients \(f_{m}\), \(m\geq 1\) satisfy the recursive formula_
\[f_{m+1}=(-1)^{m}\gamma^{-m-1}\sum_{i=0}^{m+1}S(m+1,i)(\gamma)_{i }\left(\frac{\alpha}{1-\alpha}\right)^{i}\\ +\sum_{n=0}^{m}(-1)^{n+1}\gamma^{-n}\sum_{i=0}^{n}S(n,i)(\gamma)_{ i}\left(\frac{\alpha}{1-\alpha}\right)^{i}\times\sum_{\begin{subarray}{c}m\geq l _{s}>\cdots>l_{1}\geq 1\\ \sum_{k=1}^{l}l_{k}=m+1-n\end{subarray}}f_{l_{s}}\cdots f_{l_{1}}. \tag{2.15}\]
_Therefore \((1-\alpha)^{m+1}\gamma^{m}f_{m+1}\) is a polynomial in \(\alpha,\gamma\) of degrees \(m,m-1\) respectively. The initial coefficient \(f_{1}\) is given by the first term of the right-hand side at \(m=0\), the second term having an empty summand._
Proof.: In order to determine \(f_{m+1}\) we only need to extract the coefficient of \(j^{-m-1}\) in
\[(1-\alpha)^{\gamma}\left(1+\frac{1}{j}f_{1}\right)\cdots\left(1+\frac{1}{j^{ m}}f_{m}\right)F_{j}.\]
In addition we are going to split the sum for \(F_{j}\) (1.6) using the following
\[\frac{j}{j+k\gamma^{-1}}=\sum_{n=0}^{m+1}(-1)^{n}\left(\frac{k}{j\gamma} \right)^{n}+(-1)^{m+2}\left(\frac{k}{j\gamma}\right)^{m+2}\frac{j}{j+k\gamma^ {-1}}.\]
Thus we have
\[-f_{m+1} =[j^{-m-1}](1-\alpha)^{\gamma}\sum_{k\geq 0}\alpha^{k}\frac{( \gamma)_{k}}{k!}\sum_{n=0}^{m+1}\left(1+\frac{1}{j}f_{1}\right)\cdots\left(1+ \frac{1}{j^{m}}f_{m}\right)(-1)^{n}\left(\frac{k}{j\gamma}\right)^{n},\] \[=[j^{-m-1}](1-\alpha)^{\gamma}\sum_{k\geq 0}\alpha^{k}\frac{( \gamma)_{k}}{k!}\sum_{n=0}^{m}\left(1+\frac{1}{j}f_{1}\right)\cdots\left(1+ \frac{1}{j^{m}}f_{m}\right)(-1)^{n}\left(\frac{k}{j\gamma}\right)^{n}\] \[\qquad+(1-\alpha)^{\gamma}\sum_{k\geq 0}\alpha^{k}\frac{( \gamma)_{k}}{k!}(-1)^{m+1}\left(\frac{k}{\gamma}\right)^{m+1},\] \[=(-1)^{m+1}\gamma^{-m-1}(1-\alpha)^{\gamma}\sum_{k\geq 0} \alpha^{k}\frac{(\gamma)_{k}}{k!}k^{m+1}\] \[\qquad+[j^{-m-1}](1-\alpha)^{\gamma}\sum_{k\geq 0}\alpha^{k} \frac{(\gamma)_{k}}{k!}\sum_{n=0}^{m}(-1)^{n}\left(\frac{k}{\gamma}\right)^{n }j^{-n}\sum_{m\geq l_{s}>\cdots>l_{1}\geq 1}j^{-l_{1}}f_{l_{1}}\cdots j^{-l_{s}}f_{l_{s}},\] \[=(-1)^{m+1}\gamma^{-m-1}(1-\alpha)^{\gamma}\sum_{k\geq 0} \alpha^{k}\frac{(\gamma)_{k}}{k!}k^{m+1}\] \[\qquad+(1-\alpha)^{\gamma}\sum_{k\geq 0}\alpha^{k}\frac{( \gamma)_{k}}{k!}\sum_{n=0}^{m}(-1)^{n}\left(\frac{k}{\gamma}\right)^{n}\sum_{ \begin{subarray}{c}m\geq l_{s}>\cdots>l_{1}\geq 1\\ \sum_{i=1}^{i}l_{i}=m+1-n\end{subarray}}f_{l_{1}}\cdots f_{l_{s}}.\]
Applying identity (2.14) to the \(k\)-sums appearing above then yields (2.15).
Let us define a \(t\)-sequence which contains all the \(\alpha\), \(\gamma\) dependencies that occur in the recursive formula (2.15), for \(n\geq 0\)
\[t_{n}(\alpha,\gamma):=(-1)^{n}\gamma^{-n}\sum_{i=0}^{n}S(n,i)( \gamma)_{i}\left(\frac{\alpha}{1-\alpha}\right)^{i}. \tag{2.16}\]
We note for standard conditions, i.e. \(0<\alpha<1\), \(\gamma>0\), and by the combinatorial interpretation of \(S(n,i)\) that \(t_{n}\) oscillates with period \(2\) in \(n\) being positive for even \(n\). The first six \(t\)-coefficients are: \(t_{-1}=0\), \(t_{0}=1\)
\[(1-\alpha)t_{1}=-\alpha, \tag{2.17}\] \[(1-\alpha)^{2}\gamma t_{2}=\left[\alpha^{2}\gamma+\alpha\right], \tag{2.18}\] \[(1-\alpha)^{3}\gamma^{2}t_{3}=-\left[\alpha^{3}\gamma^{2}+\alpha^{2}(3\gamma+1)+\alpha\right], \tag{2.19}\] \[(1-\alpha)^{4}\gamma^{3}t_{4}=\left[\alpha^{4}\gamma^{3}+\alpha^{3}\left(6\gamma^{2}+4\gamma+1\right)+\alpha^{2}(7\gamma+4)+\alpha\right], \tag{2.20}\]
\[(1-\alpha)^{5}\gamma^{4}t_{5}=\] \[-\left[\alpha^{5}\gamma^{4}+\alpha^{4}\left(10\gamma^{3}+10 \gamma^{2}+5\gamma+1\right)+\alpha^{3}\left(25\gamma^{2}+30\gamma+11\right)+ \alpha^{2}(15\gamma+11)+\alpha\right], \tag{2.21}\]
\[(1-\alpha)^{6}\gamma^{5}t_{6}=\left[\alpha^{6}\gamma^{5}+\alpha ^{5}\left(15\gamma^{4}+20\gamma^{3}+15\gamma^{2}+6\gamma+1\right)\right.\] \[\left.+\alpha^{4}\left(65\gamma^{3}+120\gamma^{2}+91\gamma+26 \right)+\alpha^{3}\left(90\gamma^{2}+146\gamma+66\right)+\alpha^{2}(31\gamma+ 26)+\alpha\right]. \tag{2.22}\]
Then the recursive formula (2.15) takes the generic form
\[f_{m+1}=-t_{m+1}-\sum_{n=0}^{m}t_{n}\sum_{\begin{subarray}{c}m\geq l_{s}>\dots>l_ {1}\geq 1\\ \sum_{k=1}^{k}l_{k}=m+1-n\end{subarray}}f_{l_{s}}\cdots f_{l_{1}}, \tag{2.23}\]
or alternatively one has
\[\sum_{n=0}^{m+1}t_{n}\sum_{\begin{subarray}{c}m+1\geq l_{s}>\dots>l_{1}\geq 1 \\ n+\sum_{k=1}^{2}l_{k}=m+1\end{subarray}}f_{l_{s}}\cdots f_{l_{1}}=0. \tag{2.24}\]
Consequently we have the expansion
\[(1-\alpha)^{\gamma}F_{j}=\sum_{k\geq 0}j^{-k}t_{k}(\alpha,\gamma), \tag{2.25}\]
and will defer the question of convergence or otherwise of this sum until §4.
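The recursion (2.23) is straightforward to implement. The following Python sketch (illustrative only; the sample values of \(\alpha\) and \(\gamma\) are arbitrary) computes \(t_{n}\) from (2.16) and \(f_{m}\) from (2.23), and reproduces the closed forms (2.8)-(2.10).

```python
# A sketch of the recursion (2.23): t_n is taken from the Stirling-number form
# (2.16) and f_1, f_2, f_3 are rebuilt and compared with the closed forms
# (2.8)-(2.10). The sample values of alpha and gamma below are arbitrary.
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via the usual recurrence."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def t(n, alpha, gamma):
    """t_n(alpha, gamma) as defined in (2.16)."""
    beta = alpha / (1.0 - alpha)
    total, poch = 0.0, 1.0                      # poch = (gamma)_i
    for i in range(n + 1):
        total += stirling2(n, i) * poch * beta ** i
        poch *= gamma + i
    return (-1) ** n * gamma ** (-n) * total

def strict_partitions(total, max_part):
    """Partitions of `total` into strictly decreasing parts, each at most `max_part`."""
    if total == 0:
        yield []
        return
    for p in range(min(total, max_part), 0, -1):
        for rest in strict_partitions(total - p, p - 1):
            yield [p] + rest

def f_coefficients(m_max, alpha, gamma):
    """f_1, ..., f_{m_max} from the recursion (2.23)."""
    f = {}
    for m in range(m_max):                      # this pass produces f_{m+1}
        value = -t(m + 1, alpha, gamma)
        for n in range(m + 1):
            inner = sum(math.prod(f[l] for l in parts)
                        for parts in strict_partitions(m + 1 - n, m))
            value -= t(n, alpha, gamma) * inner
        f[m + 1] = value
    return f

alpha, gamma = 0.35, 0.6
f = f_coefficients(3, alpha, gamma)
print(f[1], alpha / (1 - alpha))                                          # (2.8)
print(f[2], -alpha / ((1 - alpha) ** 2 * gamma))                          # (2.9)
print(f[3], alpha * (alpha * (2 * gamma + 1) + 1) / ((1 - alpha) ** 3 * gamma ** 2))  # (2.10)
```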
## 3. Power Product Expansions and Lambert Series
At this point it is advantageous to step back from the specific problem at hand and recognise a larger picture - what we have been doing is constructing a _power product expansion_ (PPE) for a function that is given in terms of an analytic expansion about \(x=0\), i.e. in terms of an infinite product with a particular structure of its factors. For some background on PPEs one should consult [12], [13], [2], [14]. We define this via the relation
\[A(x):=1+\sum_{n\geq 1}a_{n}x^{n}=\prod_{n\geq 1}\left(1+f_{n}x^{n}\right), \tag{3.26}\]
where the left-hand side encodes a sequence \(\{a_{n}\}_{n\geq 1}\) and the right-hand side another sequence \(\{f_{n}\}_{n\geq 1}\). These sums and products may be convergent or be purely formal as generating functions. On the right-hand side we see an obvious combinatorial interpretation as partitions without repetitions, or strict partitions. In the traditional combinatorial approach there is a "simple" expression for the \(a\)-coefficient, so that for \(m\geq 1\) one has
\[a_{m}=\sum_{\begin{subarray}{c}m\geq l_{s}>\dots>l_{1}\geq 1\\ \sum_{k=1}^{k}l_{k}=m\end{subarray}}f_{l_{s}}\cdots f_{l_{1}}. \tag{3.27}\]
The first six \(a\)-coefficients in terms of \(f\)-coefficients are:
\[a_{1}=f_{1}, \tag{3.28}\] \[a_{2}=f_{2}, \tag{3.29}\] \[a_{3}=f_{3}+f_{2}f_{1}, \tag{3.30}\] \[a_{4}=f_{4}+f_{3}f_{1}, \tag{3.31}\] \[a_{5}=f_{5}+f_{4}f_{1}+f_{3}f_{2}, \tag{3.32}\] \[a_{6}=f_{6}+f_{5}f_{1}+f_{4}f_{2}+f_{3}f_{2}f_{1}. \tag{3.33}\]
However, in contrast to many studies in additive combinatorics, we are not going to take examples where the \(f\)-coefficients are given in simple forms and then determine the consequent properties of the \(a\)-coefficients, which carry the combinatorial interpretation; instead we do the reverse. We will take the \(a\)-coefficients as given in some simple explicit form and wish to construct the \(f\)-coefficients or study their properties. It appears that there is no equivalent "simple" expression for this inverse situation. Inverting the relation (3.27) for the first members gives
\[f_{1}=a_{1}, \tag{3.34}\] \[f_{2}=a_{2}, \tag{3.35}\] \[f_{3}=a_{3}-a_{2}a_{1}, \tag{3.36}\] \[f_{4}=a_{4}-a_{3}a_{1}+a_{2}a_{1}^{2}, \tag{3.37}\] \[f_{5}=a_{5}-a_{4}a_{1}-a_{3}a_{2}+a_{3}a_{1}^{2}+a_{2}^{2}a_{1}-a_{1}^{3}a_{2}, \tag{3.38}\] \[f_{6}=a_{2}a_{1}^{4}-a_{3}a_{1}^{3}-a_{2}^{2}a_{1}^{2}+a_{4}a_{1}^{2}+a_{2}a_{3}a_{1}-a_{5}a_{1}-a_{2}a_{4}+a_{6}. \tag{3.39}\]
See Comtet [7], Chap. II, Supp. & Ex. 16, pg. 120-1; Eq. (10) of [12]; pg. 1221 of [13]; pg. 94-5 of [15]. A related thread closely connected to this problem is the study of generalised Lambert series and their arithmetic properties, see [5], [6], [9], [10], [8]. On the applications of Mobius inversion in combinatorial analysis see [4]. In addition many of these studies have treated a generalised power product expansion (GPPE) [14] of the form
\[1+\sum_{n\geq 1}a_{n}x^{n}=\prod_{n\geq 1}\left(1+f_{n}x^{n}\right)^{r_{n}}.\]
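Returning to the basic expansion (3.26)-(3.27), the inversion \(a\mapsto f\) can also be carried out mechanically by stripping one factor \((1+f_{n}x^{n})\) at a time through formal power-series division. A minimal Python sketch follows; the numerical \(a_{n}\) are arbitrary test values, not taken from the paper, and the output is compared against (3.34)-(3.37).

```python
# A sketch of the inversion a -> f in (3.26)-(3.27): strip factors (1 + f_n x^n)
# one at a time by formal power-series division. The a_n below are arbitrary
# test values.
def ppe_coefficients(a):
    """Given a = [a_1, ..., a_N], return [f_1, ..., f_N] with
    1 + sum a_n x^n = prod (1 + f_n x^n) + O(x^{N+1})."""
    N = len(a)
    b = [1.0] + list(a)          # current series, b[k] = coefficient of x^k
    f = []
    for n in range(1, N + 1):
        fn = b[n]                # after stripping factors 1..n-1, the x^n coefficient is f_n
        f.append(fn)
        # divide the series by (1 + fn x^n): c_k = b_k - fn * c_{k-n}
        c = b[:]
        for k in range(n, N + 1):
            c[k] = b[k] - fn * c[k - n]
        b = c
    return f

a = [0.3, -0.1, 0.25, 0.05]      # a_1 .. a_4, arbitrary
f1, f2, f3, f4 = ppe_coefficients(a)
a1, a2, a3, a4 = a
print(f1, a1)                                    # (3.34)
print(f2, a2)                                    # (3.35)
print(f3, a3 - a2 * a1)                          # (3.36)
print(f4, a4 - a3 * a1 + a2 * a1 ** 2)           # (3.37)
```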
Let us recall the generating function definitions (3.26) and form its logarithmic derivative
\[x\frac{A^{\prime}}{A}:=\sum_{n=1}^{\infty}d_{n}x^{n}=\sum_{n=1}^{\infty}\frac{ nf_{n}x^{n}}{1+f_{n}x^{n}}, \tag{3.40}\]
which introduces the Lambert series, see Knopp, Kap. XII, SS58 C [18]. Let us write the partition of \(n\) as a sum of multiplicities \(\lambda(n)\equiv\left(1^{\lambda_{1}}2^{\lambda_{2}}\cdots\right)\vdash n=1 \,\lambda_{1}+2\,\lambda_{2}+\cdots+n\,\lambda_{n}\). Gingold and Knopfmacher [13] deduced the form of \(f_{n}\)'s in terms of \(a_{n}\)'s in Lemma 2.1, Eq. (2.3)
\[f_{n}=\sum_{\lambda\vdash n}c(\lambda)a_{1}^{\lambda_{1}}\cdots a_{n}^{\lambda _{n}},\]
without giving a general, closed formula for the coefficients \(c(\lambda)\). They investigated the absolute sum of these coefficients
\[\tilde{B}(n):=\sum_{\lambda(n)}|c(\lambda)|,\]
and this sequence is recorded in OEIS [1] as A220418. They found some properties and in particular bounds for \(\tilde{B}(n)\)
\[\frac{2^{n-1}}{n}\leq\tilde{B}(n)<\frac{2^{n}}{n},\quad n\geq 1,\]
along with the asymptotic growth estimates
\[\tilde{B}(n)\underset{n\to\infty}{=}\frac{2^{n}}{n}\left(1+\mathrm{O}(n^{-1}) \right).\]
### \(t\), \(a\) and \(d\)-Coefficients
The \(t\) and \(a\) coefficients are related by a linear relation, valid for all \(m\geq 0\),
\[-t_{m+1}=t_{m}a_{1}+t_{m-1}a_{2}+\ldots+t_{1}a_{m}+t_{0}a_{m+1}. \tag{3.41}\]
This follows from comparing (2.24) with (3.27) from which one can conclude
\[\sum_{l=0}^{m+1}t_{l}a_{m+1-l}=0,\]
taking \(a_{0}=1\). The above relation can be formulated as a matrix equation with a lower triangular Toeplitz structure whose solution for the \(a_{n}\) coefficients is given by
\[\begin{pmatrix}a_{1}\\ a_{2}\\ a_{3}\\ a_{4}\\ \vdots\end{pmatrix}=-\begin{pmatrix}t_{0}&0&0&0&0&\cdots\\ t_{1}&t_{0}&0&0&0&\cdots\\ t_{2}&t_{1}&t_{0}&0&0&\cdots\\ t_{3}&t_{2}&t_{1}&t_{0}&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\end{pmatrix}^{-1}\begin{pmatrix}t_{1}\\ t_{2}\\ t_{3}\\ t_{4}\\ \vdots\end{pmatrix}. \tag{3.42}\]
Another formula for \(a_{n}\), \(n\geq 0\), can be found via the geometrical generating functions
\[a_{n}=-[x^{n}]\frac{\sum_{j=1}^{\infty}t_{j}x^{j}}{1+\sum_{j=1}^{\infty}t_{j} x^{j}}. \tag{3.43}\]
**Proposition 3.1**.: _We have_
\[a_{n}=\sum_{\begin{subarray}{c}m_{l}\geq 0\\ \sum_{l\geq 1}m_{l}=n\end{subarray}}(-1)^{\sum_{l\geq 1}m_{l}}\binom{\sum_{l \geq 1}m_{l}}{m_{1},m_{2},\ldots}\prod_{l\geq 1}t_{l}^{m_{l}},\quad n\geq 1, \tag{3.44}\]
_or alternatively_
\[A(x)=\left(1+\sum_{l=1}^{\infty}t_{l}x^{l}\right)^{-1}. \tag{3.45}\]
Proof.: Relation (3.45) follows directly from (3.43). Without loss of generality we can take \(b_{0}\) to be unity in a series expansion of the inverse
\[\left(1+\sum_{n=1}^{\infty}b_{n}x^{n}\right)^{-1}=\sum_{n=0}^{\infty}x^{n} \sum_{\begin{subarray}{c}m_{l}\geq 0\\ \sum_{l=1}m_{l}=n\end{subarray}}(-1)^{\sum_{l}m_{l}}\binom{\sum_{l\geq 1}m_{l}}{m_ {1},m_{2},\ldots}\prod_{l\geq 1}b_{l}^{m_{l}}, \tag{3.46}\]
using the multinomial expansion, see [7, §3.5], [26].
Explicit solutions for low coefficients are:
\[a_{1}=-t_{1}, \tag{3.47}\] \[a_{2}=-t_{2}+t_{1}^{2}, \tag{3.48}\] \[a_{3}=-t_{3}+2t_{2}t_{1}-t_{1}^{3}, \tag{3.49}\] \[a_{4}=-t_{4}+2t_{3}t_{1}+t_{2}^{2}-3t_{2}t_{1}^{2}+t_{1}^{4}, \tag{3.50}\] \[a_{5}=-t_{5}+2t_{4}t_{1}+2t_{3}t_{2}-3t_{3}t_{1}^{2}-3t_{2}^{2}t_{1}+4t_{2}t_{1}^{3}-t_{1}^{5}, \tag{3.51}\] \[a_{6}=-t_{6}+2t_{5}t_{1}+2t_{4}t_{2}-3t_{4}t_{1}^{2}+t_{3}^{2}-6t_{3}t_{2}t_{1}+4t_{3}t_{1}^{3}-t_{2}^{3}+6t_{2}^{2}t_{1}^{2}-5t_{2}t_{1}^{4}+t_{1}^{6}. \tag{3.52}\]
Note that the foregoing relations are completely reflexive, i.e. symmetrical under \(t_{n}\leftrightarrow a_{n}\).
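A quick numerical confirmation of (3.45) and of the explicit solutions (3.47)-(3.49) is the following sketch; the \(t_{n}\) values used are arbitrary test numbers.

```python
# A quick numerical confirmation of (3.45): the a_n are obtained by inverting
# the power series 1 + sum t_l x^l. The t_n below are arbitrary test values,
# and the output is compared with the explicit solutions (3.47)-(3.49).
def invert_series(t, N):
    """Coefficients a_1..a_N of (1 + sum_{l>=1} t_l x^l)^(-1) for t = [t_1, t_2, ...]."""
    t = [1.0] + list(t)[:N]
    a = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        # sum_{l=0}^{n} t_l a_{n-l} = 0 for n >= 1, with t_0 = a_0 = 1
        a[n] = -sum(t[l] * a[n - l] for l in range(1, n + 1))
    return a[1:]

t1, t2, t3 = -0.4, 0.15, -0.05                   # arbitrary
a1, a2, a3 = invert_series([t1, t2, t3], 3)
print(a1, -t1)                                   # (3.47)
print(a2, -t2 + t1 ** 2)                         # (3.48)
print(a3, -t3 + 2 * t2 * t1 - t1 ** 3)           # (3.49)
```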
We can deduce a similar linear recurrence relation linking the \(d_{n}\) coefficients to the \(t_{m}\) coefficients.
**Proposition 3.2**.: _The \(d_{n}\) coefficients and the \(t_{m}\) coefficients satisfy the convolution relation, \(m\geq 1\)_
\[-mt_{m}=\sum_{n=1}^{m}t_{m-n}d_{n}, \tag{3.53}\]
_which can be written as a recurrence relation for the \(d_{n}\) coefficients in the forward direction_
\[d_{m}=-d_{m-1}t_{1}-\ldots-d_{1}t_{m-1}-mt_{m}. \tag{3.54}\]
Proof.: From the defining equation we have
\[\sum_{n\geq 1}d_{n}x^{n}=x\frac{d}{dx}\log A(x)=-x\frac{d}{dx}\log\bigg{(}1+ \sum_{l\geq 1}t_{l}x^{l}\bigg{)},\]
after using (3.45). From this relation we conclude with (3.54).
In addition the \(d_{n}\) coefficients are related to the \(a_{n}\) coefficients by the simple convolution relation
\[na_{n}=d_{n}+\sum_{k=1}^{n-1}d_{k}a_{n-k},\quad n\geq 1, \tag{3.55}\]
which follows from the definition.
### \(f\)-Coefficients
The first six \(f\)-coefficients in terms of the \(t\)-coefficients are:
\[f_{1}=-t_{1}, \tag{3.56}\] \[f_{2}=t_{1}^{2}-t_{2}, \tag{3.57}\] \[f_{3}=t_{1}t_{2}-t_{3}, \tag{3.58}\] \[f_{4}=t_{1}^{4}-2t_{2}t_{1}^{2}+t_{3}t_{1}+t_{2}^{2}-t_{4}, \tag{3.59}\] \[f_{5}=t_{2}t_{1}^{3}-t_{3}t_{1}^{2}-t_{2}^{2}t_{1}+t_{4}t_{1}+t_{2}t_{3}-t_{5}, \tag{3.60}\] \[f_{6}=t_{3}t_{1}^{3}+t_{2}^{2}t_{1}^{2}-t_{4}t_{1}^{2}-3t_{2}t_{3}t_{1}+t_{5}t_{1}+t_{3}^{2}+t_{2}t_{4}-t_{6}. \tag{3.61}\]
A fundamental relation derived by many authors [24], [9], [13] relates the \(f\)-coefficients and the \(d\)-coefficients, with a number of these studies treating the more general setting of the GPPE.
**Proposition 3.3** ([24], [9], Eq. (2.12)[13]).: _The \(f\)-coefficients and the \(d\)-coefficients satisfy the arithmetic convolution relation_
\[f_{n}=\frac{1}{n}d_{n}+\sum_{\begin{subarray}{c}d|n\\ d>1\end{subarray}}\frac{1}{d}\left(-f_{n/d}\right)^{d}. \tag{3.62}\]
_Note that this is not a Dirichlet convolution, so some of the powerful analytical number theory tools are not available here._
Proof.: The proof of this is well-known and we reproduce it here for the convenience of the reader. Starting with (3.40) one computes
\[\frac{A^{\prime}(x)}{A(x)}:=\sum_{n=1}^{\infty}d_{n}x^{n-1} =\sum_{n=1}^{\infty}\frac{nf_{n}x^{n-1}}{1+f_{n}x^{n}},\] \[=\sum_{n=1}^{\infty}nf_{n}x^{n-1}\sum_{m=0}^{\infty}(-1)^{m}f_{n} ^{m}x^{mn},\] \[=\sum_{n\geq 1,m\geq 0}(-1)^{m}nf_{n}^{m+1}x^{(m+1)n-1},\quad l =(m+1)n,\] \[=-\sum_{l\geq 1}x^{l-1}\sum_{n|l,n\geq 1}n(-1)^{l/n}f_{n}^{l/n},\] \[=-\sum_{l\geq 1}x^{l-1}\sum_{d|l,d\geq 1}(-1)^{d}\frac{l}{d}f_{ l/d}^{d},\]
and compares coefficients. Separating out the term with \(d=1\) gives (3.62).
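The chain \(a\to d\to f\) given by (3.55) and (3.62) can likewise be checked numerically; in the sketch below the \(a_{n}\) are arbitrary test values and the output is compared with the direct inversions (3.36)-(3.37).

```python
# A numerical illustration of Prop. 3.3, using arbitrary test coefficients:
# build d_n from the a_n via (3.55), then recover f_n from the d_n via (3.62)
# and compare with f_3 = a_3 - a_2 a_1 and f_4 = a_4 - a_3 a_1 + a_2 a_1^2.
def divisors_gt1(n):
    return [e for e in range(2, n + 1) if n % e == 0]

a = {1: 0.3, 2: -0.1, 3: 0.25, 4: 0.05}   # arbitrary a_1..a_4
d, f = {}, {}
for n in range(1, 5):
    # (3.55): n a_n = d_n + sum_{k=1}^{n-1} d_k a_{n-k}
    d[n] = n * a[n] - sum(d[k] * a[n - k] for k in range(1, n))
    # (3.62): f_n = d_n / n + sum_{e | n, e > 1} (-f_{n/e})^e / e
    f[n] = d[n] / n + sum((-f[n // e]) ** e / e for e in divisors_gt1(n))

print(f[3], a[3] - a[2] * a[1])
print(f[4], a[4] - a[3] * a[1] + a[2] * a[1] ** 2)
```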
**Remark 3.1**.: Ritt [24], Feld [9] and subsequent authors have undertaken studies of the growth of the \(f_{n}\) coefficients and the consequent convergence of the associated power-product expansion, albeit in a more general structure, but with the assumption that \(r=\limsup\limits_{n\geq 1}|d_{n}|^{1/n}\) exists. However as we will see this does not apply here, and in our case the rate of growth is faster.
## 4. Bounds and Growth Estimates
We seek estimates and bounds on the growth of \(f_{m}(\alpha,\gamma)\) as \(m\to\infty\) for all \(0\leq\alpha<1\) and bounded \(\gamma>0\). We will undertake this task in a step-by-step manner, firstly for \(t_{n}\), then for \(d_{n}\) and finally \(f_{n}\). Let us recall the definition of the \(t\)-sequence which contains all the \(\alpha\), \(\gamma\) dependencies that occur in the recursive formula (2.15), for \(n\geq 0\)
\[t_{n}(\alpha,\gamma):=(-1)^{n}\gamma^{-n}\sum_{i=0}^{n}S(n,i)(\gamma)_{i} \left(\frac{\alpha}{1-\alpha}\right)^{i}. \tag{4.63}\]
_Remark 4.1_.: In the defining sum (2.16) we can write the Pochhammer symbol as a descending factorial \((-x)_{i}=(-1)^{i}x(x-1)\cdots(x-i+1)\) and employ the other defining characteristic of the second kind Stirling numbers as the coefficients of descending factorial expansion of the monomials (see Eq. 26.8.10 [23])
\[\sum_{i=1}^{n}\left\{\begin{smallmatrix}n\\ i\end{smallmatrix}\right\}(-1)^{i}(-x)_{i}=\sum_{i=1}^{n}\left\{\begin{smallmatrix} n\\ i\end{smallmatrix}\right\}x(x-1)\cdots(x-i+1)=x^{n}.\]
This allows us to deduce the limiting identity \(t_{n}(\alpha,-\gamma)\underset{\alpha\to\infty}{\to}(-1)^{n}\), which is obvious from the explicit formulae (2.17)-(2.22).
The \(t_{n}\) coefficients possess an exponential generating function, which we will subsequently show to be convergent about \(z=0\) (see Prop. 4.4),
\[\sum_{n=0}^{\infty}\frac{z^{n}}{n!}t_{n}=(1-\alpha)^{\gamma}\left[1-\alpha e^ {-z/\gamma}\right]^{-\gamma}, \tag{4.64}\]
so that
\[t_{n}=(-1)^{n}\;(1-\alpha)^{\gamma}\gamma^{-n}\frac{d^{n}}{dz^{n}}\left[1- \alpha e^{z}\right]^{-\gamma}\bigg{|}_{z=0}, \tag{4.65}\]
or alternatively
\[t_{n}=(-1)^{n}(1-\alpha)^{\gamma}\gamma^{-n}\frac{n!}{2\pi i}\oint_{C_{0}} \frac{dz}{z^{n+1}}\left[1-\alpha e^{z}\right]^{-\gamma}, \tag{4.66}\]
where \(C_{0}\) is a loop contour enclosing the origin and none of the branch points \(z_{m}=-\log\alpha+2\pi im\), \(m\in\mathbb{Z}\). The first result follows from the exponential generating function for the second Stirling numbers in [23], Eq. 26.8.12. In addition these coefficients possess a purely formal geometrical generating function
\[\sum_{n=0}^{\infty}z^{n}t_{n}=(1-\alpha)^{\gamma}{}_{2}F_{1}(\gamma,\gamma z^ {-1};1+\gamma z^{-1};\alpha)=(1-\alpha)^{\gamma}\sum_{l\geq 0}\alpha^{l} \frac{(\gamma)_{l}}{l!}\frac{\gamma}{\gamma+lz}, \tag{4.67}\]
which is evident from the sequence of poles at \(z=-\gamma/m\), \(m\in\mathbb{N}\) accumulating at \(z=0\). This latter result can be deduced from [23], Eq. 26.8.11, but such a deduction involves violating a technical condition known to apply here.
Next we give an identity which is the same as in Lemma 2.1, but we reverse the logic that was employed earlier.
**Proposition 4.1**.: _The \(t_{n}\) coefficients are given by the convergent expansion about \(\alpha=0\), \(|\alpha|<1\)_
\[t_{n}(\alpha,\gamma)=(-)^{n}\gamma^{-n}(1-\alpha)^{\gamma}\sum_{l=0}^{\infty}l^ {n}\frac{(\gamma)_{l}}{l!}\alpha^{l}. \tag{4.68}\]
Proof.: We choose a circular loop centred on the origin for \(C_{0}\), \(z=re^{i\theta}\), with \(0<r<\log\alpha^{-1}\). Then we expand the radical factor of the integral via the binomial theorem arriving at
\[t_{n}=(-1)^{n}\gamma^{-n}(1-\alpha)^{\gamma}n!r^{-n}\sum_{l=0}^{\infty}\alpha^ {l}\frac{(\gamma)_{l}}{l!}\int_{-\pi}^{\pi}\frac{d\theta}{2\pi}e^{-in\theta} \exp(lre^{i\theta}).\]
The angular integral is easily evaluated by series expanding the exponential in the last factor in the integrand and we find it is \((lr)^{n}/n!\). This facilitates the cancelling of all the \(r\)-dependence, and we have (4.68). The ratio test then gives us the radius of convergence, as stated.
**Proposition 4.2**.: _The \(t_{n}(\alpha,\gamma)\) coefficients satisfy the mixed difference equation_
\[t_{n}(\alpha,\gamma)=t_{n-1}(\alpha,\gamma)-\frac{1}{1-\alpha}\left(1+\gamma^ {-1}\right)^{n-1}t_{n-1}(\alpha,\gamma+1), \tag{4.69}\]
_or in terms of the re-defined coefficient \(\tilde{t}_{n}(\alpha,\gamma)=\gamma^{n}t_{n}(\alpha,\gamma)\) the alternative_
\[\tilde{t}_{n}(\alpha,\gamma)=\gamma\tilde{t}_{n-1}(\alpha,\gamma)-\frac{ \gamma}{1-\alpha}\tilde{t}_{n-1}(\alpha,\gamma+1), \tag{4.70}\]
Proof.: The Stirling numbers of the second kind satisfy the recurrence relation, see Eq. 26.8.22 [23],
\[\left\{\genfrac{}{}{0.0pt}{}{n}{i}\right\}=i\left\{\genfrac{}{}{0.0pt}{}{n-1 }{i}\right\}+\left\{\genfrac{}{}{0.0pt}{}{n-1}{i-1}\right\},\quad 0<i\leq n-1,n>1.\]
Employing this in (2.16) we have two terms on the right-hand side: in the first of these we replace \(i=\gamma+i-\gamma\) so that \(i(\gamma)_{i}=(\gamma)_{i+1}-\gamma(\gamma)_{i}=\gamma(\gamma+1)_{i}-\gamma( \gamma)_{i}\) ; in the second we replace \(i\mapsto i+1\) and use \((\gamma)_{i+1}=\gamma(\gamma+1)_{i}\) again. As a result we have three terms which are of the same form as the definition (2.16) with additional factors, and two of these coalesce. Simplifying we have (4.69).
Central to our task will be knowledge of the growth of the \(t_{n}\) coefficients with respect to \(n\), which will be done in two independent ways - via explicit upper and lower bounds for the magnitude of \(t_{n}\) and secondly via asymptotic estimates of the value of \(t_{n}\). Our bounds will in fact be sharp and both will converge to our estimates.
One can find upper and lower bounds to the magnitude of the \(t_{n}\) coefficients with a very similar form.
**Proposition 4.3**.: _Let \(0<\gamma<1\) and \(0<\alpha<1\). The magnitude of the \(t_{n}\) coefficients is bounded above through the formula_
\[|t_{n}|<(1-\alpha)^{\gamma}\frac{\gamma^{-n}}{\Gamma(\gamma)}\mathrm{Li}_{-n- \gamma+1}(\alpha), \tag{4.71}\]
_where \(\mathrm{Li}_{s}(z)\) is the polylogarithm as defined by [23], Eq. 25.12.10. The magnitude of the \(t_{n}\) coefficients is bounded below through the formula_
\[|t_{n}|>(1-\alpha)^{\gamma}\frac{\gamma^{-n}}{\Gamma(\gamma)}\left[\mathrm{Li} _{-n-\gamma+1}(\alpha)-(1-\gamma)\mathrm{Li}_{-n-\gamma+2}(\alpha)\right]. \tag{4.72}\]
_If the large \(n\) asymptotics of the polylogarithm are employed then the bounds reduce to_
\[|t_{n}|<(1-\alpha)^{\gamma}\frac{(\gamma)_{n}}{\gamma^{n}}\left(\log\alpha^{-1 }\right)^{-n-\gamma}, \tag{4.73}\]
_and_
\[|t_{n}|>(1-\alpha)^{\gamma}\frac{(\gamma)_{n}}{\gamma^{n}}\left(\log\alpha^{-1 }\right)^{-n-\gamma}\left[1-\frac{(1-\gamma)}{n+\gamma-1}\log\alpha^{-1} \right]. \tag{4.74}\]
Proof.: Our strategy is to apply bounds for the ratio \((\gamma)_{l}/l!\) in formula (4.68). The simplest bounds suitable for this purpose is Gautschi's inequality, Eq. 5.6.4 of [23] or the original [11] which states, subject to \(x>0\) and \(0<s<1\),
\[(x+1)^{s-1}<\frac{\Gamma(x+s)}{\Gamma(x+1)}<x^{s-1}.\]
For the upper bound we use
\[\frac{\Gamma(l+\gamma)}{\Gamma(l+1)}<l^{\gamma-1},\]
and the definition of polylogarithm. For the lower bound we deduce a shifted identity derived from Gautschi's lower bound, namely
\[\frac{\Gamma(l+\gamma)}{\Gamma(l+1)}>(l+\gamma-1)l^{\gamma-2},\]
which introduces an additional, negative term. For the asymptotic forms of these we require polylogarithms with large, negative order and the following series representation suffices for this. Let \(|\mu|<2\pi\) and \(s\neq 1,2,3,\ldots\) then we have [23, Eq. 25.12.12]
\[\mathrm{Li}_{s}(e^{\mu})=\Gamma(1-s)(-\mu)^{s-1}+\sum_{k=0}^{\infty}\frac{ \zeta(s-k)}{k!}\mu^{k},\]
where \(\zeta(s)\) is the Riemann zeta function, analytically continued to \(\mathrm{Re}(s)<0\). Taking only the leading order term of this yields both (4.73) and (4.74).
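The reduced bounds (4.73) and (4.74) are only leading-order forms, so for moderate \(n\) they should be read as estimates rather than strict bounds; the short sketch below (arbitrary sample \(\alpha\), \(\gamma\) with \(0<\gamma<1\)) simply tabulates them against the exact \(t_{n}\) computed from (2.16).

```python
# A rough numerical comparison of |t_n| with the asymptotic bounds (4.73)-(4.74),
# assuming 0 < gamma < 1 as required by Gautschi's inequality. The sample
# alpha, gamma are arbitrary.
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def t(n, alpha, gamma):
    """t_n(alpha, gamma) as defined in (2.16)."""
    beta = alpha / (1.0 - alpha)
    total, poch = 0.0, 1.0
    for i in range(n + 1):
        total += stirling2(n, i) * poch * beta ** i
        poch *= gamma + i
    return (-1) ** n * gamma ** (-n) * total

def pochhammer(gamma, n):
    return math.exp(math.lgamma(gamma + n) - math.lgamma(gamma))

alpha, gamma = 0.3, 0.6
L = math.log(1.0 / alpha)
for n in range(5, 30, 5):
    upper = (1 - alpha) ** gamma * pochhammer(gamma, n) / gamma ** n * L ** (-n - gamma)
    lower = upper * (1 - (1 - gamma) / (n + gamma - 1) * L)
    print(n, lower, abs(t(n, alpha, gamma)), upper)
```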
In addition to the above proof there is an alternative approach to the upper bound given above.
**Proposition 4.4**.: _The magnitude of the \(t_{n}\) coefficients is bounded above through the formula_
\[|t_{n}|\leq(1-\alpha)^{\gamma}n!\left(\gamma r_{0}\right)^{-n-\gamma}\left(n+ \gamma r_{0}\right)^{\gamma}, \tag{4.75}\]
_where \(r_{0}\) is given by_
\[r_{0}+\frac{n}{\gamma}=W_{0}(\tfrac{n}{\alpha\gamma}e^{n/\gamma}), \tag{4.76}\]
_with \(W_{0}(\cdot)\) being the principal branch of the Lambert \(W\)-function, [23] SS4.13. If the asymptotic formula for the Lambert \(W_{0}\)-function is again employed in the above equation we have_
\[\alpha e^{r_{0}}\sim\frac{n}{n+\gamma\log(\tfrac{n}{\alpha\gamma})}. \tag{4.77}\]
Proof.: Starting from (4.66) specialised to a circular loop of radius \(r\) one can deduce the upper bound
\[|t_{n}|\leq(1-\alpha)^{\gamma}\gamma^{-n}n!\sup_{z=re^{i\theta},-\pi<\theta< \pi}r^{-n}\left|1-\alpha e^{z}\right|^{-\gamma}.\]
The right-hand side has a supremum at \(\theta=0\) of \(r^{-n}(1-\alpha e^{r})^{-\gamma}\). This supremum can be minimised with respect to \(r\) at \(r_{0}\) given by
\[\alpha e^{r_{0}}=\frac{n}{n+\gamma r_{0}},\]
or by (4.76). Clearly \(r_{0}<\log\alpha^{-1}\) as \(1-\alpha e^{r_{0}}=\alpha\gamma n^{-1}r_{0}e^{r_{0}}\). Furthermore, due to the elementary inequality \((1+r)^{-1}>e^{-r}\) for \(r>0\), we can deduce the lower bound \(n(n+\gamma)^{-1}\log\alpha^{-1}<r_{0}\). The large \(n\) asymptotics of \(r_{0}\) follow from those of the \(W_{0}\)-Lambert function taken to second order in [23, Eq. 4.13.1_1].
**Proposition 4.5**.: _The \(t_{n}\) coefficients possess the large \(n\) asymptotic growth formula_
\[t_{n}\sim(-1)^{n}\frac{(1-\alpha)^{\gamma}}{\sqrt{2\pi}}n!(n+1)^{-1/2}z_{0}^{- n-\gamma}\left(z_{0}+\frac{n+1}{\gamma}\right)^{\gamma}\left(z_{0}+\frac{n+1}{ \gamma}+1\right)^{-1/2}, \tag{4.78}\]
_where \(z_{0}\) is given by_
\[z_{0}+\tfrac{n+1}{\gamma}=W_{0}(\tfrac{n+1}{\alpha\gamma}e^{(n+1)/\gamma}), \tag{4.79}\]
_with \(W(\cdot)\) being the principal branch of the Lambert \(W\)-function, [23] SS4.13._
_If the asymptotic formula for the Lambert \(W_{0}\)-function is employed for \(z_{0}\) in the above equation, (see [23], Eq. 4.13.1_1), one has_
\[t_{n}\sim(-1)^{n}\frac{1}{\sqrt{2\pi}}(1-\alpha)^{\gamma}\gamma^{1/2}n!(n+1)^ {\gamma-1}\left(\gamma\log\alpha^{-1}\right)^{-n-\gamma}, \tag{4.80}\]
_or the \(n\)-th root growth_
\[\left|t_{n}\right|^{1/n}\sim\frac{n}{e\gamma\log\alpha^{-1}}. \tag{4.81}\]
_A variant of the above estimate which is simpler for our purposes is the following formula_
\[t_{n}\sim(-1)^{n}(1-\alpha)^{\gamma}\frac{(\gamma)_{n}}{\gamma^{n}}\left(\log \alpha^{-1}\right)^{-n-\gamma}. \tag{4.82}\]
Proof.: We perform a steepest descent analysis of the integral (4.66). The phase function is \(f(z)=-(n+1)\log z-\gamma\log(1-\alpha e^{z})\) with a unique critical point denoted by \(z_{0}\) and the solution of
\[\frac{n+1}{z_{0}}=\alpha\gamma\frac{e^{z_{0}}}{1-\alpha e^{z_{0}}},\]
which is solved by (4.79). Employing the second derivative
\[f_{0}^{{}^{\prime\prime}}=\frac{n+1}{z_{0}^{2}}\left[1+\frac{n+1}{\gamma}+z_{ 0}\right], \tag{4.83}\]
we find the leading term to be given by (4.78).
The growth of the \(d_{n}\) coefficients with respect to \(n\) is required, and it turns out that this is controlled by the growth of the \(t_{m}\) coefficients - the last term on the right-hand side of (3.54) is the dominant one whereas all the others are sub-leading.
**Proposition 4.6**.: _As \(m\to\infty\), \(d_{m}\) grows as_
\[-\frac{d_{m}}{mt_{m}}=1+\mathrm{O}(m^{-1}). \tag{4.84}\]
Proof.: Since \(t_{m}\neq 0\) we can rewrite the convolution (3.54) as
\[\frac{d_{m}}{mt_{m}}=-1-\sum_{j=1}^{m-1}\frac{j}{m}\frac{t_{m-j}t_{j}}{t_{m}} \frac{d_{j}}{jt_{j}},\]
motivating the definition \(u_{m}:=d_{m}/(mt_{m})\). This allows us to deduce the inequality
\[|u_{m}|\leq 1+\sum_{j=1}^{m-1}\frac{j}{m}\frac{|t_{m-j}||t_{j}|}{|t_{m}|}|u_{j}|.\]
This, in turn, facilitates the use of discrete analogues to the Gronwall-Bellman type inequalities [20], [21, Theorem 1.2.3], and given in their original formulation for the continuous setting [22], [17], [19] and [3]. The result is the following explicit inequality for the \(u_{m}\) coefficients
\[|u_{m}|\leq 1+\sum_{j=1}^{m-1}\frac{j}{m}\frac{|t_{m-j}||t_{j}|}{|t_{m}|}\prod_{k= j+1}^{m-1}\left(1+\frac{k}{m}\frac{|t_{m-k}||t_{k}|}{|t_{m}|}\right).\]
The relevant controlling aspect of the growth of \(u_{m}\) with \(m\to\infty\) is computed using the bounds (4.71) and (4.72)
\[\frac{|t_{m-k}||t_{k}|}{|t_{m}|} \leq(1-\alpha)^{\gamma}\left(\log\alpha^{-1}\right)^{-\gamma}\frac {(\gamma)_{m-k}(\gamma)_{k}}{(\gamma)_{m}}\left[1-\frac{1-\gamma}{m+\gamma-1} \log\alpha^{-1}\right]^{-1},\] \[=(1-\alpha)^{\gamma}\left(\log\alpha^{-1}\right)^{-\gamma}\frac{ (\gamma)_{k}}{(m-1+\gamma)\cdots(m-k+\gamma)}\left[1+\mathrm{O}(m^{-1})\right]\] \[=\mathrm{O}(m^{-k}),\]
when \(k=\mathrm{O}(1)\). As this ratio is symmetrical under \(k\mapsto m-k\) the same conclusion applies when \(k=\mathrm{O}(m)\). This implies that the inner product is of order \(1+\mathrm{O}(m^{-j-1})\) with \(j\geq 1\) and consequently \(|u_{m}|\leq 1+\mathrm{O}(m^{-1})\).
From the previous result we conclude that \(\limsup_{n\geq 1}n^{-1}|d_{n}|^{1/n}=\frac{1}{e\gamma\log\alpha^{-1}}\). Our final estimates concern the growth of the \(f_{n}\) coefficients, and we observe that this in turn is completely controlled by the growth of the \(d_{n}\) coefficients.
**Proposition 4.7**.: _As \(n\to\infty\), \(f_{n}\) grows as_
\[\frac{nf_{n}}{d_{n}}=1+\mathrm{O}(\kappa^{n}(\log n)^{\log 2}), \tag{4.85}\]
_where \(\kappa=0.7514184\ldots\). Consequently we have_
\[\lim_{n\to\infty}\frac{f_{n}}{t_{n}}=-1,\quad\limsup_{n\geq 1}n^{-1}|f_{n}|^{1/n }=\frac{1}{e\gamma\log\alpha^{-1}}. \tag{4.86}\]
_Therefore expansion (2.6) of Prop. 2.1 for our particular application is not convergent and is in fact asymptotic in the sense of large \(\gamma\) and small \(\alpha\)._
Proof.: Our strategy is to adapt the method of [9] to our situation. Starting with (3.62) rephrased slightly differently we deduce the inequality
\[n|f_{n}|\leq|d_{n}|+\sum_{\begin{subarray}{c}e|n\\ 1\leq e<n\end{subarray}}e|f_{e}|^{n/e}, \tag{4.87}\]
summing over the proper divisors \(e\) of \(n\). Let \(e_{1}\) denote the divisor contributing the greatest term to the above sum. Then if we use \(n/2\) as an upper bound for the number of proper divisors of \(n\) we have
\[\frac{n|f_{n}|}{|d_{n}|}\leq 1+\tfrac{1}{2}ne_{1}\frac{|f_{e_{1}}|^{n/e_{1}}}{| d_{n}|}=1+\tfrac{1}{2}ne_{1}^{1-n/e_{1}}\frac{|d_{e_{1}}|^{n/e_{1}}}{|d_{n}|} \left(\frac{e_{1}|f_{e_{1}}|}{|d_{e_{1}}|}\right)^{n/e_{1}}.\]
From Prop.4.6 we know
\[\frac{|d_{e_{1}}|^{n/e_{1}}}{|d_{n}|}\leq\frac{e_{1}^{n/e_{1}}}{n}\frac{|t_{e_ {1}}|^{n/e_{1}}}{|t_{n}|}\left(1+\mathrm{O}(n^{-1},e_{1}^{-1})\right),\]
and from Prop.4.3
\[\frac{|t_{e_{1}}|^{n/e_{1}}}{|t_{n}|}\leq\left[\sqrt{2\pi}\frac{e^{-\gamma}}{ \Gamma(\gamma)}\left(\frac{1-\alpha}{\log\alpha^{-1}}\right)^{\gamma}\right] ^{n/e_{1}-1}\left[\frac{(e_{1}+\gamma)^{n/e_{1}}}{n+\gamma}\right]^{\gamma-1/ 2}\left(\frac{e_{1}+\gamma}{n+\gamma}\right)^{n}.\]
If we make the simplifying definitions
\[C=\sqrt{2\pi}\frac{e^{1-\gamma}}{\Gamma(\gamma)}\left(\frac{1-\alpha}{\log\alpha ^{-1}}\right)^{\gamma},\qquad V_{e_{1}}:=C^{1-e_{1}/n}\left[\frac{e_{1}+\gamma} {(n+\gamma)^{e_{1}/n}}\right]^{\gamma-1/2}\frac{e_{1}|f_{e_{1}}|}{|d_{e_{1}}|},\]
we deduce that the above inequality becomes
\[V_{n}\leq 1+\tfrac{1}{2}e_{1}\left(\frac{e_{1}+\gamma}{n+\gamma}\right)^{n}V_{e _{1}}^{n/e_{1}}.\]
We will now apply this inequality recursively by constructing a cascade of proper divisors \(e_{0}:=n,e_{1},e_{2},\dots,e_{k},e_{k+1},\dots\) where \(e_{k+1}|e_{k}\), \(e_{k+1}<e_{k}\) and \(e_{k+1}\) is the divisor with the largest contribution \(e_{k+1}|f_{e_{k+1}}|^{e_{k}/e_{k+1}}\). This cascade will terminate because of the strict inequality applying and all satisfy \(e_{k}\geq 1\), and we denote the terminal divisor \(e_{m+1}=1\). However the utility of this construction is that we do not need to identify these divisors because of the simple inequality \(e_{k+1}\leq e_{k}/2\). Therefore starting from the top and descending we can generate a sequence of upper bounds \(e_{k}\leq 2^{-k}n\) whereas if we start from the terminating divisor and ascend we generate a sequence of lower bounds \(2^{m+1-k}\leq e_{k}\). We note that combining these bounds we deduce that \(2^{m+1}\leq n\) and we find the upper bound \(m+1\leq\left\lceil\frac{\log n}{\log 2}\right\rceil\). To further simplify matters we note that
\[\left(\frac{e_{k+1}+\gamma}{e_{k}+\gamma}\right)^{e_{k}}\leq\begin{cases}e\,2^ {-e_{k}}&\text{if $e_{k}$ is even}\\ 2^{-e_{k}}&\text{if $e_{k}$ is odd}\end{cases},\]
and introduce the factor of \(e\) in the upper bound to cover both cases. Thus our final inequality becomes a nested recursion
\[V_{e_{k}}\leq 1+\tfrac{1}{2}e_{k+1}2^{-e_{k}}V_{e_{k+1}}^{2}.\]
Because of the exponential suppression of the denominator \(2^{g_{k}}\) where the exponents are \(g_{k}=k+1-m+2^{m+1-k}\) we can take the numerators to be generated by the recurrence \(x_{n}=1+x_{n-1}^{2}\) with \(x_{1}=1\). This is the OEIS sequence [1] A003095, where \(x_{n}\) counts the number of binary trees of height less than or equal to \(n\) and has the asymptotic growth as \(n\to\infty\) of \(x_{n}\sim c^{2^{n}}\) with \(c=1.2259024435287485386279474959130085213...\). In conclusion we find exponential convergence of \(V_{n}\) to unity
\[V_{n}\leq 1+2^{-2^{m+1}+m-1}c^{2m+2}=1+\tfrac{1}{2}\left(\tfrac{1}{2}c^{2} \right)^{n}\left(\tfrac{1}{2}n\right)^{\log 2},\]
and (4.85) follows.
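As a quick numerical aside (not part of the proof), the growth constant of A003095 quoted above can be checked directly from the recurrence; the following is a minimal sketch assuming the mpmath library.

```python
# Numerical check of the growth constant of OEIS A003095:
# x_1 = 1, x_n = 1 + x_{n-1}^2, with x_n ~ c^(2^n) as n -> infinity,
# so that c = lim_{n -> infinity} x_n^(2^(-n)).
from mpmath import mp, mpf

mp.dps = 60                      # working precision in decimal digits
x = mpf(1)                       # x_1
for n in range(2, 41):
    x = 1 + x * x                # the recurrence generating A003095
print(x ** (mpf(2) ** (-40)))    # ~ 1.2259024435287485386279474959...
```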
## 5. Gamma Function Products
In our final section we employ the product expansion (2.6) as an asymptotic expansion in order to derive some leading order approximations for the \(\alpha\)-Sun density. At this juncture it is important to clarify a distinction between the true and rigorous convergent nature of the \(j\)-product in (1.2) and what we are proposing here: each factor of \(F_{j}\) is replaced by an internal product of \(m\) factors via (2.6) up to a fixed cut-off, and the order of the products is reversed in order to perform the \(j\)-product exactly, so that one has a tractable result for the Mellin transform \(H(s)\) and can compute the inverse Mellin transform explicitly. We will only carry this out for the first two orders and compare this with the recent related result of Simon [25].
To begin with we recall some classical results concerning evaluations of simple factorisable infinite products.
**Lemma 5.1**.: _Let the \(m\)-th root of unity be denoted by \(\omega_{m}=e^{2\pi i/m}\). Then for \(z\notin\mathbb{N}\;\omega_{m}^{-j}\) with \(j=0,\ldots,m-1\), \(m\in\mathbb{N}\), \(m\geq 2\) we have_
\[\prod_{n=1}^{\infty}\frac{n^{m}}{n^{m}-z^{m}}=\prod_{j=0}^{m-1}\Gamma(1- \omega_{m}^{j}z). \tag{5.1}\]
_Furthermore, for \(|z|<1\) [27], the above product can also be evaluated as_
\[\prod_{j=0}^{m-1}\Gamma(1-\omega_{m}^{j}z)=\exp\left(\sum_{k=1}^{\infty}\frac {\zeta(mk)}{k}z^{mk}\right). \tag{5.2}\]
Our first technical result is the evaluation of infinite products resulting from the truncation of the \(m\)-expansion in (2.6).
**Corollary 5.1**.: _Let \(\omega_{m}^{(k+1/2)}=e^{2\pi i/m\cdot(k+\frac{1}{2})}\) for \(m\in\mathbb{N}\), \(k\in\mathbb{Z}_{\geq 0}\). Furthermore let \(t\in\mathbb{C}\) excluding all \(1-f_{m}^{1/m}\omega_{m}^{(k+1/2)}\). Then_
\[\prod_{j=1}^{\infty}\frac{1+f_{1}j^{-1}}{1+f_{1}(j-t)^{-1}}\cdots \frac{1+f_{m}j^{-m}}{1+f_{m}(j-t)^{-m}}\\ =\frac{\Gamma(1-t+f_{1})}{\Gamma(1-t)\Gamma(1+f_{1})}\cdots\prod _{k=0}^{m-1}\frac{\Gamma(1-t-f_{m}^{1/m}\omega_{m}^{(k+1/2)})}{\Gamma(1-t) \Gamma(1-f_{m}^{1/m}\omega_{m}^{(k+1/2)})}. \tag{5.3}\]
Proof.: We compute the finite product \(J\in\mathbb{N}\) for \(m>1\) using (5.1)
\[\prod_{j=1}^{J}\left(1+\frac{f_{m}}{(j-t)^{m}}\right) =\prod_{k=0}^{m-1}\frac{\Gamma(J+1-t-f_{m}^{1/m}\omega_{m}^{(k+1/ 2)})}{\Gamma(J+1-t)}\frac{\Gamma(1-t)}{\Gamma(1-t-f_{m}^{1/m}\omega_{m}^{(k+1/ 2)})},\] \[\underset{J\to\infty}{\to}(J+1)^{-f_{m}^{1/m}\sum_{k=0}^{m-1} \omega_{m}^{(k+1/2)}}\prod_{k=0}^{m-1}\frac{\Gamma(1-t)}{\Gamma(1-t-f_{m}^{1/ m}\omega_{m}^{(k+1/2)})},\] \[=\prod_{k=0}^{m-1}\frac{\Gamma(1-t)}{\Gamma(1-t-f_{m}^{1/m} \omega_{m}^{(k+1/2)})},\]
after using the sum identity \(\sum_{k=0}^{m-1}\omega_{m}^{(k+1/2)}=0\). For \(m=1\) one can use the fact that only ratios \(\prod_{j=1}^{J}\left(1+\frac{f_{1}}{j}\right)/\left(1+\frac{f_{1}}{(j-t)}\right)\) are required and the cancellation occurs via that mechanism.
**Proposition 5.1**.: _Let \(0\leq\alpha<1\). At the first order \(m=1\) we compute the Mellin inversion and find a previously given result (see Eq. 3.61 in [28])_
\[h(x)\sim\frac{\gamma(1-\alpha)^{-\gamma(1-\alpha)^{-1}}}{\Gamma(\frac{1}{1- \alpha})}x^{-1-\gamma/(1-\alpha)}e^{-(1-\alpha)^{-\gamma}x^{-\gamma}}. \tag{5.4}\]
_At the second order we have_
\[h(x)\sim\frac{\gamma x^{-1}}{\Gamma\left(\frac{1}{1-\alpha},1+ \frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha},1-\frac{\alpha^{1/2}\gamma^{-1/2}} {1-\alpha}\right)}\Bigg{\{}\] \[\Gamma\left(\begin{array}{cc}\frac{\left[\alpha^{1/2}\gamma^{- 1/2}-\alpha\right]}{1-\alpha}&-\frac{\left[\alpha^{1/2}\gamma^{-1/2}+\alpha \right]}{1-\alpha}\\ -\frac{\alpha}{1-\alpha}&-\frac{\alpha}{1-\alpha}\end{array}\right)[(1-\alpha )x]^{-\gamma/(1-\alpha)}\] \[\times{}_{2}F_{2}\left(\begin{array}{cc}\frac{1}{1-\alpha},& \frac{1}{1-\alpha}\\ \frac{\left[1-\alpha^{1/2}\gamma^{-1/2}\right]}{1-\alpha},&\frac{\left[1+ \alpha^{1/2}\gamma^{-1/2}\right]}{1-\alpha}\end{array};-(1-\alpha)^{-\gamma}x^ {-\gamma}\right)\] \[+\Gamma\left(\begin{array}{cc}\frac{\left[-\alpha^{1/2}\gamma^{- 1/2}+\alpha\right]}{1-\alpha}&-2\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\\ -\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}&-\frac{\alpha^{1/2}\gamma^{-1/2}}{1 -\alpha}\end{array}\right)[(1-\alpha)x]^{-\gamma\left[1+(1-\alpha)^{-1}\alpha^ {1/2}\gamma^{-1/2}\right]}\] \[\times{}_{2}F_{2}\left(\begin{array}{cc}1+\frac{\alpha^{1/2} \gamma^{-1/2}}{1-\alpha},&1+\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\\ 1-\frac{\left[\alpha-\alpha^{1/2}\gamma^{-1/2}\right]}{1-\alpha},&1+2\frac{ \alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\end{array};-(1-\alpha)^{-\gamma}x^{- \gamma}\right)\] \[+\Gamma\left(\begin{array}{cc}\frac{\left[\alpha^{1/2}\gamma^{- 1/2}+\alpha\right]}{1-\alpha}&2\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\\ \frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}&\frac{\alpha^{1/2}\gamma^{-1/2}}{1 -\alpha}\end{array}\right)[(1-\alpha)x]^{-\gamma\left[1-(1-\alpha)^{-1}\alpha^ {1/2}\gamma^{-1/2}\right]}\] \[\times{}_{2}F_{2}\left(\begin{array}{cc}1-\frac{\alpha^{1/2} \gamma^{-1/2}}{1-\alpha},&1-\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\\ 1-\frac{\left[\alpha+\alpha^{1/2}\gamma^{-1/2}\right]}{1-\alpha},&1-2\frac{ \alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\end{array};-(1-\alpha)^{-\gamma}x^{- \gamma}\right)\Bigg{\}}. \tag{5.5}\]
Proof.: We give details just for the first order case as this is typical for the general situation. Applying (5.3) to the specific case at hand we find that
\[\prod_{j=1}^{\infty}\frac{1+f_{1}j^{-1}}{1+f_{1}(j-t)^{-1}}=\frac{\Gamma( \frac{1}{1-\alpha}-t)}{\Gamma(1-t)\Gamma(\frac{1}{1-\alpha})}.\]
Taking the above expression for the last factor in (1.2) we have the Mellin-Barnes integral
\[h(x)\sim\frac{\gamma}{2\pi ix}\int_{c-i\infty}^{c+i\infty}dt\:x^{-\gamma t}(1 -\alpha)^{-\gamma t}\frac{\Gamma(\frac{1}{1-\alpha}-t)}{\Gamma(\frac{1}{1- \alpha})},\]
with \(c<1\). We note that the only singularities of the integrand are simple poles at \(t_{l}=l+(1-\alpha)^{-1}\), \(l\geq 0\). However, because the integrand has exponential decay as \(t\to c\pm i\infty\) due to the Gamma function factor, and also algebraic decay as \(|t|\to\infty\) for \(\mathrm{Re}(t)>1\), one can fold the ends of the contour over the positive-\(t\) axis, thus enclosing all of these poles. One computes the residues as
\[h(x)\sim\frac{\gamma x^{-1}}{\Gamma(\frac{1}{1-\alpha})}\sum_{l=0}^{\infty} \frac{(-1)^{l}}{l!}\left[(1-\alpha)x\right]^{-\gamma(l+(1-\alpha)^{-1})},\]
and the summation is trivial, leading to (5.4). To include the second order factors as well one has two new sequences of simple poles at \(t_{l}=l+1\pm(1-\alpha)^{-1}\alpha^{1/2}\gamma^{-1/2}\) in addition to the first order set. We assume that these three sequences do not overlap, as in the case of generic \(\alpha,\gamma\). After some computation we recognise the series definition of the \({}_{2}F_{2}\) hypergeometric function, and simplifying we arrive at (5.5).
**Corollary 5.2**.: _The small \(x\to 0^{+}\) asymptotic form of the second order approximation to the density (5.5) assumes the simple form_
\[h(x)\underset{x\to 0^{+}}{\sim}\frac{\gamma(1-\alpha)^{-\gamma/(1-\alpha)}}{ \Gamma\left(\frac{1}{1-\alpha},1+\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha},1 -\frac{\alpha^{1/2}\gamma^{-1/2}}{1-\alpha}\right)}x^{-1-\gamma/(1-\alpha)}e^ {-(1-\alpha)^{-\gamma}x^{-\gamma}}, \tag{5.6}\]
_which has precisely the same \(x\)-dependence as (5.4) and only differs in the constant factor._
Proof.: Utilising the large argument asymptotic leading order terms for the hypergeometric function, [23, Eq. 16.11.7] with \(\kappa=1\), \(\nu=a_{1}+a_{2}-b_{1}-b_{2}\), in (5.5)
\[\frac{\Gamma(a_{1})\Gamma(a_{2})}{\Gamma(b_{1})\Gamma(b_{2})}{}_{2 }F_{2}\left(\begin{array}{cc}a_{1},&a_{2}\\ b_{1},&b_{2}\end{array};z\right)\underset{z\to\infty}{\sim}z^{\nu}e^{z}\\ +\frac{\Gamma(a_{1})\Gamma(a_{1}-a_{2})}{\Gamma(b_{1}-a_{1})\Gamma (b_{2}-a_{1})}\left(e^{\pm\pi i}z\right)^{-a_{1}}+\frac{\Gamma(a_{2})\Gamma(a _{2}-a_{1})}{\Gamma(b_{1}-a_{2})\Gamma(b_{2}-a_{2})}\left(e^{\pm\pi i}z\right) ^{-a_{2}},\]
we find that significant simplification occurs: all three terms end up with the same \(x\)-dependence and their coefficients amalgamate into a single common factor, resulting in (5.6).
To conclude, we present numerical evidence comparing our approximations with the recent result in [25, Thm. 2], which gives the asymptotic form of the density
\[h(x;\alpha,\gamma)\underset{x\to 0^{+}}{\sim}c(\alpha,\gamma)x^{-1-\gamma(1- \alpha)^{-1}}e^{-(1-\alpha)^{-\gamma}x^{-\gamma}}, \tag{5.7}\]
where
\[c=\gamma(1-\alpha)^{\frac{\gamma}{\alpha-1}}\exp\left(\frac{\alpha\psi(1)}{ \alpha-1}\right)\prod_{k=1}^{\infty}\frac{\exp\left(\frac{\alpha}{(\alpha-1) k}\right)}{{}_{2}F_{1}\left(1,\gamma;1+k\gamma;\frac{\alpha}{\alpha-1}\right)}. \tag{5.8}\]
Note that (5.4) is a global approximation to the density, as its constant factor is its true normalisation even though its \(x\)-dependent part is identical to the true asymptotic form as \(x\to 0^{+}\) given by Theorem 2 in [25]. Such approximations are then a large \(\gamma\) and small \(\alpha\) asymptotic development, in the sense of \((1-\alpha)^{-1}\) being of \(\mathrm{O}(1)\). When these conditions are violated, as is apparent in the following plots, the approximation can vanish and change sign. Another point to note is that the constant \(c(\alpha,\gamma)\) (5.8) tends to \(0\) or diverges to \(\infty\) as \(\alpha\to 1^{-}\) when \(\gamma<1\) or \(\gamma\geq 1\) respectively.
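The constants compared in Figures 1 and 2 below can be evaluated with standard special-function libraries. The following sketch (assuming numpy/scipy, a truncation of the infinite product in (5.8) at \(K\) factors, and the reading of \(\Gamma(a,b,c)\) in (5.6) as the product \(\Gamma(a)\Gamma(b)\Gamma(c)\)) computes the exact constant and the first and second order approximations at a few sample points.

```python
# Sketch of the comparison underlying Figures 1 and 2 (assumptions: the
# infinite product in (5.8) is truncated at K factors, and Gamma(a,b,c) in
# (5.6) is read as the product Gamma(a)*Gamma(b)*Gamma(c)).
import numpy as np
from scipy.special import gamma, hyp2f1, digamma

def c_exact(alpha, gamma_, K=5000):
    """Constant c(alpha, gamma) of (5.8), product truncated at K factors."""
    z = alpha / (alpha - 1.0)
    k = np.arange(1, K + 1)
    log_prod = np.sum(alpha / ((alpha - 1.0) * k)
                      - np.log(hyp2f1(1.0, gamma_, 1.0 + k * gamma_, z)))
    return (gamma_ * (1 - alpha) ** (gamma_ / (alpha - 1))
            * np.exp(alpha * digamma(1.0) / (alpha - 1)) * np.exp(log_prod))

def c_first(alpha, gamma_):
    """Constant factor of the first order approximation (5.4)."""
    return gamma_ * (1 - alpha) ** (-gamma_ / (1 - alpha)) / gamma(1 / (1 - alpha))

def c_second(alpha, gamma_):
    """Constant factor of the second order approximation (5.6)."""
    s = np.sqrt(alpha / gamma_) / (1 - alpha)
    denom = gamma(1 / (1 - alpha)) * gamma(1 + s) * gamma(1 - s)
    return gamma_ * (1 - alpha) ** (-gamma_ / (1 - alpha)) / denom

for a in (0.05, 0.2, 0.4):
    print(a, c_exact(a, 0.5), c_first(a, 0.5), c_second(a, 0.5))
```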
Figure 1. Plot of the constant factor in the \(x\to 0^{+}\) asymptotic form of \(h(x)\) versus \(\alpha\) for \(\gamma=\frac{1}{2}\). Blue dots are the exact \(c(\alpha,\gamma)\) (5.8), the black line is the first order approximation (5.4) and the red line is the absolute value of the second order approximation (5.6).
Figure 2. As per Fig. 1 for \(\gamma=1\).
## 6. Acknowledgements
The author thanks Howard Cohl for the invitation to present a preliminary version of the current work to the 2022 AMS fall sectional meeting _hypergeometric functions, q-series and generalizations_ in Salt Lake City, Utah, and thanks Cindy Greenwood and Thomas Simon for much appreciated correspondence. He would also like to express his gratitude to the School of Mathematics and Statistics, University of Melbourne, for hosting his visit during the final stages of this study.
|
2302.13873
|
Operator moment dilations as block operators
|
Let $\mathcal{H}$ be a complex Hilbert space and let
$\big\{A_{n}\big\}_{n\geq 1}$ be a sequence of bounded linear operators on
$\mathcal{H}$. Then a bounded operator $B$ on a Hilbert space $\mathcal{K}
\supseteq \mathcal{H}$ is said to be a dilation of this sequence if
\begin{equation*}
A_{n} = P_{\mathcal{H}}B^{n}|_{\mathcal{H}} \; \text{for all}\; n\geq 1,
\end{equation*} where $P_{\mathcal{H}}$ is the projection of $\mathcal{K}$
onto $\mathcal{H}.$ The question of existence of dilation is a generalization
of the classical moment problem. We recall necessary and sufficient conditions
for the existence of self-adjoint, isometric and unitary dilations and present
block operator representations for these dilations. For instance, for
self-adjoint dilations one gets block tridiagonal representations similar to
the classical moment problem.
Given a positive invertible operator $A$, an operator $T$ is said to be in
the $\mathcal{C}_{A}$-class if the sequence
$\{A^{-\frac{1}{2}}T^nA^{-\frac{1}{2}}:n\geq 1\}$ admits a unitary dilation. We
identify a tractable collection of $\mathcal{C}_A$-class operators for which
isometric and unitary dilations can be written down explicitly in block
operator form. This includes the well-known $\rho$-dilations for positive
scalars. Here the special cases $\rho =1$ and $\rho =2$ correspond to
Sch\"{a}ffer representation for contractions and Ando representation for
operators with numerical radius not more than one respectively.
|
B. V. Rajarama Bhat, Anindya Ghatak, Santhosh Kumar Pamula
|
2023-02-27T15:24:58Z
|
http://arxiv.org/abs/2302.13873v1
|
# Operator moment dilations as block operators
###### Abstract.
Let \(\mathcal{H}\) be a complex Hilbert space and let \(\left\{A_{n}\right\}_{n\geq 1}\) be a sequence of bounded linear operators on \(\mathcal{H}\). Then a bounded operator \(B\) on a Hilbert space \(\mathcal{K}\supseteq\mathcal{H}\) is said to be a dilation of this sequence if
\[A_{n}=P_{\mathcal{H}}B^{n}|_{\mathcal{H}}\text{ for all }n\geq 1,\]
where \(P_{\mathcal{H}}\) is the projection of \(\mathcal{K}\) onto \(\mathcal{H}\). The question of existence of dilation is a generalization of the classical moment problem. We recall necessary and sufficient conditions for the existence of self-adjoint, isometric and unitary dilations and present block operator representations for these dilations. For instance, for self-adjoint dilations one gets block tridiagonal representations similar to the classical moment problem.
Given a positive invertible operator \(A\), an operator \(T\) is said to be in the \(\mathcal{C}_{A}\)-class if the sequence \(\left\{A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}:n\geq 1\right\}\) admits a unitary dilation. We identify a tractable collection of \(\mathcal{C}_{A}\)-class operators for which isometric and unitary dilations can be written down explicitly in block operator form. This includes the well-known \(\rho\)-dilations for positive scalars. Here the special cases \(\rho=1\) and \(\rho=2\) correspond to Schaffer representation for contractions and Ando representation for operators with numerical radius not more than one respectively.
Key words and phrases: Moment problem, Dilation, Block operator, Poisson transform, Positive-definite kernel, Toeplitz operator, Hankel operator. 2010 Mathematics Subject Classification: 46L07, 47A12, 47A20, 47A57, 47B35.
## 1. Introduction
Starting from the pioneering work of Sz.-Nagy ([25]), dilation theory has played a fundamental role in operator theory. This has been explored by many. The early works mostly used techniques from classical function theory and basic operator theory. Thanks to W. Arveson, V. Paulsen, and others ([28]) we have new tools coming from the theory of completely positive maps. The literature here is vast and so we simply refer to a recent survey of dilation theory by Orr Shalit [35] and the references therein.
Let \(\mathcal{H}\) be a complex Hilbert space and let \(\left\{A_{n}\right\}_{n\geq 1}\) be a sequence of bounded linear operators on \(\mathcal{H}\). Then a bounded operator \(B\) on a Hilbert space \(\mathcal{K}\supseteq\mathcal{H}\) is said to be a dilation if
\[A_{n}=P_{\mathcal{H}}B^{n}|_{\mathcal{H}}\text{ for all }n\geq 1.\]
Generally it is convenient to include \(n=0\) in the sequence, where we would be taking \(A_{0}=I\) (the identity operator of the Hilbert space). The operator moment problem is to determine conditions on the sequence of operators to ensure the existence of a dilation \(B\) with a prescribed property; for instance we may demand that \(B\) be unitary/isometric/self-adjoint/positive, or we may require the spectrum of \(B\) to be contained in a given subset of the complex plane.
Two special cases of this problem are very well-known. A famous result of Sz.-Nagy tells us that if \(A_{n}=A^{n},n\geq 0\) for some operator \(A\), then it admits a unitary dilation if and only if \(A\) is a contraction.
Another special situation is when \(\mathcal{H}\) is one dimensional so that \(A_{n}\) is a sequence of scalars and \(B\) is a self-adjoint operator on \(\mathcal{K}\). This amounts to requiring
\[A_{n}=\langle u,B^{n}u\rangle=\int_{\sigma(B)}x^{n}d\mu(x)\]
where \(u\) is a unit vector in \(\mathcal{K}\) and \(\mu\) is the spectral measure of \(B\) coming from the vector state \(\langle u,(\cdot)u\rangle.\) In other words, this is the classical moment problem of seeking a measure with specified moments.
The general operator moment problem is less well-known, but if we start searching we find considerable literature scattered here and there. This problem was already thought of by Sz.-Nagy ([26]). Some further references can be found in [13, 21, 39]. Also, we refer to the excellent book of V. I. Paulsen [28], where the author discusses the dilation problem for operator valued sequences, mostly through various exercises.
One of the main purposes of this article is to present some of the basic results in the field in a unified way. Some of the early literature mentioned above was inspired by classical moment problems and typically uses similar methods. Herein we mostly use modern techniques coming from the theory of completely positive maps, or we use the method of positive kernels, to show the existence of the dilation. Another aspect we emphasize is the existence of dilations as block operator matrices, of the kind obtained by Schaffer for the Sz.-Nagy dilation. This seems to be a fairly general phenomenon. It is very useful as it helps us to visualize the dilation. For the powers of a contraction, the Schaffer construction gives the dilation as a \(2\times 2\) block operator perturbation of the bilateral shift. In the last section we have \(4\times 4\) block operator perturbations of the bilateral shift appearing as dilation operators.
Throughout the article, \(\mathcal{H}\) denotes a complex Hilbert space and we follow the physicists' convention by considering the inner product \(\langle\cdot,\cdot\rangle\) on \(\mathcal{H}\) as linear in the second variable and anti-linear in the first variable. Also note that \(\mathcal{B}(\mathcal{H})\) denotes the space of all bounded linear operators on \(\mathcal{H}\) and \(C(X)\) is the \(C^{*}\)-algebra of all continuous functions on a compact Hausdorff space \(X.\) In general, we denote \(C^{*}\)-algebras by \(\mathcal{A},\mathcal{B}\) etc. In particular, the \(C^{*}\)-algebra of all \(n\times n\) matrices with entries from \(\mathcal{A}\) is denoted by \(M_{n}(\mathcal{A})\). For a subset \(M\) of \(\mathcal{H},\) the subspace \([M]:=\overline{\operatorname{span}}(M)\) is the smallest closed subspace of \(\mathcal{H}\) containing \(M\).
**Definition 1.1** (Dilation of operator sequences).: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of bounded linear operators on a Hilbert space \(\mathcal{H}\) with \(A_{0}=I\). The operator sequence \(\{A_{n}\}_{n\geq 0}\) is said to admit a dilation if there exists a bounded linear operator \(B\) on a Hilbert space \(\mathcal{K}\supseteq\mathcal{H}\) such that_
\[A_{n}=P_{\mathcal{H}}B^{n}\big{|}_{\mathcal{H}},\ \text{ for all }\ n\geq 0. \tag{1}\]
_Then \(B\) is called a dilation. A dilation is said to be positive/self-adjoint/isometric/unitary if \(B\) has that property. A positive/self-adjoint/isometric dilation is said to be minimal if_
\[\mathcal{K}=\overline{\operatorname{span}}\{B^{n}(\mathcal{H}):n\in\mathbb{Z }_{+}\}. \tag{2}\]
Some clarifications are in order. Here by convention, for any operator \(B\), \(B^{0}\) is taken to be the identity. So the condition \(A_{0}=P_{\mathcal{H}}B^{0}\big{|}_{\mathcal{H}}\) is superfluous once we take \(A_{0}=I.\) Therefore we are effectively just considering the dilation of \(\{A_{n}\}_{n\in\mathbb{N}}\). However, for minimality it is important to include \(n=0\) in Equation (2). In the case of unitary dilations, the minimality condition in Equation (2) should be replaced by
\[\mathcal{K}=\overline{\operatorname{span}}\{B^{n}(\mathcal{H}):n\in\mathbb{Z}\}. \tag{3}\]
**Problem 1.2**.: _Given a sequence of operators we wish to consider dilations with prescribed property of being positive, self-adjoint etc. Some natural questions that arise are the following:_
1. _Existence:_ _What are necessary and sufficient conditions for existence of dilations with prescribed property?_
2. _Uniqueness:_ _When a dilation with prescribed property exists, is it possible to prove uniqueness of dilation up to unitary equivalence under the assumption of minimality?_
3. _Construction:_ _Can we explicitly construct these dilations instead of just abstractly proving their existence and uniqueness?_
The question about uniqueness is easy to answer. For easy reference we state it as a theorem.
**Theorem 1.3**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of bounded operators on a Hilbert space \(\mathcal{H}\) admitting a self-adjoint/positive/isometric/unitary dilation \(B\) on a Hilbert space \(\mathcal{K}\supseteq\mathcal{H}.\) Then the given
operator sequence admits a minimal dilation with the same prescribed property. Moreover, such a minimal dilation is unique up to unitary equivalence._
Proof.: Suppose \(B\) is a self-adjoint dilation. Then \(B\) restricted to \(\overline{\operatorname{span}}\{B^{n}h:h\in\mathcal{H},n\in\mathbb{Z}_{+}\}\) is again a self-adjoint dilation. Moreover, the inner products \(\langle B^{m}g,B^{n}h\rangle,g,h\in\mathcal{H},m,n\in\mathbb{Z}_{+}\) are completely determined by the given sequence \(\{A_{n}:n\in\mathbb{Z}_{+}\}.\) This shows the uniqueness of the minimal dilation up to unitary equivalence. Clearly the same statement holds for positive and isometric dilations. For unitary dilations we replace \(\mathbb{Z}_{+}\) by \(\mathbb{Z}\) and we have the analogous result.
The first question is more delicate. It is obvious that given an arbitrary operator valued sequence \(\{A_{n}\}_{n\geq 0}\), there may not exist \(B\) such that dilation Equation (1) holds. Some necessary conditions follow easily. We list the following:
1. The sequence \(\{A_{n}\}_{n\in\mathbb{Z}_{+}}\) should satisfy the growth bound \(\|A_{n}\|\leq M^{n}\) for some \(M>0\).
2. For \(B\) to be positive (resp. self-adjoint), \(A_{n}\)'s should be positive (resp. self-adjoint). For \(B\) to be isometric or unitary, \(A_{n}\)'s should be contractive.
The condition (1) is natural as we are looking for dilations which are bounded operators. The condition (2) is also obvious.
We now briefly describe the plan of the article. We recall and summarize some known answers to the first question of Problem 1.2 in Theorem 2.1, Theorem 2.4, Theorem 2.6, Theorem 3.1, and Theorem 4.1. To be more precise, in Theorem 2.1, we provide necessary and sufficient conditions for an operator valued moment sequence to admit a self-adjoint dilation. This result was initially obtained by Sz.-Nagy [26]. However, we provide a contemporary approach to the dilation using the theory of completely positive maps. In Theorem 2.1 and Theorem 2.4, we have necessary and sufficient criteria for the self-adjoint dilation problem in terms of Hankel matrices. This can be treated as the operator analog of the Hamburger moment problem. Finding necessary and sufficient conditions for an operator sequence to have a positive dilation is the analog of the Hausdorff moment problem. In Theorem 2.6, we obtain necessary and sufficient conditions for operator valued moment sequences to have a positive dilation. In Theorem 3.1, we have a necessary and sufficient condition for an operator valued sequence to have a unitary dilation. This can be treated as an operator analog of the Toeplitz moment problem.
The main focus of this article is the third question. We try to obtain block operator forms for various classes of dilations. In Theorem 2.9, we show that if \(\{A_{n}\}_{n\geq 0}\) admits a self-adjoint dilation (say \(B\)) then \(B\) has a tri-diagonal form (see Theorem 2.8) whose blocks are given by the recursive relation (see Section 2). In Theorem 3.2, we show that if \(\{A_{n}\}_{n\geq 0}\) admits an isometric dilation (say \(V\)), then \(V\) has the form as in Equation (24) and the blocks of \(V\) are given by the recursive relation described in Section 3.
Then we focus on operator valued sequences of the so-called \(\mathcal{C}_{\mathcal{A}}\)-class that admit isometric dilations. We present several necessary and sufficient conditions for an operator to belong to the \(\mathcal{C}_{\mathcal{A}}\)-class (see Theorem 4.1 and Theorem 4.2). In the final section, we study a special sub-class of \(\mathcal{C}_{\mathcal{A}}\)-class operators for which we can write down isometric and unitary dilations explicitly in block operator form (see Theorem 5.3 and Theorem 5.5). Moreover, we describe their minimal dilation spaces (see Proposition 5.4 and Remark 5.6).
## 2. Self-adjoint and positive dilations
### Classical moment problems
This is a well known topic in analysis. The subject has received considerable attention over the years, beginning with the pioneering efforts of Stieltjes, Riesz, Hamburger, Hausdorff and Krein, followed by the works of Haviland, Akhiezer, Fuglede, Berg, Atzmon, and many others (see for example [4, 32] and references therein). For the convenience of the reader we recall a few basic results from this theory, which are relevant for our current discussion.
Given a sequence \(\{m_{n}:n\geq 0\}\) of real numbers, the moment problem discusses the existence of a measure \(\mu\) supported on a set \(K\subseteq\mathbb{R}\) such that
\[m_{n}=\int_{K}x^{n}\;d\mu(x),\ \forall n\geq 0. \tag{4}\]
Determining the existence of such a measure \(\mu\) is known as \(K\)-moment problem [33]. Three specific choices of \(K\) stand out due to their natural importance and for historical reasons.
For a real sequence \(\{m_{n}:\ n\geq 0\}\), the _Hamburger moment problem_ asks when there exists a positive Radon measure \(\mu\) on \(\mathbb{R}\) such that for all \(n\geq 0\), the integral \(\int\limits_{-\infty}^{\infty}x^{n}\;d\mu(x)\) converges and satisfies
\[m_{n}=\int\limits_{-\infty}^{\infty}x^{n}\;d\mu(x).\]
Similarly, the well known _Stieltjes moment problem_ and _Hausdorff moment problem_ ask for existence of such measures \(\mu\) (see Equation (4)) supported on \([0,\infty)\) and \([0,1]\) respectively. Equivalent conditions for the existence of solutions of these moment problems are very well known [4, 32] (also see references therein). In fact, for each \(n\geq 0\), the associated Hankel matrices of the moment sequence are defined by
\[H_{n}=\begin{bmatrix}m_{0}&m_{1}&\cdots&m_{n}\\ m_{1}&m_{2}&\cdots&m_{n+1}\\ \vdots&\vdots&\ddots&\vdots\\ m_{n}&m_{n+1}&\cdots&m_{2n}\end{bmatrix}_{(n+1)\times(n+1)},\ \ H_{n}^{(1)}= \begin{bmatrix}m_{1}&m_{2}&\cdots&m_{n+1}\\ m_{2}&m_{3}&\cdots&m_{n+2}\\ \vdots&\vdots&\ddots&\vdots\\ m_{n+1}&m_{n+2}&\cdots&m_{2n+1}\end{bmatrix}_{(n+1)\times(n+1)}.\]
It is very well known that the Hamburger moment problem has a solution if and only if the associated Hankel matrix
\[H_{n}\geq 0\text{ for every }n\geq 0.\]
Further, the Stieltjes moment problem has a solution if and only if the associated Hankel matrices
\[H_{n}\geq 0\text{ and }H_{n}^{(1)}\geq 0\text{ for every }n\geq 0.\]
Furthermore, the Hausdorff moment problem has a solution if and only if the sequence \(\{m_{n}:n\geq 0\}\) is completely monotonic, i.e.,
\[(-1)^{k}(\Delta^{k}m)_{n}\geq 0\text{ for every }n,k\geq 0,\]
where
\[(\Delta^{k}m)_{n}=\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}m_{i+n}.\]
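As a small numerical illustration of these three classical criteria (a sketch assuming numpy/scipy; the example moments \(m_{n}=1/(n+1)\) of the uniform measure on \([0,1]\) are not taken from the text), one can test the Hankel and complete-monotonicity conditions directly.

```python
# Checking the Hamburger, Stieltjes and Hausdorff criteria for the moments
# m_n = 1/(n+1) of the uniform measure on [0,1] (all three should hold).
import numpy as np
from math import comb
from scipy.linalg import hankel

m = np.array([1.0 / (n + 1) for n in range(11)])        # m_0, ..., m_10

def hankel_min_eig(mom, shift=0):
    """Smallest eigenvalue of the Hankel matrix (m_{i+j+shift})_{i,j=0..n}."""
    n = (len(mom) - 1 - shift) // 2
    H = hankel(mom[shift:shift + n + 1], mom[shift + n:shift + 2 * n + 1])
    return np.linalg.eigvalsh(H).min()

def delta(mom, k, n):
    """(Delta^k m)_n = sum_i C(k,i) (-1)^i m_{i+n}."""
    return sum(comb(k, i) * (-1) ** i * mom[i + n] for i in range(k + 1))

print("Hamburger:", hankel_min_eig(m) >= -1e-12)        # H_n >= 0
print("Stieltjes:", hankel_min_eig(m, 1) >= -1e-12)     # H_n^{(1)} >= 0
print("Hausdorff:", all((-1) ** k * delta(m, k, n) >= -1e-12
                        for k in range(5) for n in range(5)))
```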
One may also consider measures supported on subsets of the complex plane. For instance see [3].
Coming to dilations of operator sequences, clearly the starting point is the following famous theorem of Sz.-Nagy. Let \(T\in\mathcal{B}(\mathcal{H})\) and consider the operator valued sequence \(\{T^{n}\}_{n\geq 0}\). This sequence admits a minimal isometric or unitary dilation if and only if \(T\) is a contraction. Moreover, the minimal dilations are unique up to unitary equivalence (see [25] or [28, Theorem 1.1]). We call them dilations (or power dilations) of \(T\). In 1955, Schaffer [34] provided an explicit construction of minimal isometric and unitary dilations as follows: Let \(D_{T}=(I-T^{*}T)^{\frac{1}{2}},D_{T^{*}}=(I-TT^{*})^{\frac{1}{2}}\) and \(\mathcal{D}_{T}:=\overline{\text{range}}D_{T}\), \(\mathcal{D}_{T^{*}}=\overline{\text{range}}D_{T^{*}}\). Take
\[\mathcal{K}:=\mathcal{H}\oplus\mathcal{D}_{T}\oplus\mathcal{D}_{T}\oplus \cdots.\]
Then
\[V=\begin{bmatrix}T&0&0&\dots\\ D_{T}&0&0&\dots\\ 0&I&0&\dots\\ 0&0&I&\dots\\ \vdots&\vdots&\ddots&\ddots\end{bmatrix}\]
on \(\mathcal{K}\) is a minimal isometric dilation of \(T\). Take \(\mathcal{L}=\dots\oplus\mathcal{D}_{T^{*}}\oplus\mathcal{D}_{T^{*}}\oplus \mathcal{H}\oplus\mathcal{D}_{T}\oplus\mathcal{D}_{T}\oplus\cdots\) and define \(U\) on \(\mathcal{L}\) by
\[U=\left(\begin{array}{ccccc}\ddots&&&&\\ &I&&&&\\ &&I&&\\ &&&D_{T^{*}}&\mathbf{T}&\\ &&-T^{*}&D_{T}&&\\ &&&&I&\\ &&&&\ddots\end{array}\right).\]
The bold font indicates the location of operator \(T\) from \(\mathcal{H}\) to \(\mathcal{H}\) and \(0\) entries are not displayed. In this article we provide Schaffer type constructions for several operator valued moment sequences.
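The block structure above is easy to probe numerically. The following sketch (assuming numpy/scipy, with a randomly generated contraction \(T\)) builds a finite truncation of the Schaffer isometric dilation and checks that \(P_{\mathcal{H}}V^{n}|_{\mathcal{H}}=T^{n}\); the truncation does not disturb the compressions since no path through the shift part returns to \(\mathcal{H}\).

```python
# A finite truncation of the Schaffer isometric dilation of a contraction T:
# the first block column contains T and D_T, and the copies of D_T are shifted
# down; the (0,0) block of V^n equals T^n for every n.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
d, N = 3, 6                                   # dim of H, number of D_T copies
T = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
T = 0.5 * T / np.linalg.norm(T, 2)            # make T a (strict) contraction
DT = sqrtm(np.eye(d) - T.conj().T @ T)        # defect operator D_T

V = np.zeros(((N + 1) * d, (N + 1) * d), dtype=complex)
V[:d, :d] = T                                 # first block column: T and D_T
V[d:2 * d, :d] = DT
for k in range(1, N):                         # shift: copy k -> copy k+1
    V[(k + 1) * d:(k + 2) * d, k * d:(k + 1) * d] = np.eye(d)

for n in range(1, 8):
    lhs = np.linalg.matrix_power(V, n)[:d, :d]   # P_H V^n |_H
    assert np.allclose(lhs, np.linalg.matrix_power(T, n))
print("compressions of V^n agree with T^n")
```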
### Self-adjoint dilations
In 1952, Sz.-Nagy obtained the following necessary and sufficient condition for a sequence of operators to admit a self-adjoint dilation with spectrum contained in a given compact set (see Theorem 2.1). For the reader's convenience we present this result using the theory of completely positive maps. This seems to have become a standard method to prove the existence of operator dilations.
**Theorem 2.1**.: _[_26_]_ _Let \(X\subseteq\mathbb{R}\) be a compact set. Let \(\{A_{n}:\ n\geq 0\}\) be a sequence of bounded self-adjoint operators on a Hilbert space \(\mathcal{H}\) with \(A_{0}=I\). It admits a self-adjoint operator dilation \(B\) with \(\sigma(B)\subseteq X\) if and only if_
\[c_{0}+c_{1}A_{1}+\dots+c_{n}A_{n}\geq 0, \tag{5}\]
_whenever the complex polynomial \(c_{0}+c_{1}x+\dots+c_{n}x^{n}\geq 0\) for all \(x\in X.\)_
Proof.: Suppose such a self-adjoint dilation \(B\) with \(\sigma(B)\subseteq X\) exists. Then by the functional calculus,
\[\langle g,\ (c_{0}+c_{1}A_{1}+c_{2}A_{2}+\dots+c_{n}A_{n})g\rangle=\langle g,(c_{0}+c_{1}B+c_{2}B^{2}+\dots+c_{n}B^{n})g\rangle\geq 0,\]
for every \(g\in\mathcal{H}\) whenever \(\sum\limits_{i=0}^{n}c_{i}x^{i}\) is positive on \(X\). To prove the converse, let us define a map \(\varphi\colon C(X)\to\mathcal{B}(\mathcal{H})\) by
\[\varphi(x^{n})=A_{n},\ \ \text{for all $n\geq 0$},\]
where \(C(X)\) is the \(C^{*}\)-algebra of continuous functions on \(X\). Let \(\mathcal{P}(X)\) denote the algebra of all polynomials over \(X\); then \(\mathcal{P}(X)\) is a \(*\)-subalgebra of \(C(X)\), it separates points of \(X\), and hence \(\mathcal{P}(X)\) is dense in \(C(X)\) by the Stone-Weierstrass theorem. From the hypothesis, it follows that \(\varphi(p)\geq 0\) whenever \(p\in\mathcal{P}(X)\) is positive. So \(\varphi\) can be extended to a positive map on \(C(X)\). Thus \(\varphi\) is completely positive as \(C(X)\) is a commutative \(C^{*}\)-algebra. Therefore by Stinespring's theorem, there is a Hilbert space \(\mathcal{K}\), an isometry \(V\colon\mathcal{H}\to\mathcal{K}\) and a unital \(*\)-homomorphism \(\pi\colon C(X)\to\mathcal{B}(\mathcal{K})\) such that
\[\varphi(a)=V^{*}\pi(a)V,\ \text{for all $a\in C(X)$}.\]
Let \(\pi(x)=B\). Then \(B^{*}=\pi(x)^{*}=\pi(x^{*})=\pi(x)=B\) since \(X\subset\mathbb{R}\). This implies that
\[A_{n}=\varphi(x^{n})=V^{*}\pi(x^{n})V=V^{*}\pi(x)^{n}V=V^{*}B^{n}V=P_{V( \mathcal{H})}B^{n}\big{|}_{V(\mathcal{H})},\ \ \text{for all $n\geq 0$}.\]
Identifying \(h\in\mathcal{H}\) with \(Vh\in V(\mathcal{H})\) the proof is complete.
**Remark 2.2**.: _It follows from the proof of Theorem 2.1 that if a sequence \(\{A_{n}:\ n\geq 0\}\) admits self-adjoint dilation then_
\[A_{2}-A_{1}^{2}=\varphi(x^{2})-\varphi(x)^{2}=V^{*}\pi(x)\big{[}I-VV^{*}\big{]} \pi(x)V\geq 0,\]
_since \(V\) is an isometry._
The existence of self-adjoint dilation for an operator sequence is also linked with positivity of the corresponding Hankel matrix. To see this we begin with the following observation.
**Lemma 2.3**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence in \(\mathcal{B}(\mathcal{H})\) with \(A_{0}=I\). Then \(\sum\limits_{i,j=0}^{n}X_{i}^{*}A_{i+j}X_{j}\geq 0\) for any \(X_{0},X_{1},\cdots,X_{n}\in\mathcal{B}(\mathcal{H}),n\geq 0\) if and only if the associated Hankel matrix_
\[H_{n}:=\begin{bmatrix}I&A_{1}&A_{2}&\cdots&A_{n}\\ A_{1}&A_{2}&A_{3}&\cdots&A_{n+1}\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ A_{n}&A_{n+1}&A_{n+2}&\cdots&A_{2n}\end{bmatrix}\geq 0,\ \ \text{for all}\ \ n\geq 0.\]
Proof.: Suppose that \(H_{n}\geq 0\) for all \(n\geq 0\) and let \(X_{0},X_{1},\cdots,X_{n}\in\mathcal{B}(\mathcal{H})\). For every \(g\in\mathcal{H}\), we see that
\[\big{\langle}g,\ \Big{(}\sum_{i,j=0}^{n}X_{i}^{*}A_{i+j}X_{j}\Big{)}g\big{\rangle} =\sum_{i,j=0}^{n}\langle X_{i}g,\ A_{i+j}X_{j}g\rangle=\langle \widetilde{g},H_{n}\widetilde{g}\rangle\geq 0,\]
where \(\widetilde{g}=\begin{bmatrix}X_{0}g\\ \vdots\\ X_{n}g\end{bmatrix}\). To prove the converse, choose and fix \(g\in\mathcal{H}\) with \(\|g\|=1.\) Now for \(h_{0},h_{1},\cdots,h_{n}\in\mathcal{H}\), take \(X_{i}=|h_{i}\rangle\langle g|\ (0\leq i\leq n)\). Then
\[\Big{\langle}\begin{bmatrix}h_{0}\\ \vdots\\ h_{n}\end{bmatrix},\ H_{n}\begin{bmatrix}h_{0}\\ \vdots\\ h_{n}\end{bmatrix}\Big{\rangle} =\sum_{i,j=0}^{n}\langle h_{i},\ A_{i+j}h_{j}\rangle\] \[=\Big{\langle}g,\ \Big{(}\sum_{i,j=0}^{n}|h_{i}\rangle\langle g|^{*}A_{i +j}|h_{j}\rangle\langle g|\Big{)}g\Big{\rangle}\] \[=\langle g,\sum_{i,j=0}^{n}X_{i}^{*}A_{i+j}X_{j}g\rangle\] \[\geq 0.\]
Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of self-adjoint operators acting on a Hilbert space \(\mathcal{H}\). Consider the associated Hankel matrices:
\[H_{n}:=\begin{bmatrix}I&A_{1}&A_{2}&\cdots&A_{n}\\ A_{1}&A_{2}&A_{3}&\cdots&A_{n+1}\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ A_{n}&A_{n+1}&A_{n+2}&\cdots&A_{2n}\end{bmatrix},H_{n}^{(2)}:=\begin{bmatrix} A_{2}&A_{3}&A_{4}&\cdots&A_{n+2}\\ A_{3}&A_{4}&A_{5}&\cdots&A_{n+3}\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ A_{n+2}&A_{n+3}&A_{n+4}&\cdots&A_{2n+2}\end{bmatrix}.\]
In the following theorem we use both the method of completely positive maps as well as that of positive kernels.
**Theorem 2.4**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of self-adjoint operators with \(A_{0}=I\) and \(\|A_{n}\|\leq 1\) for all \(n\). Then it admits a self-adjoint contraction dilation if and only if \(H_{n}\geq 0\) and \(H_{n}^{(2)}\leq H_{n}\) for each \(n\)._
Proof.: Suppose there exists a self-adjoint contraction dilation \(B\) on a Hilbert space \(\mathcal{K}\supseteq\mathcal{H}\). Then by the proof of Theorem 2.1, there exists a completely positive map \(\varphi:C[-1,1]\rightarrow\mathcal{B}(\mathcal{H})\)
such that \(\varphi(x^{m})=A_{m}\) for each \(m\geq 0.\) Consider the element \(L\in M_{n+1}(C[-1,1]),\)where
\[L=\begin{bmatrix}1&x&x^{2}&\cdots&x^{n}\\ 0&0&0&\cdots&0\\ 0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ 0&0&0&\cdots&0\end{bmatrix}.\]
Then \(L^{*}L\geq 0\) in \(M_{n+1}(C[-1,1])\). Since \(H_{n}=\varphi_{n+1}(L^{*}L),\) and \(\varphi\) is completely positive, \(H_{n}\geq 0.\) We define \(G\in M_{n+1}(C[-1,1])\) by
\[G=\begin{bmatrix}\sqrt{1-x^{2}}&x\sqrt{1-x^{2}}&\cdots&x^{n}\sqrt{1-x^{2}}\\ 0&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&\cdots&\vdots\\ 0&0&\cdots&0\end{bmatrix}.\]
Then \(G^{*}G\geq 0\) in \(M_{n+1}(C[-1,1])\). Since \(H_{n}-H_{n}^{(2)}=\varphi_{n+1}(G^{*}G),\) and \(\varphi\) is completely positive, \(H_{n}-H_{n}^{(2)}\geq 0.\)
Conversely, assume that \(H_{n}\geq 0\) and \(H_{n}^{(2)}\leq H_{n}\) for each \(n.\) Let \(M:=\mathbb{Z}_{+}\times\mathcal{H}\). Define a map \(k\colon M\times M\to\mathbb{C}\) by
\[k((m,g),(n,h))=\langle g,\;A_{m+n}h\rangle,\;\text{ for every }m,n\in\mathbb{ Z}_{+},\;g,h\in\mathcal{H}. \tag{6}\]
Since \(H_{n}\geq 0\) for all \(n\geq 0\), \(k\) is a positive definite kernel. Let \(V\) be a vector space of all complex functions on \(M\) which is zero except for finitely many points of \(M\). Since \(k\) is positive definite, \(V\) is a semi-inner product space with respect to:
\[\langle\xi,\eta\rangle:=\sum_{x,y\in M}\overline{\xi(x)}\eta(y)k(x,y)\text{ for every }\xi,\eta\in V.\]
Let us take \(\mathcal{N}=\{\xi\in V:\;\langle\xi,\xi\rangle=0\}\). Then by the Cauchy-Schwarz inequality, \(\mathcal{N}=\{\xi:\;\langle\xi,\eta\rangle=0,\text{ for every }\eta\in V\}\) is a subspace of \(V\). Take \(\mathcal{K}\) as the Hilbert space obtained by the completion of the quotient space \(V/\mathcal{N}\). Define \(\lambda\colon M\to\mathcal{K}\) by
\[\lambda(m,g)=\delta_{(m,g)}+\mathcal{N},\;\text{for all}\;(m,g)\in M.\]
Since \(\langle\lambda(0,g),\lambda(0,h)\rangle=\langle g,h\rangle\) for every \(g,h\in\mathcal{H}\), we see that \(\mathcal{H}\) can be identified as a subspace of \(\mathcal{K}\) via the map \(g\mapsto\lambda(0,g).\) Moreover, \(\mathcal{K}=\overline{\text{span}}\{\lambda(m,g):\;m\geq 0,\;g\in\mathcal{H}\}\). Define \(B(\lambda(m,g))=\lambda(m+1,g)\) for every \((m,g)\in M\). Then
\[\|B\Big{(}\sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i})\Big{)}\|^{2} =\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle\lambda(m_{i}+1,g_{i} ),\;\lambda(m_{j}+1,g_{j})\rangle\] \[=\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle g_{i},\;A_{m_{i}+m _{j}+2}(g_{j})\rangle\] \[\leq\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle g_{i},\;A_{m_{i} +m_{j}}(g_{j})\rangle\] \[=\big{\|}\sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i})\big{\|}^{2}.\]
This implies that \(B\) extends to a linear contraction. It is easy to see that it is self-adjoint. Moreover, for every \(g,h\in\mathcal{H}\),
\[\langle\lambda(0,g),P_{\mathcal{H}}B^{n}|_{\mathcal{H}}\lambda(0,h)\rangle= \langle\lambda(0,g),\;\lambda(n,h)\rangle=\langle g,A_{n}h\rangle.\]
Hence \(B\) is a self-adjoint dilation of \(\{A_{n}\}_{n\geq 0}\). \(\Box\)
Suppose a sequence \(\{A_{n}:n\geq 0\}\) with \(A_{0}=I\) admits a self-adjoint dilation and \(A_{2}=A_{1}^{2}\). Then from the characterizations above it is possible to see that \(A_{n}=A_{1}^{n}\) for every \(n\geq 1\). In fact, much stronger results are known now (see [27] for more details).
### Positive dilations
Here it is convenient to have the following standard definition.
**Definition 2.5**.: _A sequence \(\{A_{n}\}_{n\geq 0}\) of bounded operators on a Hilbert space \(\mathcal{H}\) is called completely monotone if \((-1)^{k}(\Delta^{k}A)_{n}\geq 0\) for every \(n,k\geq 0,\) where_
\[(\Delta^{k}A)_{n}=\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}A_{i+n}.\]
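As a numerical illustration of this definition (a sketch assuming numpy; the example sequence \(A_{n}=P_{\mathcal{H}}B^{n}|_{\mathcal{H}}\) for a positive contraction \(B\) is not taken from the text but is consistent with Theorem 2.6 below), one can check positive semidefiniteness of the differences directly.

```python
# Complete monotonicity of A_n = P_H B^n |_H for a positive contraction B:
# every (-1)^k (Delta^k A)_n should be positive semidefinite.
import numpy as np
from math import comb

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5))
B = X @ X.T                                    # positive semidefinite
B = B / (np.linalg.norm(B, 2) + 0.1)           # positive contraction on K
P = np.zeros((3, 5)); P[:, :3] = np.eye(3)     # H = first 3 coordinates of K

A = [P @ np.linalg.matrix_power(B, n) @ P.T for n in range(12)]

def delta(k, n):
    """(Delta^k A)_n = sum_i C(k,i) (-1)^i A_{i+n}."""
    return sum(comb(k, i) * (-1) ** i * A[i + n] for i in range(k + 1))

ok = all(np.linalg.eigvalsh((-1) ** k * delta(k, n)).min() > -1e-10
         for k in range(6) for n in range(6))
print("completely monotone:", ok)
```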
In the classical setup, the Hausdorff moment problem has a solution if and only if the given sequence is completely monotone. The same result holds true for operator sequences also. We prove the result here by defining an appropriate positive definite kernel.
**Theorem 2.6**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of positive operators in \(\mathcal{B}(\mathcal{H})\) with \(A_{0}=I\). Then it is completely monotone if and only if it admits a positive contraction dilation._
Proof.: Though the existence of such a positive contraction is clear from Theorem 2.1 when \(X=[0,1]\), we provide an explicit construction of \(B\). Suppose that \(\{A_{n}:\;n\geq 0\}\) is completely monotone and let \(M:=\mathbb{Z}_{+}\times\mathcal{H}\). Define the map \(k\colon M\times M\to\mathbb{C}\) by
\[k((m,g),(n,h))=\langle g,\;A_{m+n}h\rangle,\;\text{ for every }m,n\in\mathbb{Z}_{+},\;g,h\in\mathcal{H}. \tag{7}\]
Now, we show that \(k\) is a positive definite kernel. Equivalently, it is enough to show that
\[H_{n}=\begin{bmatrix}I&A_{1}&A_{2}&\cdots&A_{n}\\ A_{1}&A_{2}&A_{3}&\cdots&A_{n+1}\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ A_{n}&A_{n+1}&A_{n+2}&\cdots&A_{2n}\end{bmatrix}\geq 0\;\text{for every }n\geq 1.\]
Let us define the map \(\Phi(x^{n})=A_{n}\) for every \(n\). By mathematical induction it can be shown that every polynomial \(q(x)\) of degree \(n\) with real coefficients over \([0,1]\) can be written using Bernstein polynomials (see [4, Page 76]) as follows:
\[\sum_{i=0}^{N}\binom{N}{i}x^{i}(1-x)^{N-i}q\Big{(}\frac{i}{N}\Big{)}=q(x)+ \sum_{j=1}^{n-1}\frac{u_{j}(x)}{N^{j}}\;\text{for every }N\geq 1, \tag{8}\]
where \(u_{j}(x)\) is a polynomial of degree less than or equal to \(n\) and is independent of \(N.\) Since the sequence is completely monotone, for every \(k,n\geq 0,\) we see that
\[\Phi(x^{n}(1-x)^{k})=\Phi\Big{(}\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}x^{i+n} \Big{)}=\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}\Phi(x^{i+n})=\sum_{i=0}^{k}\binom{ k}{i}(-1)^{i}A_{i+n}\geq 0.\]
If \(q(x)\) is a positive polynomial over \([0,1]\), it follows from Equation (8) that
\[\Phi(q(x)) =\sum_{i=0}^{N}\binom{N}{i}\;\Phi(x^{i}(1-x)^{N-i})q\Big{(}\frac {i}{N}\Big{)}-\sum_{j=1}^{n-1}\frac{\Phi(u_{j}(x))}{N^{j}}\] \[\geq 0-\sum_{j=1}^{n-1}\frac{\Phi(u_{j}(x))}{N^{j}}\] \[\to 0,\;\text{ as }N\to\infty.\]
This shows that \(\Phi\) is a positive map. Since \(\pm q\leq\|q\|_{\infty}\), it follows that \(\pm\Phi(q)\leq\|q\|_{\infty}\Phi(1)\) and thus \(\Phi\) is continuous on the space of polynomials over \([0,1].\) Therefore \(\Phi\) is a positive map on \(C([0,1])\) and so it is a completely positive map. For every \(n\geq 0\), we have \(H_{n}=\Phi_{n+1}(L^{*}L)\geq 0,\)
where
\[L=\begin{bmatrix}1&x&x^{2}&\cdots&x^{n}\\ 0&0&0&\cdots&0\\ 0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ 0&0&0&\cdots&0\end{bmatrix}.\]
Hence \(k\) is a positive definite kernel. Furthermore, if we denote (for \(n\geq 0\))
\[H_{n}^{(1)}:=\begin{bmatrix}A_{1}&A_{2}&A_{3}&\cdots&A_{n+1}\\ A_{2}&A_{3}&A_{4}&\cdots&A_{n+2}\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ A_{n+1}&A_{n+2}&A_{n+3}&\cdots&A_{2n+1}\end{bmatrix};\;H_{n}^{(2)}:=\begin{bmatrix} A_{2}&A_{3}&A_{4}&\cdots&A_{n+2}\\ A_{3}&A_{4}&A_{5}&\cdots&A_{n+3}\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ A_{n+2}&A_{n+3}&A_{n+4}&\cdots&A_{2n+2}\end{bmatrix} \tag{9}\]
then
\[H_{n}^{(1)}=\Phi_{n+1}(F^{*}F)\geq 0\text{ and }H_{n}-H_{n}^{(2)}=\Phi_{n+1}(G^{*}G)\geq 0, \tag{10}\]
where
\[F=\begin{bmatrix}x^{\frac{1}{2}}&x^{\frac{3}{2}}&\cdots&x^{\frac{2n+1}{2}}\\ 0&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&\cdots&\vdots\\ 0&0&\cdots&0\end{bmatrix},\;G=\begin{bmatrix}(1-x^{2})^{\frac{1}{2}}&x(1-x^{2} )^{\frac{1}{2}}&\cdots&x^{n}(1-x^{2})^{\frac{1}{2}}\\ 0&0&\cdots&0\\ 0&0&\cdots&0\\ \vdots&\vdots&\cdots&\vdots\\ 0&0&\cdots&0\end{bmatrix}.\]
Let \(V\) be a vector space of all complex functions on \(M\) which is zero except for finitely many points of \(M\). Then \(V\) is a semi-inner product space with respect to:
\[\langle\xi,\eta\rangle:=\sum_{x,y\in M}\overline{\xi(x)}\eta(y)k(x,y)\text{ for every }\xi,\eta\in V.\]
Let us take \(\mathcal{N}=\{\xi\in V:\;\langle\xi,\xi\rangle=0\}\). Thus by Cauchy-Schwarz inequality, it follows that \(\mathcal{N}=\{\xi:\;\langle\xi,\eta\rangle=0,\text{ for every }\eta\in V\}\). Take \(\mathcal{K}\) as the Hilbert space obtained by the completion of the quotient space \(V/\mathcal{N}\). Define \(\lambda\colon M\to\mathcal{K}\) by
\[\lambda(m,g)=\delta_{(m,g)}+\mathcal{N},\;\text{for all}\;(m,g)\in M.\]
Since \(\langle\lambda(0,g),\lambda(0,g)\rangle=\|g\|^{2}\) for every \(g\in\mathcal{H}\), we see that \(\mathcal{H}\) can be identified as a subspace of \(\mathcal{K}\) via the map \(g\mapsto\lambda(0,g).\) Moreover, \(\mathcal{K}=\overline{\operatorname{span}}\{\lambda(m,g):\;m\geq 0,\;g\in \mathcal{H}\}\). Define \(B(\lambda(m,g))=\lambda(m+1,g)\) for every \((m,g)\in M\). Then
\[\Big{\langle}\sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i}),\;B\Big{(} \sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i})\Big{)}\Big{\rangle} =\Big{\langle}\sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i}),\;\sum_{i=0 }^{n}c_{i}\lambda(m_{i}+1,g_{i})\Big{\rangle}\] \[=\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle g_{i},\;A_{m_{i}+m_ {j}+1}(g_{j})\rangle\] \[\geq 0\;[\text{ since }H_{n}^{(1)}\geq 0\text{ by Equation (\ref{eq:H_n})}]\]
and
\[\|B\Big{(}\sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i})\Big{)}\|^{2} =\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle\lambda(m_{i}+1,g_{i }),\;\lambda(m_{j}+1,g_{j})\rangle\] \[=\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle g_{i},\;A_{m_{i}+m_ {j}+2}(g_{j})\rangle\] \[\leq\sum_{i,j=0}^{n}\overline{c_{i}}c_{j}\langle g_{i},\;A_{m_{i }+m_{j}}(g_{j})\rangle\;[\text{ since }H_{n}\geq H_{n}^{(2)}\text{ by Equation (\ref{eq:H_n})}]\]
\[=\big{\|}\sum_{i=0}^{n}c_{i}\lambda(m_{i},g_{i})\big{\|}^{2}.\]
This implies that \(B\) defines a contractive and positive operator. Finally, for every \(g,h\in\mathcal{H},\) we see that
\[\langle\lambda(0,g),P_{\mathcal{H}}B^{n}|_{\mathcal{H}}\lambda(0,h)\rangle= \langle\lambda(0,g),\;\lambda(n,h)\rangle=\langle g,A_{n}h\rangle.\]
Therefore, every completely monotone sequence admits a contractive positive dilation. Conversely, suppose that there is a positive contraction \(B\) satisfying \(A_{n}=P_{\mathcal{H}}B^{n}|_{\mathcal{H}}\) for \(n\geq 0\); then
\[\big{\langle}g,\;\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}A_{i+n}(g) \big{\rangle} =\big{\langle}g,\;\sum_{i=0}^{k}\binom{k}{i}(-1)^{i}B^{i+n}(g) \big{\rangle}\] \[=\big{\langle}g,\;B^{n}(I-B)^{k}(g)\big{\rangle}\] \[=\langle(I-B)^{\frac{k}{2}}g,\;B^{n}(I-B)^{\frac{k}{2}}g\rangle\] \[\geq 0\]
for every \(k,n\geq 0.\) Therefore, \(\{A_{n}\}_{n\geq 0}\) is a completely monotone sequence.
### Concrete self-adjoint dilations
Now we turn our discussion to a concrete construction of the self-adjoint dilation. Before that, let us recall a known fact from the literature: a bounded self-adjoint operator \(B\) defined on a separable Hilbert space with a unit cyclic vector \(v\) can be represented by a tridiagonal matrix with respect to the basis obtained by the Gram-Schmidt process from the set \(\{v,Bv,B^{2}v,\cdots\}\). Suppose the matrix \(B\) is expressed as
\[B=\begin{bmatrix}a_{0}&b_{0}&0&0&\cdots\\ b_{0}&a_{1}&b_{1}&0&\cdots\\ 0&b_{1}&a_{2}&b_{2}&\cdots\\ 0&0&b_{2}&a_{3}&\ddots\\ \vdots&\vdots&\ddots&\ddots&\ddots\end{bmatrix} \tag{11}\]
then it is related to a family of monic orthogonal polynomials given by \(p_{0}(x)=1,p_{1}(x)=(x-a_{0}),\)
\[p_{n}(x)=(x-a_{n-1})p_{n-1}(x)-b_{n-1}^{2}p_{n-2}(x),\;\text{for all}\;n\geq 2 \tag{12}\]
Note that here equation (12) is obtained by expanding the determinant of \((xI-B_{n})\) using the last row, where \(B_{n}\) is the \(n\times n\) truncated matrix of \(B\). More information on this can be found in the well known classic on Hilbert space linear transformations [37]. Some applications of this idea to quantum theory with worked out examples can be seen in [12].
Let \(\mu(\cdot)=\left\|E(\cdot)v\right\|^{2}\) be the probability measure defined on the Borel \(\sigma\)-field of the real line, where \(E\) is the spectral measure associated to \(B\). Clearly it is supported on the spectrum of \(B\). Then the normalized polynomials \(\{p_{n}(x)/\|p_{n}\|:\;n\geq 0\}\) form an orthonormal basis of \(L^{2}(\mu)\) (for details, see [37]). Moreover, the \(m\)-th moment of \(\mu\) is given by
\[\langle v,\;B^{m}v\rangle=\int\limits_{-\infty}^{\infty}\lambda^{m}\;d\langle v,\;E_{B}(\lambda)v\rangle,\;\text{for all}\;m\geq 0.\]
Conversely, given any compactly supported probability measure \(\mu\) on the real line we can take \(B\) as the operator 'multiplication by \(x\)' on \(L^{2}(\mu)\) and the cyclic vector \(v\) as the constant function \(1\). We can observe the tridiagonal form of \(B\) with respect to the basis of normalized orthogonal polynomials. The coefficients \(\{a_{n},b_{n}:n\geq 0\}\) are known as the Jacobi parameters of the measure \(\mu.\) Here \(B\) is a self-adjoint dilation of the moment sequence of the probability measure. This motivates us to construct such a tridiagonal operator matrix \(B\) for a self-adjoint dilation of an operator sequence \(\{A_{n}:\;n\geq 0\}.\) Such tri-diagonal blocks, known as generalized Jacobi \(3\)-diagonal relations, are well known in quantum theory (see [1]).
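The passage from moments to the Jacobi parameters \(\{a_{n},b_{n}\}\) can be carried out numerically. The following sketch (assuming numpy, and using as an example the semicircle moments, i.e. the Catalan numbers, which are not taken from the text) recovers the tridiagonal matrix of Equation (11) from a Cholesky factorisation of the Hankel matrix of moments and checks that its compressions return the moments.

```python
# Moments -> Jacobi matrix: with H = (m_{i+j}) and H1 = (m_{i+j+1}),
# the Cholesky factor H = L L^T gives B = L^{-1} H1 L^{-T}, the tridiagonal
# matrix of Equation (11); <e_0, B^k e_0> then reproduces the moments.
import numpy as np
from math import comb

# moments of the semicircle law on [-2,2]: even moments are Catalan numbers
m = [comb(j, j // 2) // (j // 2 + 1) if j % 2 == 0 else 0 for j in range(13)]

N = 6                                                   # truncation size
H  = np.array([[m[i + j]     for j in range(N)] for i in range(N)], float)
H1 = np.array([[m[i + j + 1] for j in range(N)] for i in range(N)], float)
L = np.linalg.cholesky(H)
B = np.linalg.solve(L, H1) @ np.linalg.inv(L).T

e0 = np.zeros(N); e0[0] = 1.0
for k in range(2 * N - 1):                 # Gauss quadrature: exact for k <= 2N-1
    assert abs(e0 @ np.linalg.matrix_power(B, k) @ e0 - m[k]) < 1e-8
print("a_n:", np.round(np.diag(B), 8))                  # 0 for the semicircle
print("b_n:", np.round(np.diag(B, 1), 8))               # 1 for the semicircle
```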
**Lemma 2.7**.: _Let \(B\) be a bounded operator on some Hilbert space \(\mathcal{K}\). Let \(\mathcal{H}\) be a closed subspace of \(\mathcal{K}.\) Assume_
\[\mathcal{K}=\overline{span}\{B^{n}h:h\in\mathcal{H},n\in\mathbb{Z}_{+}\}. \tag{13}\]
_Then \(\mathcal{K}\) decomposes as a direct sum of Hilbert spaces_
\[\mathcal{K}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\oplus\mathcal{H}_{2}\oplus\cdots\]
_where \(\mathcal{H}_{0}=\mathcal{H}\) and with respect to this decomposition the operator \(B\) has the 'upper Hessenberg' form:_
\[B=\begin{bmatrix}B_{00}&B_{01}&B_{02}&B_{03}&\cdots\\ B_{10}&B_{11}&B_{12}&B_{13}&\cdots\\ 0&B_{21}&B_{22}&B_{23}&\cdots\\ 0&0&B_{32}&B_{33}&\cdots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix}\]
_where \(\mathcal{H}_{n}=\overline{B_{n(n-1)}(\mathcal{H}_{n-1})}\) for every \(n\). Conversely any such \(B\) satisfies Equation (13)._
Proof.: Take \(\mathcal{H}_{n]}=\overline{\operatorname{span}}\{B^{m}h:0\leq m\leq n-1,h\in\mathcal{H}\}\) and \(\mathcal{H}_{n}=\mathcal{H}_{n]}\bigcap\mathcal{H}_{(n-1)]}^{\perp}\) for \(n\geq 2\) with \(\mathcal{H}_{1]}=\mathcal{H}_{1}.\) Then clearly \(\mathcal{K}=\oplus_{n\geq 1}\mathcal{H}_{n},\) and as \(B(\mathcal{H}_{n]})\subseteq\mathcal{H}_{(n+1)]},\) \(B(\mathcal{H}_{n})\subseteq\oplus_{m=0}^{n+1}\mathcal{H}_{m+1}.\) Consequently, the operator \(B\) has the form described above. The range condition and the converse statement are easy to see.
**Theorem 2.8**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of self-adjoint operators in \(\mathcal{B}(\mathcal{H})\) with \(A_{0}=I\), admitting a minimal self-adjoint dilation \(B\) in \(\mathcal{B}(\mathcal{K})\) for some Hilbert space \(\mathcal{K}\). Then the space \(\mathcal{K}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\oplus\cdots\) ( \(\mathcal{H}_{0}=\mathcal{H}\)), so that the operator \(B\) has the tridiagonal form:_
\[B=\begin{bmatrix}B_{00}&B_{10}^{*}&0&0&\cdots\\ B_{10}&B_{11}&B_{21}^{*}&0&\cdots\\ 0&B_{21}&B_{22}&B_{32}^{*}&\cdots\\ 0&0&B_{32}&B_{33}&\cdots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix}, \tag{14}\]
_and \(\mathcal{H}_{n}=\overline{B_{n(n-1)}(\mathcal{H}_{n-1})}\) for all \(n\geq 1.\)_
Proof.: From the previous lemma, \(B\) has upper Hessenberg form. But since \(B\) is self-adjoint, \(B_{ij}=B_{ji}^{*}=0\) for \(j>(i+1).\)
Consider the setup of this theorem. We try to determine the blocks using the sequence \(\{A_{n}\}_{n\geq 0}\) and the dilation property. It is done recursively, and to do this we need to invert some operators. In general, it is quite possible that some of these operators are not invertible and we may have to modify the construction. For the moment we assume invertibility of the concerned operators as and when required.
Since \(B\) is a self-adjoint dilation for the moment sequence \(\{A_{n}\}_{n\geq 0}\) we have \((B^{n})_{00}=A_{n}\) for every \(n\geq 0\). Clearly, \(B_{00}=A_{1}\) and \(B_{10}\) is obtained as,
\[A_{2}=(B^{2})_{00}=B_{00}^{2}+B_{10}^{*}B_{10}=A_{1}^{2}+B_{10}^{*}B_{10}.\]
That is, \(B_{10}^{*}B_{10}=A_{2}-A_{1}^{2}\). Since \((A_{2}-A_{1}^{2})\geq 0\) (see Remark 2.2), we can choose \(B_{10}=(A_{2}-A_{1}^{2})^{1/2}.\) Firstly, we explain the process to compute the diagonal block \(B_{11}\) and then establish the recurrence relation to obtain diagonal and lower diagonal blocks. From the Equation (14), we have
\[A_{3}=(B^{3})_{00}=\sum_{0\leq r_{1},r_{2}\leq 2}B_{0r_{1}}B_{r_{1}r_{2}}B_{r_{2 }0}=\sum_{0\leq r_{1},r_{2}\leq 1}B_{0r_{1}}B_{r_{1}r_{2}}B_{r_{2}0},\]
since \(B_{0r_{1}}B_{r_{1}r_{2}}B_{r_{2}0}=0\) when either \(r_{1}\) or \(r_{2}\) is \(2.\) This implies that
\[A_{3}=B_{00}^{3}+B_{00}B_{01}B_{10}+B_{01}B_{10}B_{00}+B_{01}B_{11}B_{10}\]
\[=B_{00}^{3}+B_{00}B_{10}^{*}B_{10}+B_{10}^{*}B_{10}B_{00}+B_{10}^{*}B_{11}B_{10}.\]
By substituting the first column information we get
\[B_{11}=(A_{2}-A_{1}^{2})^{-1/2}\Big{[}(A_{3}-A_{1}^{3})-A_{1}(A_{2}-A_{1}^{2})-( A_{2}-A_{1}^{2})A_{1}\Big{]}(A_{2}-A_{1}^{2})^{-1/2}.\]
The formulae for the diagonal and off-diagonal blocks are as follows. Firstly, note that each diagonal block is a self-adjoint operator and it is computed by the compression of odd powers of \(B.\) Suppose that the first \((n-1)\) columns of \(B\) are known; then \(B_{nn}\) is computed as follows:
\[A_{2n-1} = (B^{2n-1})_{00}\] \[= \sum_{0\leq r_{1},r_{2},\ldots,r_{2(n-1)}\leq 2n-2}B_{0r_{1}}B_{r_{ 1}r_{2}}\cdots B_{r_{2(n-1)}0}\] \[= \sum_{0\leq r_{1},r_{2},\ldots,r_{2(n-1)}\leq 2n-2\atop\&\ (r_{n-1},r_{n}) \neq(n,n)}B_{0r_{1}}B_{r_{1}r_{2}}\cdots B_{r_{2(n-1)}0}+B_{01}\cdots B_{(n-1 )n}B_{nn}B_{n(n-1)}\cdots B_{10}\] \[= \sum_{0\leq r_{1},r_{2},\ldots,r_{2(n-1)}\leq 2n-2\atop\&\ (r_{n-1},r_{n}) \neq(n,n)}B_{0r_{1}}B_{r_{1}r_{2}}\cdots B_{r_{2(n-1)}0}+B_{10}^{*}\cdots B_{n (n-1)}^{*}B_{nn}B_{n(n-1)}\cdots B_{10}.\]
This implies that
\[B_{nn}=\Big{(}\prod_{i=1}^{n}B_{i(i-1)}^{*}\Big{)}^{-1}\Big{[}A_{2n-1}\ -\sum_{0\leq r_{1},r_{2},\ldots,r_{2(n-1)}\leq 2n-2\atop\&\ (r_{n-1},r_{n})\neq(n,n)}B_{0r_{1}}B_{r_{1}r_{2}}\cdots B_{r_{2(n-1)}0}\Big{]}\Big{(}\prod_{i=1}^{n}B_{i(i-1)}^{*}\Big{)}^{*-1}.\]
Now notice that the lower diagonal block in the first column is given by \(B_{10}=(A_{2}-A_{1}^{2})^{1/2}.\) These lower diagonal blocks can be obtained by the compression of even powers of \(B.\) Suppose that \((n-1)\) columns of \(B\) are known; then \(B_{(n+1)n}\) is obtained as below:
\[A_{2n}=(B^{2n})_{00}=\sum_{0\leq r_{1},r_{2},\ldots,r_{2n-1}\leq 2n-1}B_{0r_{1}}B_{r_{1}r_{2}}\cdots B_{r_{2n-1}0},\]
and the term containing the new lower diagonal block is isolated from this sum in the same manner as above.
**Theorem 2.9**.: _If all the blocks defined by the above recursive formulae are well-defined bounded operators, then these formulae provide a self-adjoint minimal dilation with block tridiagonal form as above._
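The first steps of the recursion are easy to test numerically. The following sketch (assuming numpy/scipy, that \(A_{2}-A_{1}^{2}\) is invertible, and with a moment sequence generated as the compression of powers of a random self-adjoint matrix) checks that the recovered blocks \(B_{00},B_{10},B_{11}\) reproduce \(A_{1},A_{2},A_{3}\).

```python
# First steps of the recursion: B_00 = A_1, B_10 = (A_2 - A_1^2)^{1/2}, and
# B_11 as above; the 2x2-block truncation then reproduces A_1, A_2, A_3.
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

rng = np.random.default_rng(2)
C = rng.standard_normal((7, 7)); C = (C + C.T) / 2      # self-adjoint on K
P = np.zeros((3, 7)); P[:, :3] = np.eye(3)              # H = first 3 coords
A = [P @ np.linalg.matrix_power(C, n) @ P.T for n in range(4)]

B00 = A[1]
S = A[2] - A[1] @ A[1]                                  # = B_10^* B_10 >= 0
B10 = sqrtm(S).real
Sm = fractional_matrix_power(S, -0.5).real              # (A_2 - A_1^2)^{-1/2}
B11 = Sm @ (A[3] - A[1] @ A[1] @ A[1] - A[1] @ S - S @ A[1]) @ Sm

Btr = np.block([[B00, B10], [B10, B11]])                # two-level truncation
for n in range(1, 4):
    assert np.allclose(np.linalg.matrix_power(Btr, n)[:3, :3], A[n])
print("B_00, B_10, B_11 reproduce A_1, A_2, A_3")
```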
**Example 2.10**.: _Let \(T\in\mathcal{B}(\mathcal{H})\) be a self-adjoint operator. We define a moment sequence \(\{A_{n}\}_{n\geq 0}\) by_
\[A_{n}=\left\{\begin{array}{cl}2^{\frac{n-2}{2}}T^{n}&\text{ if $n$ is even}\\ 0&\text{ if $n$ is odd}\;.\end{array}\right.\]
_Then \(V\) acting on \(\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{H}\),_
\[V=\begin{bmatrix}0&T&0\\ T&0&T\\ 0&T&0\end{bmatrix}\]
_is a self-adjoint dilation of \(\{A_{n}:n\geq 0\}\)._
This example can be generalized as follows. Suppose \(\{m_{n}:n\geq 0\}\) is the moment sequence of a compactly supported probability measure \(\mu\) on \(\mathbb{R}.\) Let \(T\) be a bounded self-adjoint operator on a Hilbert space \(\mathcal{H}\). Then \(\{m_{n}T^{n}:n\geq 0\}\) is an operator moment sequence admitting a self-adjoint dilation. Take the dilation space \(\mathcal{K}=\mathcal{H}\otimes L^{2}(\mu)\) with \(\mathcal{H}\) identified as a subspace by identifying \(h\in\mathcal{H}\) with \(h\otimes 1\), where \(1\) is the constant function \(1\) in \(L^{2}(\mu).\) Let \(B\) be the tridiagonal form of the 'multiplication by \(x\)' operator on \(L^{2}(\mu)\) with cyclic vector \(1\) as in Equation (11). Then \(T\otimes B\) is a self-adjoint dilation of \(\{m_{n}T^{n}:n\geq 0\}\):
\[T\otimes B=\begin{bmatrix}a_{0}T&b_{0}T&0&0&\cdots\\ b_{0}T&a_{1}T&b_{1}T&0&\ldots\\ 0&b_{1}T&a_{2}T&b_{2}T&\ldots\\ 0&0&b_{2}T&a_{3}T&\ddots\\ \vdots&\vdots&\ddots&\ddots&\ddots\end{bmatrix}.\]
An operator \(T\in\mathcal{B}(\mathcal{H})\) is called _quasinormal_ if \(T(T^{*}T)=(T^{*}T)T.\) We recall that \(T\) is quasinormal if and only if \((T^{*}T)^{k}=T^{*k}T^{k}\) for all \(k\in\mathbb{Z}_{+}\)[27].
**Example 2.11**.: _Let \(T\in\mathcal{B}(\mathcal{H})\) be a quasinormal operator. Then the moment sequence \(\{A_{n}\}_{n\geq 0}\) defined by_
\[A_{n}=\left\{\begin{array}{cl}2^{\frac{n-2}{2}}T^{*\frac{n}{2}}T^{\frac{n}{2 }}&\text{ if $n$ is even}\\ 0&\text{ if $n$ is odd}\end{array}\right.\]
_admits a self-adjoint dilation. In fact, \(V\) acting on \(\mathcal{H}\oplus\mathcal{H}\oplus\mathcal{H}\) defined by_
\[V=\begin{bmatrix}0&T^{*}&0\\ T&0&T^{*}\\ 0&T&0\end{bmatrix}\]
_is a self-adjoint dilation of \(\{A_{n}:n\geq 0\}\)._
Like before, this example can also be generalized to give self-adjoint dilations for operator moment sequences \(\{1,0,m_{2}T^{*}T,0,m_{4}(T^{*})^{2}T^{2},0,\ldots\}\), where \(\{m_{n}:n\geq 0\}\) is the moment sequence of a compactly supported probability measure \(\mu\) on \(\mathbb{R}\) which is symmetric around \(0\). Such a symmetry ensures that the odd moments and also the diagonal Jacobi parameters of the measure are all equal to \(0\).
## 3. Unitary and isometric dilations
In this section, we mainly discuss necessary and sufficient conditions on operator sequences to admit unitary or isometric dilations. Let \(\{A_{n}\}_{n\geq 0}\) be a sequence in \(\mathcal{B}(\mathcal{H})\) with \(A_{0}=I\). Then \(\{A_{n}\}_{n\geq 0}\) is said to admit a unitary (respectively isometric) dilation if there is a Hilbert space \(\mathcal{K}\) containing \(\mathcal{H}\) and a unitary (respectively an isometry) \(U\in\mathcal{B}(\mathcal{K})\) such that
\[A_{n}=P_{\mathcal{H}}U^{n}|_{\mathcal{H}},\text{ for all }\ n\geq 0. \tag{15}\]
Obviously every unitary dilation is an isometric dilation. The Sz.-Nagy dilation theorem implies in particular that every isometry has a power dilation to a unitary. Therefore, if a sequence admits an isometric dilation it also admits a unitary dilation. Consequently, an operator sequence admits an isometric dilation if and only if it admits a unitary dilation. However, there is a difference in the notion of minimality. With notation as above, a unitary dilation \(U\) acting on \(\mathcal{K}\) is minimal if
\[\mathcal{K}=\overline{\text{span}}\{U^{n}h:n\in\mathbb{Z},h\in\mathcal{H}\}.\]
On the other hand, an isometric dilation \(U\) acting on \(\mathcal{K}\) is minimal if
\[\mathcal{K}=\overline{\text{span}}\{U^{n}h:n\in\mathbb{Z}_{+},h\in\mathcal{H}\}.\]
The classical Szego kernel and Poisson kernel [20] (also see [15] for more details on Poisson kernel) are denoted and defined by
\[S(w,z) =\frac{1}{1-\overline{w}z}\ for\ all\ z,w\in\mathbb{D}.\] \[P_{r}(\theta) =\frac{1-r^{2}}{1-2r\cos\theta+r^{2}}\ for\ all\ 0\leq r<1,0\leq\theta\leq 2\pi.\]
A Poisson kernel can be written using the Szego kernel:
\[P_{r_{1}r_{2}}(\theta-t)=S(w,z)+\overline{S(w,z)}-1\text{ for all }z,w\in \mathbb{D}\text{ and }z=r_{1}e^{i\theta},w=r_{2}e^{it}.\]
Moreover, \(P_{r}(\theta)\geq 0\) for all \(0\leq r<1\) and \(0\leq\theta\leq 2\pi\).
Keeping Szego and Poisson kernels in mind, we define an operator valued kernel function for a sequence of operators. Let \(\mathbf{A}=\{A_{n}\}_{n\geq 0}\) be a sequence of contractions in \(\mathcal{B}(\mathcal{H})\). The associated Szego kernel function \(S_{\mathbf{A}}:\mathbb{D}\rightarrow\mathcal{B}(\mathcal{H})\) is defined as
\[S_{\mathbf{A}}(z)=\sum_{n=0}^{\infty}z^{n}A_{n}^{*}\text{ for all }z\in\mathbb{D}. \tag{16}\]
We define the associated Poisson kernel function by
\[P_{\mathbf{A}}(z)=S_{\mathbf{A}}(z)+S_{\mathbf{A}}(z)^{*}-I\text{ for all }z\in\mathbb{D}. \tag{17}\]
F. H. Vasilescu introduced operator valued Poisson kernel functions [40] for \(d\)-tuples of operators, using defect operators, to study holomorphic functional calculus. Such Poisson kernel functions cannot be extended to operator valued sequences, as we do not have the semigroup property (i.e. \(A_{n}A_{m}=A_{n+m}\) may not hold). In the next result, we discuss necessary and sufficient criteria for isometric and unitary dilations in terms of the Poisson kernel.
**Theorem 3.1**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence in \(\mathcal{B}(\mathcal{H})\) with \(A_{0}=I\). Then the following are equivalent:_
_(i) \(\{A_{n}\}_{n\geq 0}\) admits a unitary/isometric dilation._
_(ii) For every \(n\geq 0\) and \(c_{0},c_{1},\ldots,c_{n}\in\mathbb{C}\),_
\[\sum_{\ell,k=0}^{n}\bar{c}_{\ell}c_{k}A_{k-\ell}\geq 0, \tag{18}\]
_where \(A_{-n}:=A_{n}^{*}\) for \(n\in\mathbb{N}\)._
_(iii) \(P_{\mathbf{A}}(z)\geq 0\) in \(\mathcal{B}(\mathcal{H})\) for all \(z\in\mathbb{D}\)._
Before coming to the proof of this result, we point out that there is a subtle point here. The condition in Equation (18) is a priori weaker than having complete positivity of the \(B(\mathcal{H})\) valued
kernel \(K:\mathbb{Z}\times\mathbb{Z}\to B(\mathcal{H})\) defined by:
\[K(k,\ell)=A_{(\ell-k)},\ k,\ell\in\mathbb{Z} \tag{19}\]
as that would mean,
\[\sum_{\ell,k=0}^{n}\langle g_{k},A_{\ell-k}g_{l}\rangle\geq 0,\ \text{for all}\ g_{0},g_{1},\ldots,g_{n}\in\mathcal{H},n\in\mathbb{N}.\]
Proof.: \((i)\implies(ii)\) Suppose that \((i)\) holds. Then Equation (15) holds. For every \(N\geq 0\) and \(g\in\mathcal{H}\), we see that
\[\left\langle g,\ \sum_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}A_{(k-\ell)}g\right\rangle =\left\langle\begin{bmatrix}c_{0}g\\ c_{1}g\\ \vdots\\ c_{N}g\end{bmatrix},\ \begin{bmatrix}I&U&\cdots&U^{N}\\ U^{*}&I&\cdots&U^{N-1}\\ \vdots&\vdots&\ddots&\vdots\\ U^{*N}&U^{*N-1}&\cdots&I\end{bmatrix}\begin{bmatrix}c_{0}g\\ c_{1}g\\ \vdots\\ c_{N}g\end{bmatrix}\right\rangle\] \[=\left\langle\sum_{\ell=0}^{N}c_{\ell}U^{\ell}g,\ \sum_{\ell=0}^{N}c_{\ell}U^{\ell}g\right\rangle\] \[\geq 0.\]
\((ii)\implies(i)\) Consider the operator system \(\mathcal{S}=\Big{\{}\sum\limits_{k=-N}^{N}c_{k}e^{ik\theta}:\ N\geq 0\Big{\}}\subset C(\mathbb{T})\). Define \(\Phi\colon\mathcal{S}\to\mathcal{B}(\mathcal{H})\) by \(\Phi(e^{in\theta})=A_{n}\) for every \(n\in\mathbb{Z}\). Suppose that \(f\in\mathcal{S}\) is strictly positive; then by the Fejer-Riesz theorem (see [28, Lemma 2.5]), we see that \(f(e^{i\theta})=\sum\limits_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}e^{i(k-\ell)\theta}\) for some \(c_{0},c_{1},\ldots,c_{N}\) and hence
\[\Phi(f(e^{i\theta}))=\sum_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}\Phi(e^{i(k- \ell)\theta})=\sum_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}A_{k-\ell}\geq 0.\]
Further, if \(g\in\mathcal{S}\) is non-negative, then \(g+\epsilon\cdot 1\) is strictly positive for every \(\epsilon>0\) and so \(\Phi(g+\epsilon\cdot 1)=\Phi(g)+\epsilon I\geq 0\) for every \(\epsilon>0\). Thus \(\Phi(g)\geq 0\); it follows that \(\Phi\) is a positive map and hence bounded. By the Stone-Weierstrass theorem, \(\mathcal{S}\) is dense in \(C(\mathbb{T})\). So \(\Phi\) can be extended to a positive map on \(C(\mathbb{T})\) by [28, Exercise 2.2]. Again we denote it by \(\Phi\). Therefore, it follows from [28, Theorem 3.11] that \(\Phi\) is a completely positive map on \(C(\mathbb{T})\).
Now by Stinespring's theorem there exists a triple \((\mathcal{K},\pi,V)\) where \(\mathcal{K}\) is a Hilbert space, \(\pi:C(\mathbb{T})\to B(\mathcal{K})\) is a unital \(*\)-homomorphism, \(V\colon\mathcal{H}\to\mathcal{K}\) a bounded linear map satisfying
\[\Phi(f)=V^{*}\pi(f)V,\ \forall f\in C(\mathbb{T}). \tag{20}\]
Since \(\Phi\) is unital, \(V\) is an isometry and we may consider \(\mathcal{H}\) as a subspace of \(\mathcal{K}\) by identifying \(h\in\mathcal{H}\) with \(Vh\) in \(\mathcal{K}\). Then Equation (20) reads as
\[\Phi(f)=P_{\mathcal{H}}\pi(f)|_{\mathcal{H}},f\in C(\mathbb{T}).\]
Since the function \(f_{0}(z)=z\) is a unitary element of \(C(\mathbb{T})\), \(\pi(f_{0})\) is a unitary in \(\mathcal{B}(\mathcal{K})\). Taking \(U=\pi(f_{0})\) and considering \(f_{0}^{n}(z)=z^{n}\), for \(n\geq 0\), we get
\[A_{n}=P_{\mathcal{H}}U^{n}|_{\mathcal{H}},n\geq 0.\]
\((i)\implies(iii)\) Suppose \(\{A_{n}:n\geq 0\}\) admits a unitary dilation. Then there is a unitary operator \(U\in\mathcal{B}(\mathcal{K})\) for some Hilbert space \(\mathcal{K}\supseteq\mathcal{H}\) such that
\[A_{n}= P_{\mathcal{H}}U^{n}|_{\mathcal{H}}\ \text{for all}\ n\geq 0. \tag{21}\]
First notice that for each \(z\in\mathbb{D}\), \((1-zU^{*})\) is invertible and \((1-zU^{*})^{-1}=1+zU^{*}+z^{2}U^{*2}+\cdots\) and
\[\|I+zU^{*}+z^{2}U^{*2}+\cdots\|\leq 1+|z|+|z|^{2}+\cdots=\frac{1}{1-|z|}.\]
Now a simple computation ensures that \((I-zU^{*})^{-1}+(I-\overline{z}U)^{-1}-I=(1-|z|^{2})(I-\overline{z}U)^{-1}\big{(}(I-\overline{z}U)^{-1}\big{)}^{*}.\) Also, it is immediate to see that \((I-zU^{*})^{-1}+(I-\overline{z}U)^{-1}-I\geq 0\) as \(|z|<1.\) It implies that \(P_{\mathcal{H}}\Big{(}(I-zU^{*})^{-1}+(I-\overline{z}U)^{-1}-I\Big{)}|_{\mathcal{H}}\geq 0.\) Using the dilation hypothesis, Equation (21), we see that
\[\|I\|+\|zA_{1}^{*}\|+\|z^{2}A_{2}^{*}\|+\cdots+\|z^{n}A_{n}^{*}\|\leq 1+|z|\|A_{1}^{*}\|+|z^{2}|\|A_{2}^{*}\|+\cdots+|z^{n}|\|A_{n}^{*}\|\] \[\leq 1+|z|\|U^{*}\|+|z^{2}|\|U^{*2}\|+\cdots+|z^{n}|\|U^{*n}\|\] \[= 1+|z|+|z|^{2}+\cdots+|z|^{n}\] \[\leq \frac{1}{1-|z|}\text{ for all }z\in\mathbb{D}.\]
Thus the series \(\sum_{n=0}^{\infty}z^{n}A_{n}^{*}\) converges absolutely in norm for all \(z\in\mathbb{D}.\) Therefore the map \(S_{\mathbf{A}}:\mathbb{D}\to\mathcal{B}(\mathcal{H})\) is well defined. Further, we compute that
\[S_{\mathbf{A}}(z)+S_{\mathbf{A}}(z)^{*}-I =\sum_{n=0}^{\infty}z^{n}A_{n}^{*}+\sum_{m=0}^{\infty}\overline{z}^{m}A_{m}-I\] \[=\sum_{n=0}^{\infty}z^{n}P_{\mathcal{H}}U^{*n}|_{\mathcal{H}}+\sum_{m=0}^{\infty}\overline{z}^{m}P_{\mathcal{H}}U^{m}|_{\mathcal{H}}-I\] \[=P_{\mathcal{H}}\Big{(}\sum_{n=0}^{\infty}z^{n}U^{*n}\Big{)}|_{\mathcal{H}}+P_{\mathcal{H}}\big{(}\sum_{m=0}^{\infty}\overline{z}^{m}U^{m}\big{)}|_{\mathcal{H}}-I\] \[=P_{\mathcal{H}}\Big{(}(I-zU^{*})^{-1}+(I-\overline{z}U)^{-1}-I\Big{)}|_{\mathcal{H}}.\]
Since \(P_{\mathcal{H}}\Big{(}(I-zU^{*})^{-1}+(I-\overline{z}U)^{-1}-I\Big{)}|_{ \mathcal{H}}\geq 0,\) therefore we have \(P_{\mathbf{A}}(z)\geq 0\) for all \(z\in\mathbb{D}.\)
\((iii)\implies(i)\) Assume that \(P_{\mathbf{A}}(z)\geq 0\) for all \(z\in\mathbb{D}.\) Let \(0\leq r<1,\) then for each \(l\in\mathbb{Z},\) we have
\[\frac{1}{2\pi}\int_{0}^{2\pi}e^{il\theta}P_{\mathbf{A}}(re^{i\theta})d\theta =\frac{1}{2\pi}\int_{0}^{2\pi}e^{il\theta}\big{(}\sum_{n=0}^{\infty}r^{n}e^{in\theta}A_{n}^{*}+\sum_{m=0}^{\infty}r^{m}e^{-im\theta}A_{m}-I\big{)}d\theta\] \[=\left\{\begin{array}{rl}r^{l}A_{l}&\text{ if }l\geq 0\\ r^{-l}A_{-l}^{*}&\text{ if }l<0.\end{array}\right. \tag{22}\]
Let us define a map \(\varphi\) from an operator system \(\mathcal{S}\) to \(\mathcal{B}(\mathcal{H})\) by
\[\varphi\big{(}\sum_{n=0}^{N}p_{n}e^{in\theta}+\sum_{m=0}^{N}\overline{q_{m}} e^{-im\theta}\big{)}=\sum_{n=0}^{N}p_{n}A_{n}+\sum_{m=0}^{N}\overline{q_{m}}A_{m}^ {*}\]
for all \(\sum_{n=0}^{N}p_{n}e^{in\theta}+\sum_{m=0}^{N}\overline{q_{m}}e^{-im\theta}\in\mathcal{S}.\) It is enough to show that the map \(\varphi:\mathcal{S}\to\mathcal{B}(\mathcal{H})\) is a positive map. We observe the following
\[\frac{1}{2\pi}\int_{0}^{2\pi}\Big{(}\sum_{n=0}^{N}p_{n}e^{in \theta}\Big{)}P_{\mathbf{A}}(re^{i\theta})d\theta =\sum_{n=0}^{N}p_{n}\frac{1}{2\pi}\int_{0}^{2\pi}e^{in\theta}P_{ \mathbf{A}}(re^{i\theta})d\theta\] \[=\sum_{n=0}^{N}p_{n}r^{n}A_{n}.\]
Similarly, we have
\[\frac{1}{2\pi}\int_{0}^{2\pi}\Big{(}\sum_{m=0}^{N}\overline{q_{m}}e^{-im \theta}\Big{)}P_{\mathbf{A}}(re^{i\theta})d\theta=\sum_{m=0}^{N}\overline{q_{m} }r^{m}A_{m}^{*}.\]
Combining all, we finally obtain that
\[\varphi\big{(}\sum_{n=0}^{N}p_{n}e^{in\theta}+\sum_{m=0}^{N}\overline {q_{m}}e^{-im\theta}\big{)} =\sum_{n=0}^{N}p_{n}A_{n}+\sum_{m=0}^{N}\overline{q_{m}}A_{m}^{*}\] \[=\lim_{r\to 1-}\big{(}\sum_{n=0}^{N}p_{n}r^{n}A_{n}+\sum_{m=0}^{N} \overline{q_{m}}r^{m}A_{m}^{*}\big{)}\] \[=\lim_{r\to 1-}\Big{(}\frac{1}{2\pi}\int_{0}^{2\pi}\big{(}\sum_{n= 0}^{N}p_{n}e^{in\theta}+\sum_{m=0}^{N}\overline{q_{m}}e^{-im\theta}\big{)}P_{ \mathbf{A}}(re^{i\theta})d\theta\Big{)}. \tag{23}\]
Notice that for all \(0\leq\theta\leq 2\pi\) and \(0\leq r<1,\) we have \(P_{\mathbf{A}}(re^{i\theta})\geq 0.\) It follows that \(\int_{0}^{2\pi}\big{(}\sum_{n=0}^{N}p_{n}e^{in\theta}+\sum_{m=0}^{N}\overline{q_{m}}e^{-im\theta}\big{)}P_{\mathbf{A}}(re^{i\theta})d\theta\geq 0\) whenever \(\sum_{n=0}^{N}p_{n}e^{in\theta}+\sum_{m=0}^{N}\overline{q_{m}}e^{-im\theta}\geq 0.\) This shows that \(\varphi\) is a positive map, which completes the proof.
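As a quick illustration of criterion (iii) (not needed in the sequel), take \(A_{n}=T^{n}\) for a contraction \(T\in\mathcal{B}(\mathcal{H})\). Then \(S_{\mathbf{A}}(z)=(I-zT^{*})^{-1}\) and a computation parallel to the one above gives
\[P_{\mathbf{A}}(z)=(I-zT^{*})^{-1}+(I-\overline{z}T)^{-1}-I=(I-zT^{*})^{-1}\big{(}I-|z|^{2}T^{*}T\big{)}(I-\overline{z}T)^{-1}\geq 0\ \text{ for all }z\in\mathbb{D},\]
since \((I-\overline{z}T)^{-1}=\big{(}(I-zT^{*})^{-1}\big{)}^{*}\) and \(I-|z|^{2}T^{*}T\geq 0\). Thus Theorem 3.1 recovers Sz.-Nagy's dilation theorem for contractions.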
Let \(V\) be a minimal isometric dilation, acting on some Hilbert space \(\mathcal{K}\), of a sequence of contractions \(\{A_{n}\}_{n\geq 0}\) on \(\mathcal{H}.\) In particular, \(\mathcal{K}=\overline{\operatorname{span}}\{V^{n}\mathcal{H}:n\geq 0\}\). Then by Lemma 2.7, \(\mathcal{K}\) decomposes as
\[\mathcal{K}=\mathcal{H}_{0}\oplus\mathcal{H}_{1}\oplus\cdots,\]
where \(\mathcal{H}_{0}=\mathcal{H}\) and with respect to this decomposition the operator \(V\) has the block form:
\[V=\begin{bmatrix}V_{00}&V_{01}&V_{02}&\cdots&V_{0n}&\cdots\\ V_{10}&V_{11}&V_{12}&\cdots&V_{1n}&\cdots\\ 0&V_{21}&V_{22}&\cdots&V_{2n}&\cdots\\ 0&0&V_{32}&\cdots&V_{3n}&\cdots\\ \vdots&\ddots&\ddots&\vdots&\vdots&\vdots\end{bmatrix}. \tag{24}\]
We wish to construct \(V_{ij}\)'s using the given sequence \(\{A_{n}\}.\) Recall that \(\mathcal{H}_{n]}=\overline{span}\{V^{m}h:0\leq m\leq n-1,h\in\mathcal{H}\}\) and \(\mathcal{H}_{n}=\mathcal{H}_{n]}\bigcap\mathcal{H}_{(n-1)]}^{\perp}\) for \(n\geq 1\) with \(\mathcal{H}_{0]}=\mathcal{H}_{0}.\)
We try to determine the blocks using the sequence \(\{A_{n}\}_{n\geq 0}\) and the dilation property. As in the case of the self-adjoint dilation, we assume invertibility of the relevant operators as and when required. Since \(V\) is an isometric dilation for the moment sequence \(\{A_{n}\}_{n\geq 0}\) we have \(P_{\mathcal{H}}V^{n}\big{|}_{\mathcal{H}}=A_{n}\). In other words, \((V^{n})_{00}=A_{n}\) for \(n\geq 0.\) Let us compute the first column of \(V\). Clearly, \(V_{00}=A_{1}\) and since \(V\) is an isometry, we get
\[I=(V^{*}V)_{00}=V_{00}^{*}V_{00}+V_{10}^{*}V_{10}=A_{1}^{*}A_{1}+V_{10}^{*}V_{ 10}.\]
One can choose that \(V_{10}=(I-A_{1}^{*}A_{1})^{1/2}\) as \(A_{1}\) is a contraction. The first row of \(V\) is given by the following recursive relation: For every \(n\in\mathbb{N},\) we have
\[A_{n} =(V^{n})_{00}\] \[=\sum_{0\leq r_{1},r_{2},\ldots,r_{(n-1)}\leq n-1}V_{0r_{1}}V_{r_ {1}r_{2}}\cdots V_{r_{(n-1)}0}\] \[=\sum_{0\leq r_{1},r_{2},\cdots,r_{n-1}\leq n-1\ \&\ r_{1}\neq n-1}V_{0r_{1}}V_{r_{1}r_{2}}\cdots V_{r_{(n-1)}0}+\sum_{0\leq r _{2},\cdots,r_{n-1}\leq n-1}V_{0(n-1)}V_{(n-1)r_{2}}\cdots V_{r_{(n-1)}0}.\]
Since \(V_{ij}=0\) whenever \(i>j+1,\) it implies that
\[\sum_{0\leq r_{2},\cdots,r_{n-1}\leq n-1}V_{0(n-1)}V_{(n-1)r_{2}}\cdots V_{r_{(n -1)}0}=V_{0(n-1)}V_{(n-1)(n-2)}V_{(n-2)(n-3)}\cdots V_{10}.\]
Therefore,
\[V_{0(n-1)}=\Big{[}A_{n}-\sum_{0\leq r_{1},r_{2},\cdots,r_{n-1}\leq n-1\ \&\ r_{1}\neq n-1}\hskip-14.226378ptV_{0r_{1}}V_{r_{1}r_{2}}\cdots V_{r_{(n-1)}0}\Big{]}\Big{(}V_{(n-1)(n-2)}V_{(n-2)(n-3)}\cdots V_{10}\Big{)}^{-1}.\]
Next, we use the isometric property of \(V\) to obtain an expression for the \(n^{\text{th}}\) column of \(V\). For a fixed \(n\in\mathbb{N},\) suppose the columns \(0,1,\ldots,n-1\) are known; then \(V_{1n}\) is obtained by considering the inner product of the \(0^{\text{th}}\) column with the \(n^{\text{th}}\) column, which is zero. That is, \(V_{00}^{*}V_{0n}+V_{10}^{*}V_{1n}=0,\) and therefore
\[V_{1n}=-(I-A_{1}^{*}A_{1})^{-1/2}A_{1}^{*}V_{0n}.\]
Indeed \(V_{kn}\) for \(0<k\leq n\) is given by the inner product of the \((k-1)^{\text{th}}\) column with the \(n^{\text{th}}\) column which is again zero. That is, \(\sum\limits_{i=0}^{k}V_{i(k-1)}^{*}V_{in}=0.\) Equivalently, \(V_{k(k-1)}^{*}V_{kn}+\sum\limits_{i=0}^{k-1}V_{i(k-1)}^{*}V_{in}=0.\) This implies that
\[V_{kn}=-\big{(}V_{k(k-1)}^{*}\big{)}^{-1}\sum\limits_{i=0}^{k-1}V_{i(k-1)}^{*} V_{in}.\]
For \(k=n+1,\) we have \(\sum\limits_{i=0}^{n+1}V_{in}^{*}V_{in}=I\) and equivalently, \(V_{(n+1)n}^{*}V_{(n+1)n}+\sum\limits_{i=0}^{n}V_{in}^{*}V_{in}=I.\) It implies that
\[V_{(n+1)n}^{*}V_{(n+1)n}=I-\sum\limits_{i=0}^{n}V_{in}^{*}V_{in}.\]
One can choose that
\[V_{(n+1)n}=\big{[}I-\sum\limits_{i=0}^{n}V_{in}^{*}V_{in}\big{]}^{1/2}.\]
As a result, the recurrence relations are given by \(V_{00}=A_{1},\;V_{10}=(I-A_{1}^{*}A_{1})^{1/2},\)
\[V_{0(n-1)}=\Big{[}A_{n}-\sum\limits_{0\leq r_{1},r_{2},\cdots,r_{n-1}\leq n-1\;\&\;r_{1}\neq n-1}V_{0r_{1}}V_{r_{1}r_{2}}\cdots V_{r_{(n-1)}0}\Big{]}\Big{(}V_{(n-1)(n-2)}V_{(n-2)(n-3)}\cdots V_{10}\Big{)}^{-1}\]
and
\[V_{kn}=\left\{\begin{array}{ccc}-\big{(}V_{k(k-1)}^{*}\big{)}^{-1}\sum \limits_{i=0}^{k-1}V_{i(k-1)}^{*}V_{in}&\text{ if }\;0<k\leq n\\ \\ \Big{[}I-\sum\limits_{i=0}^{n}V_{in}^{*}V_{in}\Big{]}^{1/2}&\text{ if }\;k=n+1\end{array}\right.\]
for \(n>0.\)
The computations above lead to the following result.
**Theorem 3.2**.: _Let \(\{A_{n}\}_{n\geq 0}\) be a sequence of contractions on \(\mathcal{H}\) with \(A_{0}=I\), admitting an isometric dilation. If the inverses appearing in the recurrence relations above are well defined bounded operators, then these formulae provide a minimal isometric dilation with the blocks of \(V\) described as above._
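As a consistency check of these formulae (granting the invertibility assumption, here of \((I-A_{1}^{*}A_{1})^{1/2}\)), take \(A_{n}=T^{n}\) for a contraction \(T\). A short induction with the recurrence gives \(V_{00}=T\), \(V_{10}=(I-T^{*}T)^{1/2}\), all remaining entries of the first row equal to \(0\), and then \(V_{kn}=0\) for \(0<k\leq n\) and \(V_{(n+1)n}=I\), so that
\[V=\begin{bmatrix}T&0&0&\cdots\\ (I-T^{*}T)^{1/2}&0&0&\cdots\\ 0&I&0&\cdots\\ 0&0&I&\ddots\\ \vdots&\vdots&\ddots&\ddots\end{bmatrix},\]
which is the familiar Schaffer form of the Sz.-Nagy isometric dilation of \(T\).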
Now we look at unitary dilations.
**Theorem 3.3**.: _Suppose \(U\) is a unitary on a Hilbert space \(\mathcal{K}\) and \(\mathcal{H}\) is a closed subspace of \(\mathcal{K}\), satisfying \(\mathcal{K}=\overline{\operatorname{span}}\{U^{n}h:h\in\mathcal{H},n\in \mathbb{Z}\}.\) Then \(\mathcal{K}\) decomposes as_
\[\mathcal{K}=\cdots\oplus\mathcal{H}_{-1}\oplus\mathcal{H}_{0}\oplus\mathcal{H} _{1}\oplus\cdots,\]
_where \(\mathcal{H}_{0}=\mathcal{H}\) so that with respect to this decomposition \(U\) has the form:_
\[U=\left[\begin{array}{cccc|cccc}\ddots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots&\\ &I&0&0&0&0&0&\\ \ldots&0&I&0&0&0&0&\ldots\\ \hline\cdots&0&0&U_{0(-1)}&U_{00}&U_{01}&U_{02}&\ldots\\ \ldots&0&0&U_{1(-1)}&U_{10}&U_{11}&U_{12}&\ldots\\ \ldots&0&0&U_{2(-1)}&0&U_{21}&U_{22}&\ldots\\ \ldots&0&0&U_{3(-1)}&0&0&U_{32}&\ldots\\ &\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{array}\right].\]
Proof.: Take \(\mathcal{H}_{0]}=\mathcal{H}_{0}\) and \(\mathcal{H}_{n]}=\overline{\mathrm{span}}\{U^{m}h:0\leq m\leq n-1,h\in\mathcal{H}\}\) and \(\mathcal{H}_{n}=\mathcal{H}_{n]}\bigcap\mathcal{H}_{(n-1)]}^{\perp}\) for \(n\geq 1\). Set \(\mathcal{K}_{+}=\bigvee\limits_{n=0}^{\infty}U^{n}\mathcal{H}\) and \(\mathcal{K}_{-}=\mathcal{K}_{+}^{\perp}\). Then clearly \(\mathcal{K}_{+}=\oplus_{n\geq 0}\mathcal{H}_{n}\) and \(U(\mathcal{H}_{n]})\subseteq\mathcal{H}_{(n+1)]}\). It implies that the \(n\)-th column of the block matrix satisfies the desired property for all \(n\geq 0\). Notice that \(U^{*}(\mathcal{K}_{-})\subseteq\mathcal{K}_{-}\) as \(\mathcal{K}_{-}=\mathcal{K}_{+}^{\perp}\) and \(U(\mathcal{K}_{+})\subseteq\mathcal{K}_{+}\). We claim that
\[\bigcap\limits_{n=0}^{\infty}U^{*n}(\mathcal{K}_{-})=\{0\}.\]
Let \(x\in\bigcap\limits_{n=0}^{\infty}U^{*n}(\mathcal{K}_{-})\). For \(n\geq 0\) and \(h\in\mathcal{H}\), notice that \(x=U^{*n}x_{n}\) for some \(x_{n}\in\mathcal{K}_{-}\). Then \(\langle x,U^{*n}h\rangle=\langle U^{*n}x_{n},U^{*n}h\rangle=\langle x_{n},h\rangle=0\), and \(\langle x,U^{m}h\rangle=0\) for all \(m\geq 0\). It follows that \(x=0\). Hence \(U^{*}:\mathcal{K}_{-}\rightarrow\mathcal{K}_{-}\) is a shift with wandering subspace
\[\mathcal{H}_{-1}:=\mathcal{K}_{-}\ominus U^{*}\mathcal{K}_{-}.\]
Therefore \(\mathcal{K}_{-}\) can be decomposed as
\[\mathcal{K}_{-}=\cdots\oplus U^{*2}\mathcal{H}_{-1}\oplus U^{*}\mathcal{H}_{-1}\oplus\mathcal{H}_{-1}.\]
Now, it is enough to see that \(U(\mathcal{H}_{-1})\subseteq\mathcal{K}_{+}\). Consider \(x\in\mathcal{H}_{-1}\), then for any \(y\in\mathcal{K}_{-}\), we see that \(\langle Ux,y\rangle=\langle x,U^{*}y\rangle=0\), as \(x\perp U^{*}\mathcal{K}_{-}\). This completes the proof.
## 4. \(\mathcal{C}_{A}\)-class operators
Inspired by Sz.-Nagy's dilation theorem for contractions, Berger and Stampfli [11] obtained the following interesting theorem. Suppose \(T\in\mathcal{B}(\mathcal{H})\). Then the numerical radius \(w(T)\leq 1\) if and only if the operator sequence \(\{\frac{1}{2}T^{n},\ n\geq 1\}\) admits a unitary dilation. This led to the following definition. Fix \(\rho>0\). Then a bounded operator \(T\in\mathcal{B}(\mathcal{H})\) is said to be in the \(\mathcal{C}_{\rho}\)-class if there is a unitary operator \(U\) on some Hilbert space \(\mathcal{K}\supset\mathcal{H}\) such that \(T^{n}=\rho P_{\mathcal{H}}U^{n}\big{|}_{\mathcal{H}}\) for \(n\geq 1\) (see page 43 of [25]). Thanks to Sz.-Nagy's dilation theorem, the \(\mathcal{C}_{1}\)-class is precisely the set of all contractions in \(\mathcal{B}(\mathcal{H})\). Similarly, from the Berger-Stampfli theorem [11] we know that the \(\mathcal{C}_{2}\)-class is the set of all operators in \(\mathcal{B}(\mathcal{H})\) whose numerical radius is less than or equal to one. An operator \(T\) is in the \(\mathcal{C}_{\rho}\)-class (see Theorem 11.1 of [25]) if and only if
\[(\rho-2)\|(I-zT)h\|^{2}+2\text{Re}\big{\langle}h,\ (I-zT)h\big{\rangle}\geq 0, \ \text{for}\ \ |z|\leq 1,\ h\in\mathcal{H}. \tag{25}\]
M. A. Dritschel and H. J. Woerdeman [23] developed a model theory for the \(\mathcal{C}_{2}\)-class. Over the past few decades there has been an extensive study of \(\mathcal{C}_{\rho}\)-class operators; we refer to [5, 10, 11, 22, 17, 18, 24] and references therein. Generalizing this notion in a natural way, H. Langer introduced the \(\mathcal{C}_{A}\)-class. Let \(A\in\mathcal{B}(\mathcal{H})\) be a positive invertible operator. Then an operator \(T\in\mathcal{B}(\mathcal{H})\) is said to be of \(\mathcal{C}_{A}\)-class if the operator valued sequence \(\{A_{n}\}_{n\geq 1}\), where
\[A_{n}:=A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}},n\geq 1, \tag{26}\]
admits a unitary dilation. The only reference we could find regarding this notion is [25]. In particular, we could not find part (iii) of the following theorem in the literature.
**Theorem 4.1**.: _Let \(A\in\mathcal{B}(\mathcal{H})\) be a positive invertible operator and the operator \(T\in\mathcal{B}(\mathcal{H})\). Then the following are equivalent:_
1. \(T\in\mathcal{C}_{A}\)_-class._
2. _For every_ \(N\in\mathbb{N}\) _and_ \(c_{0},c_{1},\cdots,c_{N}\in\mathbb{C}\)_, we have_ \[\sum_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}\zeta_{A}(k-\ell)\geq 0\ \text{for each}\ N\in\mathbb{N},\] (27)
_where_ \(\zeta_{A}(n)\) _is defined as follows:_
\[\zeta_{A}(n):=\left\{\begin{array}{ll}A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}& \mbox{if }\,n>0;\\ I&\mbox{if }n=0;\\ A^{-\frac{1}{2}}T^{*-n}A^{-\frac{1}{2}}&\mbox{if }n<0.\end{array}\right.\]
* \(T\) _satisfies the following:_ \[\langle(I-zT)h,(A-2I)(I-zT)h\rangle+2\mathrm{Re}\langle h,(I-zT)h\rangle\geq 0 \mbox{ for all }|z|<1,h\in\mathcal{H}.\] (28)
Proof.: In view of Theorem 3.1, it is immediate to see (i) and (ii) are equivalent. Now, we claim that (i) and (iii) are equivalent.
(i) \(\Longrightarrow\) (iii): Let \(\mathbf{A}=\{A_{n}\}_{n\geq 0},\) where \(A_{n}=A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}\) for \(n\geq 1\) and \(A_{0}=I.\) Assume that \(T\in\mathcal{C}_{A}.\) Then by Theorem 3.1, \(P_{\mathbf{A}}(\overline{z})\geq 0\) for all \(z\in\mathbb{D}.\) In other words,
\[(I+zA^{-\frac{1}{2}}TA^{-\frac{1}{2}}+z^{2}A^{-\frac{1}{2}}T^{2}A^{-\frac{1}{2}}+\cdots)+(I+\overline{z}A^{-\frac{1}{2}}T^{*}A^{-\frac{1}{2}}+\overline{z}^{2}A^{-\frac{1}{2}}T^{*2}A^{-\frac{1}{2}}+\cdots)-I\geq 0.\]
It follows that
\[(I-zT)^{-1}+(I-\overline{z}T^{*})^{-1}+A-2I\geq 0.\]
Let \(h\in\mathcal{H},\) then for all \(z\in\mathbb{D},\) we have
\[\big{\langle}(I-zT)h,\big{(}(I-zT)^{-1}+(I-\overline{z}T^{*})^{-1}\big{)}(I-zT)h\big{\rangle}+\langle(I-zT)h,(A-2I)(I-zT)h\rangle\geq 0.\]
Since \(\big{\langle}(I-zT)h,\big{(}(I-zT)^{-1}+(I-\overline{z}T^{*})^{-1}\big{)}(I-zT )h\big{\rangle}=2Re\langle(I-zT)h,h\rangle,\) therefore
\[\langle(I-zT)h,(A-2I)(I-zT)h\rangle+2\mathrm{Re}\langle(I-zT)h,h\rangle\geq 0 \mbox{ for all }|z|<1,h\in\mathcal{H}.\]
(iii) \(\Longrightarrow\) (i): Assume that \(\langle(I-zT)h,(A-2I)(I-zT)h\rangle+2\mathrm{Re}\langle(I-zT)h,h\rangle\geq 0\) for all \(|z|<1,h\in\mathcal{H}.\) Then by reverse computation, we can show that \(P_{\mathbf{A}}(\overline{z})\geq 0\) for all \(z\in\mathbb{D}.\) Hence by Theorem 3.1, \(T\in\mathcal{C}_{A}.\)
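In particular, taking \(A=\rho I\) with \(\rho>0\) (a direct substitution), condition (iii) becomes
\[(\rho-2)\|(I-zT)h\|^{2}+2\mathrm{Re}\langle h,(I-zT)h\rangle\geq 0\ \text{ for all }\ |z|<1,\ h\in\mathcal{H},\]
which, after passing from \(|z|<1\) to \(|z|\leq 1\) by continuity, is exactly the classical criterion (25) for the \(\mathcal{C}_{\rho}\)-class.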
In the next result, employing standard techniques we provide another necessary and sufficient criterion in terms of positive maps. As an immediate application of Theorem 4.2, we recover a result of V. Istratescu [16] that if \(0\leq B\leq A,\) and \(A,B\) are invertible, then \(\mathcal{C}_{B}\subseteq\mathcal{C}_{A}.\)
**Theorem 4.2**.: _Let \(A\in\mathcal{B}(\mathcal{H})\) be a positive invertible operator. Then an operator \(T\in\mathcal{B}(\mathcal{H})\) is in \(\mathcal{C}_{A}\)-class if and only if the map \(\varphi_{A}:\mathcal{S}\subseteq C(\mathbb{T})\to\mathcal{B}(\mathcal{H})\) defined by_
\[\varphi_{A}\big{(}p(e^{i\theta})+\overline{q(e^{i\theta})}\big{)}:=p(T)+q(T)^{ *}+(A-I)\big{(}p(0)+\overline{q(0)}\big{)}I.\]
_is a positive map._
Proof.: Let \(T\in\mathcal{C}_{A}.\) Now, let \(f(e^{i\theta})=\sum\limits_{n=-N}^{N}a_{n}e^{in\theta}\in\mathcal{S}\) be strictly positive. Then, applying the Fejer-Riesz theorem, we see that \(f(e^{i\theta})=\sum\limits_{0\leq\ell,k\leq N}\overline{c_{\ell}}c_{k}e^{i(k-\ell)\theta}\) for some \(c_{0},\ldots,c_{N}\in\mathbb{C}.\) This implies that
\[\varphi_{A}(f(e^{i\theta})) =\sum\limits_{\ell=0}^{N}|c_{\ell}|^{2}+\sum\limits_{0\leq\ell \neq k\leq N}\overline{c_{\ell}}c_{k}T_{(k-\ell)}+(A-I)\sum\limits_{\ell=0}^{N }|c_{\ell}|^{2}\] \[=\sum\limits_{\ell=0}^{N}|c_{\ell}|^{2}A+\sum\limits_{0\leq l\neq k \leq N}\overline{c_{\ell}}c_{k}T_{(k-\ell)}\] \[=A^{\frac{1}{2}}\big{(}\sum\limits_{\ell=0}^{N}|c_{\ell}|^{2}I+ \sum\limits_{0\leq\ell\neq k\leq N}\overline{c_{\ell}}c_{k}A^{-\frac{1}{2}}T_{(k -\ell)}A^{-\frac{1}{2}}\big{)}A^{\frac{1}{2}}\] \[=A^{\frac{1}{2}}\Big{(}\sum\limits_{\ell,k=0}^{N}\overline{c_{\ell }}c_{k}\zeta_{A}(k-\ell)\Big{)}A^{\frac{1}{2}}.\]
As \(T\in\mathcal{C}_{A},\) by Theorem 4.1, we have \(\sum\limits_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}\zeta_{A}(k-\ell)\geq 0.\) Therefore \(\varphi_{A}(f)\geq 0\) whenever \(f\) is a strictly positive element of \(\mathcal{S}.\) Now, let \(g\in\mathcal{S}\) be positive. Then \(g+\varepsilon\cdot 1\) is strictly positive for any \(\varepsilon>0.\) Then \(\varphi_{A}(g)+\varepsilon A=\varphi_{A}(g+\varepsilon\cdot 1)\geq 0\) for all \(\varepsilon>0.\) Taking the limit \(\varepsilon\to 0,\) we obtain \(\varphi_{A}(g)\geq 0.\) Hence the map \(\varphi_{A}\) is positive.
Conversely, let \(\varphi_{A}\) be a positive map. In view of Theorem 4.1, it is enough to prove that \(\sum\limits_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}\zeta_{A}(k-\ell)\geq 0.\) Notice that \(\sum\limits_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}\zeta_{A}(k-l)=A^{-\frac{1} {2}}\varphi_{A}(\sum\limits_{0\leq\ell,k\leq N}\overline{c_{\ell}}c_{k}e^{i(k- \ell)\theta})A^{-\frac{1}{2}}.\) Since \(\varphi_{A}\) is positive and \(\sum\limits_{0\leq\ell,k\leq N}\overline{c_{\ell}}c_{k}e^{i(k-\ell)\theta}\geq 0,\) therefore \(\sum\limits_{\ell,k=0}^{N}\overline{c_{\ell}}c_{k}\zeta_{A}(k-l)\geq 0.\) This completes the proof.
## 5. Concrete isometric and unitary dilations for a subclass of \(\mathcal{C}_{A}\)-class operators
Let \(A\in\mathcal{B}(\mathcal{H})\) be a positive invertible operator. Recall that an operator \(T\) is in \(\mathcal{C}_{A}\)-class means that the sequence \(\{A_{n}\}_{n\geq 1}\) given by
\[A_{n}:=A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}},n\geq 1 \tag{29}\]
admits a unitary dilation. We do not know how to write down the dilation of \(\mathcal{C}_{A}\)-class operators in general. Here we do it for a subclass.
First we write down an isometric dilation for \(\{A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}\}_{n\geq 1}\). Let \(A,C\in\mathcal{B}(\mathcal{H})\) be a commuting pair such that \(A\geq 0\) and \(\|C\|\leq 1.\) Suppose
\[T=A[I+A(A-2I)C^{*}C]^{-\frac{1}{2}}(I-C^{*}C)^{\frac{1}{2}}C.\]
Here \(I+A(A-2I)C^{*}C\geq 0\) as \(C\) is a contraction and \(I+A(A-2I)C^{*}C=I-C^{*}C+(A-I)^{2}C^{*}C.\) We are assuming that it is invertible. In such a case, we will see that \(T\) is in \(\mathcal{C}_{A}\)-class and we can explicitly write down isometric and unitary dilations of \(T\).
Consider \(A,C\) as above. It is convenient to have some notation. Take \(B=(I+A(A-2I)C^{*}C)^{-\frac{1}{2}},D=(I-C^{*}C)^{\frac{1}{2}},D_{*}=(I-CC^{*}) ^{\frac{1}{2}}.\) Therefore, \(T=ABDC.\) Now we take the following sequence \(\{T_{n}\}_{n\geq 0}\) of bounded operators defined by
\[T_{0}=I,\text{ and }T_{n}=A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}=A^{n-1}(BDC)^{n} \text{ for all }n\geq 1. \tag{30}\]
Our aim is to show that the sequence \(\{T_{n}\}_{n\geq 0}\) admits an isometric dilation. In other words, there exists an isometry \(V\) on some Hilbert space \(\mathcal{K}\supseteq\mathcal{H}\) such that \(T_{n}=P_{\mathcal{H}}V^{n}|_{\mathcal{H}}\) for all \(n\geq 0.\) In addition, we want to find such a \(V\) with an explicit block structure.
We observe that since \(A\) is positive and commutes with \(C\), it also commutes with \(C^{*}\). It follows that \(A\) also commutes with \(B,D,D_{*}\). Consequently we get \(BD=DB\). On the other hand, we see that
\[D_{*}^{2}C=(I-CC^{*})C=C(I-C^{*}C)=CD^{2}.\]
As both \(D\) and \(D_{*}\) are positive operators, \(D_{*}C=CD.\) Furthermore,
\[B^{2}(D^{2}+(A-I)^{2}C^{*}C) =(I+A(A-2I)C^{*}C)^{-1}\big{(}I-C^{*}C+A(A-I)C^{*}C-(A-I)C^{*}C\big{)}\] \[=(I+A(A-2I)C^{*}C)^{-1}\big{(}I+A(A-I)C^{*}C-AC^{*}C\big{)}\] \[=(I+A(A-2I)C^{*}C)^{-1}(I+A(A-2I)C^{*}C)\] \[=I.\]
Therefore,
\[(BD)^{2}=I-B^{2}(A-I)^{2}C^{*}C. \tag{31}\]
We first obtain a partial isometric dilation on \(\mathcal{H}\oplus\mathcal{H}.\)
**Lemma 5.1**.: _Define \(R\) on \(\mathcal{H}\oplus\mathcal{H}\) by:_
\[R=\begin{bmatrix}BDC&BDD_{*}\\ (A-I)CBC&(A-I)CBD_{*}\end{bmatrix} \tag{32}\]
_Then \(R\) is a partial isometry and_
\[T_{n}=P_{\mathcal{H}}R^{n}P_{\mathcal{H}}\text{ for all }n\geq 1.\]
Proof.: Making use of the commutation relations observed above, by direct computation we see that
\[R^{*}R=\begin{bmatrix}C^{*}C&C^{*}D_{*}\\ D_{*}C&D_{*}^{2}\end{bmatrix}.\]
Clearly, \(R^{*}R\) is a self-adjoint operator and
\[(R^{*}R)^{2} =\begin{bmatrix}C^{*}C&C^{*}D_{*}\\ D_{*}C&D_{*}^{2}\end{bmatrix}\begin{bmatrix}C^{*}C&C^{*}D_{*}\\ D_{*}C&D_{*}^{2}\end{bmatrix}\] \[=\begin{bmatrix}(C^{*}C)^{2}+C^{*}D_{*}^{2}C&C^{*}CC^{*}D_{*}+C^{ *}D_{*}D_{*}^{2}\\ D_{*}CC^{*}C+D_{*}^{2}D_{*}C&D_{*}CC^{*}D_{*}+D_{*}^{4}\end{bmatrix}\] \[=\begin{bmatrix}C^{*}C+C^{*}(1-CC^{*})C&C^{*}CC^{*}D_{*}+C^{*}(1- CC^{*})D_{*}\\ D_{*}CC^{*}C+D_{*}(1-CC^{*})C&D_{*}^{2}CC^{*}+D_{*}^{2}(1-CC^{*})\end{bmatrix}\] \[=\begin{bmatrix}C^{*}C&C^{*}D_{*}\\ D_{*}C&D_{*}^{2}\end{bmatrix}.\]
Therefore \(R^{*}R\) is a projection and \(R\) is a partial isometry. Next, we compute \(R^{2}\) as follows:
\[R^{2} =\begin{bmatrix}BDC&BDD_{*}\\ (A-I)CBC&(A-I)CBD_{*}\end{bmatrix}\begin{bmatrix}BDC&BDD_{*}\\ (A-I)CBC&(A-I)CBD_{*}\end{bmatrix}\] \[=\begin{bmatrix}(BDC)^{2}+(A-I)BDD_{*}CBC&*\\ (A-I)CBCBDC+(A-I)^{2}CBD_{*}CBC&*\end{bmatrix}\] \[=\begin{bmatrix}(BDC)^{2}+(A-I)BDCDBC&*\\ (A-I)CBCBDC+(A-I)^{2}CBCDBC&*\end{bmatrix}\] \[=\begin{bmatrix}A(BDC)^{2}&*\\ A(A-I)CBCBDC&*\end{bmatrix}.\]
Therefore, \(P_{\mathcal{H}}R^{2}|_{\mathcal{H}}=A(BDC)^{2}=A^{-\frac{1}{2}}T^{2}A^{-\frac{1}{2}}.\) Hence by induction, we conclude that \(P_{\mathcal{H}}R^{n}|_{\mathcal{H}}=A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}.\)
Let \(D_{R}=(I-R^{*}R)^{\frac{1}{2}}\) be the defect operator associated with the partial isometry \(R\). Since \((I-R^{*}R)\) is a projection, \((I-R^{*}R)^{\frac{1}{2}}=(I-R^{*}R).\) Therefore, \(D_{R}=\begin{bmatrix}D^{2}&-C^{*}D_{*}\\ -D_{*}C&CC^{*}\end{bmatrix}.\)
**Remark 5.2**.: _Using the Schaffer construction for \(R\) and Lemma 5.1, we can directly provide an explicit form of an isometric dilation of the moment sequence \(\{A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}\}\) associated to the \(\mathcal{C}_{A}\)-class. Let \(\tilde{V}:\ell^{2}(\mathcal{H}\oplus\mathcal{H})\rightarrow\ell^{2}(\mathcal{H}\oplus\mathcal{H})\) be the isometry given by_
\[\tilde{V}=\begin{bmatrix}R&0&0&0&0&\dots\\ D_{R}&0&0&0&0&\dots\\ \hline 0&I&0&0&0&\dots\\ 0&0&I&0&0&\dots\\ 0&0&0&I&0&\dots\\ \vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix},\]
_Recalling the Schaffer construction of \(R\), it is clear that \(P_{\mathcal{H}}\tilde{V}^{n}|_{\mathcal{H}}=P_{\mathcal{H}}R^{n}|_{\mathcal{H}}=A^{-\frac{1}{2}}T^{n}A^{-\frac{1}{2}}.\)_
Here is an alternative construction of an isometric dilation which is simpler looking.
**Theorem 5.3**.: _Take \(\mathcal{K}=\mathcal{H}^{\oplus 3}\oplus\ell^{2}(\mathcal{H}).\) Consider \(V\) on \(\mathcal{K}\) defined by_
\[V=\begin{bmatrix}BDC&BDD_{*}&0&0&0&0&\dots\\ (A-I)CBC&(A-I)CBD_{*}&0&0&0&0&\dots\\ D&-C^{*}&0&0&0&0&\dots\\ \hline 0&0&I&0&0&0&\dots\\ 0&0&0&I&0&0&\dots\\ 0&0&0&0&I&0&\dots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix}.\]
_Then \(V\) is an isometric dilation of \(\{T_{n}\}_{n\geq 0}.\)_
Proof.: The dilation property follows from Lemma 5.1. The isometric property of \(V\) is clear once we observe that
\[D_{R}=(I-R^{*}R)=\begin{bmatrix}D\\ -C\end{bmatrix}\begin{bmatrix}D&-C^{*}\end{bmatrix}.\]
In the last theorem there is no claim of minimality. To get the minimal isometric dilation we need to identify the minimal dilation space \(\bigvee\limits_{n=0}^{\infty}V^{n}\mathcal{H}.\) This we do here as a proposition.
**Proposition 5.4**.: _Let \(V\) be the isometric dilation defined in Theorem 5.3. Then the minimal dilation space \(\bigvee\limits_{n=0}^{\infty}V^{n}\mathcal{H}\) is of the form_
\[\bigvee\limits_{n=0}^{\infty}V^{n}\mathcal{H}=\bigoplus_{n=0}^{\infty} \mathcal{H}_{n},\]
_where \(\mathcal{H}_{0}:=\mathcal{H},\mathcal{H}_{1}:=V\mathcal{H}\ominus\mathcal{H}.\) Moreover,_
\[\mathcal{H}_{1}=\{(0\oplus(A-I)CBCh\oplus Dh\oplus\cdots):h\in\mathcal{H}\}\]
_and_
\[\mathcal{H}_{n}:=V^{n}\mathcal{H}\ominus\bigvee\limits_{m=0}^{n-1}V^{m}\mathcal{H}=\{(0\oplus(n\text{ copies })\oplus(I-A)BCPh\oplus Dh\oplus 0\oplus\cdots):h\in\mathcal{H}\}\]
_for \(n\geq 2,\) where \(P\) is the projection onto \(\ker(I-A)BDC.\)_
Proof.: First note that \(\mathcal{H}\bigvee V\mathcal{H}=\mathcal{H}\oplus(V\mathcal{H}\ominus\mathcal{ H}).\) Let \(h\in\mathcal{H},\) then notice that
\[Vh =\big{(}BDCh\oplus(A-I)CBCh\oplus Dh\oplus 0\oplus\cdots\big{)},\] \[V^{2}h =\big{(}A(BDC)^{2}h\oplus A(A-I)CBCBDCh\oplus DBDCh-(A-I)C^{*}CBCh \oplus Dh\oplus 0\oplus\cdots\big{)}.\]
Then it is immediate to see that \(V\mathcal{H}\ominus\mathcal{H}=\{(0\oplus(A-I)CBCh\oplus Dh\oplus 0\oplus 0 \oplus\cdots):h\in\mathcal{H}\}.\) Next, we shall find \(V^{2}\mathcal{H}\ominus(\mathcal{H}\bigvee V\mathcal{H}).\) First, we write \(V^{2}h\) as
\[V^{2}h =\big{(}A(BDC)^{2}h\oplus 0\oplus\cdots\big{)}+\big{(}0\oplus(A-I) CBC(ABDCh)\oplus DABDCh\oplus 0\oplus\cdots\big{)}\] \[+\big{(}0\oplus 0\oplus(I-A)BCh\oplus Dh\oplus 0\oplus\cdots\big{)}.\]
Obviously, the first and second terms belong to \(\mathcal{H}\) and \(V\mathcal{H}\ominus\mathcal{H}\) respectively. We want to understand the last term of this equation. We claim that
\[V^{2}\mathcal{H}\ominus(\mathcal{H}\lor V\mathcal{H})=\big{\{}\big{(}0\oplus 0 \oplus(I-A)BCPh\oplus Dh\oplus 0\oplus\cdots\big{)}:h\in\mathcal{H}\big{\}}.\]
Notice that \(\big{\langle}\big{(}0\oplus(A-I)CBCg\oplus Dg\oplus 0\oplus 0\oplus\cdots \big{)},\big{(}0\oplus 0\oplus(I-A)BCPh\oplus Dh\oplus 0\oplus\cdots\big{)} \big{\rangle}=0\) as \((I-A)BDCPh=0.\) Therefore, we have
\[\big{\{}\big{(}0\oplus 0\oplus(I-A)BCPh\oplus Dh\oplus 0\oplus\cdots\big{)}:h\in \mathcal{H}\big{\}}\subseteq V^{2}\mathcal{H}\ominus(\mathcal{H}\lor V \mathcal{H}).\]
Now, it is sufficient to prove that \(\big{(}0\oplus 0\oplus(I-A)BC(I-P)h\oplus 0\oplus\cdots\big{)}\in V\mathcal{H} \ominus\mathcal{H}\) for all \(h\in\mathcal{H}.\) To see this, suppose
\[\big{\langle}\big{(}0\oplus 0\oplus(I-A)BC(I-P)h\oplus 0\oplus\cdots\big{)}, \big{(}0\oplus(A-I)CBCg\oplus Dg\oplus 0\oplus\cdots\big{)}\big{\rangle}=0, \tag{33}\]
for all \(g\in\mathcal{H}.\) This implies that for all \(g\in\mathcal{H},\)
\[0=\big{\langle}(I-A)BC(I-P)h,Dg\big{\rangle}=\big{\langle}(I-A)BDC(I-P)h,g \big{\rangle}=\big{\langle}(I-A)BDCh,g\big{\rangle},\]
as \((I-A)BDCPh=0.\) It follows that \((I-A)BDCh=0\) and \(h\in\operatorname{range}P.\) Therefore, we have \(\big{(}0\oplus 0\oplus(I-A)BC(I-P)h\oplus 0\oplus\cdots\big{)}=0\) under the condition defined in Equation (33). Consequently, we get \(\big{(}0\oplus 0\oplus(I-A)BC(I-P)h\oplus 0\oplus\cdots\big{)}\in V\mathcal{H}\ominus\mathcal{H}\) for all \(h\in\mathcal{H}.\) Hence, we have \(\big{\{}\big{(}0\oplus 0\oplus(I-A)BCPh\oplus Dh\oplus 0\oplus\cdots\big{)}:h\in\mathcal{H}\big{\}}=V^{2}\mathcal{H}\ominus(\mathcal{H}\lor V\mathcal{H}).\) Now it is easy to see that
\[\big{\{}\big{(}0\oplus 0\oplus 0\oplus(I-A)BCPh\oplus Dh\oplus 0\oplus\cdots \big{)}:h\in\mathcal{H}\big{\}}=V^{3}\mathcal{H}\ominus(\mathcal{H}\lor V \mathcal{H}\lor V^{2}\mathcal{H}).\]
By induction, we can prove that
\[\big{\{}\big{(}0\oplus(n\text{ copies })\oplus(I-A)BCPh\oplus Dh\oplus 0 \oplus\cdots\big{)}:h\in\mathcal{H}\big{\}}=V^{n}\mathcal{H}\ominus(\bigvee \limits_{m=0}^{n-1}V^{m}\mathcal{H}).\]
Now we explicitly write down a unitary dilation for the sequence \(\{T_{n}\}_{n\geq 0}\) in the next theorem.
**Theorem 5.5**.: _Let \(\mathcal{K}=\ell^{2}(\mathcal{H})\oplus\mathcal{H}^{\oplus 3}\oplus\ell^{2}( \mathcal{H})\) and \(U\) acting on \(\mathcal{K}\) is defined by_
\[U=\begin{bmatrix}\ddots&&&&&\\ &I&&&&\\ \hline&I&0&0&0&0&\\ &&-(A-I)BC^{*}&\mathbf{BDC}&BDD_{*}&0&\\ &&B_{*}D_{*}&(A-I)CBC&(A-I)CBD_{*}&0&\\ &&0&D&-C^{*}&0&\\ \hline&&0&0&0&I&\\ &&&&&&I&\\ &&&&&&\ddots&\end{bmatrix}. \tag{34}\]
_where the \(BDC\) appearing in bold font is the \((0,0)\) entry of \(U\), and \(B_{*}\) is given by_
\[B_{*}=(I+A(A-2I)CC^{*})^{-\frac{1}{2}}.\]
_Then \(U\) is a unitary dilation of \(\{T_{n}\}_{n\geq 0}.\)_
Proof.: Consider the block operator matrix \(M\) defined by:
\[M=\begin{bmatrix}0&0&0&0\\ -(A-I)BC^{*}&\mathbf{BDC}&BDD_{*}&0\\ B_{*}D_{*}&(A-I)CBC&(A-I)CBD_{*}&0\\ 0&D&-C^{*}&0\end{bmatrix}. \tag{35}\]
It is straightforward to verify that
\[M^{*}M=\begin{bmatrix}I&0&0&0\\ 0&I&0&0\\ 0&0&I&0\\ 0&0&0&0\end{bmatrix},\ MM^{*}=\begin{bmatrix}0&0&0&0\\ 0&I&0&0\\ 0&0&I&0\\ 0&0&0&I\end{bmatrix}.\]
This clearly implies that \(U^{*}U=UU^{*}=I.\) Finally, we obtain the desired claim
\[P_{\mathcal{H}}U^{n}|_{\mathcal{H}}=P_{\mathcal{H}}M^{n}|_{\mathcal{H}}=T_{n} \text{ for all }n\geq 0.\]
This completes the proof.
As we discussed earlier, the minimal dilation space of the unitary dilation is of the form \(\mathcal{K}=\bigvee\limits_{n=-\infty}^{\infty}U^{n}\mathcal{H}.\) It can be written as \(\mathcal{K}_{+}\bigvee\mathcal{K}_{-}=\mathcal{K},\) where \(\mathcal{K}_{+}=\bigvee\limits_{n=0}^{\infty}U^{n}\mathcal{H}\) and \(\mathcal{K}_{-}=\bigvee\limits_{n=0}^{\infty}U^{*n}\mathcal{H}.\) Since \(U|_{\mathcal{K}_{+}}\) is the isometry \(V\) constructed before, the decomposition of \(\mathcal{K}_{+}\) is clear.
**Remark 5.6**.: \(\mathcal{K}_{-}\) _can be decomposed as_
\[\mathcal{K}_{-}=\bigoplus_{n=1}^{\infty}\mathcal{H}_{-n}\]
_where \(\mathcal{H}_{-1}:=U^{*}\mathcal{H}\ominus\mathcal{H},\mathcal{H}_{-n}:=U^{*n }\mathcal{H}\ominus\bigvee\limits_{m=0}^{n-1}U^{*m}\mathcal{H}\) for \(n\geq 2.\) Moreover,_
\[\mathcal{H}_{-1}:=\big{\{}\big{(}\cdots\oplus-(A-I)CBh\oplus\boldsymbol{0} \oplus D_{*}DBh\oplus 0\oplus\cdots\big{)}:h\in\mathcal{H}\big{\}},\]
\[\mathcal{H}_{-n}=\big{\{}\big{(}\cdots\oplus-(A-I)CBCh\oplus B_{*}^{-\frac{1} {2}}DBQh\oplus 0\oplus(-n+2)\text{ times }\oplus\boldsymbol{0}\oplus\cdots\big{)}:h\in \mathcal{H}\big{\}}\]
_and \(Q\) is the projection onto \(\ker(A-I)B^{\frac{1}{2}}C^{*}DB.\)_
### Special case (\(\mathcal{C}_{\rho}\)-class)
The case when \(A\) is a positive scalar \(\rho\) corresponds to the \(\mathcal{C}_{\rho}\)-class of operators. This class is very rich and widely studied. However, as far as we know, an explicit block operator description of isometric and unitary dilations of operators in this class is not found in the literature except for \(\rho=2\). An operator \(T\) is in the \(\mathcal{C}_{2}\)-class if and only if \(w(T)\leq 1,\) where \(w(T)\) denotes the numerical radius of \(T\). A unitary dilation for this class was exhibited by T. Ando [6] and coincides with the following construction (with \(\rho=2\)). We recall from Durszt [14] that \(T\in\mathcal{C}_{\rho}\) (\(\rho>0\)) if and only if \(T\) is of the form:
\[T=\rho(1+\rho(\rho-2)C^{*}C)^{-\frac{1}{2}}DC,\]
where \(\rho>0\), \(C\) is a contraction and \(D=(I-C^{*}C)^{\frac{1}{2}}.\) In this case we may write down isometric and unitary dilations as above. We observe that \(B=(I+\rho(\rho-2)C^{*}C)^{-\frac{1}{2}}\), \(B_{*}=(I+\rho(\rho-2)CC^{*})^{-\frac{1}{2}}\) and \(T_{n}=\rho^{n-1}(BDC)^{n}=\frac{(\rho BDC)^{n}}{\rho}=\frac{T^{n}}{\rho}.\) Moreover, for the case \(\rho=2\), since \(B=B_{*}=I,\) the operators \(B,B_{*}\) play no role in the unitary and isometric dilations of the \(\mathcal{C}_{2}\)-class.
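Spelling this out for \(\rho=2\): since \(B=B_{*}=I\) and \(A-I=I\), the isometric dilation of Theorem 5.3 reduces to
\[V=\begin{bmatrix}DC&DD_{*}&0&0&\cdots\\ C^{2}&CD_{*}&0&0&\cdots\\ D&-C^{*}&0&0&\cdots\\ 0&0&I&0&\cdots\\ \vdots&\vdots&\ddots&\ddots&\ddots\end{bmatrix},\]
with \(T=2DC\) and \(T_{n}=\frac{1}{2}T^{n}\), in line with the Berger-Stampfli description of operators with numerical radius at most one.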
**Question 5.7**.: _How to write block decompositions for isometric and unitary dilations of general \(\mathcal{C}_{A}\)-class operators?_
_Acknowledgements:_ The first author is funded by the J C Bose Fellowship JBR/2021/000024 of SERB(India). The second author is supported by the NBHM postdoctoral fellowship, Department of Atomic Energy (DAE), Government of India (File No. 0204/1(4)/2022/ R&D-II/1198). The third author is funded by the Startup Research Grant (File No. SRG/2022/001795) of SERB, India. We also acknowledge Stat-Math Unit of the Indian Statistical Institute Bangalore Centre for providing excellent research environment.
|
2305.16774
|
How to Understand Limitations of Generative Networks
|
Well-trained classifiers and their complete weight distributions provide us
with a well-motivated and practicable method to test generative networks in
particle physics. We illustrate their benefits for distribution-shifted jets,
calorimeter showers, and reconstruction-level events. In all cases, the
classifier weights make for a powerful test of the generative network, identify
potential problems in the density estimation, relate them to the underlying
physics, and tie in with a comprehensive precision and uncertainty treatment
for generative networks.
|
Ranit Das, Luigi Favaro, Theo Heimel, Claudius Krause, Tilman Plehn, David Shih
|
2023-05-26T09:35:16Z
|
http://arxiv.org/abs/2305.16774v2
|
# How to Understand Limitations of Generative Networks
###### Abstract
Well-trained classifiers and their complete weight distributions provide us with a well-motivated and practicable method to test generative networks in particle physics. We illustrate their benefits for distribution-shifted jets, calorimeter showers, and reconstruction-level events. In all cases, the classifier weights make for a powerful test of the generative network, identify potential problems in the density estimation, relate them to the underlying physics, and tie in with a comprehensive precision and uncertainty treatment for generative networks.
###### Contents
* 1 Introduction
* 2 Testing generative networks
* 3 Distribution-shifted jets
* 4 Calorimeter simulation
* 4.1 Tails of weights
* 4.2 Phase space clustering
* 5 Event generation
* 5.1 Standard generator and mass peak
* 5.2 State-of-the-art generator and feature scan
* 5.3 Bayesian generators and pull
* 6 Conclusions
* A Classifier calibration
* B Additional kinematic distributions
## 1 Introduction
Like all of society, LHC physics is currently undergoing a transformation driven by modern data science. The experimental and theoretical methods of LHC physics have always been numerical in nature, with the goal to quantitatively, systematically, and comprehensively understand data in terms of fundamental theory. Generative networks are an exciting concept of modern machine learning (ML), combining unsupervised density estimation in an interpretable phase space with fast and flexible sampling and simulations [1]. Currently, the most promising architectures for precision generation are normalizing flows and their invertible network (INN) variants, but we will see that diffusion models and generative transformers might offer an even better balance of precision and expressivity.
The range of tasks for generative networks in LHC simulations and analysis is extensive. Given the modular structure of LHC simulations, it starts with phase space integration and sampling [2, 3, 4, 5, 6, 7], for instance of ML-encoded transition amplitudes. More LHC-specific tasks include event subtraction [8], event unweighting [9, 10], or super-resolution enhancement [11, 12]. Generative networks working on physics phase spaces have been developed and tested as event generators [13, 14, 15, 16, 17, 18], parton showers [19, 20, 21, 22, 23], and detector simulations [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. These networks should be trained on first-principle simulations, easy to handle, efficient to ship, powerful in amplifying the training samples [49, 50], and -- most importantly -- precise. Going beyond forward generation, conditional generative networks can also be applied to probabilistic unfolding [51, 52, 53, 54, 55, 56], inference [57, 58], or anomaly detection [59, 60, 61, 62, 63, 64], reinforcing the precision requirements.
For all the above tasks, normalizing flows or INNs have reached the level of precision, stability, and control required by LHC physics. Methods to control the performance of these generative networks include Bayesian network setups [18, 65], classifier-reweighting [18, 66, 67, 68], and conditional training on augmented data [18]. Building on these developments, LHC physics needs methods to systematically evaluate the performance and the precision of generative networks [69], for example to quantify possible gains through new architectures [70, 71, 39, 72].
In this paper we will explore the merits of a classifier-based evaluation of generative networks in particle physics. We will start by defining the goals of such a systematic evaluation and then introduce the classifier metric in Sec. 2. We will present our approach for jet generators [69] in Sec. 3, and discuss it in more details for a calorimeter simulation similar to Ref. [33] in Sec. 4. Finally, we will show how to use event weights to track progress between two versions of an ML-event generator [18] in Sec. 5. We will also illustrate how a systematic scan over kinematic distributions of events with anomalous weights can identify issues of a trained network and how Bayesian networks help us identify the reason for this discrepancy.
All three applications combined illustrate how the distribution of learned control weights over phase space is a reliable measure of the quality of the generative networks and that its shape provides a powerful "explainable AI" (xAI) tool which allows us to systematically search for failure modes of generative models, identify the underlying physics cause, and improve the tested networks efficiently.
## 2 Testing generative networks
Given a generative model trained on some reference data, we would like to know how well it reproduces the data in the full phase space. This includes correct reproduction of critical high-level features, such as transverse momenta and invariant masses in the case of event generation, or shower profiles and MIP peaks in the case of calorimeter simulation. But it
also includes all the multi-dimensional correlations between all the features throughout phase space, which might not be visible at the level of histograms of pre-defined high-level features.
We know some typical failure modes of generative networks [18], including features completely removed by a fit-like density estimation, washed-out features with poor resolution, underpopulated kinematic tails, or wrongly learned phase space boundaries. Comparing kinematic distributions of generated and training events allows us to identify many of these issues, making use of the fact that phase space is interpretable and we can typically derive phase space distributions using first principles in quantum field theory or detector design. However, looking at pre-defined phase space distributions runs the risk that we miss a problem, for example when it only affects complex correlations.
Clearly, sensitive metrics are needed to assess the quality of a generative model throughout all of phase space. These metrics should be both multi-variate (capturing all correlations) and interpretable (offering a way to diagnose which critical high-level features are most discrepant). Ideally, these metrics could also offer a systematic way of improving the generative model.
An optimal binary classifier, trained to distinguish generated from reference data in the full phase space, fits the bill in every respect. By the Neyman-Pearson (NP) lemma, this classifier is the most powerful discriminant between generative model and reference data. It is already well-established that one can use the classifier to _reweight_ the generative model and bring it closer to the reference data [18, 66, 67, 68]. By examining the generated and reference data as a function of the cut on the classifier, one can zoom in on the most anomalous regions of phase space, _i.e._ those that are worst-reproduced by the generative model. This facilitates the interpretability of the classifier metric, which could be further enhanced using recent xAI techniques developed in HEP such as Refs. [73, 74].
Studies that have used the classifier metric to judge the quality of generative models have tended to focus exclusively on single numbers [33, 35, 41, 42, 48, 69], like the AUC, the loss, or the accuracy of the classifier. While these aggregate measures certainly have their uses, there is much more useful information to be gleaned from the classifier than a single number [18, 43, 75]. For example, a global integral measure such as the AUC will not detect discrepancies in tails of distributions. Also, the AUC becomes less and less informative the closer the generated and reference samples become. Finally, declaring the model with the lowest AUC as the "best" model is oversimplistic, because the definition of the "best" generative network depends on what we actually require from the generative network and how we want to use its output.
In this work, we will explore what the distribution of classifier outputs tells us about the quality of the generative model. We will choose to work in terms of _weights_ which can be obtained from the classifier outputs \(C\) as
\[w(x)=\frac{p_{\text{data}}(x)}{p_{\text{model}}(x)}=\frac{C(x)}{1-C(x)}\qquad \text{with}\qquad C(x)=\frac{p_{\text{data}}(x)}{p_{\text{data}}(x)+p_{\text{ model}}(x)}. \tag{1}\]
The assumption is that the NP-classifier learns the density ratio. For a good generative model and an optimal classifier, the weight distribution will typically peak near one, with tails to the left (\(w\ll 1\)) and right (\(w\gg 1\)), corresponding to regions of phase space where the generative model is overproducing and underproducing the reference data, respectively. On general grounds, the NP classifier should have an excess of generated events as a small-weights tail of the distribution, and an excess of reference events as a large-weight tail. Indeed this is a general pattern we will observe in the different examples we consider in this work. Having it the other way around, an excess of true events on the left tail and an excess of generated events on the right, would generate a ROC curve below the diagonal, indicating an anti-classifier. A
renaming of the classes would then solve the problem in principle by switching the weights of true and generated events. However, finding an anti-classifier after training would lead to a troubleshooting and retraining of the classifier in practice.
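In practice, Eq. (1) is a one-line transformation of the classifier output. A minimal numpy sketch (with illustrative function and variable names, not tied to any particular implementation used here) reads:

```python
import numpy as np

def classifier_weights(probs, eps=1.0e-6):
    """Turn classifier outputs C(x) into density-ratio weights w(x) = C/(1-C), cf. Eq. (1)."""
    c = np.clip(probs, eps, 1.0 - eps)          # guard against saturated outputs
    return c / (1.0 - c)

# toy usage: average an ensemble of classifier outputs, then form and inspect the weights
rng = np.random.default_rng(0)
ensemble_c = rng.uniform(0.45, 0.55, size=(25, 100_000))   # 25 classifiers x 100k generated events
w = classifier_weights(ensemble_c.mean(axis=0))
print(np.median(w), np.quantile(w, [0.001, 0.999]))        # bulk near one, plus both tails
```

Clipping the classifier output away from exactly \(0\) and \(1\) merely protects the ratio against saturated outputs; cutting on the resulting small-weight and large-weight tails then selects the phase-space regions that are over- and underpopulated by the generative model.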
Since phase space is interpretable, we can study patterns and clustering of anomalous weights to learn more about the generative network. For instance, a positive feature or tail missed by the generative training will be resurrected through large weights \(w(x_{i})\gg 1\), clustered in phase space. A wrongly modelled phase space boundary will lead to small weights \(w(x_{i})\ll 1\) or even \(w(x_{i})=0\), also clustered in phase space.
Local features in the weight distribution, not necessarily along the tails, also carry useful information about the performance of the generative model. A simple example is the smearing of a peak in phase space, at \(x_{\text{max}}\), corrected by universal weights \(w>1\) around the peak. If the smeared phase space feature dominates the total rate, a maximum in the weight distribution appears at
\[w(x)\approx\frac{p_{\text{data}}(x_{\text{max}})}{p_{\text{model}}(x_{\text{ max}})}>1. \tag{2}\]
Depending on the exact shape in the training data and the kind of smearing, the weights enhancing the tails of the smeared peak can, but do not have to produce a second maximum in the weight distribution. We will discuss all of these patterns in the following sections.
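As a toy numerical version of this smeared-peak pattern (an illustration with arbitrarily chosen widths, not one of the physics datasets studied below), a narrow Gaussian reproduced by a broader Gaussian model gives weights above one at the peak and below one in the overpopulated tails:

```python
import numpy as np

gauss = lambda x, s: np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

x = np.linspace(-5.0, 5.0, 2001)
p_data, p_model = gauss(x, 1.0), gauss(x, 1.3)   # sharp peak vs. smeared model
w = p_data / p_model                              # ideal Neyman-Pearson weights

print(w[np.argmax(p_data)])        # about 1.3 at the peak, the maximum of Eq. (2)
print(w[np.abs(x) > 3.0].max())    # well below one in the oversmeared tails
```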
The practical reason why we can measure the performance of generative networks with classifiers is the typical precision of the two networks. For LHC events with a relatively small number of particles in the final state, we know that generative networks reach a precision around
\[\frac{p_{\text{data}}(x)}{p_{\text{model}}(x)}-1\sim\begin{cases}1 \%&\text{INN [18]}\\ 10\%&\text{GAN [16]}\end{cases}. \tag{3}\]
Classifiers are not fundamentally different from regression networks, so we expect them to learn the density ratio at the sub-percent level [75, 76],
\[w(x)-\frac{p_{\text{data}}}{p_{\text{model}}}(x)\sim 0.1\%. \tag{4}\]
Thus it should be possible to obtain classifiers which are precise enough to be sensitive to the failure modes of the generative model.
Because the weights are constructed from a classifier, we can use standard methods, such as calibration curves, to ensure the classifier is trained properly. We can also reweight generated samples with the learned classifier and see if they become closer to the reference data; this will be another sign that the classifier approximates well the likelihood ratio.
In our study of the weight distributions for generative models, we can draw inspiration from a similar approach to supervised amplitude regression [75, 76, 77, 78]. There, the weights can be constructed directly from the regression task, because the "generated" amplitude is learned directly from the known theoretical calculation. There, as here, tails of the weight distribution will be induced by stochastic training data, a lack of expressivity of the network, or over-training [75]. For a well-motivated statistical test we can use the fact that many networks are trained on likelihood losses. Those losses include an uncertainty estimate \(\sigma_{i}\), for instance from a Bayesian regression network, so we can supplement the weight distributions by a pull and analyse both [75],
\[w_{i}=\frac{A_{i,\text{data}}}{A_{i,\text{model}}}\qquad\text{and}\qquad t_{i} =\frac{A_{i,\text{model}}-A_{i,\text{data}}}{\sigma_{i}}=\frac{A_{i,\text{ model}}}{\sigma_{i}}\left(1-w_{i}\right). \tag{5}\]
The pull should follow a standard Gaussian for uncorrelated stochastic deviations. The combination of weights and pulls is extremely useful for testing regression networks, so we will try to generalize it to generative networks.
Finally, we make the (obvious) observation that the AUC of a classifier can be extracted from weight samples \(w(x_{i})\) evaluated on training and generated configurations. As a function of the signal efficiency, the ROC curve is a step-wise, monotonically increasing function. Its integral can be estimated by the sum of bins with width \(1/N_{\text{gen}}\), the inverse size of the generated dataset. The height of each rectangle is the fraction of weights in the true, or training dataset, which are larger than a given \(w_{\text{cut}}\), normalized to \(N_{\text{true}}\). The AUC is then given by the sum,
\[\text{AUC}=\frac{1}{N_{\text{gen}}N_{\text{true}}}\sum_{w_{i}\in\text{gen}}| \{w|w\in\text{true and }w>w_{i}\}|\, \tag{6}\]
where \(|\{S\}|\) denotes the cardinality of the set \(S\). Therefore, by focusing on the weight distribution of the classifier, we are not missing any information otherwise contained in the AUC.
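Equation (6) can be evaluated directly from the two weight samples. In the following sketch, the factor of one half for tied weights is a finite-sample convention we adopt, not part of Eq. (6):

```python
import numpy as np

def auc_from_weights(w_gen, w_true):
    """AUC estimated from weights on generated and true samples, following Eq. (6)."""
    w_gen = np.asarray(w_gen, dtype=float)
    w_true = np.sort(np.asarray(w_true, dtype=float))
    hi = np.searchsorted(w_true, w_gen, side="right")    # number of true weights <= each w_i
    lo = np.searchsorted(w_true, w_gen, side="left")     # number of true weights <  each w_i
    greater, ties = len(w_true) - hi, hi - lo
    return float(np.sum(greater + 0.5 * ties)) / (len(w_gen) * len(w_true))

# statistically identical weight samples should give AUC close to 0.5
rng = np.random.default_rng(1)
print(auc_from_weights(rng.normal(1.0, 0.1, 10_000), rng.normal(1.0, 0.1, 10_000)))
```

For statistically identical weight samples the estimate fluctuates around \(\text{AUC}=0.5\), illustrating the limitations of the AUC discussed above.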
In this paper we will use three standard LHC applications of generative models to develop a common strategy to quantify the performance of the networks. To test the specific generative networks introduced in the following sections, we use a very generic classifier for Secs. 4 and 5, described in Tab. 1. In Sec. 3 we apply the state-of-the-art in jet classification, ParticleNet-Lite [79].
## 3 Distribution-shifted jets
As a first application, we consider the JetNet example recently used in Ref. [69] to illustrate different metrics for generative models. They distort jets generated by Pythia at the particle level and at the distribution level. In the particle-level distortions, each particle in the jet is altered in some way. While this is a realistic scenario, the amount of distortion in Ref. [69] was taken so large that it makes the classification task almost trivial. For the distribution-level distortions, a single distribution like the jet mass is modified. Jets are reweighted so that all other features and correlations are identical to the reference data, only the one distribution is
\begin{table}
\begin{tabular}{l|l|l} \hline \hline Parameter & Calorimeter & Events \(Z+\{1,2,3\}\) jets \\ \hline Optimizer & Adam & Adam \\ Learning rate & 0.001 & 0.001 \\ LR schedule & reduce on plateau & reduce on plateau \\ Decay factor & 0.1 & 0.1 \\ Decay patience (epochs) & 5 & 5 \\ Batch size & 1000 & 1024 \\ Epochs & 150 & 50 \\ Number of layers & 3 & 5 \\ Hidden nodes & 512 & 256 \\ Dropout & 10\% & 10\% \\ Activation function & leaky ReLU & leaky ReLU \\ Training samples & 60k & 2.7M / 750k / 210k \\ Validation samples & 20k & 300k / 80k / 20k \\ Testing samples & 20k & 3.0M / 830k / 240k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyperparameters of the classifier network applied to the calorimeter simulation and event generation datasets.
modified. This is a highly unrealistic toy scenario, and we would not advertise it as physics-motivated, but it provides an interesting challenge for metrics to detect small differences.
In Ref. [69] it was pointed out that the AUC of a classifier metric trained on distribution-level distortion versus reference data is not very sensitive, and metrics such as FID and MMD can detect the flaw in the generative model more sensitively. In line with the general philosophy outlined in Sec. 2, we argue that the AUC is indeed the wrong metric, and examining the distribution of classifier weights, especially the behavior on the tails, is a much more sensitive probe and does detect all distribution-level distortions introduced into the toy generative models.
We perform three distortions on the jet mass, extracted from the relative polar coordinates provided in the JetNet dataset [69]:
1. "Tail cut": remove the tail with an acceptance cut \(M<0.17\);
2. "Smear": smear the distribution by multiplying with a Gaussian with \(\mu=1.0\) and \(\sigma=0.25\).
3. "Shift": shift the distribution by multiplying with a Gaussian with \(\mu=1.1\) and \(\sigma=0.05\);
For each distortion, we train the classifier on 100000 distorted and the same number of undistorted jets. The validation set consists of 50000 jets each. In the interest of computation time, we use ParticleNet-Lite instead of the full ParticleNet classifier [79] used in Ref. [69]. This only has a minimal effect on the results. For extremely similar datasets and using limited training time we expect a certain variability of the classifier output. To avoid cherry-picking, we combine five independent trainings of ParticleNet-Lite, and from each training select the models with the five lowest validation losses. We ensemble these 25 classifier outputs and
Figure 1: From top left to bottom right: jet mass distribution for the tailcut distortion; ROC curve from the trained ensemble of classifiers; learned weight distribution; jet mass distribution for jets in different classifier weight ranges, to identify clustering.
verify (by doing it all over again) that this produced a stable, robust result. Evidence that the ensembled classifiers are well-calibrated, and hence learned the likelihood ratio, is provided in Appendix A.
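The ensembling step can be summarized in a few lines. The sketch below assumes that each classifier returns a probability \(C(x)\) for the "true" (training) class and that the per-event weight is obtained as the likelihood ratio \(w=C/(1-C)\); averaging probabilities rather than weights is our own choice for this illustration.

```python
import numpy as np

def ensemble_weights(probabilities):
    """Combine the outputs of several classifiers into one weight per event.

    `probabilities` is a (n_models, n_events) array of classifier outputs
    C(x), interpreted as the probability of the 'true' (training) class.
    We average the probabilities over the ensemble and convert them to a
    likelihood-ratio weight w = C / (1 - C)."""
    c = np.clip(np.mean(probabilities, axis=0), 1e-6, 1 - 1e-6)
    return c / (1.0 - c)

# toy usage with 25 ensemble members and 1000 events
probs = np.random.default_rng(0).uniform(0.3, 0.7, size=(25, 1000))
weights = ensemble_weights(probs)
```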
Figure 1 shows the results for the tail-cut case. Ignoring the stochastic nature of the training data, the NP-optimal classifier should be quite singular: all jets with \(m<0.17\) should have classifier weights given by a delta function at \(w=1-\epsilon\), where \(\epsilon\) is the fraction of jets removed from the tail. In addition, there should be a delta function only for reference jets at \(w=\infty\). A realistic classifier turns these two features into a smooth weight distribution. In Fig. 1 we see the smooth weight distribution from our classifier, with nearly all jets populating a sharp peak near one, and a long tail extending to larger values of \(w\), solely in the reference sample. This is an example of the large-weight tail expected for a generative model that is missing a tail or a feature. Since the small number of tail jets are soaked up by the bulk, the weight distribution barely changes and only a tail at large weights appears.
We also see that for the tail cut distortion the AUC is basically 0.5, indeed a terrible metric for the quality of the generative model. However, the tail of the weight distribution gives us all relevant information. Cutting on the tail of the weight distribution, we correctly identify the discrepancy in the tail of the jet mass distribution. In Sec. 5 we will see how we can even use these weights to recover such a missing feature for a quantitative analysis. This example illustrates nicely how the classifier gives both a sensitive metric of generative model quality and enables interpretability by allowing us to identify in which physics aspect the generative model is wrong.
Figure 2: From top left to bottom right: jet mass distribution for the smeared distortion; ROC curve from the trained ensemble of classifiers; learned weight distribution; jet mass distribution for jets in different classifier weight ranges, to identify clustering.
In Fig. 2, we show the weight distribution for the smeared distortion. The weight distribution has a maximum at \(w>1\) and is dominated by a small-weight tail. This is expected from the general discussion in Sec. 2: the smearing in this case, multiplicative in the jet mass, has a reduced effect at small jet mass and an outsize effect at large mass, and ends up heavily overpopulating the tail of the large-mass regime. Correspondingly, there is hardly any tail with large weights and a very large tail with small weights. Cutting on the small-weight tail correctly reveals the excess of generated jets, now appearing on both ends of the jet mass distribution.
Finally, in Fig. 3, we see the weight distribution for the shifted distortion, again leading to an unhelpful ROC curve and an AUC close to 0.5. Since the distortion is small enough to not significantly overpopulate or underpopulate the tails of the jet mass distribution, the effect on weight distribution is mild and symmetric. We also see the characteristic tilted weight distribution that indicates a well-calibrated classifier, with generated jets above (below) training jets on the small-weight (large-weight) side. Cutting on the two tails of the weight distribution correctly reveals that the over-population and under-population of generated jets come from the low and high ends of the jet mass distribution, respectively.
Figure 3: From top left to bottom right: jet mass distribution for the shifted distortion; ROC curve from the trained ensemble of classifiers; learned weight distribution; jet mass distribution for jets in different classifier weight ranges, to identify clustering.
## 4 Calorimeter simulation
As a second example of how to use weights over phase space to understand the performance of a generative model, we turn to the classic calorimeter simulation [24, 26, 33, 35], but with a slightly modified INN architecture [80]. We study weight distributions for positron, photon, and pion showers in a simplified calorimeter. The classifier defined in Tab. 1 is trained on voxels, energy, and layer energies in unnormalized shower data. We focus on the classifier with unnormalized preprocessing in this work because it appears to be better calibrated and shows less propensity for overfitting. For more discussion, see Appendix A. As a more realistic scenario, learned calorimeter showers allow us to discuss some aspects of learned weight distributions in more detail.
### Tails of weights
In Fig. 4 we show ROC curves and weight distributions for \(e^{+}\), \(\gamma\), and \(\pi^{+}\) showers. The top row confirms that positron and photon showers are easier to generate than pion showers. The question is which potential failures are related to this performance difference.
Figure 4: Left to right: calorimeter showers for \(e^{+}\), \(\gamma\), and \(\pi^{+}\). Top to bottom: ROC curve, weight distribution on a linear scale, and weight distribution on a logarithmic scale. The weights are evaluated separately on the Geant dataset used for generator training and the generated dataset.
In the second row we show the weight distributions. First, we observe that they are not symmetric, because the reweighting now compensates features. The limit \(w(x)=0\), most visible for the pion shower, marks phase space points where the generator has learned a finite density \(p_{\text{model}}(x)\), where the correct density is \(p_{\text{data}}(x)=0\), one of the typical failure modes of generative models discussed in Sec. 2. We will see this more clearly for LHC events in the next section, but mention here that it is not catastrophic if we can enforce corresponding phase space boundaries during generation.
In the third row of Fig. 4 we show the same curves on a logarithmic scale to see the tails. As expected, they are different when evaluated on Geant and generated showers. Already for positrons, the generated data includes many more showers with \(w(x)\ll 1\) than the training data. These are showers for which the generator overpopulates phase space, so they appear preferably in the generated dataset. This tail connects to showers with weight zero.
In contrast, showers with \(w(x)\gg 1\) appear more frequently in the training dataset. These under-populated regions of phase space correspond, for instance, to features or tails which the network does not learn. This serious failure mode can be identified by evaluating showers with anomalous weights on the training data.
### Phase space clustering
The simpler structure of photon showers allows for a detailed study of the clustered observables. By cutting on the weight values and looking at the distribution of the remaining photon showers, we identify three characteristic failure modes, highlighted with different colors in Fig. 5.
1. In orange, we isolate the large-weights tail with \(w>1.6\) and no energy deposited in layer 2 (\(E_{2}<0.1\,\text{MeV}\)), as shown in Figs. 5(c) and 5(f). As shown in Figs. 5(g) and 5(h), these showers have higher sparsity* in layers 0 and 1 than the typical shower. Additionally they have lower energy, shown by the \(E_{1}\) histogram in Fig. 5(e), since on average most of the energy is deposited in layer 1. Overall, these showers consist of just a few activated, low-energy voxels in layers 0 and 1, and exactly none in layer 2. This sub-population of showers exists in the GEANT data, but it is not sufficiently generated by the network. Footnote *: Here, we are redefining sparsity compared to previous literature [26, 33]: sparsity(here)=1-sparsity(there). This way, higher sparsity means more sparse showers (i.e. showers with only a few voxels activated) while lower sparsity indicates less sparse showers (i.e. showers with many voxels activated).
2. In blue, we isolate the small-weights tail with \(w<0.6\). Fig. 5(c) shows that this failure mode is characterized by a single voxel carrying all the energy in layer 2, and Fig. 5(e) shows that this energy is lower than the average energy deposition. Blue and orange agree in every feature that we looked at in layers 0 and 1; they only differ in layer 2. Since these are showers overproduced by the generator, we interpret this as the compensation of the generator for the underproduction of the orange showers; the compensation is only needed in layer 2. We think the reason for both the orange and blue failure modes is the low energy and the large number of zero voxels in these showers: this causes them to be especially sensitive to the noise we add during training, since a single voxel is being activated and it either falls just under or just over the minimum energy threshold. The vicinity of these showers to the noise threshold makes it harder for the generator to perfectly model this region of phase space.
3. Finally, in green we isolate again the large-weights tail with \(w>1.6\) that _does_ deposit energy in layer 2 (\(E_{2}>0.1\,\text{MeV}\)). These showers are also underproduced by the generator but they are distinct from the previous two classes. According to Fig. 5(d)-(f), these have very low energy in layer 0 (even lower energy than the orange showers), and higher-than-typical energy in layer 2. In layer 1 their energy is closer to the typical shower. We also see in the sparsity that these photons deposit very little energy and activity in layer 0,
while in layers 1 and 2 they are fairly typical. These are showers which develop late in the calorimeter, leaving little or no energy in layer 0. Interestingly, physics tells us that these late-developing showers are possible for photons but not likely for positrons. At high energies, the latter interact continuously with the material through Bremsstrahlung, while the former need to convert to \(e^{+}e^{-}\) first [81]. This leads to showers fully absorbed deeper in the calorimeter, therefore with more energy deposited in layer 2. We see this difference in the physics clearly reflected comparing with the green showers for the positron case (see App.B). The positrons have energy deposited in layer 0, unlike the photons.
The situation becomes much more complicated when looking at pions, where the more complex physics through the nuclear interaction and the poorer generative model make it harder to identify failure modes with kinematic or physics features. In line with the sobering AUC value given in Fig. 4, we see in Fig. 6 that the generator requires correction weights essentially all over phase space. The first distinctive failure mode is corrected by small weights in the energy distributions, for instance in layer 0, which suppress the generated showers to reproduce the sharp lower edge of the energy deposition. In addition, the network produces too many showers with exactly zero energy deposition in layers 1 and 2 (see App. B). They
Figure 5: Relevant distributions for \(\gamma\) showers in the small-weights (blue) and large-weights regions (orange and green). We show the energy depositions, the fraction of the energy deposited in the leading voxel, and the sparsity in the three layers of the calorimeter.
are included in an overflow bin in the energy histograms, but appear as a failure mode in the energy fraction of the brightest voxel, for example in layer 2. Finally, we see showers with large weights cluster at low sparsities. Here the generator has a systematic bias towards simpler showers with fewer voxels. The full set of studied observables for \(e^{+}\), \(\gamma\), and \(\pi^{+}\) can be found in Appendix B.
## 5 Event generation
The third network we analyze using learned classifier weights generates events for the process
\[pp\to(Z\to\mu^{+}\mu^{-})+1,2,3\text{ jets} \tag{7}\]
at the reconstruction level, using the precision INN architecture described in detail in Ref. [18]. We first use the published version and then the current state of the art, for which we remove the PCA preprocessing, as it introduces correlations between different jet multiplicities which make the training harder. The convergence of the updated Bayesian version is improved by initializing the standard deviations of the trainable weights with a small value, bringing its performance close to the deterministic version.
As in Ref. [18], we train a classifier on the same observables as the generator. Because the classifier does not have an invertibility constraint, we can add more features as network inputs. For LHC events, the generator will wash out intermediate mass peaks and the \(\Delta R\) distribution
Figure 6: Relevant distributions for \(\pi^{+}\) showers in the small-weights (blue) and large-weights regions (orange). We show the energy deposition and the sparsity in layer 0, and the brightest voxel energy and the sparsity in layer 2.
between jets, so we provide the classifier with
\[\left\{p_{T,i},\eta_{i},\Delta\phi_{i,i-1},M_{i}\right\}\cup\left\{M_{\mu\mu}\right\}\cup\left\{\Delta R_{i_{1},i_{2}}\right\}\cup\left\{\Delta R_{i_{2},i_{3}},\Delta R_{i_{1},i_{3}}\right\}\,, \tag{8}\]
where \(M_{i}\) is only present for muons and there is no \(\Delta\phi\) for the first particle. In addition, to help the network focus on small \(\Delta R\), we take the inverse of this observable and apply a cutoff as a preprocessing step.
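As an illustration of the \(\Delta R\) preprocessing, the following sketch computes the angular distance between two jets and the inverted, capped feature; the cutoff value and function names are assumptions made for this example, not the exact choices used in our training.

```python
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Standard angular distance between two objects."""
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
    return np.sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def inverse_dr_feature(dr, cutoff=20.0):
    """Invert Delta R and cap the result, so the classifier can resolve
    the collinear region; the cutoff value is illustrative."""
    return np.minimum(1.0 / np.maximum(dr, 1e-6), cutoff)

print(inverse_dr_feature(delta_r(0.1, 0.2, 0.5, -0.3)))
```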
### Standard generator and mass peak
We start the discussion of potential failure modes of the event generator with the old network setup from Ref. [18], and with the weight distributions shown in Fig. 7. This network encounters difficulties in reproducing the \(Z\)-peak, where the learned width turns out too large for two and three jets. An example of this is shown in Fig. 8 for \(Z+2\) jets. In the upper sub-panels we show the ratio of generated to truth density as a function of \(M_{\mu\mu}\), the most discrepant distribution for this generator. We see a characteristic shape in the density ratio aligned with the \(M_{\mu\mu}\) distribution. The ratio shows a dip where the model underpopulates the true distribution due to the smearing, and two massive shoulders on either side of the peak, where the smearing causes an overpopulation of generated events relative to truth. The trained classifier compensates this density ratio with values as large as \(w\thicksim 1.5\) on the \(M_{\mu\mu}\) resonance and \(w=0.6\)... \(0.8\) on its shoulders.
The corresponding distribution of trained classifier weights is shown in Fig. 7. In this case, the main peak is shifted to \(w<1\), driven by the overpopulated wings of the smeared \(M_{\mu\mu}\) distribution. A secondary peak or shoulder appears around \(w\thicksim 1.5\), corresponding to the underpopulated \(M_{\mu\mu}\) resonance. It is interesting to compare the weight distributions from the smeared \(M_{\mu\mu}\) resonance and from the smeared distortion of the JetNet data in Fig. 2. Although both are driven by a smearing, the weight distributions are very different. For the smeared jets the maximum of the weight distribution appears at \(w>1\), representing the actual peak configurations, while for the LHC events the maximum of the weight distribution is shifted to \(w<1\), driven by the shoulders of the smeared peak. This reflects the clear differences in the form of the smeared phase space feature and the details of the actual smearing. The NP-classifier does not identify the smearing mechanism in the sense of a Wasserstein distance, but tracks the density ratio over phase space, which requires an interpretation of the entire weight distribution together with the corresponding interpretable phase space.
Figure 7: Left to right: ROC curve, weight distribution on a linear scale, and weight distribution on a logarithmic scale for \(Z+2\) jets events, using the outdated standard generator. The weights are evaluated separately on the true, training dataset for the generator and the generated dataset.
### State-of-the-art generator and feature scan
Next we turn to an improved version of the \(Z+\)jets event generator, where the \(Z\) mass peak is much improved, and the main failure mode shifts elsewhere. In Fig. 9 we show the same weight distributions as in Fig. 7, but for the updated version of the INN event generator and one to three jets. The central peaks are much more narrow, and the distributions for one and two jets are now almost identical. However, we still observe distinctive tails of the weight distributions. They should be evaluated on generated events, if we are interested in small weights, and on training events, if we are interested in large weights. Even for three jets the maximum of the weight distribution remains at one, indicating that for the updated generator the mass peak is no longer a serious problem. On the other hand, the tail towards large weights is sizeable, indicating that we should look for missing sub-leading features in the generated event sample.
Consequently, we search for phase space clustering of \(Z+3\) jets events with anomalous weights in Fig. 10, similar to Fig. 8. We see the effect of small statistics in the otherwise accurately learned \(p_{T}\)-tail, and the \(Z\)-mass peak with hardly any reweighting required. The angular correlation between the jets is the one distribution that is not described well. While reweighting is not needed to describe the maximum around \(R_{jj}\sim 3\), the collinear enhancement in the range \(R_{jj}=0.4\)... \(1.5\) only appears after reweighting with large weights, while the phase space boundary at \(R_{jj}<0.4\) requires very small, potentially zero weights. We can confirm this by looking at the events in the leftmost bins in the central row of Fig. 9. These correspond to weights \(0<w<0.06\), and we have confirmed that for two and three jets at least 95% of these events have one \(\Delta R_{jj}<0.4\).
Finally, we can use event weights to identify unknown issues for a given trained network. In App. B we show a large set of kinematic \(Z+\)jets correlations for events in the tails of the weight distributions. Two kinematic distributions stick out as poorly described -- the rapidity of the softest jet and its jet mass, both shown in Fig. 11. While \(\eta_{j_{h}}\) is part of the standard set of distributions to check, its jet mass is not usually used to benchmark this kind of network [18]. However, it becomes important when we combine event-level with jet-level analysis tools.
In the lower panels of Fig. 11 we show the corresponding distributions, to confirm that the reweighted generated events reproduce the truth and that the classifier output is correct. The reason for the poor performance on the third jet is, most likely, the small size of the training
Figure 8: \(Z\)-peak distributions for \(Z+2\) jets events from the outdated standard generator. We show the agreement of the generated events with the truth or training data (left) and events in different weight ranges (right). The events with small weights are taken from the generated distribution, the events with large weights are taken from the truth distribution.
sample. For a standard, deterministic network the source of such a failure is hard to determine, so we resort to a Bayesian version of the same network for this purpose.
### Bayesian generators and pull
While weight distributions encode a wealth of information about the generative model, we do not know from first principles what shape to expect for well-trained networks. A way out is to supplement them with pull distributions, introduced in Eq. (5), which should approach a standard Gaussian. Deterministic generative networks do not provide us with the necessary information, but a Bayesian generative network returns a density as well as an uncertainty estimate on this density [18, 65]. We can then define the mixed ratio
\[t(x_{i})=\frac{\mu(x_{i})[1-w(x_{i})]}{\sigma(x_{i})}\,, \tag{9}\]
where the mean of the estimated density \(\mu(x_{i})\) and its uncertainty \(\sigma(x_{i})\) are provided by the Bayesian generator, and \(w(x_{i})\) is the classifier output.
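In code, the mixed ratio of Eq. (9) is a one-liner; the sketch below assumes the Bayesian generator provides per-event arrays of density means and standard deviations, with names chosen by us for the example.

```python
import numpy as np

def pull(mu, sigma, w):
    """Mixed ratio t(x) = mu(x) * (1 - w(x)) / sigma(x), Eq. (9).

    mu, sigma: per-event mean and uncertainty of the estimated density
    from the Bayesian generator; w: per-event classifier weight.
    For a perfectly calibrated uncertainty, t should be approximately
    standard-normal distributed (up to correlations between events)."""
    return np.asarray(mu) * (1.0 - np.asarray(w)) / np.asarray(sigma)

print(pull([0.2, 0.5], [0.05, 0.1], [1.1, 0.8]))
```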
Figure 9: Left to right: \(Z+\{1,2,3\}\) jets using the state-of-the-art generator. Top to bottom: ROC curve, weight distribution on a linear scale, and weight distribution on a logarithmic scale. The weights are evaluated separately on the true, training dataset for the generator and the generated dataset.
To extract an error on the likelihood for a specific event, we fix the network weights to the maximum of their posterior distribution and generate a dataset. Next, we use the network as a density estimator and extract a distribution of likelihoods for each event by sampling from the network weight distribution. The width of this distribution should give an estimate of the uncertainty. However, we have to be careful in this interpretation, as we cannot treat the event-wise likelihoods as uncorrelated.
In Fig. 12 we first look at the correlation between the classifier weight \(w(x)\) and the relative error \(\sigma(p_{\text{model}})/\mu(p_{\text{model}})\), together with the median of the Bayesian INN estimate as a function of \(w\). For events with one and two jets there is a clear correlation, while for \(Z+3\) jet events missing features start to dominate the classifier, and the two uncertainty estimates lose their correlation. In the lower panels of Fig. 12 we show the pull distributions, normalizing the deviation of the generated from the true density (encoded in the classifier) by the uncertainty from the generative network. While we obtain a roughly Gaussian shape, its width is much smaller than we would expect. The reason for this is the problem of assigning an uncertainty to individual phase space points without taking their correlation into account.
We can understand the conservative uncertainty estimate of the Bayesian network from the kinematic observables. We use the same distribution of likelihoods as for the reweighted distributions. Turning each distribution into a histogram and taking the bin-wise means and standard deviations, we can also define an error bar for each histogram bin. In Fig. 13 we first show the
Figure 10: Kinematic distributions for \(Z+3\) jets events from the state-of-the-art generator in different weight ranges, to see if events with large corrections cluster in phase space. The bottom panels show two jet masses, which are not part of the standard requirements testing the \(Z+\)jets kinematics. The events with small weights are taken from the generated distribution, the events with large weights are taken from the truth distribution.
same four distributions as in Fig. 10, but now with a Bayesian network uncertainty. For the smooth rapidity and momentum distributions the event reweighting only has a minor effect, corresponding to the observation that events with anomalous weights do not cluster in these distributions. The BNN uncertainty estimate is over-conservative in that it easily covers the deviation of the model from the truth and also the effect of event reweighting.
The situation changes for the \(Z\)-peak, where the network does well, the reweighting does not lead to a significant improvement of the sharp mass peak, but the uncertainty estimate there is too small. For \(\Delta R_{jj}\) we see what happens if the (Bayesian) generative network ignores a feature altogether -- in this case the missing collinear enhancement is not accounted for in density estimation and also not in the uncertainty estimation for the density. This suggests that the implicit bias of the generative networks does not allow it to capture the structure. On the other hand, the classifier identifies this failure mode, and the reweighted distribution reproduces the truth with high precision.
Finally, in the lower two panels of Fig. 13 we show a Bayesian uncertainty estimate for the challenging cases from Fig. 11. We already know that the classifier identifies the problematic phase space region correctly, and the reweighted events reproduce the truth distribution. The question is where this problem comes from. The Bayesian network output tells us if this problem is related to a lack of training data or to the network structure. We see that similar to the first two distributions the Bayesian uncertainty estimate easily covers the difference between generated events and truth, as well as the difference between generated and reweighted events. This clearly points towards a limitation in the training data, most likely just the size
Figure 11: Critical kinematic distributions and for \(Z+3\) jets events from the state-of-the-art generator in different weight ranges (upper) and comparing generated data with truth (lower). The events with small weights are taken from the generated distribution, the events with large weights are taken from the truth distribution.
of the 3-jet dataset. As a side remark, Fig. 13 would not even have flagged these two distributions as problematic; this potentially crucial piece of information requires a dedicated study of events with anomalous weights.
## 6 Conclusions
Generative networks play an important role in the ML-transformation of LHC physics. They can be used for many tasks in event generation, simulation, and advanced analysis. This comes with the requirement to control their precision in the density estimation over phase space systematically. A classifier is perfectly suited to control generative models, as motivated by established classifier reweighting. In addition to a single AUC value it provides us with a wealth of information on the strengths, weaknesses, and failure modes of the generative model.
We have applied a performance test based on the classifier weight distributions for three different generative tasks. First, we have studied a not very realistic, but challenging modification of generated jet configurations, to find that these modifications can be identified and even corrected for by looking at jets with anomalous weights. Our second case was calorimeter showers, where the weight distributions identified types of showers that the generative model did not learn well, pointing us towards possible improvements of the generative setup. Finally, we have looked at an event generator for \(Z+\)jets events, for which the classifier weights again allow us to identify and understand the problems of the generative network training. For this
Figure 12: Top: correlation between the classifier weights and the relative standard deviation of the event weights from the Bayesian generator; Center: medians over the \(w\)-bins. Bottom: pulls combining the standard deviation of the event weight distribution with the error estimate from the Bayesian generator.
case we also showed how our diagnostic can be embedded in a comprehensive precision and uncertainty framework for generative events.
Some standard failure modes appearing in our three applications and diagnosed by the weight distributions are: (i) missing features or missing tails in the generated events, leading to a tail of large weights \(w\gg 1\) clustered in phase space; (ii) wrongly learned phase space boundaries or sharp cliffs, leading to a tail towards small weights \(w\ll 1\), clustered in phase space; (iii) sharp features learned with reduced resolution, leading to a shift of the peak of the weight distribution to values \(w<1\) and a compensating enhancement at finite \(w>1\), related to the amount of missing resolution and also clustered in phase space.
The clustering of anomalous weights in the interpretable phase space has, in all cases,
Figure 13: Kinematic distributions for \(Z+3\) jets events from the Bayesian state-of-the-art generator, with bin-wise error bars on the generated events. The distributions include the set of Fig. 10 as well as the challenging distributions from Fig. 11.
allowed us to identify the physics reason behind the poorly performing generative network. Moreover, reweighting the events with the classifier weights over phase space allows us to improve the network and make sure that the weighted events do reproduce all key features.
Our study shows that a trained classifier can and should be used to analyze the performance of generative networks; the weight distribution not only tests the performance of the generator, it also allows us to identify failure modes, correct for shortcomings, and provides a key ingredient for the development of precision generators for particle physics.
### Acknowledgements
We would like to thank Anja Butter and Ramon Winterhalder for many useful discussions and for the close coordination with Ref. [68]. CK and TP would like to thank the Baden-Württemberg-Stiftung for financing through the program _Internationale Spitzenforschung_, project _Uncertainties - Teaching AI its Limits_ (BWST_IF2020-010). RD and DS are supported by the U.S. Department of Energy under Award Number DOE-SC0010008. TH is funded by the Carl-Zeiss-Stiftung through the project _Model-Based AI: Physical Models and Deep Learning for Imaging and Cancer Treatment_. This research is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257: Particle Physics Phenomenology after the Higgs Discovery and through Germany's Excellence Strategy EXC 2181/1 - 390900948 (the _Heidelberg STRUCTURES Excellence Cluster_).
## Appendix A Classifier calibration
To gauge whether the classifiers used in our study have been well-trained (not overfitted, reasonably close to optimal), one important check is to inspect their calibration curves. The idea of the calibration curve is that a properly learned and optimal classifier \(C(x)\) should return the probability that \(x\) is class 1, and \(1-C(x)\) the probability that \(x\) is class 0. Therefore, if we took all events \(x\) in the training data (assumed to be balanced) for which \(C(x)=C\), a fraction \(C\) of them should be class 1. The differential way to write this is
\[\frac{\frac{\mathrm{d}N_{1}}{\mathrm{d}C}}{\frac{\mathrm{d}N_{1}}{\mathrm{d}C}+\frac{\mathrm{d}N_{0}}{\mathrm{d}C}}=C. \tag{10}\]
As in the main body of this paper, we will look at calibration curves in terms of the weights \(w\). Using Eq.(1), we can turn Eq.(10) into a statement about the weights,
\[\frac{\mathrm{d}N_{1}}{\mathrm{d}w}=\frac{\mathrm{d}N_{0}}{\mathrm{d}w}\,w. \tag{11}\]
Equation (11) implies an equivalent way of plotting a calibration curve in weight space: divide the combined weight distribution into bins and calculate the ratio \(N_{\mathrm{truth}}/N_{\mathrm{gen}}\) for each bin. According to Eq. (11), for a well-calibrated classifier this ratio should follow the weight \(w\) itself. We show calibration curves, calculated following this method, for our classifiers in Fig. 14. We see that the classifiers are for the most part very well-calibrated. One possible exception is for \(e^{+}\) at lower weights, but one should keep in mind that this is one of the better generative models considered in this work (AUC=0.536), so nearly all the events are in the well-calibrated part of the calibration curve (with \(w\approx 1\)). Also, as we see in the discussion in Sec. 4.2 and in Fig. 17, even if the tails of the classifier are mis-calibrated, it can still extract poorly modeled regions of phase space and assign, if anything, too extreme weights to them. However, attention is needed when using them for reweighting.
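A minimal sketch of this calibration check in weight space, assuming two arrays of weights evaluated on truth and generated events; the binning choices are ours.

```python
import numpy as np

def calibration_in_weight_space(w_true, w_gen, bins=40):
    """Bin the weights and compare the per-bin ratio of truth to generated
    counts with the bin centre: for a well-calibrated classifier the two
    should agree, cf. Eq. (11)."""
    w_true, w_gen = np.asarray(w_true), np.asarray(w_gen)
    hi = np.percentile(np.concatenate([w_true, w_gen]), 99)
    edges = np.linspace(0.0, hi, bins + 1)
    n_true, _ = np.histogram(w_true, bins=edges)
    n_gen, _ = np.histogram(w_gen, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # normalize in case the two samples have different sizes
    ratio = (n_true / len(w_true)) / np.maximum(n_gen / len(w_gen), 1e-12)
    return centres, ratio   # ideal calibration: ratio ~ centres
```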
To confirm that our calibration curves in Fig. 14 are indeed reasonably well-calibrated, we consider the case of an _overfitted_ classifier obtained by training on two statistically identical Geant samples. In Fig. 15 (left) we show the weight distribution for the test dataset after epochs 10 and 150. The middle panel shows the training and validation losses when training without learning rate scheduler. The right panel shows the calibration curve for epoch 150. The network learns to distinguish between the two Geant samples and reweights one noise into the other. This guarantees that the weight distribution is symmetric around the maximum at \(w=1\). However, the weight distributions broaden during training. Because the difference between the two datasets is feature-less, this broadening is the same on the classifier training and test datasets. At the same time, Fig. 15 and Eq.(6) illustrate the benefit of studying the weight distributions: the AUC evaluated on the test dataset is stable at 0.5 during the entire training, but the weight distribution shows that the classifier is moving away from optimality. All in all, we see how all three diagnostics -- weight distribution, training/validation losses, and calibration curve -- indicate a poorly trained classifier, in stark contrast to the classifiers considered in this paper.
Figure 14: Calibration plots in weight space for the different discriminator models. Top to bottom: (i) jets with shifted, smeared and tailcut distortions; (ii) normalized showers for \(e^{+}\), \(\gamma\), and \(\pi^{+}\); (iii) \(Z+\{1,2,3\}\) jets using the state-of-the-art generator.
Figure 15: Left: weight distributions of test set for a classifier trained on two different Geant samples for pion showers. Center: BCE loss function for training and validation. Right: calibration curve in weight space at epoch 150.
## Appendix B Additional kinematic distributions
Figure 17: Clustering plots for \(e^{+}\): similar pattern to the \(\gamma\) showers, expected given the similar physics and data structure.
Figure 18: Clustering plots for \(\pi^{+}\): (i) for the different energies the INN finds all features, but the balance between feature and continuum is not perfect; (ii) in both tails corrections at all energies are applied; (iii) the generator over-samples showers with no energy deposition in layer-1 and layer-2; (iv) large sparsity values are underestimated by the INN.
Figure 19: Set of kinematic distributions for \(Z+3\) jets events from the state-of-the-art generator in different weight ranges. The events with small weights are taken from the generated distribution, the events with large weights are taken from the truth distribution.
|
2305.01208
|
Effect of finite nuclear size on the electric quadrupole hyperfine
operator
|
We present an expression for the operator of the electric quadrupole
hyperfine interaction which takes into account finite nuclear size. We compare
the results obtained with the use of this operator with those obtained in the
standard approach which ignores finite nuclear size. We found that the effect
of changing operators on the hyperfine constant $B$ is small in hydrogen-like
systems. There is a very significant enhancement of the effect in many-electron
atoms caused by the contribution of the large $s_{1/2}-d_{3/2},d_{5/2}$ and
$p_{1/2}-p_{3/2}$ off diagonal matrix elements to the core polarisation,
correlation and configuration interaction corrections. Similar enhancement
takes place for transition amplitudes induced by the electric quadrupole
hyperfine interaction.
|
V. A. Dzuba, V. V. Flambaum
|
2023-05-02T05:27:58Z
|
http://arxiv.org/abs/2305.01208v1
|
# Effect of finite nuclear size on the electric quadrupole hyperfine operator
###### Abstract
We present an expression for the operator of the electric quadrupole hyperfine interaction which takes into account finite nuclear size. We compare the results obtained with the use of this operator with those obtained in the standard approach which ignores finite nuclear size. We found that the effect of changing operators on the hyperfine constant \(B\) is small in hydrogen-like systems. There is a very significant enhancement of the effect in many-electron atoms caused by the contribution of the large \(s_{1/2}-d_{3/2},d_{5/2}\) and \(p_{1/2}-p_{3/2}\) off diagonal matrix elements to the core polarisation, correlation and configuration interaction corrections. Similar enhancement takes place for transition amplitudes induced by the electric quadrupole hyperfine interaction.
## I Introduction
The study of the hyperfine structure (hfs) in heavy and superheavy atomic systems is a valuable tool for obtaining information about nuclei [1; 2]. Comparing experimental hfs with theoretical calculations of the magnetic dipole hfs constant \(A\) and the electric quadrupole hfs constant \(B\) allows extraction of the nuclear magnetic moment \(\mu\) and the nuclear electric quadrupole moment \(Q\). The electric quadrupole moment \(Q\) is strongly enhanced in deformed nuclei. This may serve as guidance in the search for the nuclear stability island since the nuclei in the vicinity of the island are expected to be spherical (see, e.g. [3]).
The effect of the finite nuclear size on the magnetic hfs constant \(A\) has been extensively studied in numerous publications. It is sufficient to mention the Bohr-Weisskopf effect, where the constant \(A\) is not exactly proportional to the nuclear magnetic moment since it also depends on the distribution of magnetization inside the nucleus - see e.g. [4; 5; 6; 7; 8; 9; 10; 11; 12] and references therein. On the other hand, we are not aware of such a study for the electric quadrupole hfs constant.
## II Electric quadrupole operator
Standard operator of the electric quadrupole hfs interaction (the \(Q_{20}\) component) has the form [1; 2]
\[\hat{Q}=Y_{20}/r^{3}, \tag{1}\]
where \(Y_{20}\) is the spherical function, \(r\) is the distance to the nucleus. We omit here a coefficient which does not play any role in further discussion since we will discuss relative corrections to the hfs constant \(B\), i.e. \(\delta B/B\). From the symmetry of the problem we conclude that the quadrupole electric field in the centre of the nucleus vanishes, \({\bf E}=-\nabla\phi=0\). The vanishing gradient means that the quadrupole electrostatic potential near \(r=0\) is \(\phi\propto r^{2}Y_{20}\). This leads us to a simple analytical form of the quadrupole operator which takes into account the finite nuclear radius \(R\):
\[\hat{Q}=F(r)Y_{20},\qquad F(r)=\left\{\begin{array}{ll}r^{2}/R^{5},&r\leq R\\ 1/r^{3},&r>R\end{array}\right. \tag{2}\]
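For completeness, the radial factor of Eq. (2) can be transcribed directly, for instance when evaluating matrix elements on a radial grid; units and grid handling are left open in this sketch, and the variable names are ours.

```python
import numpy as np

def radial_factor(r, R):
    """Radial part F(r) of the corrected quadrupole operator, Eq. (2):
    r^2 / R^5 inside the nucleus, 1 / r^3 outside; the two branches
    match continuously at r = R."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= R, r ** 2 / R ** 5, 1.0 / np.clip(r, 1e-12, None) ** 3)

R = 7.0  # nuclear radius in fm, illustrative value
print(radial_factor([0.5 * R, R, 2.0 * R], R))
```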
## III Qualitative consideration of the finite nuclear size effect
Let us start from a qualitative consideration of the dependence of \(B\) on the nuclear radius \(R\). Integrals in the matrix elements of the singular quadrupole operator (1) are dominated by small distances \(r\) from the nucleus, where we can neglect the energy of an electron compared to the Coulomb potential and the screening of the nuclear Coulomb potential by electrons. The solution of the radial Dirac equation in the Coulomb field for zero energy is expressed in terms of the Bessel functions \(J_{\gamma_{j}}(x)\) and \(J_{\gamma_{j}-1}(x)\), where \(\gamma_{j}=\sqrt{(j+1/2)^{2}-Z^{2}\alpha^{2}}\), \(j\) is the electron angular momentum, \(Z\) is the nuclear charge, \(\alpha\) is the fine structure constant, \(x=(8Zr/a)^{1/2}\) is the dimensionless distance variable, and \(a\) is the Bohr radius - see e.g. Refs. [13; 14]. Therefore, the radial dependence of the charge density of an electron may be presented as
\[\rho(r)=\frac{C_{nlj}}{r^{2}}f(x), \tag{3}\]
where the normalisation constant \(C_{nlj}\) may be omitted since it cancels out in the ratio \(\delta B/B\), and the dimensionless function \(f(x)\) is expressed in terms of products of Bessel functions. We can present the matrix element of the operator (1) as
\[B\sim\int_{0}^{\infty}\frac{\rho(r)}{r^{3}}f(x)r^{2}dr=C_{nlj} \left(8Z/a\right)^{2}I, \tag{4}\] \[I=2\int_{0}^{\infty}\frac{f(x)}{x^{3}}dx\sim 1\]
Near the nucleus Bessel functions \(J_{\gamma_{j}}(x)\) may be expanded for \(x\ll 1\), and we have [13; 14]
\[\rho(r)=C_{nlj}\left(8Z/a\right)^{2\gamma_{j}}r^{2(\gamma_{j}-1)}. \tag{5}\]
If we use operator (2) instead of the singular operator (1), the contribution of the area inside the nucleus is significantly suppressed and this effect produces the change in the hfs constant \(B\),
\[\delta B\sim-C_{nlj}\int_{0}^{R}\frac{\rho(r)}{r^{3}}f(x)r^{2}dr,\] \[\frac{\delta B}{B}\sim-\left(\frac{RZ}{a}\right)^{2(\gamma_{j}-1)}. \tag{6}\]
We should note that Eq. (5) for the density \(\rho(r)\) is valid outside the nucleus, where the nuclear Coulomb potential is equal to \(V(r)=-Ze^{2}/r\). However, practically all numerical calculations of hfs have actually taken into account the finite size of the nucleus in the electron wave functions. Analytical calculations of the electron wave function in the finite-size nucleus potential \(V(r)\) (instead of the point-like potential \(V(r)=-Ze^{2}/r\)) have been done in Refs. [13; 14]. The main difference is that the leading term becomes \(\rho(r)\propto r^{2j-1}\) (instead of \(r^{2(\gamma_{j}-1)}\); the difference in the power of \(r\) is \(\sim Z^{2}\alpha^{2}/(j+1/2)\)). Such a modification of \(\rho(r)\) inside the nucleus produces a coefficient \(\sim 1\) in the estimate of \(\delta B/B\) and does not change any conclusions. This is easy to explain since the Coulomb wave function for \(r>R\) provides the boundary condition at \(r=R\) for the solution inside the nucleus; therefore, the factor \(\left(\frac{RZ}{a}\right)^{2(\gamma_{j}-1)}\) in the estimate of \(\delta B/B\) appears in any case.
The \(s_{1/2}\) and \(p_{1/2}\) electronic states have zero value of \(B\). Simple estimates for the states with total angular momentum \(j>1/2\) give \(\delta B/B\) equal to a small fraction of a per cent. Indeed, the power of the small parameter \(RZ/a\) in Eq. (6) is positive, from 2 for small \(Z\alpha\) to 1.46 for \(Z\)=137. However, this naive estimate is only valid for hydrogen-like single-electron atoms.
In many-electron atoms the core polarization corrections and other correlation corrections contain large non-diagonal matrix elements of the hyperfine interaction such as \(\langle s_{1/2}|\hat{Q}|d_{3/2}\rangle\), \(\langle s_{1/2}|\hat{Q}|d_{5/2}\rangle\) and \(\langle p_{1/2}|\hat{Q}|p_{3/2}\rangle\). These large non-diagonal matrix elements may strongly enhance the effects of configuration mixing on \(B\). They are also responsible for the transition amplitudes induced by the electric quadrupole hyperfine interaction - see, for example, Ref. [15], where probabilities of E3 and M2 atomic clock transitions, which are transformed to E1 by the hfs operators, have been calculated. The electron wave functions \(s_{1/2}\) and \(p_{1/2}\) tend to infinity for a point-like nucleus, and this significantly increases the sensitivity to the nuclear size:
\[\frac{\delta B}{B}\sim\left(\frac{RZ}{a}\right)^{\gamma_{1/2}+\gamma_{3/2}-2} \tag{7}\]
The power of the small parameter \(RZ/a\) becomes negative for \(Z>132\) (this means "infinite" \(\delta B/B\) for \(R=0\)). However, the ratio \(\delta B/B\) may exceed 1% already for \(Z>80\). Therefore, we should use the more accurate electric quadrupole operator (2) inside the nucleus. Below we complement our rough estimates with accurate numerical calculations.
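The scaling arguments of Eqs. (6) and (7) are easy to evaluate numerically. The sketch below uses \(R=1.2A^{1/3}\) fm with illustrative mass numbers chosen by us; it drops all prefactors of order one, so it should be read as an order-of-magnitude estimate only, not as a substitute for the full numerical values in Table 1.

```python
import numpy as np

ALPHA = 1 / 137.035999      # fine-structure constant
A_BOHR_FM = 5.29177e4       # Bohr radius in fm

def gamma_j(j, Z):
    """Relativistic exponent gamma_j = sqrt((j+1/2)^2 - (Z*alpha)^2)."""
    return np.sqrt((j + 0.5) ** 2 - (Z * ALPHA) ** 2)

def off_diagonal_scaling(Z, A):
    """Order-of-magnitude estimate of |delta B / B| for s1/2-d3/2 type
    matrix elements, Eq. (7), with R = 1.2 A^(1/3) fm."""
    R_fm = 1.2 * A ** (1 / 3)
    exponent = gamma_j(0.5, Z) + gamma_j(1.5, Z) - 2.0
    return (R_fm * Z / A_BOHR_FM) ** exponent

# illustrative (Z, A) pairs; the exponent turns negative only above Z ~ 132
for Z, A in [(50, 119), (83, 209), (100, 245), (120, 295)]:
    exp = gamma_j(0.5, Z) + gamma_j(1.5, Z) - 2.0
    print(f"Z={Z:3d}: exponent={exp:+.3f}, estimate ~ {100 * off_diagonal_scaling(Z, A):.2f} %")
```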
## IV Hydrogen-like systems
We start our study from the hydrogen-like systems. We use the Fermi distribution of the electric charge over the nuclear volume with \(R=1.2A^{1/3}\) fm, where \(A\) is the number of nucleons in the nucleus. The same nuclear radius is used in (2). We perform calculations of the allowed diagonal and non-diagonal matrix elements of the operator \(\hat{Q}\) for the \(3s\), \(3p_{1/2}\), \(3p_{3/2}\), \(3d_{3/2}\) and \(3d_{5/2}\) states. Note that all single-electron states of the same symmetry are proportional to each other at short distances; therefore, states with any principal quantum number \(n\) can be used in the study. We choose \(n=3\) just for convenience. The calculations are done for a set of different values of the nuclear charge \(Z\) and for the two forms of the operator \(\hat{Q}\), (1) and (2). The results are compared in Table 1, presented as the difference in per cent between the two forms of the operator. One can see that the difference is small for the diagonal matrix elements. It reaches \(\sim 10^{-3}\) for \(Z=120\). Note also that the effect is practically zero for states with \(j>3/2\). However, the effect is much larger for the off-diagonal matrix elements involving \(s_{1/2}\) or \(p_{1/2}\) states. This is because these states penetrate inside the nucleus. The effect reaches \(\sim 1\%\) for \(Z=83\) (the Bi atom) and becomes even larger for higher \(Z\) (see Table 1).
## V Many-electron atoms
Next, we study the effect of changing the electric quadrupole operator \(\hat{Q}\) from (1) to (2) on the diagonal matrix elements (i.e. on the electric quadrupole constants \(B\)) in many-electron atoms. As examples, we consider heavy atoms or ions with a relatively simple electronic structure, one electron above closed shells. In atoms with several valence electrons the effect may be even bigger due to the large configuration mixing, which involves the \(s_{1/2}-d_{3/2},d_{5/2}\) and \(p_{1/2}-p_{3/2}\) matrix elements of \(\hat{Q}\). The calculations are done in the \(V^{N-1}\) approximation, which means that the initial relativistic
Figure 1: First order core polarisation correction. Cross stands for the electric quadrupole operator, \(v\) is valence state, \(c\) is a state in the core, \(m\) is the virtual state above the core.
Hartree-Fock (RHF) calculations are performed for the closed-shell core, and the states of the external electron are calculated in the field of the frozen core. To calculate matrix elements of the \(\hat{Q}\) operator, we use the time-dependent Hartree-Fock method, which is equivalent to the well-known random-phase approximation (RPA) - see e.g. [2]. The RPA equations can be written as (see e.g. [16])
\[(\hat{H}_{0}-\epsilon_{c})\delta\psi_{c}=-(\hat{Q}+\delta V)\psi_{c}. \tag{8}\]
Here \(\hat{H}_{0}\) is the RHF operator, the index \(c\) labels states in the core, \(\psi_{c}\) is the single-electron wave function for a particular state in the core, \(\delta\psi_{c}\) is the correction to \(\psi_{c}\) caused by the external field \(\hat{Q}\), and \(\delta V\) is the correction to the self-consistent RHF potential caused by the change in all core wave functions. The RPA equations (8) are solved self-consistently for all states in the core to find \(\delta V\). Matrix elements for valence states \(v\) are then calculated as
\[\langle v|\hat{Q}+\delta V|v\rangle.\]
The results of the calculations are presented in Table 2. We included Fr and U as heavy atoms of broad experimental interest. We also included Fm and No as the heaviest atoms for which atomic spectra measurements are in progress [17; 18; 19; 20; 21; 22]. We included E120 to illustrate how big the effect could be for very high \(Z\). We see from the table that the effect on \(B\) in many-electron atoms is significantly larger than that for the hydrogen-like systems (see Table 1). This is due to the contribution of the \(s_{1/2}-d_{3/2},d_{5/2}\) and \(p_{1/2}-p_{3/2}\) off-diagonal matrix elements to the core polarisation correction (see Fig. 1).
For a better understanding of the role of the off-diagonal matrix elements in the core polarisation correction, we present in Table 3 the decomposition of the corrections
\begin{table}
\begin{tabular}{c c c c c c c} \(Z\) & Atom & \(v=np_{3/2}\) & \(v=(n-1)d_{3/2}\) & \(v=(n-1)d_{5/2}\) & \(v=(n-2)f_{5/2}\) & \(v=(n-2)f_{7/2}\) \\ \hline
87 & Fr I & 0.059 & 0.037 & 0.216 & 0.162 & 0.165 \\
92 & U VI & 0.041 & -0.020 & 0.070 & 0.073 & 0.088 \\
100 & Fm I & & & & 0.069 & 0.072 \\
102 & No II & 0.108 & -0.090 & 0.392 & & \\
120 & E120 II & 0.433 & -1.373 & 1.664 & 1.315 & 1.526 \\ \end{tabular}
\end{table}
Table 2: The effect of changing the electric quadrupole operator (in per cent) in the diagonal matrix elements for the valence single-electron wave functions of many-electron atoms and ions. The numbers in the last five columns are \(100(m_{0}/m_{1}-1)\), where \(m_{0}=\langle v|\hat{Q}|v\rangle\) is the matrix element calculated with the textbook formula (1) and \(m_{1}\) is the matrix element calculated with the corrected formula (2). \(n=7\) for Fr, U, Fm and No, \(n=8\) for E120.
\begin{table}
\begin{tabular}{c c c c c c c} \(Z\) & \(A\) & \(R\) & \(s_{1/2}-d_{3/2}\) & \(s_{1/2}-d_{5/2}\) & \(p_{1/2}-p_{3/2}\) & \(p_{3/2}-p_{3/2}\) & \(d_{3/2}-d_{3/2}\) \\ \hline
10 & 21 & 3.31071 & 0.0103 & 0.0647 & 0.0001 & 0.0000 & 0.0000 \\
20 & 43 & 4.20408 & 0.0340 & 0.1118 & 0.0005 & 0.0001 & 0.0000 \\
30 & 67 & 4.87386 & 0.0699 & 0.1664 & 0.0022 & 0.0004 & 0.0000 \\
40 & 91 & 5.39753 & 0.1220 & 0.2341 & 0.0066 & 0.0009 & 0.0000 \\
50 & 119 & 5.90242 & 0.1985 & 0.3296 & 0.0167 & 0.0020 & 0.0000 \\
60 & 145 & 6.30431 & 0.3067 & 0.4538 & 0.0371 & 0.0037 & 0.0001 \\
70 & 171 & 6.66060 & 0.4644 & 0.6306 & 0.0772 & 0.0067 & 0.0003 \\
80 & 199 & 7.00593 & 0.6936 & 0.8877 & 0.1537 & 0.0118 & 0.0007 \\
83 & 239 & 7.44699 & 0.8075 & 1.0685 & 0.1953 & 0.0150 & 0.0010 \\
92 & 235 & 7.40521 & 1.1079 & 1.3601 & 0.3354 & 0.0223 & 0.0018 \\
100 & 245 & 7.50879 & 1.4816 & 1.7606 & 0.5419 & 0.0326 & 0.0032 \\
120 & 295 & 7.98832 & 3.0962 & 3.6355 & 1.7239 & 0.0879 & 0.0128 \\ \end{tabular}
\end{table}
Table 1: The effect of changing the electric quadrupole operator (in per cent) in matrix elements for the hydrogen-like single-electron wave functions. The numbers in the last five columns are \(100(m_{0}/m_{1}-1)\), where \(m_{0}\) is the matrix element calculated with the textbook formula (1) and \(m_{1}\) is the matrix element calculated with the corrected formula (2), \(R\) is the nuclear radius in fm.
\begin{table}
\begin{tabular}{c c c c c c} \(v=8p_{3/2}\) & \(v=7d_{3/2}\) & \(v=7d_{5/2}\) & \(v=6f_{5/2}\) & \(v=6f_{7/2}\) \\ \hline & \multicolumn{4}{c}{Relative CP correction (per cent)} \\ & 36 & 20 & 60 & 91 & 93 \\ \hline Channel & \multicolumn{4}{c}{Decomposition over channels (per cent)} \\ \(s_{1/2}\) & 20 & 304 & 82 & 59 & 52 \\ \(p_{1/2}\) & -158 & -1505 & -579 & -349 & -361 \\ \(p_{3/2}\) & 255 & 1487 & 652 & 439 & 450 \\ \(d_{3/2}\) & -32 & -373 & -117 & -79 & -73 \\ \(d_{5/2}\) & 15 & 186 & 63 & 29 & 31 \\ \(f_{5/2}\) & -3 & -31 & -10 & -5 & -5 \\ \(f_{7/2}\) & 3 & 32 & 9 & 6 & 6 \\ Total & 100 & 100 & 100 & 100 & 100 \\ \end{tabular}
\end{table}
Table 3: Relative values of the core polarisation correction to the matrix elements of the valence states of E120 II as well as their decomposition over different core channels (all numbers in per cent). The core polarisation correction \(\langle v|\delta V|v\rangle\) is given relative to the total matrix element \(\langle v|\hat{Q}+\delta V|v\rangle\). One channel is the sum over all core states of a given type (\(s_{1/2}\), \(p_{1/2}\), etc.) and all possible excited states in the expression for core polarisation (see diagrams on Fig. 1).
to the matrix elements of different states of E120\({}^{+}\) over different channels in the core. One channel is the sum over all core states \(c\) of a particular symmetry and all possible states \(m\) above the core. For example, the \(s\)-channel contains terms with the matrix elements \(\langle 1s|\hat{Q}+\delta V|nd_{3/2}\rangle\), \(\langle 1s|\hat{Q}+\delta V|nd_{5/2}\rangle\), \(\langle 2s|\hat{Q}+\delta V|nd_{3/2}\rangle\), etc. The \(s_{1/2}\) and \(p_{1/2}\) channels give a non-zero contribution due to off-diagonal matrix elements only. The off-diagonal matrix elements contribute to other channels too. For example, the \(\langle 2p_{3/2}|\hat{Q}+\delta V|np_{1/2}\rangle\) matrix elements contribute to the \(p_{3/2}\) channel.
As can be seen from Table 3, the contribution of the off-diagonal matrix elements is huge (mostly in the \(s_{1/2}\) and \(p_{1/2}\) channels). Some contributions of the off-diagonal matrix elements exceed the final answer many times over, since there are partial cancellations of these big contributions. Large off-diagonal matrix elements are more sensitive to the form of the operator \(\hat{Q}\), see Table 1. These two facts lead to a significant enhancement of the effect in many-electron atoms.
Calculations of the probabilities of clock transitions induced by the hyperfine interaction (see, for example, Ref. [15]) can also benefit from using the correct form of the electric quadrupole operator \(\hat{Q}\). These transitions, forbidden as electric dipole transitions, are opened by off-diagonal mixing of states with different electron angular momentum by the magnetic dipole or electric quadrupole interaction [15].
_Acknowledgements_ -- The work was supported by the Australian Research Council Grants No. DP230101058 and DP200100150.
|
2305.02701
|
Star cluster progenitors are dynamically decoupled from their parent
molecular clouds
|
The formation of stellar clusters dictates the pace at which galaxies evolve,
and solving the question of their formation will undoubtedly lead to a better
understanding of the Universe as a whole. While it is well known that star
clusters form within parsec-scale over-densities of interstellar molecular gas
called clumps, it is, however, unclear whether these clumps represent the
high-density tip of a continuous gaseous flow that gradually leads towards the
formation of stars, or a transition within the gas physical properties. Here,
we present a unique analysis of a sample of 27 infrared dark clouds embedded
within 24 individual molecular clouds that combine a large set of observations,
allowing us to compute the mass and velocity dispersion profiles of each, from
the scale of tens of parsecs down to the scale of tenths of a parsec. These
profiles reveal that the vast majority of the clouds, if not all, are
consistent with being self-gravitating on all scales, and that the clumps, on
parsec-scale, are often dynamically decoupled from their surrounding molecular
clouds, exhibiting steeper density profiles ($\rho\propto r^{-2}$) and flat
velocity dispersion profiles ($\sigma\propto r^0$), clearly departing from
Larson's relations. These findings suggest that the formation of star clusters
correspond to a transition regime within the properties of the self-gravitating
molecular gas. We propose that this transition regime is one that corresponds
to the gravitational collapse of parsec-scale clumps within otherwise stable
molecular clouds.
|
Nicolas Peretto, Andrew J. Rigby, Fabien Louvet, Gary A. Fuller, Alessio Traficante, Mathilde Gaudel
|
2023-05-04T10:19:16Z
|
http://arxiv.org/abs/2305.02701v2
|
Star cluster progenitors are dynamically decoupled from their parent self-gravitating molecular clouds
###### Abstract
The formation of stellar clusters dictates the pace at which galaxies evolve, and solving the question of their formation will undoubtedly lead to a better understanding of the Universe as a whole. While it is well known that star clusters form within parsec-scale over-densities of interstellar molecular gas called clumps, it is, however, unclear whether these clumps represent the high-density tip of a continuous gaseous flow that gradually leads towards the formation of stars, or a transition within the gas physical properties. Here, we present a unique analysis of a sample of 27 infrared dark clouds embedded within 24 individual molecular clouds that combine a large set of observations, allowing us to compute the mass and velocity dispersion profiles of each, from the scale of tens of parsecs down to the scale of tenths of a parsec. These profiles reveal that the vast majority of the clouds, if not all, are self-gravitating on all scales, and that the clumps, on parsec-scale, are often dynamically decoupled from their surrounding molecular clouds, exhibiting steeper density profiles (\(\rho\propto r^{-2}\)) and flat velocity dispersion profiles (\(\sigma\propto r^{0}\)), clearly departing from Larson's relations. These findings suggest that the formation of star clusters corresponds to a transition regime within the properties of the self-gravitating molecular gas. We propose that this transition regime is one that corresponds to the gravitational collapse of parsec-scale clumps within stable molecular clouds.
keywords: stars: formation - ISM: kinematics and dynamics
## 1 Introduction
Only a few years after the first detection of interstellar carbon monoxide, Zuckerman & Evans (1974) showed that if all the gas within dense interstellar clouds were to be freely collapsing as a result of their self-gravity then the star formation rate in the Milky Way should be \(\sim 300\) M\({}_{\odot}\)/yr, two orders of magnitude larger than what it actually is (\(\sim 2\) M\({}_{\odot}\)/yr - e.g. Robitaille & Whitney, 2010). In other words, molecular clouds convert only \(\sim 1\)% of their mass into stars every cloud free-fall time, making star formation a very inefficient process (e.g. Krumholz & Tan, 2007). Despite five decades of star formation research, the physics behind this fundamental property of molecular clouds remains to be fully understood. Over the years, a number of competing theories have been developed to explain the low star formation efficiency of molecular clouds. The main differences between those models reside in the fraction of the volume/mass of any molecular cloud that undergoes gravitational collapse, and in the dynamical state of the gas that does not. In one family of models, supersonic turbulence is the one mechanism responsible for defining the mass reservoirs accessible to individual protostars and, as a result, for setting the stellar initial mass function (e.g. Padoan et al., 1997, 2020; Krumholz & McKee, 2005; Hennebelle & Chabrier, 2008; Hopkins, 2012). In those models, the low star formation efficiency is explained by the fact that those mass reservoirs represent only a couple of per cent of the molecular gas mass, while the rest of the gas is either unbound or in quasi-static equilibrium and therefore does not directly participate in star formation. On the other hand, other models predict that the hierarchical gravitational collapse of molecular clouds is what drives their evolution (e.g. Hartmann & Burkert, 2007; Ballesteros-Paredes et al., 2011; Vazquez-Semadeni et al., 2017, 2019) and that massive star formation benefits from the favourable conditions generated by the global collapse of dense clumps (e.g. Bonnell & Bate, 2006; Peretto et al., 2007; Smith et al., 2009). In those models, what limits the efficiency of star formation is stellar feedback from young low- and high-mass stars, by stabilising or dispersing most of the molecular cloud's mass (e.g. Nakamura & Li, 2007; Wang et al., 2010; Dale et al., 2012; Kim et al., 2018; Offner & Liu, 2018; Grudic et al., 2022). The controversy around which of these two very different scenarios of star formation describes reality best has fuelled the majority of star formation research over the past 20 years or so.
A large number of studies have looked at the gravitational binding of molecular clouds and their sub-structures within, most often via the calculation of their virial parameters (e.g. Larson, 1981;
Solomon et al., 1987; Heyer et al., 2009; Roman-Duval et al., 2010; Kauffmann et al., 2013; Schuller et al., 2017; Miville-Deschenes et al., 2017; Rigby et al., 2019). Depending on the cloud sample that is being studied, the methods that are being used, and the interpretation of the data that is being made, the conclusions range from molecular clouds being in hydrostatic equilibrium, to collapsing, to unbound. As a result, a consensus has yet to be found.
A possibly more insightful analysis of molecular clouds is the study of their internal virial ratio profiles. Indeed, if there is a scale/density threshold at which the gravitational binding of clouds changes from unbound to bound as a result of, for instance, stellar feedback, then the virial ratio profiles of individual clouds should exhibit some breaks at that particular scale. No one has yet attempted such an analysis. Heyer et al. (2009) have measured the properties of clouds using a single tracer, i.e. \({}^{13}\)CO(1-0), at two different radii and, based on those, argued that clouds are likely to be in quasi-static equilibrium. Traficante et al. (2018, 2020) have also measured cloud properties at two different radii using, this time, different tracers and argued that the size-velocity dispersion relationship within parsec-scale clumps is flatter than what is observed on larger scales.
In this paper, we propose to derive, in a uniform way, the virial ratio profiles of a sample of molecular clouds from scales of tenths of a parsec, up to tens of parsecs, in order to determine their dynamical state. In Sec. 2 we present the source selection and observations. Section 3 explains how the profiles of individual clouds are built. Section 4 presents the models we use to determine the origin of the observed profile features. In Sec. 5 we discuss our results while conclusions are laid out in Sec. 6.
Figure 1: Images of SDC18.888-0.476. (a): top - _Spitzer_ 8\(\mu\)m; middle - H\({}_{2}\) column density from _Herschel_ observations; bottom - N\({}_{2}\)H\({}^{+}\)(1-0) integrated emission. The contours are identical in all panels, and are those of the H\({}_{2}\) column density image. The three thicker white contours are those used to compute the average N\({}_{2}\)H\({}^{+}\)(1-0) spectra displayed in magenta in panel (c). (b): Multi-colour image of the molecular cloud hosting the SDC18.888-0.476 infrared dark clump (white: 3.6 \(\mu\)m, orange: 8 \(\mu\)m, yellow: 70 \(\mu\)m, orange: 350 \(\mu\)m, blue: 1.42 GHz, red: H\({}_{2}\) column density). The contours show the H\({}_{2}\) column density obtained from the Galactic Ring Survey \({}^{13}\)CO(1-0) data. The thicker white contours are those used to compute the \({}^{13}\)CO(1-0)-based spectra shown in green in panel (c). The plus symbol shows the central position of the IRDC. (c): Spectra averaged within the highlighted H\({}_{2}\) column density contours in panels (a) and (b). The radius of the region within which the spectra have been averaged is indicated in each panel. The vertical blue dashed lines show the systemic clump velocity as measured from N\({}_{2}\)H\({}^{+}\)(1-0). The compilation of the data presented in this figure summarises all the information used for each cloud in the study presented here. A similar figure for each remaining IRDC can be found in Appendix A.
## 2 Source selection and observations
### Sample
We selected a sample of 27 infrared dark clouds (IRDCs) from the Spitzer Dark Cloud catalogue of Peretto & Fuller (2009). Compared to other cloud samples, IRDCs have the advantage that their heliocentric distances are better constrained, with a large majority of IRDCs lying at the near kinematic distance solution provided by Galactic rotation models (Ellsworth-Bowers et al., 2013). In this paper, the adopted distances for all IRDCs are the near kinematic distance solutions from the Reid et al. (2009) model. The selection criteria for these IRDCs are: (a) The kinematic distance as estimated from \({}^{13}\)CO\((1-0)\) GRS data (Roman-Duval et al., 2010) should be \(d=4\)(\(\pm\)1) kpc; (b) Selected IRDCs should exhibit a range of aspect ratios, i.e. from circular to filamentary, as measured from _Herschel_ column density images (Peretto et al., 2016); (c) Selected IRDCs should exhibit a range of mass and size as estimated from Herschel column density images; (d) all IRDCs have to lie beyond \(l=15\)deg in order to be easily observed from the IRAM 30m telescope. Global properties of the 27 selected clouds can be found in Table 1. Note that kinematic distances have been recalculated using the dense gas data presented in this paper, leading in a few cases to a departure from condition (a). Figure 1a shows one of the selected IRDCs, images of the remaining 26 can be seen in Appendix A.
### Observations
In this study, we exploit four different datasets, each of which is tracing a specific density regime of molecular clouds and/or giving us access to different sets of information (mass versus kinematics). In the following, we describe each of these datasets.
#### 2.2.1 N\({}_{2}\)H\({}^{+}\)(1-0) data
We observed the 27 infrared dark clouds at the IRAM 30m between the 18th and 24th of June 2013, reaching a total of 42h of telescope time. The weather conditions were stable with an average sky opacity at 230 GHz of 0.2. We mapped each region using the 90 GHz EMIR receiver in conjunction with the FTS spectrometer at 50 kHz spectral resolution, providing a velocity resolution of 0.16 km/s. Primary pointing and focus were performed on Saturn. The pointing accuracy was \(<5\)''. In this study we focus on the N\({}_{2}\)H\({}^{+}\)(1-0) line, with an angular resolution of 28''. All data have been reduced using the CLASS package, and gridded into 9'' pixel-size cubes. The final noise ranges from 0.09 K to 0.2 K per velocity channel and pixel.
#### 2.2.2 Herschel data
We used the PACS (Poglitsch et al., 2010) and SPIRE (Griffin et al., 2010) _Herschel_(Pilbratt et al., 2010) data from the Hi-GAL survey (Molinari et al., 2010). The Hi-GAL data were reduced, as described in Traficante et al. (2011), using HIPE (Ott, 2010) for calibration and deglitching (SPIRE only), routines especially developed for Hi-GAL data reduction (drift removal, deglitching), and the ROMAGAL map-making algorithm. Post-processing on the maps was applied to help with image artefact removal (Piazzo et al., 2015). In this paper, we make use of the PACS 160 \(\mu\)m and SPIRE 250/450/500 \(\mu\)m data with a nominal angular resolution of 12'', 18'', 25'', and 36'', respectively. In addition, zero-flux levels for every Hi-GAL field have been recovered by correlating Herschel data with Planck and IRAS data (Bernard et al., 2010).
#### 2.2.3 \({}^{13}\)CO(1-0) and \({}^{12}\)CO(1-0) data
We used the FCRAO \({}^{13}\)CO(1-0) data from the Galactic Ring Survey (GRS - Jackson et al., 2006) along with the FCRAO UMSB \({}^{12}\)CO(1-0) data (Sanders et al., 1986; Clemens et al., 1986). The GRS data has an angular resolution of 44'', a velocity resolution of 0.21 km/s and a one \(\sigma\) noise of 0.13K (in \(T_{A}^{*}\) scale). The main beam efficiency of the FCRAO telescope at the \({}^{13}\)CO(1-0) frequency is 0.48. All clouds from our sample of 27 IRDCs are covered by the GRS.
The UMSB \({}^{12}\)CO(1-0) data has a nominal angular resolution of 44''. However, the data has been sampled on a 3'-grid, which effectively decreases the resolution. The velocity resolution is 1 km/s, and the one \(\sigma\) noise is 0.4 K (in \(T_{R}^{*}\) scale). In order to be able to convert that into a main beam temperature one needs first to multiply by \(\eta_{\rm fss}=0.7\), which converts the unit back to \(T_{A}^{*}\) (Kutner & Ulich, 1981; Sanders et al., 1986), and then divide by the main beam efficiency 0.48, so effectively multiplying the UMSB dataset by a (0.7/0.48) factor.
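As a purely illustrative aside (not the reduction pipeline itself), this scaling can be written in a couple of lines of Python; the function name is ours and the numerical values are those quoted above:

```python
import numpy as np

# Convert a UMSB 12CO(1-0) spectrum from the T_R* scale to the main-beam
# temperature scale: multiply by eta_fss = 0.7 to recover T_A*, then divide
# by the FCRAO main-beam efficiency of 0.48 (values quoted in the text).
ETA_FSS = 0.7
ETA_MB = 0.48

def umsb_to_tmb(spectrum_tr):
    """Scale a spectrum in K (T_R* scale) to main-beam temperature in K."""
    return np.asarray(spectrum_tr) * ETA_FSS / ETA_MB

# A 0.4 K channel (the quoted 1-sigma noise) becomes ~0.58 K on the T_mb scale.
print(umsb_to_tmb(0.4))
```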
## 3 Mass and velocity dispersion profiles
The goal of this paper is to determine how the ratio of kinetic to gravitational energy of clouds changes as a function of spatial scale. In order to observationally measure such a ratio, one needs to determine three quantities: radius, mass, and velocity dispersion. While the cloud mass can reliably be determined via dust emission observations, no single molecular line can trace molecular gas velocity dispersion on all scales, either because of high optical depth or low abundance. We therefore need a combination of tracers to trace different parts of the cloud. Here, we use \({}^{13}\)CO(1-0) to trace the large-scale, more diffuse parts of the clouds, and N\({}_{2}\)H\({}^{+}\)(1-0) to trace their densest parts. Figure 2 shows a simple sketch that illustrates which tracer we use for which purpose. In the following subsections we describe how we computed the three required quantities for both the dense and diffuse regions of the clouds.
Figure 2: Sketch of molecular cloud configuration and relevant tracers. Diffuse gas (represented in green) is traced by \({}^{12}\)CO(1-0) and dust continuum. However, only the former is able to disentangle the emission of multiple clouds along the line of sight by segmenting them in velocity space. In this paper, we will use both tracers to constrain the mass and morphology of the clouds on the largest scales. Dense gas (represented in purple) is well probed by both dust continuum and molecular line tracers such as N\({}_{2}\)H\({}^{+}\)(1-0). It is very rare that two N\({}_{2}\)H\({}^{+}\)(1-0) clouds overlap (as shown by the low frequency of multiple N\({}_{2}\)H\({}^{+}\)(1-0) velocity components). Dust continuum can therefore also be used once a background contamination (from the diffuse gas) has been removed.
### Dense gas
#### 3.1.1 Herschel column density maps of IRDCs
For the purpose of this study, we computed H\({}_{2}\) column density maps using the method presented in Peretto et al. (2016, referred to as P16 hereafter). That method consists in using the ratio of the _Herschel_ 160 \(\mu\)m over 250 \(\mu\)m dust emission to measure the temperature of the dust, and then using it, in combination with the 250 \(\mu\)m image, to derive the column density of gas (assuming a dust to gas mass ratio of 1%) at an angular resolution of 18". For the purpose of the study presented here, we convolved the column density image to the same angular resolution as the N\({}_{2}\)H\({}^{+}\)(1-0) data, i.e. 28". The assumed specific dust opacity is \(\kappa_{\lambda}=0.1\left(\frac{\lambda}{300\,\mu\mathrm{m}}\right)^{-\beta} \mathrm{cm^{2}\,g^{-1}}\) (Hildebrand, 1983), with \(\beta=1.8\) (e.g. Planck Collaboration et al., 2011; Sadavoy et al., 2016; Rigby et al., 2018).
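For illustration, a minimal Python sketch of this kind of pixel-by-pixel estimate (not the P16 code itself) is given below; it assumes optically thin emission, ignores colour corrections and beam matching, and the adopted constants and the mean molecular mass per H\(_{2}\) (\(\mu=2.8\)) are our own choices:

```python
import numpy as np
from scipy.optimize import brentq

# Physical constants (cgs)
h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
m_H = 1.673e-24      # g
mu = 2.8             # assumed mean molecular mass per H2 (including He)
beta = 1.8

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def kappa(lam_um):
    """Dust opacity per gram of gas: 0.1 (lambda/300 um)^-beta cm^2/g."""
    return 0.1 * (lam_um / 300.0) ** (-beta)

nu160, nu250 = c / 160e-4, c / 250e-4   # Hz (wavelengths in cm)

def dust_temperature(ratio_160_250):
    """Solve for T_dust from the observed I_160/I_250 ratio (optically thin)."""
    model = lambda T: (planck(nu160, T) * kappa(160.0)) / \
                      (planck(nu250, T) * kappa(250.0)) - ratio_160_250
    return brentq(model, 5.0, 100.0)

def column_density(I250_MJy_sr, T):
    """H2 column density (cm^-2) from the 250 um intensity and T_dust."""
    I_cgs = I250_MJy_sr * 1.0e-17        # MJy/sr -> erg s^-1 cm^-2 Hz^-1 sr^-1
    return I_cgs / (planck(nu250, T) * kappa(250.0) * mu * m_H)

# Hypothetical pixel values, for illustration only:
T = dust_temperature(1.2)
print(T, column_density(500.0, T))
```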
When computing these maps, we make the assumption of a uniform temperature along the line of sight. This is of course incorrect, but it is not completely clear how wrong this assumption is for the structures we are studying. Since we might expect this assumption of a single temperature to be the most inaccurate towards the centre of each clump, we decided to compare the mass profiles of each clump obtained with P16's method with those of PPMAP (Marsh et al., 2015), a Bayesian code that derives, from _Herschel_ observations, the distribution of dust temperatures along the line of sight. Note that we do not use PPMAP in this paper as it can generate a number of artefacts around bright protostellar sources, it is computationally expensive, and arising issues are a lot less straightforward to identify than when using P16's method.
On the y-axis of Fig. 3 we show the ratio of the PPMAP over the P16 masses, radially averaged. On the x-axis of the same figure we show the radial dispersion of that same ratio, i.e. how much it varies about the average value as a function of radius (i.e. 0% means that the ratio is radially uniform). One can see that while, on average, the PPMAP masses are about 20% larger than the P16 masses, the variations of the mass ratio as a function of radius are small, and remain below 5% for most clouds, with a maximum standard deviation of less than 8%. This shows that, while there might be a systematic uncertainty on the mass of 20%, the shapes of the mass profiles derived from both methods are very much consistent with each other.
#### 3.1.2 N\({}_{2}\)H\({}^{+}\)(1-0) as a tracer of Herschel clumps
All 27 IRDCs are detected in N\({}_{2}\)H\({}^{+}\)(1-0). For 4 of them (\(\sim 15\%\)), multiple clouds with velocities differing by more than 20 km/s have been identified within the observed fields of view. For one of these IRDCs (SDC31.039+0.241), the N\({}_{2}\)H\({}^{+}\)(1-0) emission of the different clouds spatially overlaps. This cloud is therefore excluded from the rest of the analysis as the origin of the corresponding dust continuum emission becomes very uncertain. Regarding the remaining three clouds (SDC22.724-0.269, SDC23.367-0.288, SDC24.630+0.151),
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Cloud ID & Name & Coordinates & Systemic velocity & Distance \\ & & (J2000) & (km/s) & (kpc) \\ \hline
1 & SDC18.624-0.070 & 18:25:10.0 –12:43:45 & +45.6 & 3.50 \\
2 & SDC18.787-0.286 & 18:26:19.0 –12:41:16 & +65.4 & 4.36 \\
3 & SDC18.888-0.476 & 18:27:09.7 –12:41:52 & +66.3 & 4.38 \\
4 & SDC21.211-0.139 & 18:30:32.1 –10:25:50 & +66.5 & 4.24 \\
5 & SDC22.373+0.446 & 18:30:24.5 –09:10:34 & +53.0 & 3.61 \\
6 & SDC22.724-0.269 & 18:33:33.3 –0.91:15:55 & +73.3 (+105.0) & 4.44 \\
7 & SDC23.066+0.049 & 18:33:08.2 –0.84:45:3 & +91.8 & 5.11 \\
8 & SDC23.367-0.288 & 18:34:53.8 –38:30.00 & +78.3 (+103.0; +58) & 4.60 \\
[MISSING_PAGE_POST]
0.045 & 19:28:34.4 +17:34:17 & +44.1 & 4.49 \\ \hline \end{tabular}
\end{table}
Table 1: Infrared dark cloud sample
we only consider the cloud for which the N\({}_{2}\)H\({}^{+}\)(1-0) integrated emission best matches the extinction feature seen in the mid-infrared. The corresponding velocities are provided in Table 1.
Another 4 IRDCs (SDC24.433-0.231, SDC24.630+0.151, SDC26.507+0.716, SDC35.527-0.269) show multiple velocity components with velocity differences lower than 3 km/s; only one of these also exhibits multiple clouds along the line of sight (SDC24.630+0.151). However, once averaged within column density contours (see Appendix A), the multiple velocity components are mostly washed out, and are therefore not a concern in the context of this study. Note that one of the multiple velocity component clouds, i.e. SDC35.527-0.269, has been extensively studied in the past at high angular resolution, clearly revealing multiple velocity component structures (e.g. Henshaw et al., 2014).
The morphology of the N\({}_{2}\)H\({}^{+}\)(1-0) integrated intensity images is very similar to that of the H\({}_{2}\) _Herschel_ column density maps (see Fig. 1), qualitatively showing that N\({}_{2}\)H\({}^{+}\)(1-0) is a good tracer of the column density structure of star-forming clouds. In order to quantify the correlation between dust column density and N\({}_{2}\)H\({}^{+}\)(1-0) line emission we produced scatter plots for each cloud of the H\({}_{2}\) column density derived from _Herschel_, for which the background as defined by N\({}_{\rm N_{2}H^{+}}^{\rm edge}\) (see next section) has been subtracted, versus the integrated intensity of N\({}_{2}\)H\({}^{+}\)(1-0) (see Fig. 4 for four representative examples). One can see there is, indeed, a strong linear correlation between the two quantities, although the slopes can differ from cloud to cloud. We observe such correlations for all clouds for which there is enough dynamic range and independent points (i.e. 21/27 clouds). We have also checked whether the relation provided by Hacar et al. (2018) between H\({}_{2}\) column density, N\({}_{2}\)H\({}^{+}\)(1-0) integrated intensities, and temperature holds for our cloud sample. We can confirm that it does for most of the clumps, but some significant departures are observed, which can be explained by a variation of the N\({}_{2}\)H\({}^{+}\) abundance by a factor of 2 or so. In fact, the two branches seen for SDC28.333+0.063 in Fig. 4 likely correspond to two distinct regions within the IRDC exhibiting different physical conditions. Nevertheless, from this comparison we can conclude that N\({}_{2}\)H\({}^{+}\) is a good tracer of the dense gas as traced with _Herschel_, and therefore that we can reasonably use it to trace the kinematics of _Herschel_ clumps for the (column) density range we are probing (i.e. N\({}_{\rm H_{2}}\geq 10^{22}\) cm\({}^{-2}\)). As such we do not expect the effect of using different tracers for mass and kinematics to be a significant issue in our study (see Traficante et al., 2018; Yuan et al., 2020).
#### 3.1.3 Mass and velocity dispersion estimates
The resulting H\({}_{2}\) column density maps (see Fig. 1) are contaminated by foreground and background interstellar structures which are not physically associated with the cloud. Removing such contributions is not an easy task (Peretto et al., 2010; Battersby et al., 2011). In the context of this study, we are mostly interested in the part of the cloud which is seen in N\({}_{2}\)H\({}^{+}\)(1-0) in the IRAM 30m data. Therefore we define the "edge" of the dense part of the clouds as being the column density contour, \(N_{\rm N_{2}H^{+}}^{\rm edge}\), that best matches the extent of the N\({}_{2}\)H\({}^{+}\)(1-0) integrated intensity map. This is done by computing the median (along with the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles) column density value within a ring just outside the N\({}_{2}\)H\({}^{+}\)(1-0) integrated intensity contour of 0.5 K km/s, i.e. our detection limit. The value of \(N_{\rm N_{2}H^{+}}^{\rm edge}\) will then serve as the background column density of the clump that we will remove from any clump scale mass measurements (see Table 2 for the individual values of \(N_{\rm N_{2}H^{+}}^{\rm edge}\) and corresponding 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles).
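A minimal sketch of this background estimate (with our own function name and a hypothetical ring width in pixels) could look like:

```python
import numpy as np
from scipy import ndimage

def edge_column_density(nh2_map, n2hp_mom0, detection_limit=0.5, ring_width=3):
    """
    Median H2 column density (with 16th/84th percentiles) in a ring of
    pixels just outside the N2H+(1-0) integrated-intensity contour at the
    detection limit, as described above. `ring_width` (pixels) is a
    hypothetical choice made for this illustration.
    """
    inside = n2hp_mom0 >= detection_limit
    ring = ndimage.binary_dilation(inside, iterations=ring_width) & ~inside
    return np.percentile(nh2_map[ring], [16, 50, 84])

# usage: p16, n_edge, p84 = edge_column_density(nh2, mom0)
```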
We used the contour-based dendrogram tool from Peretto & Fuller (2009) on the _Herschel_ column density maps to estimate sizes and masses of connected groups of pixels lying above a certain column density. In order to be considered for the analysis those groups of pixels need to be larger than the number of pixels within an angular resolution element, and need to be part of a structure whose column density amplitude from local maximum to local minimum is larger than a predefined threshold, \(N_{\rm H_{2}}^{\rm th}\). The column density increment we used in our dendrogram analysis is \(\sigma_{N_{\rm H_{2}}}=2\times 10^{21}\) cm\({}^{-2}\) for all clouds, with \(N_{\rm H_{2}}^{\rm th}=5\sigma_{N_{\rm H_{2}}}\). The starting column density contour, \(N_{\rm N_{2}H^{+}}^{\rm start}\) (see Table 2), is set to be larger than or equal to \(N_{\rm N_{2}H^{+}}^{\rm edge}\)
Figure 4: Scatter plots of the background subtracted H\({}_{2}\) column density versus the N\({}_{2}\)H\({}^{+}\)(1-0) integrated intensity (in \(T_{\rm A}^{*}\) scale) for four clumps. In each panel the same linear relation is displayed as a dashed black line. The correlation between the two quantities is obvious in all clouds, even though the slopes differ from cloud to cloud.
Figure 3: Ratio of the PPMAP masses over the P16 masses averaged over their radial profiles as a function of their mass ratio standard deviation. The black dashed lines show mass ratios of 1 and 1.5. Each colour corresponds to a single IRDC whose ID number can be found at the top of the figure (see Table 1 for the corresponding IRDC name). Note that IRDC SDC31.039+0.241 (ID number 19) has been left out as a result of the presence of multiple dense clumps present along the line-of-sight (see Sec. 3.1.2).
and is determined by eye. The reason for not systematically having \(N_{\rm N_{2}H^{+}}^{\rm start}=N_{\rm N_{2}H^{+}}^{\rm edge}\) is that the \(N_{\rm N_{2}H^{+}}^{\rm edge}\) contour can be more extended than the coverage of our N\({}_{2}\)H\({}^{+}\)(1-0) maps, and therefore, in such cases, the computed masses would be overestimated. The mass of any identified group of pixels is then given by:
\[M_{\rm N_{2}H^{+}}=\Omega_{\rm pix}^{\rm N_{2}H^{+}}d^{2}\mu_{\rm mol}m_{\rm H}\sum_{i=1}^{n_{\rm pix}^{\rm N_{2}H^{+}}}\left(N_{{\rm H_{2}},i}-N_{\rm N_{2}H^{+}}^{\rm edge}\right) \tag{1}\]
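As an illustrative aside, Eq. (1) amounts to a background-subtracted sum over the pixels of a connected group; a minimal Python sketch (our own function name, and an assumed \(\mu=2.8\) per H\(_{2}\)) is:

```python
import numpy as np

M_SUN = 1.989e33      # g
PC = 3.086e18         # cm
M_H = 1.673e-24       # g

def clump_mass_and_radius(nh2_map, mask, n_edge, pixel_sr, distance_pc, mu=2.8):
    """
    Background-subtracted mass (as in Eq. 1) and effective radius (as in
    Eq. 7) of a connected group of pixels; `mask` is a boolean map of that
    group. mu = 2.8 (mass per H2, including He) is an assumption.
    """
    d = distance_pc * PC
    area_per_pix = pixel_sr * d**2                       # cm^2 per pixel
    mass = area_per_pix * mu * M_H * np.sum(nh2_map[mask] - n_edge)
    radius = np.sqrt(mask.sum() * area_per_pix / np.pi)  # cm
    return mass / M_SUN, radius / PC

# e.g. for 9'' pixels: pixel_sr = (9.0 / 206265.0)**2
```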
On cloud scales, the \({}^{13}\)CO(1-0) spectra are decomposed into a sum of Gaussian components (we also tested two other methods - see Appendix B). The mass \(M_{\rm{{}^{13}CO}}\) and velocity dispersion \(\sigma_{\rm{{}^{13}CO}}\) are then estimated using the following equations for the velocity dispersion:
\[\sigma_{\rm{{}^{13}CO}}=\sqrt{\sum_{i}w_{i}\left[(v_{i}-\bar{v})^{2}+\sigma_{i}^ {2}\right]} \tag{3}\]
where the sum is over the Gaussian components, and \(w_{i}\), \(v_{i}\), and \(\sigma_{i}\) are the weight, the central velocity and velocity dispersion of the \(i^{\rm th}\) component, respectively. The centroid velocity \(\bar{v}\) is obtained by:
\[\bar{v}=\sum_{i}w_{i}v_{i} \tag{4}\]
And the weights are defined by:
\[w_{i}=\frac{m_{i}}{\sum_{i}m_{i}} \tag{5}\]
where \(m_{i}\) is the mass resulting from the integration of each individual Gaussian component, and:
\[M_{\rm{{}^{13}CO}}=\sum_{i}m_{i} \tag{6}\]
The velocity dispersion calculated via Eq. (3) includes two terms, i.e. the velocity dispersion from individual Gaussian components, along with the component-to-component centroid velocity dispersion. This is justified by the fact that we are here interested in estimating the entire kinetic energy budget of the clouds we are analysing. Note also that only the Gaussian components that we believe belong to the cloud of interest are used for the determination of the mass and velocity dispersion. Those are identified by integrating, separately, each \({}^{13}\)CO(1-0) emission peak and visually evaluating which peak best matches the morphology of the embedded IRDC. It is possible though that different components that we consider as being part of different molecular clouds are physically interacting with each other via, e.g., cloud-cloud collision (as has been argued for the SDC18.624-0.070 cloud - Dewangan et al., 2018). Such interactions can lead to the creation of intermediate velocity gas (Haworth et al., 2015; Bisbas et al., 2017) for which it might become difficult to determine to which cloud it belongs, potentially leading to large uncertainties in the estimate of \(\sigma_{\rm{{}^{13}CO}}\).
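For illustration, Eqs. (3)-(6) can be written compactly as follows (a sketch with our own function name; inputs are the per-component masses, centroid velocities, and dispersions):

```python
import numpy as np

def combined_dispersion(masses, centroids, dispersions):
    """
    Mass-weighted velocity dispersion of a set of Gaussian components
    (Eqs. 3-6): includes both the width of each component and the
    component-to-component centroid scatter.
    """
    m = np.asarray(masses, dtype=float)
    v = np.asarray(centroids, dtype=float)
    s = np.asarray(dispersions, dtype=float)
    w = m / m.sum()                                            # Eq. (5)
    v_bar = np.sum(w * v)                                      # Eq. (4)
    sigma = np.sqrt(np.sum(w * ((v - v_bar) ** 2 + s ** 2)))   # Eq. (3)
    return m.sum(), v_bar, sigma                               # Eq. (6), centroid, sigma

# Two equal-mass components separated by 2 km/s, each 1 km/s wide,
# give sigma ~ 1.41 km/s.
print(combined_dispersion([1.0, 1.0], [50.0, 52.0], [1.0, 1.0]))
```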
The radius of each dendrogram's connected group of pixels is given by:
\[R_{\rm{{}^{13}CO}}=\sqrt{\frac{n_{\rm{pix}}^{{}^{13}CO}\Omega_{\rm{pix}}^{{}^{13 }CO}d^{2}}{\pi}} \tag{7}\]
where \(n_{\rm pix}^{{}^{13}{\rm CO}}\) is the number of pixels within each connected group of pixels and \(\Omega_{\rm pix}^{{}^{13}{\rm CO}}\) is the solid angle subtended by a pixel.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline ID & \(N_{\rm N_{2}H^{+}}^{\rm edge}\) & \(N_{\rm N_{2}H^{+}}^{\rm start}\) & \(R_{\rm N_{2}H^{+}}^{\rm start}\) & \(M_{\rm N_{2}H^{+}}^{\rm start}\) & \(\sigma_{\rm N_{2}H^{+}}^{\rm start}\) & \(\alpha_{\rm N_{2}H^{+}}^{\rm start}\) & \(R_{{}^{13}\rm CO}\) & \(M_{{}^{13}\rm CO}\) & \(\sigma_{{}^{13}\rm CO}\) & \(\alpha_{{}^{13}\rm CO}\) \\ & (\(\times 10^{22}\) cm\({}^{-2}\)) & (\(\times 10^{22}\) cm\({}^{-2}\)) & (pc) & (M\({}_{\odot}\)) & (km/s) & & (pc) & (\(\times 10^{4}\) M\({}_{\odot}\)) & (km/s) & \\ \hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 2: Clump-scale (N\({}_{2}\)H\({}^{+}\)-based) and cloud-scale (\({}^{13}\)CO-based) properties of the infrared dark cloud sample.
In parallel to these \({}^{13}\)CO-based mass estimates, we derive _Herschel_-based ones. For this we use the exact same connected groups of pixels as those used above, but this time we use our _Herschel_-based H\({}_{2}\) column density maps to obtain the masses via:
\[M_{Hers.}^{\rm unc}=\Omega_{\rm pix}^{{}^{13}\rm CO}d^{2}\mu_{\rm mol}m_{\rm H}\sum_{i=1}^{n_{\rm pix}^{{}^{13}\rm CO}}N_{{\rm H_{2}},i} \tag{8}\]
where all parameters are identical to those presented in Eq. (1) and \(M_{Hers.}^{\rm unc}\) stands for _uncorrected Herschel_-based masses. The reason why those are uncorrected is the contamination of the mass estimates by the presence of multiple clouds along the line-of-sight. One can correct for this by estimating the fraction \(f_{\rm los}\) of the total mass of molecular clouds along the line-of-sight that is locked up within the cloud of interest. That can be achieved by integrating the \({}^{13}\)CO-based H\({}_{2}\) column density spectra across the entire GRS velocity range, along with integrating the best-fit Gaussian model for the cloud of interest. This can be formulated as:
\[f_{\rm los}=\frac{\int_{\rm cloud}N_{\rm H_{2}}^{{}^{13}\rm CO}\,dv}{\int_{\rm los}N_{\rm H_{2}}^{{}^{13}\rm CO}\,dv} \tag{9}\]
This correction factor can be calculated for each dendrogram group of connected pixels and then be applied to the uncorrected masses via:
\[M_{Hers.}^{\rm corr}=f_{\rm los}M_{Hers.}^{\rm unc} \tag{10}\]
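As an illustration of Eqs. (9)-(10), a possible implementation of the correction is sketched below (function names are ours; `nh2_spectrum` and `cloud_model` stand for the \({}^{13}\)CO-based H\({}_{2}\) column density spectrum over the full GRS velocity range and its best-fit Gaussian model for the cloud of interest):

```python
import numpy as np

def los_correction(velocity, nh2_spectrum, cloud_model):
    """
    Line-of-sight correction factor f_los (Eq. 9): ratio of the 13CO-based
    H2 column density integrated over the best-fit Gaussian model of the
    cloud of interest to that integrated over the full velocity range.
    """
    total = np.trapz(nh2_spectrum, velocity)
    cloud = np.trapz(cloud_model, velocity)
    return cloud / total

def corrected_mass(m_herschel_unc, f_los):
    """Eq. (10): corrected Herschel-based mass."""
    return f_los * m_herschel_unc
```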
Figure 5 shows a comparison of the ratio of \(M_{{}^{13}\rm CO}/M_{Hers.}^{\rm unc}\) versus \(M_{Hers.}^{\rm corr}\) and \(M_{{}^{13}\rm CO}/M_{Hers.}^{\rm corr}\) versus \(M_{Hers.}^{\rm corr}\). This figure clearly shows the vast improvement in the mass agreement once the correction factor is applied. After correction, the masses agree within less than a factor of 2 and we see little evidence for significant \({}^{13}\)CO depletion, at least not on those scales. This excellent agreement also indicates that one can safely use the \({}^{13}\)CO(1-0) velocity dispersion measurements in conjunction with the _Herschel_-based cloud masses. In the rest of this paper we will be using the _Herschel_-based corrected masses.
### Combined profiles
In this paper, we adopt a top-down approach by which, for every column density contour, we only analyse the one group of connected pixels that covers the position of the IRDC _Herschel_-based column density peak. As a result, sibling clumps that might be part of the same molecular clouds as our IRDC sample are not separately analysed, even though they contribute to the mass and velocity dispersion of the dendrogram structures that encompass both them and the targeted IRDC.
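A minimal sketch of this top-down extraction, using `scipy.ndimage.label` as a stand-in for the contour-based dendrogram tool (the function and parameter names are ours), could look as follows; masses and radii of the resulting masks then follow from Eqs. (1) and (7):

```python
import numpy as np
from scipy import ndimage

def topdown_profile_masks(nh2_map, peak_xy, levels):
    """
    For each column density level, keep only the connected group of pixels
    that contains the IRDC column density peak (given as a (row, col) tuple).
    """
    masks = []
    for level in levels:
        labels, _ = ndimage.label(nh2_map >= level)
        peak_label = labels[peak_xy]      # label of the group covering the peak
        if peak_label == 0:               # peak falls below this level: stop
            break
        masks.append(labels == peak_label)
    return masks

# usage (contour spacing of 2e21 cm^-2 as in Sec. 3.1.3):
# levels = n_start + 2e21 * np.arange(n_levels)
# masks = topdown_profile_masks(nh2, (iy, ix), levels)
```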
Figure 6 shows the mass profiles \(m(r)\) and the velocity dispersion profiles \(\sigma_{\rm tot}(r)\) for the 26 clumps of our sample and their parent molecular clouds. For the measurements on clump scales (the purple lines) we have \(m(r)=M_{\rm N_{2}H^{+}}(R_{\rm N_{2}H^{+}})\), while for the measurements on cloud scales (the green lines), we have \(m(r)=M_{{}^{13}\rm CO}(R_{{}^{13}\rm CO})\). Also, the velocity dispersion \(\sigma_{\rm tot}\) is the total (thermal+turbulent) line-of-sight velocity dispersion of the gas and is estimated using the observed velocity dispersion via:
\[\sigma_{\rm tot}^{2}=\sigma_{\rm line}^{2}+k_{B}T\left(\frac{1}{\mu_{\rm mol }m_{\rm H}}-\frac{1}{m_{\rm mol}}\right) \tag{11}\]
where \(\sigma_{\rm line}\) is the observed velocity dispersion of the gas as inferred from the observation of a given molecular line (N\({}_{2}\)H\({}^{+}\)(1-0) for the purple points, and \({}^{13}\)CO(1-0) for the green points), \(T\) is the gas temperature, \(m_{\rm mol}\) is the mass of the observed molecule (here \(m_{\rm mol}=m_{\rm N_{2}H^{+}}=m_{{}^{13}\rm CO}=29\,m_{\rm H}\)), \(\mu_{\rm mol}\) is the molecular weight which is here taken to be 2.33, and \(m_{\rm H}\) is the mass of the hydrogen atom. We here assume a gas temperature of 15 K for all clouds, which is the average temperature measured within infrared dark clouds (e.g. Peretto et al., 2010; Battersby et al., 2011). The impact of that assumption is negligible for most velocity dispersion measurements, and only has a measurable impact for velocity dispersions \(\leq 1\) km/s. Finally, the right-hand-side panel of Fig. 6 shows the corresponding virial ratio profiles \(\alpha_{\rm vir}(r)\). Virial ratios, defined as \(\alpha_{\rm vir}=2E_{K}/|E_{\rm G}|\) with \(E_{K}\) the kinetic energy of the gas and \(E_{\rm G}\) its gravitational energy, provide a zero-order measure of a cloud's dynamical state. While the kinetic energy of a cloud can be relatively easily estimated, the estimate of its gravitational energy usually requires making simplifying assumptions on the morphology and density profile of the cloud. Bertoldi & McKee (1992) have evaluated \(E_{\rm G}\) for different power-law density profiles and different cloud aspect ratios. They show that for clouds with aspect ratios lower than 10 (as is the case in this study) \(|E_{\rm G}|\) is only decreased by a maximum of 8% compared to the spherical case. However, for clouds that have a power-law density profile such as \(\rho\propto r^{-\gamma}\) with \(\gamma=2\), \(|E_{\rm G}|\) is increased by 67%. The impact of the density gradient on \(|E_{\rm G}|\) is stronger than that of the non-sphericity of the cloud. For simplicity, most studies of the virial ratio of molecular clouds usually approximate them as uniform density spheres, which is also what we will do, and discuss correction factors later. In this case, one can show that the virial ratio \(\alpha_{\rm vir}(r)\) is given by:
\[\alpha_{\rm vir}(r)=5\frac{\sigma_{\rm tot}^{2}r}{Gm} \tag{12}\]
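For illustration, Eqs. (11) and (12) translate into a few lines of Python (constants in cgs; function names and the example values are ours):

```python
import numpy as np

K_B = 1.381e-16      # erg/K
M_H = 1.673e-24      # g
G = 6.674e-8         # cm^3 g^-1 s^-2
PC = 3.086e18        # cm
M_SUN = 1.989e33     # g

def sigma_total(sigma_line_kms, m_mol_amu=29.0, T=15.0, mu_mol=2.33):
    """Eq. (11): add the thermal motion of the mean particle and remove
    that of the observed molecule (29 amu for both N2H+ and 13CO)."""
    sigma_line = sigma_line_kms * 1e5                      # cm/s
    thermal = K_B * T * (1.0 / (mu_mol * M_H) - 1.0 / (m_mol_amu * M_H))
    return np.sqrt(sigma_line**2 + thermal) / 1e5          # km/s

def virial_ratio(sigma_tot_kms, radius_pc, mass_msun):
    """Eq. (12): alpha_vir = 5 sigma^2 r / (G m) for a uniform sphere."""
    return 5.0 * (sigma_tot_kms * 1e5)**2 * (radius_pc * PC) / (G * mass_msun * M_SUN)

# e.g. sigma_line = 0.5 km/s observed in N2H+ at 15 K, r = 0.5 pc, m = 500 Msun:
print(sigma_total(0.5))
print(virial_ratio(sigma_total(0.5), 0.5, 500.0))
```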
Figure 6 shows a number of important features. In the following we will discuss those separately.
#### 3.3.1 Observed mass profiles
The mass profiles presented in Fig. 6 spread over 4 orders of magnitude in mass and 2 orders of magnitude in radius.
Figure 5: Comparison between the ratio of \({}^{13}\)CO-based and _Herschel_-based uncorrected/corrected (top/bottom) cloud masses for all clouds, at all radii, as a function of the corrected _Herschel_-based cloud masses.
The masses estimated on cloud scale (the green lines) and clump scale (the purple lines) mostly connect at a scale of about 2 pc, which corresponds to the maximum extent of the N\({}_{2}\)H\({}^{+}\)(1-0) emission and half the resolution (3') of the \({}^{13}\)CO-based column density images that we use to derive the morphology of the clouds. Note that, even though derived from the same _Herschel_-based H\({}_{2}\) column density maps, the masses on clump and cloud scales do not produce continuous mass profiles. The reason for this is that we are removing the column density background \(N_{\rm N_{2}H^{+}}^{\rm edge}\) from every clump scale mass measurement so that the velocity dispersion estimate is that of the measured gas mass. Finally, when looking at the shapes of the profiles, we notice that the clump scale mass profiles are more curved and exhibit shallower gradients than those from the more diffuse parts.
#### 3.3.2 Observed velocity dispersion profiles
The velocity dispersion profiles presented in Fig. 6 are the most striking. First, there is a clear discontinuity between the velocity dispersion measurements obtained on clump scale and those obtained on cloud scale. Having such different measurements clearly indicates that there is a systematic bias in the method that is being used to perform those measurements. Second, the shapes of the profiles are also strikingly different. While on the largest scale, the velocity dispersion mostly decreases with decreasing radius, on the smallest scale, the velocity dispersion profiles are mostly flat. This is very different from a typical Larson-type relation (Larson, 1981) for which we would expect the velocity dispersion to decrease down to the sonic-scale at about 0.1 pc.
#### 3.3.3 Observed virial ratio profiles
Since the virial ratio profiles presented in Fig. 6 are built from the mass and velocity profiles, they carry similar features. For instance, the virial ratios present a discontinuity at around \(r=2\) pc, which is the consequence of the discontinuity observed in the velocity dispersion profiles. Note, however, that this discontinuity is attenuated as a result of the slightly larger masses estimated from the cloud scale measurements at that radius. Also, it is pretty clear that for most of the clouds, the virial ratios on the large scales (green lines) increase as the radius decreases. This trend has already been observed by Hernandez & Tan (2015) who interpreted it as a sign of CO depletion. Finally, the virial ratios estimated on clump scales (purple lines) present a curved shape, which is the direct reflection of the curved mass profiles observed on the same scales.
### Uncertainties
There are a number of uncertainties that we need to consider when interpreting the profiles presented in Fig. 6. First, there are uncertainties that do not affect the shape of the profiles but do impact their overall scaling.
Figure 6: Profiles of all 26 infrared dark clouds and their parent molecular clouds from our sample. On the top row the purple points represent those for which the clump scale velocity dispersion has been measured using N\({}_{2}\)H\({}^{+}\)(1-0), and the green points are those for which the cloud scale velocity dispersion has been measured using \({}^{13}\)CO(1-0). The middle and bottom rows show the same data points as the top row but each individual cloud/clump has a unique colour so that one can track their profiles. Half of the clouds have been plotted in each for clarity. (left): mass profiles \(m(r)\); (middle): velocity dispersion profiles \(\sigma_{\rm tot}(r)\); (right): virial ratio profiles \(\alpha_{\rm vir}(r)\).
One example of such an uncertainty is the distance to the clouds, which is typically 10% to 20% (Reid et al., 2009). If our IRDC-hosting clouds are not located at the near distance though then the distance could be 4 times larger for some clouds (see Fig. 11). That uncertainty will impact the mass and radius measurements uniformly across the profile of an individual cloud. Second, there are uncertainties that can potentially impact the shape of individual profiles. Regarding the mass profiles, the assumption of a single temperature along the line-of-sight could potentially have an impact on the shape of the observed profiles. However, as we have shown in Sec. 3.1.3, the impact on the shape of the profile is minimal, while the absolute mass values can be affected by 20% on average. Another uncertainty is related to the dust emissivity, i.e. \(\kappa_{\lambda}\), we used when computing the _Herschel_-based H\({}_{2}\) column density maps. In this study we used the same dust emissivity law for the clump scale and cloud scale measurements. It is however well known that dust emissivity changes with density and temperature (e.g. Ysard et al., 2015; Sadavoy et al., 2016). At this point we have no means to set strong constraints on this particular aspect of dust property uncertainties, but the law we adopted has been shown to be compatible with dust emission in both the more diffuse (Planck Collaboration et al., 2011) and denser (Rigby et al., 2018) gas environments. Also, as it can be seen in Fig. 5, the _Herschel_-based masses are within a factor of two of the \({}^{13}\)CO-based masses which use a completely different set of assumptions. This suggests that, if dust properties do change across the radial profiles of molecular clouds, this does not have a dramatic effect on our mass estimates. Finally, the uncertainty related to our choice of \(N_{\rm N_{2}H^{+}}^{\rm edge}\) (see Sec. 3.1.3) has a direct impact on the clump scale mass estimates with a \(\sim 10\%\) to \(\sim 30\%\) uncertainty for most clumps (see Table 2). This fractional mass uncertainty is not constant across the clump radial profiles and therefore might affect the mass profile shape. However, after computing the clump mass profiles with a representative range of \(N_{\rm N_{2}H^{+}}^{\rm edge}\), we can confirm that it barely impacts their overall shapes.
Regarding uncertainties on the velocity dispersion, the N\({}_{2}\)H\({}^{+}\)(1-0) and \({}^{13}\)CO(1-0) measurements differ. Indeed, the N\({}_{2}\)H\({}^{+}\)(1-0) velocity dispersion measurements are very well constrained, and have uncertainties that are of the order of \(\sim 0.1\) km/s. This implies that the flat velocity dispersion profiles observed on clump scale are very robust. Uncertainties on the \({}^{13}\)CO(1-0) velocity dispersion measurements are a lot more variable from cloud-to-cloud depending on how complex the \({}^{13}\)CO(1-0) spectra are. For the simple cases, such as SDC18.888-0.476 (see Fig. 1) the uncertainty is of the order of \(\sim 0.2\) km/s, however it can be as high as \(\sim 1\) km/s in more complex cases such as SDC18.624-0.070 (see Fig. 11). These larger uncertainties are also reflected by the large differences in velocity dispersion measurements when using different evaluation methods (see Fig. 12).
Overall, while the inherent uncertainties on the different quantities presented in Fig. 6 might shift the profiles up and down, their shapes are fairly robust and are likely to be a true representation of how the projected mass, velocity dispersion, and virial ratio profiles of clumps and clouds behave.
## 4 Spherical Models
As discussed above, the profiles displayed in Fig. 6 present a number of characteristic features. Before interpreting them one needs to be aware of a few biases that exist and that we may be able to quantify. First, masses, as presented in the left-hand-side panel of Fig. 6, have been computed using the bijective mass estimates (Rosolowsky et al., 2008). Such masses are always overestimated as a consequence of cloud material lying along the line-of-sight which is not part of the closed volume of radius \(r\) (see Fig. 7). The impact of using the bijective method to estimate masses at different radii is illustrated in Fig. 7b. Second, the velocity dispersion measurements are being done on spectra that also include the same unrelated line-of-sight material which may have larger or smaller velocity dispersion than the gas lying within the volume of interest. Depending on the exact shape of the combined density and velocity dispersion profiles, this might lead to over- or underestimated observed velocity dispersions (see Fig. 7c). Also, the clump and cloud scale measurements are derived at different angular resolutions, and a background column density has been subtracted from the former and not from the latter. The impact of all those on the observed profiles is unclear. The purpose of the models presented in the rest of this section is to quantify the impact of projection on the mass and velocity dispersion measurements in relation to the observed profiles (for a similar approach on core scale see Singh et al., 2021). We do not attempt to fit the profiles of individual clouds as spherical clouds are not a good representation of the complex density structures of molecular clouds.
Figure 7: (a) Sketch illustrating measurement biases once a 3D cloud is being projected onto the plane of sky. In this case, a spherical cloud (on the right) whose volume density increases towards the centre has a projected column density profile that is represented by the plot on the left. The bijective mass estimate within projected radius \(R\) will be that represented by the green shaded area, which includes material that is not part of the volume of the sphere of radius \(R\) (region of the cloud that is barred). (b): Normalised mass profiles of a spherical cloud with 3 different density profiles \(\rho\propto r^{-\gamma}\), with \(\gamma=(2.0,1.5,1.0)\). The solid lines show the masses as observed, while the dashed lines show the real mass enclosed within a given radius. (c): Normalised velocity dispersion profiles of a spherical cloud with the same density profiles as in (b) and with a velocity dispersion profile \(\sigma\propto r^{\beta}\) with \(\beta=0.5\).
### Single power-law profiles
We first consider models with single density and velocity dispersion power-law such as:
\[\rho(r)=\rho_{0}\left(\frac{r}{r_{0}}\right)^{-\gamma} \tag{13}\]
\[\sigma(r)=\sigma_{0}\left(\frac{r}{r_{0}}\right)^{\beta} \tag{14}\]
where \(\rho_{0}\), \(r_{0}\) and \(\sigma_{0}\) are normalisation constants. For a given pair of \(\gamma\) and \(\beta\) values, we numerically construct a spherical cloud of a given mass and radius that we then project on the plane-of-the-sky in order to construct mass surface density maps (see Appendix C for more details). We do this last operation twice, once up to radius \(R_{\rm edge}^{{}^{13}\rm CO}\) and once up to radius \(R_{\rm edge}^{\rm N_{2}H^{+}}\), these being the radii at which the \({}^{13}\)CO(1-0) and N\({}_{2}\)H\({}^{+}\)(1-0) emission becomes undetectable. We then convolve each mass surface density image to the resolution of our observations. Finally, we integrate both mass surface density images at various radii to derive their projected mass profiles. Regarding the velocity dispersion profile, we first weight the velocity dispersion at each radius, in 3D, by the local mass density. We then project this quantity onto the plane-of-the-sky, and then integrate the resulting maps at various radii. Finally, we divide these profiles by the corresponding mass profiles in order to obtain projected mass-weighted velocity dispersion profiles.
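To make the procedure concrete, a minimal numerical sketch of such a projection (not the code used for Appendix C; grid size, normalisations and function names are our own choices) is given below:

```python
import numpy as np

def projected_profiles(gamma=1.5, beta=0.5, r_edge=1.0, n=128, n_r=20):
    """
    Sketch of the spherical models of Sec. 4: a cloud with rho ~ r^-gamma and
    sigma ~ r^beta (Eqs. 13-14), truncated at r_edge, is projected along the
    line of sight. The 'observed' mass within a projected radius R is the
    bijective estimate (all material along the line of sight), and the
    observed velocity dispersion is the mass-weighted mean of sigma(r).
    Normalisations are arbitrary.
    """
    ax = np.linspace(-r_edge, r_edge, n)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    r = np.maximum(r, ax[1] - ax[0])            # soften the central singularity
    rho = np.where(r <= r_edge, r**(-gamma), 0.0)
    sig = r**beta

    col = rho.sum(axis=2)                       # column density map
    col_sig = (rho * sig).sum(axis=2)           # density-weighted sigma map
    r_proj = np.sqrt(x[:, :, 0]**2 + y[:, :, 0]**2)

    radii = np.linspace(0.1 * r_edge, r_edge, n_r)
    m_obs, m_true, s_obs = [], [], []
    for R in radii:
        ap = r_proj <= R
        m_obs.append(col[ap].sum())                      # bijective mass
        m_true.append(rho[r <= R].sum())                 # mass truly within R
        s_obs.append(col_sig[ap].sum() / col[ap].sum())  # mass-weighted sigma
    return radii, np.array(m_obs), np.array(m_true), np.array(s_obs)

# For gamma = 1 the bijective mass overestimates the enclosed mass, as in Fig. 7.
radii, m_obs, m_true, s_obs = projected_profiles(gamma=1.0)
print(m_obs / m_true)
```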
In the models presented here, there are essentially four free parameters, i.e. \(\gamma\), \(R_{\rm edge}^{\rm N_{2}H^{+}}\) (\(=r_{0}\)), \(R_{\rm edge}^{{}^{13}\rm CO}\), and \(M_{\rm edge}^{\rm N_{2}H^{+}}\), for the mass profiles, and an additional two free parameters, i.e. \(\beta\), and \(\sigma_{\rm edge}^{\rm N_{2}H^{+}}\) (\(=\sigma_{0}\)), for the velocity dispersion profiles. The parameter \(\rho_{0}\) is derived from \(\gamma\), \(R_{\rm edge}^{\rm N_{2}H^{+}}\), and \(M_{\rm edge}^{\rm N_{2}H^{+}}\) and is, thus, not a free parameter of the models. As already mentioned, the purpose of those models is not to find a set of best parameters for each individual cloud, but rather to understand the trends that are present in the cloud sample. With that in mind, Fig. 8 shows a set of 9 models against the observed profiles. The normalisation of those models is such that they match the range of mass and velocity dispersion at parsec scales. Each row corresponds to a different \(\gamma\) value but the same \(\beta\) value. In each row, the three panels correspond to the mass, velocity dispersion, and virial ratio profiles. There are a number of important features in those models that we can notice straight away. First, regarding the mass profiles, one can see that the cases \(\gamma=1\) and \(\gamma=2\) over-predict and under-predict, respectively, the mass of the clouds on the largest scales. We also notice that, while the \(\gamma=1.5\) case provides a better overall agreement with the observed profiles, the profile shapes provided by the cases \(\gamma=2\) and \(\gamma=1\) seem to give a better match to the inner and outer parts, respectively, of the observed profiles. We also notice that we successfully reproduce the curved shape of the inner parts of the profiles.
Figure 8: Mass, velocity dispersion, and virial ratio profiles (from left to right), for three different \(\gamma\) values (from top to bottom: \(\gamma=[1.0,1.5,2.0]\)). The grey lines are the same observed data points as presented in Fig. 6. The red, orange, and gold lines are three different spherical models with three different normalisations such that they cover the range of masses and velocity dispersions as measured on parsec scale (i.e. on scales representative of \(R_{\rm N_{2}H^{+}}^{\rm start}\) in Table 2). All models have the same velocity dispersion profile exponent, i.e. \(\beta=0.5\).
Moving on to the velocity dispersion profiles displayed in Fig. 8, it is clear that the simple 1D models presented here manage to reproduce the velocity dispersion discontinuity, in particular for the \(\gamma=1\) case. The reason for this is that, for that series of models, a larger fraction of the mass is at low density where the velocity dispersion is the largest and, as a result of the projection, the mass weighted velocity dispersion is overestimated by a large factor, up to \(\sim 3\) in the \(\gamma=1\) case. Also, while the \(\gamma=1\) case manages to reproduce in a satisfactory way the shape and amplitude of the outer velocity dispersion profile, it cannot entirely match the observed flat velocity dispersion profiles in the inner regions. And this gets worse when considering steeper density profiles, i.e. \(\gamma=1.5\) and \(\gamma=2\).
Finally, looking at the virial ratio profiles in the last column of Fig. 8, we notice that none of them are completely satisfactory when compared to the observed profiles, even though one could argue that the shallower density models do better than the steeper ones.
From this comparison between single power-law models and observed profiles, it seems clear that most of the observed features can be reproduced, at least to some extent, providing strong evidence that projection biases are mostly responsible for them. This comparison also shows that the single power-law models are limited and do not allow us to reproduce both the inner and outer parts of the observed mass and velocity dispersion profiles. Most noticeable are the velocity dispersion profiles for which the flat inner profiles are clearly different from the outer profile shapes.
### Broken power-law profiles
In this section, we extend the models presented above from single power-law profiles to broken power-law profiles. More explicitly, we set two power-law exponents for both the density and velocity dispersion profiles defined as:
\[\gamma_{\rm in}\ \ {\rm and}\ \ \beta_{\rm in}\ \ {\rm for}\ \ r<r_{0}\ \ \ \ \ \ \ \ \gamma_{\rm out}\ \ {\rm and}\ \ \ \beta_{\rm out}\ \ {\rm for}\ \ r>r_{0} \tag{15}\]
The method used to create the profiles is identical to that presented in the previous section. Based on our single power-law models, it seems that the inner density profiles are, on average, steeper than the outer ones. Therefore, in the series of models presented in Fig. 9 we used \(\gamma_{\rm in}=2\) and \(\gamma_{\rm out}=1.5\). Regarding the velocity dispersion profiles it seems clear that the velocity dispersion on clump scale is rather flat, with apparently very little variation in the profiles. On the other hand, the velocity dispersion profiles on cloud scale are diverse, both in terms of shape and normalisation. Therefore, each row in Fig. 9 corresponds to a different value of \(\beta_{\rm out}\), while \(\beta_{\rm in}\) is fixed to 0 for all models. In that figure one can see that we now reproduce rather well the average shape and magnitude of the mass profiles, and similarly for the velocity dispersion profiles on clump scales. One can also see that we do reproduce well some of the velocity dispersion profiles on cloud scale, although we fail to reproduce the low-mass high velocity dispersion profiles that populate the top part of the velocity and virial ratio profiles. This is where our simple 1D models reach their limitations. Indeed, if one looks at the large scale mass distribution of those clouds
Figure 9: Same as Fig. 8 but for broken power-law profiles. Each row now corresponds to a different \(\beta_{\rm out}\) value (from top to bottom: \(\beta_{\rm out}=[0.3,0.5,0.7]\)). For all models, \(\beta_{\rm in}\), \(\gamma_{\rm in}\), and \(\gamma_{\rm out}\) are fixed to [0.0, 2.0, 1.5], respectively.
(via the \({}^{13}\)CO-based H\({}_{2}\) column density maps - see Fig. 1 and Appendix A), one can see that the clump we are focussing on does not dominate the mass on those scales, with one or more sibling clumps being present in the same parent cloud. As a result, the large scale velocity dispersion measured towards the clump of interest is likely driven by the presence of its siblings. This cannot be reproduced with spherical models. Nevertheless, what this is showing is that lower gas density layers of high velocity dispersion gas surround those clumps, generating steep velocity dispersion discontinuities in their profiles.
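For reference, the broken power-law profiles of Eq. (15) used in this section can be written as a simple function (a sketch with our own names; the exponent values follow Fig. 9):

```python
import numpy as np

def broken_power_law(r, x0, exp_in, exp_out):
    """
    Broken power-law of Eq. (15): exponent exp_in inside r0, exp_out outside,
    continuous at r0 (r is in units of r0; x0 is the value at r0).
    """
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, x0 * r**exp_in, x0 * r**exp_out)

# Density and velocity dispersion used in Fig. 9: rho with gamma_in = 2 and
# gamma_out = 1.5, sigma with beta_in = 0 and (here) beta_out = 0.5.
r = np.logspace(-1, 1, 5)
rho = broken_power_law(r, 1.0, -2.0, -1.5)
sigma = broken_power_law(r, 1.0, 0.0, 0.5)
print(rho, sigma)
```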
## 5 Discussion
### Self-gravitating molecular clouds
The question of the gravitational binding of molecular clouds has been, and still is, the subject of numerous debates (e.g. Heyer et al., 2009; Dobbs et al., 2011; Ballesteros-Paredes et al., 2011; Miville-Deschenes et al., 2017; Vazquez-Semadeni et al., 2019). Here, we have all the necessary information to check whether the clouds we selected are gravitationally bound or whether there is a scale at which they switch from being bound to unbound. The profiles displayed in Fig. 6 show that observed virial ratios for our cloud sample are nearly systematically below \(\alpha_{\rm vir}=3\) at all radii. Most of the exceptions correspond to the measurements made at the smallest radii of the \({}^{13}\)CO(1-0)-based profiles. As our models showed (see Fig. 8), increased virial ratios with decreasing radii can be reproduced when large layers of high velocity dispersion gas lie along the line-of-sight and contaminate the measurements. As a result, the most reliable measurements are those obtained on the largest scales. Figure 10 shows the distributions of virial ratios obtained at those largest scales for both N\({}_{2}\)H\({}^{+}\)(1-0) and \({}^{13}\)CO(1-0) measurements. For uniform density spheres, the transition between gravitationally bound and unbound gas occurs at \(\alpha_{\rm vir}=2\), while for clouds with density profiles such as \(\rho\propto r^{-1}\), \(\rho\propto r^{-1.5}\), and \(\rho\propto r^{-2}\), the limit moves up to 2.2, 2.5, and 3.3, respectively. Correction factors regarding the non-spherical shape of clouds are less than 8% as long as the aspect ratio of the clouds is lower than 10 (Bertoldi & McKee, 1992), which is the case for all clouds in the sample. For non-uniform velocity dispersion profiles correction factors also exist (Miville-Deschenes et al., 2017), but these are of the order of 5% for the diffuse parts of the cloud and non-existent for the dense part (see Appendix D). Figure 10 reveals that \(\sim\) 85% of \({}^{13}\)CO-based measurements, and 100% of the N\({}_{2}\)H\({}^{+}\)-based measurements have \(\alpha_{\rm vir}\leq 2.5\). Taken at face value, this demonstrates that the vast majority of the molecular clouds from our sample, if not all, are self-gravitating on all scales, from tenths of a parsec up to several tens of parsecs.
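For reference, these thresholds follow from a short calculation. For a sphere of mass \(M\), radius \(R\) and \(\rho\propto r^{-\gamma}\), Bertoldi & McKee (1992) give
\[E_{\rm G}=-\frac{3}{5}\,a\,\frac{GM^{2}}{R},\qquad a=\frac{1-\gamma/3}{1-2\gamma/5},\]
so that, with \(E_{K}=\frac{3}{2}M\sigma_{\rm tot}^{2}\) and the observational estimator of Eq. (12), the condition for gravitational boundedness, \(E_{K}<|E_{\rm G}|\), becomes \(\alpha_{\rm vir}<2a\), i.e. \(\alpha_{\rm vir}<2.0,\,2.2,\,2.5,\,3.3\) for \(\gamma=0,\,1,\,1.5,\,2\).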
How does this fit with studies such as that of Miville-Deschenes et al. (2017, MD17 hereafter), claiming that most molecular clouds are unbound? In order to answer that question we searched for the MD17 counterparts of all 24 molecular clouds from our sample and compared the distributions of the virial ratio (as estimated by MD17) to the entire MD17 cloud population. Figure 11 shows that our 24 IRDC-hosting molecular clouds are amongst the most gravitationally bound clouds from the MD17 sample, oversampling the low virial ratio tail of the distribution. So the fact that all the clouds studied in the present paper are gravitationally bound is not in contradiction with the MD17 results. We also notice that the virial ratio values plotted in Fig. 11 and estimated by MD17 are larger than those we have estimated ourselves for the same sample of clouds. This difference could be real as MD17 computed their cloud properties from a lower gas density tracer, that is \({}^{12}\)CO(1-0), or it could be due to systematics in the way properties are calculated. We investigated this by reporting the MD17 values of radius, mass, velocity dispersion, and virial ratio for all 24 clouds and adding them to our observational profiles. Figure 12 (transparent orange circular symbols) shows an overall good agreement between our data points and those from MD17. However, while the radii reported by MD17 are larger, the masses are very similar to those we report on smaller radii. This somehow suggests that either we have overestimated our masses or MD17 have underestimated their \({}^{12}\)CO masses. In MD17 they used a standard X\({}_{\rm CO}=2\times 10^{20}\) cm\({}^{-2}\) (K km s\({}^{-1}\))\({}^{-1}\) factor to convert integrated \({}^{12}\)CO intensities into H\({}_{2}\) column densities.
Figure 11: Distributions of virial ratios as estimated by Miville-Deschenes et al. (2017) for the 24 clouds presented here (blue histogram) and their entire cloud population (orange histogram).
Figure 10: Violin plots of the virial ratios obtained on the largest scales in both N\({}_{2}\)H\({}^{+}\)(1-0) (magenta) and \({}^{13}\)CO(1-0) (green), along with that obtained by Miville-Deschenes et al. (2017) in \({}^{12}\)CO(1-0) (yellow) for the same sample of clouds. Each violin plot is located, along the x-axis, at the median radius value of each group. The median virial ratio value for each group is represented by a coloured circular symbol with a black edge, while the 16\({}^{th}\) and 84\({}^{th}\) percentile ranges are represented by vertical solid black lines. We have also overplotted the corresponding individual measurements as coloured circular symbols. The horizontal black dashed lines show virial ratio values \(\alpha_{\rm vir}=1\) while the shaded area shows the region of energy equipartition for density profile indices between \(\gamma=1\) and \(\gamma=2\).
As Barnes et al. (2015) showed, this standard conversion factor typically underestimates column densities by a factor of \(\sim 2\) for resolved molecular clouds. Taking into account this change in X\({}_{\rm CO}\) would put the MD17 masses more in line with ours (see Fig. 12 yellow circular symbols). In addition to this mass correction, one can wonder whether one should apply one to velocity dispersion measurements as well. Indeed, \({}^{12}\)CO(1-0) is typically optically thick above H\({}_{2}\) column densities of a few \(10^{21}\) cm\({}^{-2}\), which means that mass and velocity dispersion measurements could be overestimated and underestimated, respectively. The effect on the velocity dispersion though is probably only of the order of 20% (e.g. Hacar et al., 2016), but as a result of the \(\sigma^{2}\) dependency of the virial ratio, a small correction factor on the velocity dispersion can lead to a significant difference on the virial ratios. However, as it can be seen in Fig. 12, the velocity dispersion measurements obtained by MD17 are in good agreement with ours, and we therefore do not believe that there is a systematic underestimation of \({}^{12}\)CO velocity dispersion for the clouds we are looking at. The corresponding distribution of virial ratios has also been reported in Fig. 10, showing that 85% of the \({}^{12}\)CO-based virial ratio measurements are below 2.5, which is identical to the \({}^{13}\)CO-based virial ratio measurements. Overall, Figs. 12 and 10 show that even on scales of 100 pc, the vast majority of clouds from our sample are self-gravitating.
### Larson's, Solomon's, and Heyer's relations
Probably one of the most influential studies on the observational characterisation of the dynamical state of molecular clouds is that by Richard Larson in 1981. In that study, Larson found, mostly using \({}^{13}\)CO(1-0) data from the literature at the time, that the averaged cloud properties follow a number of relationships such that: \(\sigma\propto r^{\beta}\) with \(\beta=0.38\), and \(\rho\propto r^{-\gamma}\) with \(\gamma=1.1\). A third relation, a consequence of the first two, is that molecular clouds have virial ratios close to unity and \(\alpha_{\rm vir}\propto r^{-\delta}\) with \(\delta=0.14\). Larson's size-velocity dispersion relation has been interpreted as evidence for turbulence-regulated gas dynamics, since \(\beta=1/3\) is what one expects for incompressible Kolmogorov-like turbulence. However, these relations have later been revised by Solomon et al. (1987, S87 hereafter) who found a steeper size-velocity dispersion relation with \(\beta\simeq 0.5\). They suggested that such an index is the direct consequence of the virialisation of individual molecular clouds at nearly constant mass surface densities. Heyer et al. (2009, H09 hereafter) reanalysed the S87 cloud sample using \({}^{13}\)CO(1-0) GRS data and determined that, even though cloud properties are compatible with being in virial equilibrium at all radii, the change in the internal mass surface density of clouds results in a different size-velocity dispersion relation to that proposed by Larson and Solomon.
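For concreteness, the short sketch below shows how the third relation follows from the first two, assuming the standard definition of the virial ratio, \(\alpha_{\rm vir}=5\sigma^{2}R/(GM)\); the numerical example at the end uses purely illustrative values.

```python
# With sigma ∝ r^beta and rho ∝ r^-gamma (so M ∝ r^(3-gamma)), the standard
# virial ratio alpha_vir = 5 sigma^2 R / (G M) scales as r^(2*beta + gamma - 2).
beta, gamma = 0.38, 1.1
delta = -(2 * beta + gamma - 2)
print(f"alpha_vir scales as r^-{delta:.2f}")   # 0.14, Larson's third relation

G = 4.301e-3   # gravitational constant in pc (km/s)^2 / Msun

def alpha_vir(sigma_kms, R_pc, M_sun):
    """Standard virial ratio, alpha = 5 sigma^2 R / (G M)."""
    return 5.0 * sigma_kms**2 * R_pc / (G * M_sun)

# Illustrative cloud (assumed numbers): sigma = 2 km/s, R = 10 pc, M = 5e4 Msun
print(f"alpha_vir = {alpha_vir(2.0, 10.0, 5.0e4):.2f}")
```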
Finding out how our study compares to those mentioned above and understanding where the differences come from is fundamental if one wants to settle the question of the dynamical states of molecular clouds. Interestingly, half of our cloud sample (12/24) is common to both the H09 and S87 samples, and since H09 used the same \({}^{13}\)CO data we use here, one can make a direct one-to-one comparison. The first property we compare is the distance used for all 12 clouds. As can be seen in Fig. 11, for half of the clouds the distances match, while for the other half they do not. The latter group of clouds have been assigned the far distance by S87 and H09. Even though they have recalculated the kinematic distances, H09 have kept the near/far distance ambiguity solutions provided by S87. Looking in detail, these 6 clouds with far distances have been assigned so based
Figure 12: The profiles are the same as those presented in the upper row of Fig. 6. In addition, we have added the \({}^{12}\)CO(1-0) data points as presented in MD17 (transparent orange symbols) and once corrected by a factor of 2 in mass (yellow symbols). The blue dashed lines show Larson’s laws.
Figure 13: The profiles are the same as those presented in the upper row of Fig. 6, only restricted to the 12 clouds in common with H09. In addition, we have added the H09’s measurements for these 12 clouds, for both radii measurements. We also overplot Larson’s relations as dashed blue lines, and Solomon’s size-velocity dispersion relation in the middle panel.
on the fact that: i. they best fit the S87 size versus velocity dispersion relation; ii. they best match the scale-height of the molecular layer for that position and velocity range. These are both very questionable criteria. All clouds here host IRDCs, and it has been shown that 90% of IRDCs are located at the near distance (Ellsworth-Bowers et al., 2013). This would suggest that maybe one of the 12 clouds presented here is indeed at the far distance, but it is very unlikely that all 6 are. In the rest of the comparison we set the distance to all 12 clouds to that given in Table 1.
Figure 13 compares the profiles of the 12 clouds as we measured them with H09's measurements (after distance correction). In H09, each cloud has two measurements taken at different radii, both measured using \({}^{13}\)CO. The cyan symbols represent the large scale measurements and the yellow symbols the small scale ones. Compared to our measurements of the same clouds, we can see that, at large radii, both H09's masses and velocity dispersions tend to be underestimated. On small scales though, the masses are similar but the velocity dispersions are overestimated. Before interpreting these discrepancies, one needs to understand the differences in the measurements themselves. For the large-scale mass measurements H09 used the original rectangular boxes that S87 used to measure their own masses. Those boxes were defined based on the location and extent of the \({}^{12}\)CO(1-0) emission peaks derived from low-resolution high-noise maps. Figure 14 shows three representative examples of such boxes overlaid on top of the cloud column density images. One can see that, with the exception of the biggest clouds, the boxes do not match the cloud morphologies, sometimes missing the column density peaks, and often covering regions where no, or little, column density is present. The net impact of this is that, for a given effective radius (defined as the radius of the disc having the same area as the box), the mass is heavily underestimated. This problem mostly disappears for small scale mass measurements as H09 used, for those, the contours of the column density maps (as we did). Regarding the velocity dispersion measurements on small scales, H09 overestimate them as a result of the same projection effect that is responsible for over-estimating our own \({}^{13}\)CO velocity dispersion measurements. On large scales H09 underestimate the velocity dispersion most likely because of the unadapted velocity window used to compute their 1\({}^{\rm st}\) order moment. However, we cannot
Figure 14: Colour and contours are the \({}^{13}\)CO-based H\({}_{2}\) column density images of three of our clouds that are in common with H09’s sample. The white dashed boxes show the area used by H09 to derive the large-scale masses represented as cyan symbols in Fig. 13.
Figure 15: Heyer’s plots (left): measurements from H09 on large (blue) and small (orange) scales. The points highlighted in cyan and yellow with black edges are those clouds in common with our sample; (right): Measurements from our study on large (green) and small (magenta) scales. The clouds in common with H09’s sample are highlighted with light green and pink symbols with black edges. On each panel, the ellipses show the 1\(\sigma\), 2\(\sigma\), 3\(\sigma\) ellipses for each distribution, where \(\sigma\) is the standard deviation. Lines of constant virial ratios of 1 and 3 are shown as black dashed lines.
test this since velocity windows used for the integration by H09 are not provided.
One particular plot that has been used by H09, and many others since, to support the picture of virialised clouds on all scales is one that plots the mass surface density \(\Sigma_{\rm gas}\) of the clouds versus the parameter \(p=\sigma_{v}/\sqrt{R}\). On the left panel of Fig. 15 we reproduced the figure from H09, the clouds in common with our study being highlighted with different colours (cyan and yellow) and with black edges. In this panel, it is quite clear that the large scale data points (the blue squares) are at lower mass surface densities than the small scale points (orange). The distribution of these points stretches across more than two orders of magnitude along lines of constant virial ratios between 1 and 3. On the same figure, the right-hand-side panel shows the same quantities for the common sample of clouds with properties as derived in this paper (here we used the values displayed in Table 2). We can see that the large scale points (green) completely overlap with the small scale points (magenta). When compared with Heyer's quantities for the same clouds we see that the spread is reduced by one order of magnitude. This is a direct consequence of the measurement biases explained above. In fact, whether we look at the large scale measurements obtained on scales of 20 pc to 60 pc or measurements obtained on scales between 1.5 pc and 5 pc, the data points are located within a very similar area of the plot. This is a direct consequence of the density profile of the clouds being close to \(\rho\propto r^{-1}\) on those scales and velocity dispersion profiles close to \(\sigma\propto r^{0.5}\). We also notice the quasi-absence of points below 100 M\({}_{\odot}\) pc\({}^{-2}\). As noted by Schruba et al. (2019), molecular clouds with lower mass surface density are non-self-gravitating. As our comparison with the MD17 virial ratio distribution shows, we are here biased towards the most self-gravitating clouds of the Milky Way population; it is therefore consistent to have nearly no measurements with mass surface density below 100 M\({}_{\odot}\) pc\({}^{-2}\).
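The dashed lines of constant virial ratio in these figures follow directly from combining \(\alpha_{\rm vir}=5\sigma^{2}R/(GM)\) with \(\Sigma_{\rm gas}=M/(\pi R^{2})\), which gives \(\alpha_{\rm vir}=5p^{2}/(\pi G\Sigma_{\rm gas})\). A minimal sketch of how such lines can be computed is given below, assuming this standard definition of the virial ratio; the range of \(p\) values is illustrative.

```python
import numpy as np

G = 4.301e-3   # pc (km/s)^2 / Msun

def sigma_gas_on_alpha_line(p, alpha):
    """Mass surface density (Msun pc^-2) corresponding to a given
    p = sigma_v / sqrt(R) (km/s pc^-1/2) on a line of constant virial
    ratio alpha, using alpha = 5 p^2 / (pi G Sigma)."""
    return 5.0 * p**2 / (np.pi * G * alpha)

p = np.logspace(-1, 1, 5)          # illustrative range of p values
for alpha in (1.0, 3.0):
    print(f"alpha = {alpha}:", np.round(sigma_gas_on_alpha_line(p, alpha), 1))
```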
### Dynamically decoupled clumps
As the comparison of our observed profiles and spherical models has shown, the discontinuity in the observed velocity dispersion profiles is most likely the result of the combination of projection effects and a genuine change of the velocity dispersion profile index from \(\beta\simeq 0.5\) on large scales to \(\beta\simeq 0\) on small scales. However, one can wonder how sensitive the observed velocity dispersion profiles are to the exact value of \(\beta\) as the clumps only have a limited number of angular resolution elements in them. To test this, we built spherical models of varying \(\beta\) index in order to set some constraints on the range of values compatible with our observations. Figure 16 displays the observed clump-scale velocity profiles along with different spherical models. Each panel corresponds to models of the same mass and radius, but with different velocity dispersion profiles. One can see that the different profiles are better resolved for the largest clouds, as expected. With these models in hand, it is also clear that \(|\beta|<0.2\) in all clumps, confirming the fact that the clump velocity dispersion profiles are flat and significantly different from Larson's profile.
The velocity dispersion discontinuity observed in Fig. 6 between the N\({}_{2}\)H\({}^{+}\)(1-0) and the \({}^{13}\)CO(1-0) measurements is, according to our models, the result of foreground/background layers of low-density and high-velocity dispersion gas that contaminate the \({}^{13}\)CO(1-0) velocity dispersion measurements at small radii. If this interpretation of the observed profiles is correct, measuring the gas velocity dispersion with a line emission that traces intermediate gas densities should bridge, to some extent, the observed velocity dispersion discontinuity. To test this conjecture, we used the CHIMPS \({}^{13}\)CO(3-2) survey data (Rigby et al., 2016). Indeed, being a higher transition line, \({}^{13}\)CO(3-2) is optically thinner and less extended than \({}^{13}\)CO(1-0), making it a good tracer of intermediate gas densities. However, only 8 of our clouds have been covered by CHIMPS, amongst which one shows clear signs of self-absorption and has therefore been discarded. Figure 17 shows the velocity dispersion profiles of 6 of the 7 remaining clouds (one has been left out for figure readability). The \({}^{13}\)CO(3-2) line has been fitted following the exact same procedure as for the \({}^{13}\)CO(1-0) line. On this figure, we can see that the \({}^{13}\)CO(3-2) velocity dispersion systematically lies at intermediate values between that of the other two tracers. Also, in most cases, the \({}^{13}\)CO(3-2) profiles nicely bridge the gap between the denser and more diffuse gas. Altogether, these profiles further support our interpretation that clumps are dynamically decoupled from their parent molecular clouds.
As discussed in Sec. 5.1 the vast majority of the clouds studied here are self-gravitating on all scales. This means that the observed change in the velocity dispersion profiles cannot be the result of the gas switching from a non self-gravitating state to a self-gravitating state. Because the clouds are self-gravitating, the gas can only be in one of two states:
Figure 16: Velocity dispersion profiles. The models displayed in each of the three panels correspond to three different clump masses and radii, but all with the same density profile \(\gamma=2\). The yellow, orange, and red solid lines correspond to models with velocity dispersion profile index \(\beta=0,\beta=\pm 0.1\), and \(\beta=\pm 0.2\), respectively. The observed profiles are represented with thin coloured lines.
either it is collapsing, or in quasi-static equilibrium, with the nature of the stabilising agent left to be determined. One possibility would be for instance that the clouds, as proposed by Vazquez-Semadeni et al. (2019), are collapsing on all scales. However, the collapse in these models is scale-free, and there is no evidence for a transition regime at any scale. It is important to note that feedback, such as protostellar outflows (e.g. Duarte-Cabral et al., 2012; Hsieh et al., 2023), are not included in such simulations which could play a role in generating a change in the velocity dispersion profiles of collapsing clouds. It could also be that, as the specific angular momentum increases during the collapse, clumps become somewhat supported by rotation (e.g. Lee & Hennebelle, 2016). However, this seems incompatible with a steeper clump density profile, and no systematic observation of rotation motion is observed in these clumps (Peretto et al. in prep.). A third and preferred possibility is that clouds are stable on the largest scales and that they collapse on clump scale. Indeed, both density (\(\gamma=2\)) and velocity dispersion profiles (\(\beta=0\)) derived from our 1D modelling of the clumps are asymptotic solutions to a spherical isothermal non-free-falling collapsing cloud with initial uniform density (Larson, 1969; Penston, 1969), and as noted by these authors the self-similar nature of the solution means that it may apply to any structure (i.e. protostellar core, clump, cloud). Note however that what we observe is the velocity dispersion profile and not the infall velocity profile. Even though we do expect a relationship between the two, it is not clear whether both are expected to have the exact same index. In a recent study, Gomez et al. (2021) showed that \(\gamma=2\) naturally arises in collapsing cores that accrete from their surroundings. There is also now plenty of evidence for clump collapse and clump accretion (e.g. Peretto et al., 2006, 2007, 2013, 2014, 2020; Schneider et al., 2010; Traficante et al., 2018; Williams et al., 2018; Schworer et al., 2019; Barnes et al., 2019; Anderson et al., 2021; Rigby et al., 2021; Bonne et al., 2022; Zhou et al., 2022; Xu et al., 2023). A possible agent that might be able to stabilise the clouds on the largest scales is stellar feedback. For instance, in Watkins et al. (2019), it has been shown that stellar feedback from embedded O stars does not impact much the dynamical properties of the dense gas that has already been assembled, but does clearly modify the structure of the larger scale clouds. This is compatible with the observed change in velocity dispersion profiles presented here. Even though most clumps in our study do not have any embedded H\({}_{\rm II}\) regions associated to them, injection of momentum and energy within the more diffuse cloud could come from nearby sites of massive star formation. Another possible agent that could stabilise the cloud is magnetic field. An increasing number of studies suggest that magnetic fields are dynamically important/dominant in the low density regions of molecular clouds. A transition in the relative orientation between magnetic field and the density gradients of interstellar structures has been interpreted as evidence for a change in the dominant energy source, from magnetic energy on large scale to gravitational energy on clump scale (e.g. Soler et al., 2013, 2016; Chen et al., 2016; Planck Collaboration et al., 2016, 2016; Tang et al., 2019; Arzoumanian et al., 2021). 
In Vela C, Fissel et al. (2019) determined that this change of relative orientations occurs at a number density of \(n\sim 10^{3}\) cm\({}^{-3}\). The change in the velocity dispersion profiles we observe in our sample could then be the dynamical counterpart of that "magnetic" transition. Figure 18 shows the same plot as in Fig. 15 in which the largest cloud scale measurements and all the clump scale measurements are shown. On that plot it becomes obvious that clumps behave differently than their parent clouds. The clump mass surface densities increase over two orders of magnitude along lines of constant virial ratios except towards the most central points where \(p\) and the virial ratios increase. To our knowledge no theoretical equivalent to the plots we are producing here exists. However, a scenario in which parsec-scale clumps are collapsing while their parent molecular clouds are in quasi-static equilibrium seems to intuitively match what we see.
Another important aspect of Fig. 6 is the relatively large range of velocity dispersion profile indices on cloud scale (the green lines). They range from being flat \(\beta\sim 0\) to relatively steep \(\beta>0.5\). The fact that molecular clouds of several \(10^{4}\) M\({}_{\odot}\) on tens of parsec-scale may have velocity dispersions that are barely above 1 km/s clearly shows that Larson's relation is just a statistical average over clouds
Figure 17: Velocity dispersion profiles of a sub-sample of 6 clouds. The magenta and green lines are the same as those plotted in Fig. 6 for those 6 clouds. The blue lines show the velocity dispersion profiles obtained from CHIMPS \({}^{13}\)CO(3-2) emission.
Figure 18: Same as Fig. 15 with the addition of all measurements made from N\({}_{2}\)H\({}^{+}\)(1-0) (purple lines). The shaded area shows the region of energy equipartition for density profile indices between \(\gamma=1\) and \(\gamma=2\).
of very different dynamical states. In fact, it is possible that the range of velocity dispersion profiles corresponds to different evolutionary stages in the formation and evolution of molecular clouds and the clumps within. Any scenario that attempts to explain the dynamical decoupling of clumps needs to do so in the context of this observed variety of large-scale velocity dispersion profiles.
## 6 Summary and Conclusion
We performed the analysis of 27 IRDCs embedded within 24 molecular clouds. We computed the mass, velocity dispersion, and virial ratio profiles of each of them using three different datasets: _Herschel_-derived H\({}_{2}\) column density maps, GRS \({}^{13}\)CO(1-0)-derived H\({}_{2}\) column density cubes, and N\({}_{2}\)H\({}^{+}\)(1-0) data cubes. The combination of these data allowed us to probe both the dense and diffuse parts of the clouds, with radii from \(\sim 0.2\) pc up to \(\sim 30\) pc. Using 1D power-law models we can explain the origin of the different features observed in those profiles and we conclude that: 1. the vast majority of cluster-forming molecular clouds are self-gravitating on all scales; 2. the diffuse part of the cloud has a shallow density profile (\(\gamma\sim 1\)) that steepens (\(\gamma\sim 2\)) in the densest parts on a couple of parsec scale; 3. the velocity dispersion profile switches, for most clouds, from \(\beta\sim 0.5\) in the diffuse part of the clouds to \(\beta\sim 0\) in the denser parts. We discuss the possible interpretation of such a decoupling of the clumps from their surrounding cloud and conclude that the observations are best explained by a universal global collapse of dense clumps embedded within stable molecular clouds, even though we cannot completely rule out a scenario in which the entire cloud collapses, with small-scale feedback, such as protostellar outflows, impacting the gas kinematics on clump scales. We also notice that the velocity dispersion profiles on molecular cloud scales (i.e. \(>2\) pc) show a large variety of \(\beta\) values, some very far from the standard Larson's relation, which might be linked to their evolution since the time of their formation.
Understanding the origin of the observed low star-formation efficiency (SFE) in molecular clouds is one of the main goals of star formation research. A low SFE involves a scale/density-dependent dynamical state of the gas in which most of a cloud's mass is not directly involved in the formation of stars. Observationally, the existence of a star formation threshold has been discussed in the context of the study of nearby star-forming clouds (e.g. Lada et al., 2010; Heiderman et al., 2010; Pokhrel et al., 2020). However, so far, no study has searched for direct evidence of a transition regime in the dynamical properties of the gas within individual molecular clouds. The work presented here clearly suggests that such a transition regime does exist. Because parsec-scale clumps are believed to be the direct progenitors of star clusters (e.g. Krumholz et al., 2019), our results hence suggest that star cluster formation is not a scale-free process.
Our results also raise a number of key questions and implications. First, we do not explain here what the trigger of the clump collapse is, whether it is the result of a gravitational instability or the diffusion of magnetic fields, or any other mechanism. We also do not explain what is the main agent that counter-balances gravity in the diffuse parts of the clouds. These questions will have to be answered if one wants to derive a comprehensive scenario for the formation of star clusters. Also, one implication of our results is the fact that star formation is likely to be mostly confined to these parsec-scale collapsing clumps. Therefore their properties define the initial conditions for cluster formation, and understanding the link, on one side, between the properties of clumps and that of their associated protostellar population, and on the other side, between the global population of Galactic clumps and the star formation rate and efficiency of the Milky Way remains a fundamental challenge.
## Acknowledgements
NP and AJR acknowledge the support of STFC consolidated grants ST/N000706/1 and ST/S00033X/1. FL acknowledges support by the Marie Curie Action of the European Union (project _MagiiStar_, Grant agreement number 841276). G.A.F. acknowledges support from the Collaborative Research Centre 956, funded by the Deutsche Forschungsgemeinschaft (DFG) project ID 184018867. This work is based on observations carried out under project number 02-13 with the IRAM 30m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
## Data Availability
The _Herschel_ and GRS data used in this article are already publicly available on their respective survey webpages. The IRAM 30m N\({}_{2}\)H\({}^{+}\)(1-0) data can be provided upon reasonable request to N. Peretto.
|
2310.17804
|
BlackJack: Secure machine learning on IoT devices through hardware-based
shuffling
|
Neural networks are seeing increased use in diverse Internet of Things (IoT)
applications such as healthcare, smart homes and industrial monitoring. Their
widespread use makes neural networks a lucrative target for theft. An attacker
can obtain a model without having access to the training data or incurring the
cost of training. Also, networks trained using private data (e.g., medical
records) can reveal information about this data. Networks can be stolen by
leveraging side channels such as power traces of the IoT device when it is
running the network. Existing attacks require operations to occur in the same
order each time; an attacker must collect and analyze several traces of the
device to steal the network. Therefore, to prevent this type of attack, we
randomly shuffle the order of operations each time. With shuffling, each
operation can now happen at many different points in each execution, making the
attack intractable. However, we show that shuffling in software can leak
information which can be used to subvert this solution. Therefore, to perform
secure shuffling and reduce latency, we present BlackJack, hardware added as a
functional unit within the CPU. BlackJack secures neural networks on IoT
devices by increasing the time needed for an attack to centuries, while adding
just 2.46% area, 3.28% power and 0.56% latency overhead on an ARM M0+ SoC.
|
Karthik Ganesan, Michal Fishkin, Ourong Lin, Natalie Enright Jerger
|
2023-10-26T22:37:52Z
|
http://arxiv.org/abs/2310.17804v1
|
# BlackJack : Secure machine learning on IoT devices through hardware-based shuffling
###### Abstract
Neural networks are seeing increased use in diverse Internet of Things (IoT) applications such as healthcare, smart homes and industrial monitoring [67]. Their widespread use makes neural networks a lucrative target for theft. An attacker can obtain a model without having access to the training data or incurring the cost of training. Also, networks trained using private data (e.g., medical records) can reveal information about this data [28]. Networks can be stolen by leveraging side channels such as power traces of the IoT device when it is running the network. Existing attacks require operations to occur in the same order each time; an attacker must collect and analyze several traces of the device to steal the network. Therefore, to prevent this type of attack, we _randomly shuffle_ the order of operations each time. With shuffling, each operation can now happen at many different points in each execution, making the attack intractable. However, we show that shuffling in software can leak information which can be used to subvert this solution. Therefore, to perform secure shuffling and reduce latency, we present BlackJack, hardware added as a functional unit within the CPU. BlackJack secures neural networks on IoT devices by increasing the time needed for an attack to centuries, while adding just 2.46% area, 3.28% power and 0.56% latency overhead on an ARM M0+ SoC.
Footnote †: Both authors contributed equally to this work.
## I Introduction
The Internet of Things (IoT) has enabled novel applications in fields such as health monitoring [44], smart homes [79] and remote sensing [74]. Within IoT, Machine learning (ML) is seeing increased use in areas such as image and voice recognition, indoor localization and biomedical monitoring [67]. With their increasingly widespread deployment, ML models have become appealing targets for theft. There are many reasons why an attacker would wish to steal a ML model deployed on an IoT device:
* Classification accuracy is highly dependent on access to high-quality training data, which an attacker might not have access to. Obtaining a pre-trained model obviates the need for this training data, allowing an attacker to replicate the accuracy of a well-trained model.
* A trained model can leak information about the training data, which must remain confidential. For models trained using patient medical records for example, leaking training information can result in a serious breach of privacy [28].
* In networks used for financial applications, reverse engineering the network would allow an attacker to bypass fraud detection. For example, EMVCo (i.e., 'chip and pin'), used by Visa and MasterCard, employs neural networks for fraud detection [73]. An attacker with access to this model could learn how to circumvent fraud detection and charge credit cards without getting caught.
For these reasons, it is critical that ML models deployed on IoT devices be secured against attackers.
Direct access to the models stored in on-chip memory is normally blocked by manufacturers. For example, TI's MSP430FR chips require a password to access the JTAG port [77]. However, attackers can still use side channels to gather secret information from the device. Side channels are vectors such as timing, power consumption or electromagnetic emanations (EM) which can leak information about data being processed by the device [66]. Prior work shows that side-channel leakage can be used to fully reverse engineer a neural network running on an IoT device [8, 54, 62]. By analyzing EM traces of the device running the network, an attacker can learn the size, activation function and the weights for every layer.1 While these attacks target neural networks, we show that they can also apply to other ML algorithms, such as autoencoders and support vector machines (Section III-C).
Footnote 1: We elaborate on the full details of the attack in Section III.
Power side-channel attacks require collecting and analyzing several traces, to eliminate the effect of noise from other system components. For this analysis to work properly, the operations being targeted must occur in the same place in each trace. Therefore, one approach to thwart such attacks is to _randomly shuffle_ the order of operations each time. When shuffling is applied to neural network layers, each weight is used at a different point for every inference run. The attacker must then try every possible combination of the recovered weights to carry out a successful attack. For example, to reverse engineer \(M\) shuffled weights, the attacker would need \(\mathcal{O}(M!)\) traces to mount an attack. For a single neuron with \(64\) weights, shuffling increases the number of traces needed for a successful attack by _90 orders of magnitude_. If an attacker collects and analyzes \(1000\) traces a second, they would need \(4.026\times 10^{78}\) years to reverse engineer the weights of a single neuron. In BlackJack, we also shuffle the order of neurons (\(N\)) per layer.2 Thus, the total number of possible permutations increases to \(\mathcal{O}(M!)\times\mathcal{O}(N!)\). As neural networks consist of hundreds of neurons and thousands of weights, collecting
enough traces to reverse engineer a whole network would take millions of years, making the attack completely untenable. For example, in the networks we evaluate in Section VI-C, the largest values of \(M\) and \(N\) are \(5670\) and \(128\).
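The numbers quoted above can be reproduced with a few lines of arithmetic; the sketch below assumes the same analysis rate of 1000 traces per second and reuses the largest layer dimensions from Section VI-C.

```python
import math

# Orderings for a single neuron with 64 shuffled weights, and the time to
# exhaust them at an assumed rate of 1000 traces analyzed per second.
M = 64
orderings = math.factorial(M)                 # ~1.27e89 (~90 orders of magnitude)
seconds_per_year = 3600 * 24 * 365
years = orderings / (1000 * seconds_per_year)
print(f"{orderings:.3e} orderings -> {years:.3e} years")   # ~4.0e78 years

# Shuffling both weights (M) and neurons (N) multiplies the permutation counts.
M, N = 5670, 128
total = math.factorial(M) * math.factorial(N)
print(f"total permutations: ~10^{len(str(total)) - 1}")
```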
Prior work has shown that shuffling is effective at preventing side-channel attacks targeting neural networks [12, 71]. However, these works implement shuffling in software, which suffers from a number of drawbacks:
1. Software shuffling leaks side-channel information. We demonstrate a new attack which undermines the security benefits of software shuffling (Section IV).
2. Software shuffling adds significant latency overheads due to the additional CPU instructions required (Section VI).
To overcome the limitations of software shuffling, we propose BlackJack, hardware to perform random shuffling. BlackJack is added as a functional unit within the CPU, which significantly reduces the latency overhead of shuffling (Section VI-C). While prior work has proposed hardware for shuffling, these designs are limited to shuffling \(2^{N}\) objects [9, 15, 21]. This limitation makes existing approaches unsuitable for shuffling the arbitrary number of weights and neurons used in neural networks. BlackJack provides an efficient, low-latency hardware solution which supports shuffling any number of values. Furthermore, BlackJack is 'symmetric' (i.e., it does not leak information based on the current input) and therefore does not leak any side-channel information (Section VI-E). While we focus on securing neural networks deployed on IoT devices, BlackJack can also be used to secure other applications which operate on sensitive data (Section VII-A). Finally, we show that BlackJack can also thwart other side-channel attacks against neural networks, such as floating-point timing attacks and fault-injection attacks (Section VII-B).
In summary, we make the following contributions:
* We show that shuffling is an effective technique to prevent side-channel attacks against ML algorithms, due to the large number of operations that can be shuffled.
* To the best of our knowledge, we show the first side-channel attack against software shuffling, to learn the exact values being shuffled. An attacker can use this information to 'undo' shuffling and carry out the attack as before.
* To perform shuffling securely and with much less overhead, we add BlackJack as a functional unit within the CPU. BlackJack effectively prevents side-channel attacks, while adding just 2.46% area, 3.28% power and 0.56% latency overhead to an ARM M0+ SoC.
* We demonstrate the versatility of our approach by showing that BlackJack is effective at preventing other side-channel attacks as well as securing other applications against such attacks.
## II Background and Related Work
In this section, we provide background on side-channel attacks and prior work on shuffling, a commonly used technique to defend against these attacks.
### _Side-channel attacks_
Side-channel attacks are a widely used mechanism to obtain secret information about a system, without interfering with normal system operation. Attacks such as SPECTRE [56] and MELTDOWN [60], which target large out-of-order cores, have highlighted the strength of side-channel attacks. SPECTRE and MELTDOWN use _timing_ side channels, where an attacker leverages the time difference between certain operations to steal secret information. As IoT systems typically employ very simple processors, they are more commonly targeted by power side channel attacks [66].
Performing a power side-channel attack requires collecting traces of the system being targeted. A trace is a measurement of the device while it is operating on secret data. The power trace varies based on the secret information being operated on by the device. Thus, by analyzing these power traces, an attacker can reverse engineer the secret information used. To collect these traces, an attacker only needs access to the voltage (\(V_{dd}\)) input of the device. A commonly used proxy for power is to measure the Electromagnetic (EM) emanations of the device. This does not even require the attacker to physically contact the device at all; the EM probe must simply be placed near the device [66].
A key difference between timing and power/EM side channels is the number of traces required; with timing channels, information can be leaked with a single trace. However, for power/EM attacks, many traces are required to recover secret information. This is because these side channels are noisy due to interference from other system operations [66]. Thus a single trace does not provide sufficient resolution for an attacker to recover information. The attacker must therefore collect a large number of traces and analyze them together to eliminate noise. Thus, variations between the traces make the attack more difficult as the attacker must compensate for variations before performing the attack. One popular technique for preventing side-channel attacks is _masking_, which we describe in Section VIII-B. We now focus on the other common technique, _shuffling_, which is the basis of our work.
### _Shuffling_
Shuffling randomly reorders the sequence of sensitive operations each time a program is run [64]. With operations happening at different points in each trace, the attacker can no longer identify the position of each operation. Therefore, shuffling \(N\) operations forces the attacker to collect \(N!\) traces to account for every possible ordering.
We provide a detailed survey of prior approaches which employ shuffling in Section VIII. Our work differs from prior approaches for hardware shuffling in two major ways: 1) We target neural networks, which have hundreds of neurons and thousands of weights. The major limitation of shuffling for securing AES is that there are only \(16\) S-Box values to shuffle, which limits the number of possible permutations to \(16!\) for AES. Thus, our use of shuffling for securing neural networks results in a huge number of possible permutations, and consequently increases the time needed for a successful attack tremendously.
2) Unlike prior works which can only shuffle the order of \(M\) operations when \(M\) is a power of 2, BlackJack can efficiently shuffle the order of operations for any value of \(M\). To the best of our knowledge, BlackJack is the first technique to perform hardware shuffling for arbitrary values of \(M\).
Randomly shuffling operations requires a means to securely produce random numbers. For this purpose, we use a True Random Number Generator (TRNG), a hardware module to produce a sequence of random bits. TRNGs use some physical phenomenon (e.g., power supply noise, temperature, voltage fluctuations) to generate random numbers [85]. By relying on such analog phenomena, TRNG outputs do not conform to a repeating pattern that an attacker can learn to subvert the security of the TRNG. On supported systems, the TRNG output can be accessed in software using a random number generation function (e.g., \(rand()\) in C). We assume that both software shuffling and BlackJack use a TRNG for generating random numbers.
## III Attacking neural networks
In this section, we explain how the neural network running on an IoT device can be stolen via side-channel attacks. We focus on power/EM side-channel attacks and describe other types of attacks in Section VII-B. While several power/EM side-channel attacks have been proposed [46, 54, 8], we focus on CSI NN [8] as a representative side-channel attack from this class. We then describe how we replicate the CSI NN attack. Finally, we show how this attack can be extended to ML models other than neural networks.
### _Csi Nn_
CSI NN uses electromagnetic emanations from an IoT device running a neural network to learn the weights and the hyperparameters (i.e., number of layers, number of neurons per layer and the activation functions) of the network. We show how each of these is determined, starting with the network hyperparameters.
**Number of neurons.** Calculating the output of a neuron consists of several multiplication operations followed by the activation function. Figure 1 shows the EM trace for a layer with six neurons. An attacker needs to simply count the number of neurons from the trace.
**Activation function.** In Figure 1, two distinct regions can be seen per neuron. The per neuron trace in Figure 2 shows several multiplication operations followed by the activation function. CSI NN observes that common activation functions (i.e., \(sigmoid\), \(tanh\), \(ReLU\) and \(softmax\)) show significant variations in runtime (Table I). \(ReLU\) takes \(<10\)ms, while \(sigmoid\) and \(tanh\) take \(50\)-\(200\)ms and \(softmax\) takes \(700\)-\(900\)ms. This variation can be used to identify the specific activation function used for each layer. With the activation function known, the attacker can then split the trace into segments containing only the weights for the next step.
**Weights.** The next step is to determine the values of these weights using Correlation Power Analysis (CPA), which requires an accurate model of the device's power consumption. The power model is highly dependent on the hardware being targeted. In microcontrollers, the memory bus consumes the most power [88]. The memory bus is pre-charged to all 0's before any memory is read. Then, based on the value read, the power consumed is proportional to the number of bus lines that are charged to 1. This is known as the _Hamming Weight (HW)_ power model and is the most commonly used model for microcontrollers [8, 62]. The attacker then generates 'weight candidates' - a list of all possible weight values and their Hamming weights.
**Correlating traces.** For this step, the attacker first splits each trace into per-weight segments and targets each weight separately. For each weight, the attacker has \(D\) power traces (i.e., \(t\)), each consisting of \(T\) measured data points. For each of the \(I\) candidate weight values, the attacker also computes a hypothetical leakage value (\(h\)) for each trace (since each trace uses a different input). Now, the attacker must correlate the measured traces \(t\) against the guesses of the power model \(h\). The Pearson correlation coefficient (PCC) is
| Activation function | Min | Max | Mean |
|---|---|---|---|
| ReLU | 5.8 | 6.06 | 5.9 |
| Sigmoid | 152 | 222 | 189 |
| Tanh | 51 | 210 | 184 |
| Softmax | 724 | 877 | 813 |

TABLE I: Comparison of delays (in ms) for commonly used activation functions running on an ARM M3 CPU [8]
Fig. 1: Identifying the number of neurons in a layer [8].
Fig. 2: Identifying the number of weights per neuron and the activation function used per layer [8].
the most widely used metric for this purpose [88]. The PCC (\(\rho\)) is calculated using Equation 1.
\[\rho_{t,h}=\frac{\sum_{d=1}^{D}\left[(h_{d,i}-\overline{h_{i}})(t_{d,j}-\overline{t_{j}})\right]}{\sqrt{\sum_{d=1}^{D}(h_{d,i}-\overline{h_{i}})^{2}\sum_{d=1}^{D}(t_{d,j}-\overline{t_{j}})^{2}}} \tag{1}\]
The parameters in Equation 1 are:
* \(t_{d,j}\) is point \(j\) in trace \(d\).
* \(h_{d,i}\) is the weight guess \(i\) for trace \(d\).
* \(\overline{t_{j}}\) is the mean of the measured values at point \(j\) across all \(D\) traces.
* \(\overline{h_{i}}\) is the mean of the hypothetical values for weight guess \(i\) across all \(D\) traces.
The attacker then uses the absolute value of the PCC to perform the correlation. A value of \(|\rho_{t,h}|\) close to one means that the weight guess \(h\) correlates closely with the trace \(t\), indicating that weight guess is more likely to be the correct guess for that weight. The value with the highest \(|\rho|\) is taken as the final guess for that weight. This process is then repeated for every weight in the trace to generate all the weights for the network. In CSI NN, the authors are able to reverse engineer networks with a \(<1\%\) loss in classification accuracy. In contrast, when shuffling is applied using BlackJack, the recovered weights yield a network with a much lower classification accuracy. For one of the networks we evaluate in Section VI-C, the accuracy using the recovered weights is just 11.7%.
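A minimal sketch of this correlation step is given below. It assumes 8-bit fixed-point weights, a Hamming-weight leakage model applied to the multiply result, and traces already cut down to the segment of a single weight; these modelling choices are illustrative simplifications rather than the exact CSI NN procedure.

```python
import numpy as np

def hamming_weight(x):
    return bin(int(x) & 0xFF).count("1")

def cpa_recover_weight(traces, inputs, candidates=range(256)):
    """traces: (D, T) array of power samples for one weight segment;
    inputs: length-D list of the known 8-bit inputs fed to the device.
    Returns the candidate weight with the highest |Pearson correlation|."""
    traces = np.asarray(traces, dtype=float)
    tc = traces - traces.mean(axis=0)          # centre each sample point
    best_w, best_rho = None, 0.0
    for w in candidates:
        # Hypothetical leakage: Hamming weight of the (truncated) product
        h = np.array([hamming_weight((w * x) & 0xFF) for x in inputs], float)
        hc = h - h.mean()
        num = hc @ tc                                            # shape (T,)
        den = np.sqrt((hc @ hc) * (tc * tc).sum(axis=0)) + 1e-12 # shape (T,)
        rho = np.abs(num / den)
        if rho.max() > best_rho:
            best_w, best_rho = w, rho.max()
    return best_w, best_rho
```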
**Number of layers.** Figure 1 shows a single fully connected layer with six neurons. However, it is not possible to tell this network apart from a network with two layers having three neurons each. In CSI NN, the authors use the PCC values to also determine the layer boundaries. The attacker uses a known input to attack all the neurons in the trace. The neurons belonging to the first hidden layer will correlate strongly with the input (i.e., have high PCC values). However, as neurons in the second hidden layer do not depend on the input, they show weak correlation. Thus, the last neuron which shows a high correlation marks the layer boundary.
The attacker follows an iterative procedure where they target the first hidden layer, determine its size and recover the weights. Once this is done, they can calculate the outputs of that layer and feed them to the second hidden layer as inputs and repeat the attack. The attacker repeats this process for each layer to reverse engineer the whole network. We now describe how we reproduce the CSI NN attack as a baseline to evaluate BlackJack.
### _Reproducing the attack_
To reproduce the CSI NN attack, we use the ChipWhisperer CW-NANO platform [72]. The ChipWhisperer is a commonly used platform for side-channel analysis [58, 69, 37]. The CW-NANO platform consists of an ARM M0+ CPU as the 'target' for side channel attacks, alongside an FPGA for data collection and processing.
We collect traces of an MLP network consisting of 32, 10 and 5 neurons in the input, hidden and output layers, respectively. We first split this trace into segments of just the weights for each neuron and then use correlation power analysis (CPA) to reverse engineer the weights. We empirically determine that 100 traces are sufficient to recover all the weights of the network with 100% accuracy. Our experiments differ from those in CSI NN in two ways: we are able to recover the weights with 100% accuracy with just 100 traces, whereas CSI NN required several hundred traces and did not recover the weights with 100% accuracy. This is due to the following two reasons: 1) CSI NN targeted floating-point values while we target fixed-point operations. Low-power IoT devices typically lack floating-point hardware, which makes fixed-point operations a natural choice for running NNs on these devices. 2) CSI NN used the EM side channel which is more susceptible to noise compared to the power side channel which we use. Despite this, our attack is equivalent to CSI NN since EM is merely a proxy for power. We also see that the number of traces needed does not scale with the number of weights. This is because each weight is treated independently, thus having more weights does not affect the 'averaging of traces' needed to recover each weight value.
### _Extending the attack_
While CSI NN targets MLPs and CNNs, we show that this attack also works for other ML algorithms, namely autoencoder (AE) networks and support vector machines (SVMs). As AE networks use the same layer types as CNNs, the attack applies directly to them. For SVMs, we target linear kernels (suitable for low power IoT devices), which use two nested for loops. The outer loop iterates over all support vectors and the inner loop over all the input dimensions. The inner loop performs a dot product of the input and a secret weight vector, which an attacker wishes to steal. Thus, shuffling SVMs is similar to shuffling fully connected layers, which are also implemented using two nested for loops. Next, we demonstrate for the first time how shuffling performed in software can still be attacked via side-channel information.
## IV Attacking Software Shuffling
In this section, we describe how shuffling is implemented in software and how this implementation leaks side channel information. Finally, we outline our attack against software shuffling, which can nullify the security benefits of shuffling in software.
### _Shuffling for security_
**Shuffling for neural networks.** Software shuffling has been applied to prevent side channel attacks against neural networks [71, 12]. Both papers shuffle the order of neurons per layer as well as the order of weights per neuron. Algorithm 1 shows a shuffled implementation of a fully-connected layer with \(M\) neurons and \(N\) weights per neuron. In the un-shuffled case, the next neuron to run is picked by the loop iterator \(i\). With shuffling, we need a separate list to store the shuffled order. Therefore, we make a new list with the values \([0,M)\) in sequence, using the CreateList function (Line 1). The new list is then shuffled and for each loop iteration, we read
the next element from the shuffled list and run that neuron (Line 4). This process is repeated each time this layer is run, effectively randomizing the order of operations. The weights per neuron are also shuffled in a similar way. Next, we describe how the shuffled list is created in software.
```
1   M_list = CreateList(M)
2   M_shuffled = FisherYatesShuffle(M_list, M)
3   for i = 0; i < M; i++ do
4       r_i = M_shuffled[i]
5       N_list = CreateList(N)
6       N_shuffled = FisherYatesShuffle(N_list, N)
7       for j = 0; j < N; j++ do
8           r_j = N_shuffled[j]
9           sum[r_i] += input[r_j] * weight[r_i][r_j]
10      sum[r_i] += bias[r_i]
11      output[r_i] = actFunc(sum[r_i])
```
**Algorithm 1** Fully connected layer with software shuffling.
**Fisher-Yates shuffling.** Prior work uses the Fisher-Yates algorithm (Algorithm 2) for shuffling [12]. The Fisher-Yates algorithm is widely used to perform shuffling in security-critical applications such as data and image encryption [3, 45, 78, 86]. Given a list of \(N\) numbers, Algorithm 2 generates a random permutation of this list. Algorithm 2 iterates over every item in the list and for each item, picks a second random item and swaps them. The \(rand()\) function queries an \(l\)-bit TRNG, which produces a number in the range \([0,2^{l})\) (Line 3). The TRNG output is scaled to the desired range of \([0,i+1)\) with a modulus operation. Finally, the \(swap()\) function then swaps both entries (Line 4). When all iterations are complete, the items in the list indicate the random order in which iterations should be run.
```
1   Function FisherYatesShuffle(list, N):
2       for i = N-1; i > 0; i-- do
3           j = rand() % (i+1);
4           swap(list[i], list[j]);
```
**Algorithm 2** Fisher-Yates algorithm for shuffling.
**Computing modulus.** We now focus on the modulus operation, which is the source of the side channel leakage. In hardware, modulus is computed as the remainder of a division operation [1]. Ultra-low power CPUs, such as the ARM M0+ that we use in our evaluation, do not have a hardware divider [1]. They instead implement division in software, using shifts and subtracts.
**Software division.** For the M0+ CPU, ARM GCC (i.e., _arm-none-eabi-gcc_) uses the _aeabi_udivmod function for division and modulus. Algorithm 3 shows pseudo-code for this function. The \(division\) function computes \(a\div b\) and returns the quotient \(q\) and remainder (i.e., the modulus) \(r\). The first _While_ loop counts the number of steps division will take, by shifting \(b\) 1-bit to the left until bit 31 is \(1\). The number of shifts required is stored in \(i\), which then determines how many times the second _While_ loop runs. The second _While_ loop performs division by implementing a _restoring division_ algorithm. Both the time taken and the power trace vary based on the dividend \(a\) and divisor \(b\).
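The sketch below models this shift-and-subtract behaviour in Python; it is an illustrative reconstruction of the routine described above, not the library's actual code, and the alignment condition in the first loop is an assumption.

```python
def udivmod32(a: int, b: int):
    """Restoring (shift-and-subtract) division sketch for 32-bit unsigned
    values: returns (quotient, remainder). Both the latency (number of
    second-loop iterations) and the subtract pattern depend on a and b,
    which is the data-dependent behaviour exploited by our attack."""
    assert 0 < b and 0 <= a < 2**32
    i = 0
    # First loop: shift the divisor left to align it with the dividend,
    # stopping if bit 31 is reached; i counts the shifts performed.
    while (b & 0x80000000) == 0 and (b << 1) <= a:
        b <<= 1
        i += 1
    q, r = 0, a
    # Second loop: one quotient bit per iteration (i + 1 iterations).
    for _ in range(i + 1):
        q <<= 1
        if r >= b:
            r -= b      # subtraction succeeds: this quotient bit is 1
            q |= 1      # otherwise the step is "restored" and the bit is 0
        b >>= 1
    return q, r

assert udivmod32(100, 7) == (14, 2)
```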
### _Analyzing software division_
**Latency variation.** We begin by profiling the number of cycles taken by software division. We once again use the CW-NANO platform we described in Section III-B. We measure latency using a C program, compiled using the ARM GNU compiler v9.2.1 with _-O3_ optimizations. Figure 2(a) shows the heat-map of cycles of \(a\div b\) for \(a,b\in[1,16384)\).3 We see a significant variation in latency when \(a>b\) (top left of Figure 2(a)). As Algorithm 3 performs \(a-b\) during each iteration, the bigger the value of \(a\) compared to \(b\), the more iterations are needed. In contrast, the latency is similar for all cases where \(a<b\) (bottom right of Figure 2(a)). This is because in these cases, Algorithm 3 only runs a single iteration. Since many input values have the same latency, we also analyze the variation in _power_ when performing division, to uniquely identify \(a\) and \(b\).
Footnote 3: We use 16,384 as that is the maximum number of iterations our implementation supports (Section VI-C). However, our attack scales to all values of \(a\) and \(b\).
**Power variation.** Figure 2(b) shows mean subtracted power traces for \(100\div b\) for \(b\in[8,15]\). We first take the average of all the power traces (to remove the power contribution of other system components) and then plot each trace minus this average. While dividing \(100\) by each of these \(b\) values has the same latency, we see that the power traces differ based on the value of \(b\), allowing us to tell them apart. While we only show a small range of values for clarity, we see this behaviour for the entire range of inputs we study. Together, we use the input-dependent variation in the latency and power of software division as the basis of our attack.
We could gather and store traces of the system computing \(a\div b\) for all possible values of \((a,b)\). Then, we would compare each stored trace against each trace we collect from the system. The stored trace which exactly matches the collected trace would give us the values of \(a\) and \(b\). However, this approach suffers from two drawbacks: 1) as \(a\) and \(b\) can each take \(2^{N}\) values, the number of comparisons required grows exponentially with \(N\). 2) As Figure 2(a) shows, many values of \(a\) and \(b\) have the same latency. Thus, the variations in the power traces between these values are small, making it difficult to uniquely identify \(a\) and \(b\) from a single trace this way. We now describe two techniques to narrow the search space and make this identification tractable.
**1. Making efficient comparisons.** From our earlier profiling of software division, we have a minimum (\(t_{min}\)) and maximum (\(t_{max}\)) time that division can take. Instead of finding out where division ends, we find where the fixed code segment that follows it (labelled ② in the trace) begins. As ② is the same for every trace, this comparison is much more efficient. For the first division operation in Figure 2(d), we compute the difference between ② and the collected trace starting at ①\(+t_{min}\) until ①\(+t_{max}\), where ① marks the start of the division. The value of \(t\) (i.e., \(t_{div}\)) where the difference is \(0\) gives us the latency of division. Once we know \(t_{div}\), we only need to compare against traces which take that number of cycles. However, as Figure 2(a) shows, many values can have the same latency. Our second optimization further shrinks the search space.
**2. Sequential values.** In Algorithm 2, the inputs to the division operation are \(rand()\) and \(i+1\). We cannot know \(rand()\) as it is the output of a TRNG. However, as \(i\) goes from \(N-1\) to \(1\), we know the value of \(i+1\) during each iteration.4 We can learn \(N\) by analyzing the traces shown in Figure 2 as software shuffling does not obscure the number of items being shuffled. We now know the divisor (\(b\), which is \(i+1\)) during each division operation. We only need to compare the trace against the stored traces where the divisor is \(i+1\), which further reduces the number of comparisons needed.
Footnote 4: Some implementations of Fisher Yates access items from index 0 to \(N-1\). Our approach still applies as elements are accessed sequentially.
**Training a classifier.** With the first two optimizations having reduced the number of comparisons needed, we train decision tree classifiers to predict \(a\), given \(b\) and \(t_{div}\). We train our classifiers using SciPy version 1.9.0, using the _gini_ criterion. We train a separate classifier for each value of \(t_{div}\) and \(b\). By narrowing the range of values that each classifier must predict, we obtain smaller and more accurate classifiers. This allows
Fig. 3: Analysis for side-channel attack against software division.
the classifier to predict the value of \(a\) with 100% accuracy.
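A sketch of the per-\((b,t_{div})\) classifier training is shown below, using scikit-learn's DecisionTreeClassifier with the gini criterion; the feature layout (raw power samples of the division segment) and the synthetic stand-in data are assumptions made for illustration, and are not necessarily the exact tooling or data used in our experiments.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_dividend_classifier(segments, dividends):
    """segments: (n_traces, n_samples) power samples of one division
    operation, all sharing the same divisor b and latency t_div;
    dividends: (n_traces,) the known value of a for each profiling trace."""
    clf = DecisionTreeClassifier(criterion="gini")
    clf.fit(segments, dividends)
    return clf

# Profiling phase with synthetic stand-in data, then prediction on one trace.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # stand-in power samples
y = rng.integers(1, 200, size=200)      # stand-in dividend values
clf = train_dividend_classifier(X, y)
a_guess = clf.predict(X[:1])[0]
print(a_guess)
```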
Note that we get \(a\) and \(b\) from **a single trace**. CSI NN requires multiple traces because the multiplication operation being targeted is a single-cycle operation. However, as software division takes many cycles, our classifier has many data points it can use, which allows our attack to work with a single trace.
### _Putting it all together_
The target of our attack is the modulus operation (Line 3 in Algorithm 2). We wish to learn the value of \(j\) so we know the inputs to the \(swap()\) function (Line 4). Using our attack, we find the output of \(rand()\) and we also know the value of \(i\). Knowing these values lets us determine the value of \(j\) for every iteration. We then collect multiple traces as outlined in Section III and then rearrange each trace based on the swapped indices. With the rearranged traces, we can carry out the power side-channel attack as before. With our attack, software shuffling offers **no security improvement** over the baseline. We therefore conclude from our attack that we require novel hardware which does not leak side channel information for shuffling.
## V Securing model weights with BlackJack
In this section, we describe BlackJack, hardware for efficient shuffling. We begin by outlining the main challenges associated with designing hardware for shuffling. We then provide an overview of our hardware, followed by a description of how software interfaces with our hardware.
### _Design challenges_
**Avoiding the memory bus.** We want to avoid the memory bus as it is the main source of information leakage. Therefore, we add BlackJack as a functional unit directly within the CPU. This also reduces the latency of our approach.
**Reducing latency.** Similar to software shuffling, we could also produce a shuffled list ahead of each loop. However, storing a list of arbitrary size \(N\) in hardware is challenging. We cannot store this list in memory as that would require using the memory bus and would be subject to leaking information. Alternatively, we could use a dedicated on-chip storage but sizing this to accommodate the large dimensions of neural networks would add considerable overhead. As we show in Section VI-C, our implementation supports \(16,384\) iterations for four loops. We use CACTI 7 [7] to determine that storing this many iterations would add \(61\)% area overhead to an ARM M0+ SoC [70]. Instead, we produce random iterations while the layer is running and store the next iteration value in a CPU-accessible register. The CPU reads from this register in a single cycle, thereby minimizing latency and storage overhead.
**Avoiding the modulus operation.** We must convert the TRNG output from a value in the range \([0,2^{l})\) to the range \([0,N)\), using a modulus operation. As we showed in Section IV, modulus (implemented as division) is susceptible to side-channel attacks. Therefore, we need a way to randomize the order of iterations without using a modulus operation. We now describe BlackJack, which addresses these challenges without incurring significant overheads.
```
1  load_bank(BANK0, M)
2  load_bank(BANK1, N)
3  for i = 0; i < M; i++ do
4      r_i = get_next_iteration(BANK0)
5      for j = 0; j < N; j++ do
6          r_j = get_next_iteration(BANK1)
7          sum[r_i] += input[r_j] * weight[r_i][r_j]
8      sum[r_i] += bias[r_i]
9      output[r_i] = actFunc(sum[r_i])
```
**Algorithm 4** Fully connected layer with BlackJack functions added.
### _High level overview_
Fig. 4: Hardware for counter-based shuffling.
To generate iterations in the range \([0,N)\) in a random order, without repetitions, we first split the total number of iterations into \(k\) 'bins'. Each bin then represents a subset of the total number of iterations that must be run. For example, for a single neuron with ten weights, we split them into two bins: bin 0 for iterations 0-4 and bin 1 for iterations 5-9. To start with, each bin is set to its minimum value (i.e., 0 and 5). To pick an iteration to run, we pick one of the two bins and output the value currently held in that bin. Next, the value in that bin is incremented, ensuring a unique output each time. The process repeats ten times to output ten total iterations, with one of the two bins picked randomly each time. In essence, BlackJack converts the problem of selecting an iteration in the range \([0,N)\) to picking from a much smaller number of bins. By restricting the number of bins to always be a power of 2, _we can directly use the output of the TRNG without requiring a modulus operation._ Next, we quantify the total number of possible permutations when using BlackJack.
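Before doing so, the following small C program (an illustration only; in BlackJack the equivalent logic is implemented with the registers of Figure 4, not in software) models the bin-based selection for the ten-weight example above:

```
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define N 10   /* total iterations (weights of one neuron) */
#define K 2    /* number of bins; always a power of 2      */

/* Software model of one bank: each bin holds a current and a maximum
 * counter. Picking a bin outputs its current value and increments it;
 * saturated bins are skipped by moving to the next allowable bin,
 * mimicking the round robin arbiter described below.                  */
typedef struct { uint32_t cur, max; } bin_t;

static uint32_t next_iteration(bin_t bins[K]) {
    uint32_t b = (uint32_t)rand() & (K - 1);   /* TRNG output, no modulus */
    while (bins[b].cur > bins[b].max)          /* bin exhausted? */
        b = (b + 1) & (K - 1);
    return bins[b].cur++;
}

int main(void) {
    bin_t bins[K];
    uint32_t a = (N + K - 1) / K;              /* iterations per bin */
    for (uint32_t i = 0; i < K; i++) {
        bins[i].cur = i * a;
        bins[i].max = (i == K - 1) ? N - 1 : (i + 1) * a - 1;
    }
    for (int n = 0; n < N; n++)
        printf("%u ", next_iteration(bins));   /* each of 0..9 exactly once */
    printf("\n");
    return 0;
}
```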
**Mathematical formulation.** When \(N\) is a multiple of \(k\), all bins will have the same number of iterations. But when \(N\) is not a multiple of \(k\), there will be one bin with fewer iterations. In this case, the first \(k-1\) bins will each have \(a\) iterations where \(a=\lceil N/k\rceil\), while the last bin will have \(b\) iterations, where \(b=N-(k-1)a\). If \(N\) is a multiple of \(k\) however, \(a=N/k\) and \(b=0\). Therefore, the total number of permutations, \(P\) is:
\[P=\begin{cases}N!/[(a!)^{k}],&\text{if N is a multiple of k}\\ N!/[(a!)^{k-1}\times b!],&\text{otherwise}\end{cases} \tag{2}\]
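For illustration, Equation 2 can be evaluated in log space with lgamma() so that the factorials do not overflow; the short snippet below (not part of BlackJack) reproduces the \(P=252\) example discussed next and shows how quickly \(P\) grows.

```
#include <stdio.h>
#include <math.h>

/* log10 of the number of permutations P from Equation 2. */
static double log10_P(unsigned N, unsigned k) {
    unsigned a = (N + k - 1) / k;        /* ceil(N/k)              */
    unsigned b = N - (k - 1) * a;        /* iterations in last bin */
    double lnP = lgamma(N + 1.0) - (k - 1) * lgamma(a + 1.0) - lgamma(b + 1.0);
    return lnP / log(10.0);
}

int main(void) {
    printf("P(10,2)   = %.0f\n", pow(10.0, log10_P(10, 2)));   /* 252 */
    printf("P(128,16) ~ 10^%.0f\n", log10_P(128, 16));         /* well over 10^100 */
    return 0;
}
```

Note that when \(N\) is a multiple of \(k\), the second case of Equation 2 reduces to the first, so the single expression above covers both cases.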
For the case with a single register (\(k=1\)), we have \(a=N\) (i.e., \(N\) iterations all in one bin) and \(b=0\). This gives us just \(1\) possible order, which is the same as the baseline case. Using \(N\) registers per set (i.e., \(k=N\)), with \(a=1\) (i.e., 1 bin per iteration) and \(b=0\), gives us the maximum possible \(N!\) permutations. Using Equation 2 for our example above (\(N=10\) and \(k=2\)), we get \(P=252\) possible orderings. For larger sizes of \(N\) and \(k\), \(P\) quickly grows into the millions, which effectively randomizes the sequence. In Section VI-B, we show how such huge values of \(P\) make the attack take intractable lengths of time. We now describe our hardware implementation of these bins for tracking iterations.
### _Hardware overview_
Figure 4 shows an overview of BlackJack. We use a set of \(k\) registers to keep track of the value of each bin. In our example above, we would use two _current count_ registers, to track the current value of each bin. These two current count registers are initially loaded with the values 0 and 5, respectively. We use a TRNG to pick a current count register to output the next iteration. As registers are picked, their values are incremented each time. However, once a current count register reaches its maximum value (i.e., 4 or 9 in our example), we must disallow it from being run again. To keep track of the maximum values for each current count register, we use another set of _max count_ registers.
As current count registers begin to saturate, the output from the TRNG may pick disallowed registers. To quickly pick another valid register to run, we employ a combinational Round Robin Arbiter (RRA). The RRA keeps track of all current count registers, using a single bit set to '1' per register, indicating that this current count register still has iterations that can be run. The output of the TRNG is fed to the RRA to pick a current count register. If the corresponding RRA bit is '1', that register's value is output. Next, we compare the value in that current count register with its corresponding max count register. If the max value has not been reached, the current count register is incremented. This is performed using the 'compare and increment' (CAI) block in Figure 4. However, if a register has reached its maximum value, the CAI block sets the bit corresponding to that register in the RRA to '0', via the 'disallow register' signal. If a disallowed register is later picked, the RRA outputs the closest allowable register to be run instead. The RRA is purely combinational and therefore returns a valid register in a single cycle each time.6 The number of registers per set is a parameter that can be configured at design time. We explore both the frequency of our design and the number of registers per set we use in Section VI. The larger the number of registers, the more security our design provides, but at the cost of increased area. To balance security and added area, we opt for \(16\) registers per set.
Footnote 6: This avoids repeatedly querying the TRNG for a valid register which would be time consuming.
**Hardware banks.** The hardware shown in Figure 4 generates random iterations for a single loop. However, neural network layers are implemented as a series of nested loops. We therefore use one copy of the hardware in Figure 4 per loop that we wish to randomize. Each loop is associated with one bank, and we use multiplexors to pick which bank to use for each loop. We opt for a design which uses four banks, to balance security against area and latency overhead. We use two banks for fully connected layers. For convolutional layers, we shuffle four out of the six loops: the loops over input channels, output channels, input rows and input columns. Lastly, for max pooling layers, we use three banks to shuffle rows, columns and channels. Shuffling of these loops is achieved by means of additions we make to the code (shown using highlighted boxes in Algorithm 4). Next, we describe the purpose of these code additions.
### _System interface_
In this section, we show how BlackJack is controlled via software and the necessary extensions to support this.
**Code annotations.** We program BlackJack via two functions: \(load\_bank\) and \(get\_next\_iteration\). Algorithm 4 shows the code for a fully connected layer with our changes highlighted. The \(load\_bank\) function (lines 1 and 2) loads the registers in a specified bank, before running the loops. The \(get\_next\_iteration\) function (lines 4 and 6) queries the hardware for the next iteration from a given bank. The values returned from the \(get\_next\_iteration\) are stored in \(r\_i\) and \(r\_j\) and then used in the loops instead of the original loop iterators \(i\) and \(j\). The \(load\_bank\) and \(get\_next\_iteration\) functions are defined in a library that we provide. Our library implements these functions using custom ISA instructions, which we describe next.
**ISA extensions.** We add additional CPU instructions to interface with BlackJack (Figure 5). The first instruction, _SHFL_LD_, loads initial values to the current count and max count registers before each layer. The bits of the _SHFL_LD_ instruction are:
* [27:20] is an opcode that is unused in the baseline ARM ISA, which we repurpose for our instructions.
* [19:18] select the bank we want to access.
* [17] select the set (i.e., current/max count registers).
* [16:10] specify the register within the set.
* [9:0] are the value to be loaded into the selected register.
The second instruction, _SHFL_GNI_, returns the next iteration from one of the banks. This instruction format is:
* [31:18] are identical to the _SHFL_LD_ instruction.
* [17:4] are unused in this instruction.
* [3:0] specifies a CPU register for the result.
The \(SHFL\_LD\) instruction uses 10 bits for the register value, which allows each register to count up to 1024 iterations. We use 7 bits for the register select, which allows for designs with up to 128 registers per set. This instruction encoding therefore supports loops with up to 131,072 iterations. As this is much larger than networks run on an IoT device, this encoding does not limit the size of networks that our technique can support.7 Our technique does not impose any restriction on the number of layers nor the total number of weights a network can have.
Footnote 7: For example, the largest layer we run in Section V-E is an order of magnitude smaller than the maximum iterations supported by our encoding.
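As an illustration of this field layout, the helper below packs the SHFL_LD fields from Figure 5 into a 32-bit word. The opcode value itself (bits [27:20]) is not given in the text, so it is left as a placeholder, and bits [31:28] are not modeled.

```
#include <stdio.h>
#include <stdint.h>

/* Pack the SHFL_LD fields described in Figure 5 into a 32-bit word.
 * SHFL_OPCODE is a placeholder: the text only states that bits [27:20]
 * reuse an opcode unused by the baseline ARM ISA, without giving its
 * value.                                                               */
#define SHFL_OPCODE 0x00u

static uint32_t encode_shfl_ld(uint32_t bank,  /* [19:18] bank select     */
                               uint32_t set,   /* [17]    current/max set */
                               uint32_t reg,   /* [16:10] register in set */
                               uint32_t value) /* [9:0]   value to load   */
{
    return (SHFL_OPCODE & 0xFFu) << 20 |
           (bank  & 0x3u)  << 18 |
           (set   & 0x1u)  << 17 |
           (reg   & 0x7Fu) << 10 |
           (value & 0x3FFu);
}

int main(void) {
    /* e.g. load register 1 of the max-count set in bank 0 with the value 9 */
    printf("0x%08X\n", encode_shfl_ld(0, 1, 1, 9));
    return 0;
}
```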
Our library contains definitions for the \(load\_bank\) and \(get\_next\_iteration\) function calls.8 The \(load\_bank\) function calculates and loads (using _SHFL_LD_) the current count registers and max count registers. The \(load\_bank\) function is only called once per bank, before each layer. The overhead of \(load\_bank\) scales with the number of registers but is not affected by the size of the layers. Thus, layers of any size require the same number of instructions for the loading operation, which amortizes the overhead of \(load\_bank\).
Footnote 8: Our library can be integrated within other NN libraries such as STMicro’s STM32Cube.AI [83].
The \(get\_next\_iteration\) function performs a single register read and therefore adds just one _SHFL_GNI_ instruction to the program binary. However, as the _SHFL_GNI_ instruction is called for every single loop iteration, it is critical that we minimize the cycle count of that instruction, to reduce the overall latency impact. Next, we explain how BlackJack achieves this goal of minimizing the latency of the _SHFL_GNI_ instruction.
### _Hardware latency_
We design BlackJack to provide the next iteration number to the CPU with a one-cycle latency. To do this, we take advantage of the time between subsequent calls to the \(get\_next\_iteration\). In Algorithm 4, we first load BANK0 (line 1). As soon as BANK0 is loaded, the hardware begins selecting the next iteration for that bank. In the meantime, the CPU is loading values for BANK1 (line 2). Thus, we have several cycles to pick the next iteration for BANK0 before it is queried by the CPU.
Similarly, there are seven cycles between subsequent calls to the \(get\_next\_iteration\) function, even in the inner for loop (line 5). This is because the CPU must perform several operations (i.e., calculating the index of the next weight, loading that weight and associated input, performing a multiplication and addition) before requiring the next iteration number. This allows BlackJack to select the next iteration in time for the next request from the CPU. BlackJack takes a total of three cycles to generate the next iteration, which gives us several cycles of buffer before the next iteration is queried.
Once calculated, the next iteration value is stored in the 'next iteration' register in each bank until it is read by the CPU. The CPU can then read from this register using the _SHFL_GNI_ instruction in a single cycle. As soon as this register is read, the hardware begins selecting the next iteration to run for that bank. This allows us to minimize the latency of the \(get\_next\_iteration\) to a single cycle.
## VI Evaluation
In this section, we evaluate the effectiveness of BlackJack in preventing side-channel attacks. We begin by showing that BlackJack masks the side channel and secures neural networks against attackers. Next, we show how BlackJack greatly increases the time needed to collect enough traces to carry out the attack. We then report the runtime overhead of BlackJack on several representative neural networks and compare against the overhead of software shuffling. Finally, we quantify the area and latency overhead of BlackJack and explain how BlackJack does not leak side channel information.
### _Efficacy of hardware shuffling_
We first show how shuffling impacts the effectiveness of the power side-channel attack. Since our target CPU does not have shuffling hardware, we calculate a shuffled order of weight accesses for each run and load this into our CPU prior to trace collection. Thus the traces we collect have the weights accessed in a new shuffled order during each run. For robustness, we collect 200 traces for each run of the network, exceeding the 100 traces we need for the baseline attack.9 This ensures that merely increasing the number of traces required does not allow an attacker to circumvent our solution.
Footnote 9: Our experiments show that even increasing the number to 1000 traces does not change our results.
Fig. 5: Custom ISA instructions for shuffling hardware.
**Effect on \(\rho\).** First, we see how shuffling affects the Pearson correlation coefficient (\(\rho\)), which is the metric used by CPA to determine the most likely value of each weight. We demonstrate using a single run with 16 weights, where we vary the amount of shuffling from \(k=1\) (no shuffling) to \(k=16\) (full shuffling). Recall that \(k\) is the number of registers per set in our design. Figure 6(a) shows \(\rho\) (y-axis) as we analyze more traces (x-axis). For each \(k\) value, we show the average \(\rho\) of all 16 weights for two cases:
1. _Continuous line_: the weight with the highest \(\rho\) value (\(\rho_{max}\)), which is the weight guessed by the CPA attack
2. _Dashed line_: the \(\rho\) value of the correct weight (\(\rho_{correct}\)).
As we use the Pearson Correlation Coefficient, all weights start with a value of \(1\). But as we add more traces, we expect \(\rho_{correct}\) to stay high while the \(\rho\) values for incorrect weights settle to significantly lower values. This is precisely what we see in the no shuffling case, as the continuous and dashed lines overlap. This means that \(\rho_{max}=\rho_{correct}\) and that the attack can identify the correct weight values in just 30 traces. However, for cases with shuffling, as we analyze more traces, the attack always guesses an incorrect weight as the best guess (i.e., \(\rho_{max}\)). The correct weight (\(\rho_{correct}\)) is consistently lower, giving the attacker no means of identifying it as the correct weight.
Despite this, the incorrect guesses could still be numerically close to the correct weights. To study this, we use the weights recovered by the attack under shuffling for one of the networks we study in Section VI-C, namely _mnist-mlp_. With these recovered weights, the network achieves a classification accuracy of just 11.7%, compared to the original accuracy of 92.9%. Thus, the weights recovered with shuffling do not provide any useful information to the attacker to steal the network.
**Partial Guess Entropy (PGE).** In addition to looking at \(\rho\), we also look at how far away the guessed weight is from the correct weight. Recall that the CPA attack generates a list of possible weight guesses, ranked by \(\rho\). PGE [22] is the position of the correct weight in this list of guessed weights; a PGE of \(0\) means that the attack correctly guessed the weight. Figure 6(b) shows the average PGE values for 16 weights as we vary \(k\). As expected, with no shuffling, PGE reaches \(0\) with just 30 traces analyzed. However, with shuffling, PGE values remain high and do not move closer to \(0\), even with more traces. This shows that analyzing additional traces does not diminish the effectiveness of our technique. We also see that increasing \(k\) leads to an increase in the average PGE value. Past 150 traces, we see that the PGE values stabilize in order of \(k\), with \(k=2\) and \(k=16\) having the lowest and highest average PGE values, respectively. This shows that increasing \(k\) increases the security offered by our design. So far, we have used small values of \(N\) and \(k\) for clarity. We now quantify the impact of larger values of \(N\) and \(k\) on the time needed for a successful attack.
### _Effect on time needed for attack_
The increase in the total number of permutations is effective as a security measure, since it dramatically increases the time needed by the attacker to collect and process enough traces to find the correct weights. Table II shows the time it would take _(in years)_ for an attacker to gather enough traces and process them to recover the weights. We assume an attacker who can gather and process 1000 traces a second, which is similar to the speed of our setup. While the time is relatively short for small values (e.g., \(\sim 7\) days for \(N=32,k=2\)), this rapidly grows into decades and then centuries for larger values of \(N\) and \(k\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline k N & 32 & 64 & 128 \\ \hline
2 & 1.91E-02 & 5.81E+07 & 7.59E+26 \\ \hline
4 & 3.16E+06 & 2.10E+25 & 2.55E+63 \\ \hline
8 & 7.58E+13 & 5.76E+41 & 3.33E+98 \\ \hline
16 & 1.27E+20 & 3.32E+56 & 2.51E+131 \\ \hline
32 & 8.34E+24 & 9.37E+68 & 8.33E+160 \\ \hline
64 & - & 4.02E+78 & 6.63E+185 \\ \hline
128 & - & - & 1.22E+205 \\ \hline \end{tabular}
\end{table} TABLE II: Time needed (in years) to collect and process enough traces to carry out the attack for a single dimension, for different values of \(N\) (columns) and \(k\) (rows).
Fig. 6: Effect of different degrees of shuffling for 16 weights.
As the benchmarks we evaluate below contain thousands of weights, it would take thousands of years to reverse engineer even a single layer. We further note that the analysis above is for securing one single dimension, such as the weights of a single neuron. Thus, the time needed to reverse engineer a whole network would be cumulative, making it totally untenable to carry out the attack in a reasonable amount of time. This tremendous increase in time needed for the attack is the cornerstone of the security offered by BlackJack.
### _System evaluation_
In this section, we evaluate the overhead of shuffling using the networks listed in Table III. For each network, we list the layers, the activation function used per layer and the size of each layer. For FC layers, the size is given as input channels \(\times\) output channels, while for CONV layers it is given as kernel width \(\times\) kernel height \(\times\) input channels \(\times\) output channels. Also, each CONV layer is followed by a 2\(\times\)2 max pooling layer. The networks are written in C and compiled using ARM GCC 2019.4 compiler, with optimization set to -O3. For performance, we use Thumbulator [47], a cycle accurate simulator for the ARM M0+ CPU. All our networks use 16-bit fixed point values in Q4.11 format.10 For fully connected and max pooling layers, we shuffle the order of all loops. For convolutional layers, we shuffle input channels, the output channels, input rows and input columns.
Footnote 10: Our solution also applies to networks that use floating point, such as those shown in CSI NN.
**Benchmarks.** Our benchmarks cover typical networks run on IoT devices. **mnist-mlp** and **mnist-cnn** represent image recognition tasks, which are increasingly popular on IoT devices [61]. **kws-mlp** is an audio keyword spotting network for IoT devices [93]. **har-cnn** classifies users' activities based on accelerometer data [42]. **gesture-cnn** takes camera input and classifies the gesture performed to control an IoT system [92]. **ecg-ae** uses an autoencoder to detect anomalous readings from ECG data [50]. **seizure-svm** processes EKG data to identify the onset of a seizure, so that preventive action can be taken [80].

**Software shuffling.** For all the benchmarks we study, we see that software shuffling adds a significant overhead, up to 271%. For MLP networks, shuffling takes longer for larger layers, as the list of indices to be shuffled is longer. The overhead for _kws-mlp_ is lower compared to the _mnist-mlp_ network, as the former has smaller FC layers. For the CNN networks, benchmarks with more CONV layers have lower overhead. There are two reasons for this: 1) CONV layers have smaller indices, which makes shuffling faster, and 2) CONV layers require more computation than FC layers. For CONV layers, each weight kernel of size \(N\times N\) requires \(N^{2}\) multiply accumulate (MAC) operations, while FC layers require a single MAC operation per weight. This higher compute cost amortizes the high cost of software shuffling. However, networks with fewer CONV layers have very high overhead, as the first FC layer has a large number of neurons. Shuffling this large FC layer dominates the overhead of software shuffling. In contrast, prior work only shows an 18% overhead for software shuffling [12]. This low overhead is because they only test a very simple MLP network with 15, 10 and 10 neurons per layer. As we evaluate much larger networks, we see significantly higher overheads when using software shuffling.

**Hardware shuffling.** In contrast to software shuffling, the additional instructions needed for hardware shuffling add an average of just 0.56% latency overhead. The overhead is higher for the MLP networks as they consist solely of fully connected layers. As we shuffle both dimensions (i.e., neurons and weights per neuron) for FC layers, our technique adds more instructions, leading to greater overhead. We see lower overhead for CNN networks as they spend more time computing convolutional layers. Unlike software shuffling, the overhead of our technique does not scale with the size of layers.
**Impact of shuffling on accuracy.** Shuffling does not affect network accuracy, as all the operations are still performed, merely in a different order. In contrast, using the weights recovered by the attack when shuffling is used results in a significant loss of accuracy. For example, for mnist-mlp the weights recovered with \(k=16\), result in an accuracy of just 11.7%. It is important to note that shuffling the order of operations does not incur any additional latency due to cache non-locality. Low-power IoT CPUs such as the ARM M0+ do not use caches. Thus, all memory accesses take the same number of cycles to complete.
### _Area, frequency and power analysis_
As mentioned in Section V-C, we opt for a design with \(16\) registers per set (i.e., bins). The largest layer in our evaluation has \(5760\) neurons. We therefore use 10-bit registers, allowing us to support a maximum of 16,384 iterations. With this sizing in mind, we now explore the operating frequency and the area and power overheads of BlackJack.
We design BlackJack in Verilog and synthesize it using the Synopsys Design Compiler Version N-2017.09. As IoT devices are typically manufactured using older device technologies [84], we use TSMC's 65nm (nominal) process technology. For area and delay, we use Cadence Innovus v16.22-s071 and Mentor Graphics ModelSim SE 10.4c. BlackJack adds just 2.2% area to an ARM M0+ SoC manufactured in 65nm [70]. BlackJack has an \(F_{max}\) of 257.83MHz, which is much faster than the clock speed of IoT devices. Prior works use frequencies ranging from 10MHz to 50MHz for IoT devices used for ML applications [16, 20, 35, 41]. Thus BlackJack has no impact on the \(F_{max}\) of the overall system. We opt to run our CPU at 24MHz, matching prior work [48]. At this frequency, BlackJack incurs a 2.22% power overhead compared to an ARM M0+ CPU [82]. This is in contrast to software shuffling, which, on average, more than doubles the latency and therefore the energy cost of computation.
**TRNG.** We now quantify the randomness required by BlackJack. Our hardware runs at 24MHz and we use
16 registers per set. As we described in Section V-E, BlackJack produces a new value every 3 cycles. Therefore, we require \(24\times log_{2}(16)\div 3=32\) Mbits/s of randomness. To satisfy this requirement, we used a TRNG which provides up to 86 Mbits/s of randomness [75]. The TRNG adds an additional 0.26% area and 1.06% power overhead, which brings our total overhead to 2.46% area and 3.28% power.
### _Security of shuffling hardware_
We now explore whether BlackJack can leak any side channel information that an attacker could use to subvert our solution. We use a _formal verification_ based approach, which is highly effective in detecting possible side channel leaks. Formal verification has previously identified leaks in a hardware encryption implementation that had been deemed secure based on attacking captured traces [6].
We use the _CocoAlma_[43] tool, which takes a Verilog file as input and searches for possible side channel leaks. _CocoAlma_ checks for any variations in latency or power during operation which could potentially serve as a side channel leak. This tool also accounts for hardware leakage effects such as glitches. We analyze BlackJack using _CocoAlma_ and verify that there are no side channel leaks from BlackJack.
## VII Broader applicability of BlackJack
In this section, we outline how BlackJack can be used for more than just securing ML algorithms against power side-channel attacks. First, we describe two security-critical applications that can be secured using BlackJack. We then provide an overview of other types of attacks against neural networks running on IoT devices and describe how BlackJack can also effectively prevent these attacks.
### _Other applications_
**Elliptic curve cryptography (ECC).** ECC is a public-key cryptography scheme based on elliptic curves over finite fields [13]. ECC encodes keys as coefficients of polynomials. Prior work shows that ECC leaks side channel information, which can be used to recover private keys [19]. Attacks target the _elliptic curve multiplication (ECM)_ operation, commonly implemented using the 'double-and-add' method [53]. ECM takes a point and a secret scalar as inputs and loops over each bit of the scalar; if the bit is \(1\), ECM performs an _add_ operation. Thus, iterations which take longer have a \(1\) in that bit position. With BlackJack, we can shuffle the order in which bits are accessed each time, which prevents the attacker from learning which bits are \(1\). As ECC uses at least \(224\) bit keys [14], shuffling increases the number of possible permutations tremendously.
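To make the leaky structure concrete, the toy sketch below replaces the elliptic-curve group operations with plain integer arithmetic (so it compiles and runs), but keeps the same key-dependent control flow as a real double-and-add implementation: every '1' bit of the scalar triggers an extra add.

```
#include <stdio.h>
#include <stdint.h>

/* Toy stand-in for double-and-add scalar multiplication. The "double"
 * and "add" steps are integer operations here, but the key-dependent
 * branch is what a real ECM implementation leaks through power/timing. */
static uint64_t scalar_mul(uint64_t p, uint32_t key, int nbits) {
    uint64_t acc = 0;
    for (int i = nbits - 1; i >= 0; i--) {
        acc = 2 * acc;                 /* "double" every iteration         */
        if ((key >> i) & 1)
            acc += p;                  /* "add" only when the key bit is 1 */
    }
    return acc;                        /* equals key * p                   */
}

int main(void) {
    printf("%llu\n", (unsigned long long)scalar_mul(7, 0xB, 4));  /* 11 * 7 = 77 */
    return 0;
}
```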
**Biometric authentication.** An emerging use case for IoT devices is biometric authentication [39]. An example of this is a fingerprint recognition system, such as those commonly used in laptops. Prior work shows that such systems are susceptible to side channel attacks [29]. Specifically, the CPA attack (outlined in Section III) can be used to learn each user's stored fingerprint data [17]. The recognition system is implemented as a set of nested for loops, which can be shuffled using BlackJack to obscure this side channel.
### _Other attacks_
**Floating point timing attack.** The time taken by a floating point multiplication depends on its input values [40], which can be used to mount an attack. In the IEEE-754 32-bit floating point format, the smallest number using the normal representation is \(1.0\times 2^{-126}\). Numbers smaller than this are called subnormal; operations involving subnormal numbers take much longer than operations using only normal numbers. For example, on an x86 system, \((normal\times normal=subnormal)\) takes 124 cycles, while \(normal\times normal=normal\) takes only 10 cycles. During network inference, each \((input\times weight)\) operation has a specific \(input\) value which will cause the output to become subnormal. The attacker sweeps the \(input\) to find this threshold value and then uses it to learn the \(weight\). The attacker can then recover all the weights of the first layer and repeat the process for the other layers. While this attack is limited to networks that use floating point numbers, it requires less equipment as it relies on timing rather than power. However, this attack still requires each operation to occur in the same place in each trace, so that the attacker can try multiple \(input\) values to find the threshold \(input\) value. BlackJack prevents this attack by randomizing the order of operations, thereby preventing this iterative search.
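The threshold search this attack relies on can be sketched as follows; the weight value here is hypothetical, the sweep is a coarse power-of-two halving for brevity, and a real attack would measure execution time rather than call fpclassify().

```
#include <stdio.h>
#include <float.h>
#include <math.h>

/* Sketch of the threshold search: for a fixed (hypothetical) weight,
 * halve the input until the product becomes subnormal, i.e. the point
 * where the multiply suddenly takes far longer on many CPUs. The
 * threshold input reveals the weight, since threshold * weight is
 * roughly FLT_MIN. A real attack sweeps much more finely.             */
int main(void) {
    float weight = 0.37f;              /* secret value the attacker seeks */
    float input  = 1.0f;
    for (;;) {
        float prod = input * weight;
        if (fpclassify(prod) != FP_NORMAL)
            break;                     /* product just became subnormal   */
        input *= 0.5f;
    }
    float threshold = input * 2.0f;    /* last input with a normal product */
    printf("threshold = %g, weight estimate ~ %g (true %g)\n",
           threshold, FLT_MIN / threshold, weight);
    return 0;
}
```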
**Fault injection attacks.** The attacks discussed thus far have focused on stealing the model; in contrast, fault injection attacks cause the model to operate in an abnormal way [10, 11].
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Network} & \multirow{2}{*}{Architecture} & \multicolumn{2}{c|}{Overhead} \\ \cline{3-4} & & Software & Hardware \\ \hline mnist-mlp & F(768\(\times\)128), F(128\(\times\)10) & 75.82\% & 1.15\% \\ \hline kws-mlp & F(250\(\times\)144), F(144\(\times\)144), F(144\(\times\)10) & 59.10\% & 1.12\% \\ \hline mnist-cnn & C(3\(\times\)3\(\times\)1\(\times\)6), C(3\(\times\)3\(\times\)6\(\times\)6), F(150\(\times\)20), F(20\(\times\)10) & 39.90\% & 0.19\% \\ \hline har-cnn & C(2\(\times\)2\(\times\)1\(\times\)128), F(5632\(\times\)128), F(128\(\times\)128), F(128\(\times\)6) & 271.36\% & 0.21\% \\ \hline gesture-cnn & C(5\(\times\)5\(\times\)1\(\times\)32), C(3\(\times\)3\(\times\)32\(\times\)64), C(3\(\times\)3\(\times\)64), F(5760\(\times\)128), F(128\(\times\)10) & 100.06\% & 0.14\% \\ \hline ecg-ae & F(128\(\times\)1024), F(1024\(\times\)1024), F(1024\(\times\)140) & 75.06\% & 1.17\% \\ \hline seizure-svm & F(2854\(\times\)179) & 86.74\% & 0.58\% \\ \hline \multicolumn{3}{c}{Average} & 101.15\% & 0.56\% \\ \hline \end{tabular}
\end{table} TABLE III: List of networks evaluated, showing the architecture, and overheads (C-Convolutional and F-Fully connected layers).
For example, in a network used for 'chip-and-pin' verification, fault injection can be used to make a fraudulent transaction be classified as legitimate. Attackers inject 'faults' into the system while it is running the model, forcing it to mis-classify its inputs. Prior work shows a practical attack using lasers to inject faults [10]. To counteract such attacks, techniques have been proposed to detect faults [52, 36]. However, detection techniques incur high overheads and are not 100% accurate. To minimize the chance of detection, the attacker must inject as few faults as possible [34, 94]. Prior work shows that a mis-classification can be forced with just 4 injected faults [33]. However, the attacker must have full knowledge of the model to determine the exact points where faults must be injected. By shuffling the order of operations, BlackJack prevents the attacker from determining the exact location for fault injection. The attacker must therefore inject many more faults, significantly increasing the chance of detection.
## VIII Related work
In this section, we present related work on shuffling and on _masking_, another commonly used technique for preventing side-channel attacks. We also list prior works on securing machine learning in a broader context.
### _Shuffling_
Shuffling was first proposed as a technique to secure AES encryption against side channel attacks [64]. Most shuffling techniques target the \(16\) S-Box operations performed in AES [88]. We now detail prior works that perform shuffling in software and in hardware.
**Software.** One approach to shuffling the order of operations is to pick a random starting index each time. As this only requires calculating one random value, it adds significantly less overhead [65, 27, 68]. However, follow-on work shows that this approach does not significantly improve security, as it only results in \(N\) permutations instead of \(N!\) [88]. Another approach is to combine shuffling with inserting dummy instructions to further mis-align the recorded power traces [59]. However, this approach is challenging as the dummy instructions must appear genuine to the attacker, or else they can easily be removed from the trace before analysis.
Other approaches perform 'full shuffling' to secure the S-Box operation of AES running on a low-power CPU [9]. This is done by unrolling the loop which performs the \(16\) S-Box computations and running these steps in a random order. This technique would be impractical for neural networks due to the larger, arbitrary numbers of neurons and weights they contain.
**Hardware.** Shuffling in hardware has also been implemented for AES on FPGA [21, 76, 89]. Techniques that combine hardware and software to perform AES shuffling have also been proposed [31]. Adding hardware for shuffling has also been proposed for other encryption algorithms such as elliptic curve cryptography [14] and lattice-based cryptography [15]. These approaches are also restricted to shuffling \(2^{N}\) iterations, while BlackJack supports shuffling any number of iterations. Shuffler [90] and Morpheus [30] employ shuffling to protect against code reuse attacks, while we defend against side channel attacks.
**Shuffling for NNs.** Dubey et al. add shuffling to an accelerator for binary neural networks to defend against side-channel attacks [25]. However, they only shuffle the starting index, which leads to a significantly smaller number of permutations. They explicitly state that they do not perform full shuffling because the loop sizes are not powers of 2; this is precisely the problem solved by our approach.
### _Masking_
One popular technique to obfuscate side-channels is to _mask_ secret data by _splitting_ this data into several parts and operating on each part separately. Mathematically, the secret information \(s\) is split into \(d\) parts \(s_{1},s_{2},\ldots,s_{d}\) such that \(s_{1}\oplus s_{2}\oplus\ldots\oplus s_{d}=s\). The masking must be done such that any subset of fewer than \(d\) shares is statistically independent of \(s\). A simple way to achieve this is by picking \(s_{1},\ldots,s_{d-1}\) uniformly at random (the masks), and setting \(s_{d}=s\oplus s_{1}\oplus\ldots\oplus s_{d-1}\) (the masked variable). The masking is then said to be of order \(d-1\). However, a \((d-1)^{th}\) order masked implementation is susceptible to a \(d^{th}\) order attack, which analyses all \(d\) shares collectively to recover secret information.
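A minimal sketch of first-order Boolean masking (\(d=2\) shares) is shown below; rand() stands in for a proper random number generator.

```
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* First-order Boolean masking: s is split into a random mask s1 and a
 * masked value s2 = s ^ s1, so neither share alone depends on s, while
 * s1 ^ s2 recombines to the secret.                                    */
int main(void) {
    uint8_t s  = 0xA7;                  /* secret value                   */
    uint8_t s1 = (uint8_t)rand();       /* share 1: uniformly random mask */
    uint8_t s2 = s ^ s1;                /* share 2: masked value          */
    printf("s1=0x%02X s2=0x%02X s1^s2=0x%02X\n",
           (unsigned)s1, (unsigned)s2, (unsigned)(uint8_t)(s1 ^ s2));
    return 0;
}
```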
Masking has been extensively studied for securing encryption algorithms such as AES [55], Saber [57] and Midori64 [32]. Masking has also been applied to CPUs to prevent side-channel attacks, but incurs a \(141\times\) latency overhead [4]. Similar to our approach, Dubey et al. modify a RISC-V CPU to mask operations during network inference, but add \(2\times\) latency overhead [26], compared to just 0.56% added by BlackJack.
Techniques to secure neural network accelerators by masking have also been proposed, although these techniques impose significant latency (up to 2.8\(\times\)) and area (up to 5.9\(\times\)) overheads [23, 24]. Maji et al. propose a masking-based neural network accelerator to prevent power side-channel attacks, which adds \(1.4\times\) latency and \(1.64\times\) area overhead and only targets fully connected layers [63]. In contrast, BlackJack adds just 0.56% latency and 2.46% area overhead to an ARM M0+ SoC.
### _Machine learning security._
Prior work has shown attacks that steal networks deployed on cloud services such as AWS or Google Cloud [81, 28]. These attacks require the attacker to repeatedly query an online network to train their own network to match its accuracy. However, unlike the attacks we counter, these attacks require access to the same training data as the online model.
Attacks against ML algorithms using cache side channels have also been proposed [91, 49]. These attacks leverage the difference in cache access timing to infer information. However, as IoT devices typically lack caches, such attacks do not apply to them. Similarly, the memory access pattern
of neural network accelerators has also been used as a side-channel to recover information [51]. The authors are able to reverse engineer the network architecture from the memory access patterns observed during inference. This attack does not apply to low-power embedded systems, where all memory accesses take the same number of cycles.
Another solution to obscure side channel leakage due to memory access patterns is oblivious RAM (ORAM) [38]. ORAM, however, cannot be used to hide power side channels, which are the focus of our work [18]. Also, ORAM imposes a large 100\(\times\) overhead, compared to just a few percent for our technique [2].
## IX Conclusion
We show that shuffling is an effective technique to prevent side channel attacks against neural networks running on IoT devices. We detail a new attack showing that software shuffling, as proposed in prior work, leaks information which can be used to obviate the security benefits of shuffling. To perform secure shuffling, we propose BlackJack, hardware added as a functional unit within the CPU. BlackJack uses a novel counter-based approach to efficiently shuffle the large, arbitrarily sized loops of neural network layers. BlackJack adds just 0.56% latency overhead, compared to over 100% in the case of software shuffling. We show that BlackJack effectively secures the weights of neural networks and can also be used to secure other applications. We also describe how BlackJack is effective at preventing other attacks such as floating point timing attacks and fault injection attacks. BlackJack adds just 2.46% area and 3.28% power overhead, without itself leaking any side channel information.
## Acknowledgements
We thank the members of NEJ group for their valuable feedback. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant RGPIN-2020-04179. This research was undertaken, in part, thanks to funding from the Canada Research Chairs program.
|
2306.14433
|
A data-driven framework for dimensionality reduction and causal
inference in climate fields
|
We propose a data-driven framework to simplify the description of
spatiotemporal climate variability into few entities and their causal linkages.
Given a high-dimensional climate field, the methodology first reduces its
dimensionality into a set of regionally constrained patterns. Time-dependent
causal links are then inferred in the interventional sense through the
fluctuation-response formalism, as shown in Baldovin et al. (2020). These two
steps allow to explore how regional climate variability can influence remote
locations. To distinguish between true and spurious responses, we propose a
novel analytical null model for the fluctuation-dissipation relation, therefore
allowing for uncertainty estimation at a given confidence level. Finally, we
select a set of metrics to summarize the results, offering a useful and
simplified approach to explore climate dynamics. We showcase the methodology on
the monthly sea surface temperature field at global scale. We demonstrate the
usefulness of the proposed framework by studying few individual links as well
as "link maps", visualizing the cumulative degree of causation between a given
region and the whole system. Finally, each pattern is ranked in terms of its
"causal strength", quantifying its relative ability to influence the system's
dynamics. We argue that the methodology allows to explore and characterize
causal relationships in high-dimensional spatiotemporal fields in a rigorous
and interpretable way.
|
Fabrizio Falasca, Pavel Perezhogin, Laure Zanna
|
2023-06-26T06:00:26Z
|
http://arxiv.org/abs/2306.14433v3
|
# Causal inference in spatiotemporal climate fields through linear response theory
###### Abstract
The Earth's climate is a complex and high-dimensional dynamical system. At large scale, its variability is dominated by recurrent patterns interacting with each other on a vast range of spatial and temporal scales. Identifying such patterns and their linkages offers a powerful strategy to simplify, study and understand climate dynamics. We propose a data-driven framework to first reduce the dimensionality of a spatiotemporal climate field into a set of _regional_ modes and then infer their time-dependent causal links. Causality is inferred through the fluctuation-response formalism, as shown in Baldovin et al. (2020) [1]. The framework allows us to estimate how a spatiotemporal system would respond to local _external_ perturbations, therefore inferring causal links in the interventional sense. We showcase the methodology on the sea surface temperature field in two cases with different dynamics: weekly variability in the tropical Pacific and monthly variability over the entire globe. In both cases, we demonstrate the usefulness of the methodology by studying a few individual links as well as "link maps", visualizing the cumulative degree of causation between a given region and the whole system. Finally, each climate mode is ranked in terms of its "causal strength", quantifying its relative ability to influence the system's dynamics. We argue that the methodology allows us to explore and characterize causal relationships in high-dimensional spatiotemporal fields in a rigorous and physical way.
###### Contents
* I Introduction
* II Framework: causality, dimensionality reduction and climate fields
* II.1 Linear response theory and fluctuation-dissipation relation
* II.1.1 General case
* II.1.2 Linear systems and quasi-Gaussian approximation
* II.2 A _null model_ for fluctuation-dissipation relation
* II.2.1 Confidence bounds of the response matrix: numerical estimation
* II.2.2 Confidence bounds of the response matrix: analytical derivation
* II.3 A simple example
* II.4 Metrics
* II.5 Climate fields and dimensionality reduction
* II.5.1 Complex networks and community detection
* III Data
* IV Causality in climate fields
* IV.1 Applicability of fluctuation-response theory in climate studies
* IV.2 Tropical Pacific dynamics
* IV.2.1 Causal inference at the grid level
* IV.2.2 Dimensionality reduction and causal inference
* IV.3 Global sea surface temperature dynamics
* IV.3.1 Dimensionality reduction and causal inference
* IV.3.2 Investigation of few causal interactions
* V Conclusions and discussion
* Acknowledgments
* Code availability
* A Expected value and variance of the response estimator
* A.1 Computation of each summation
* A.2 Final result
* B Histograms of each mode \(x_{i}(t)\) in the tropical Pacific SST field
* C Histograms of each mode \(x_{i}(t)\) in the global SST field
* D Causal strength and link maps up to \(\tau_{max}=10\) years
## I Introduction
The Earth's climate is a complex dynamical system composed of many interacting components, such as the atmosphere and hydrosphere, and of their interactions [2]. Such linkages give rise to nontrivial feedbacks, generating self-sustained spatiotemporal patterns [3; 4]. An example is the El Nino Southern Oscillation (ENSO),
|
2307.03266
|
Empirical Analysis of a Segmentation Foundation Model in Prostate
Imaging
|
Most state-of-the-art techniques for medical image segmentation rely on
deep-learning models. These models, however, are often trained on
narrowly-defined tasks in a supervised fashion, which requires expensive
labeled datasets. Recent advances in several machine learning domains, such as
natural language generation have demonstrated the feasibility and utility of
building foundation models that can be customized for various downstream tasks
with little to no labeled data. This likely represents a paradigm shift for
medical imaging, where we expect that foundation models may shape the future of
the field. In this paper, we consider a recently developed foundation model for
medical image segmentation, UniverSeg. We conduct an empirical evaluation study
in the context of prostate imaging and compare it against the conventional
approach of training a task-specific segmentation model. Our results and
discussion highlight several important factors that will likely be important in
the development and adoption of foundation models for medical image
segmentation.
|
Heejong Kim, Victor Ion Butoi, Adrian V. Dalca, Daniel J. A. Margolis, Mert R. Sabuncu
|
2023-07-06T20:00:52Z
|
http://arxiv.org/abs/2307.03266v3
|
# Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging
###### Abstract
Most state-of-the-art techniques for medical image segmentation rely on deep-learning models. These models, however, are often trained on narrowly-defined tasks in a supervised fashion, which requires expensive labeled datasets. Recent advances in several machine learning domains, such as natural language generation have demonstrated the feasibility and utility of building foundation models that can be customized for various downstream tasks with little to no labeled data. This likely represents a paradigm shift for medical imaging, where we expect that foundation models may shape the future of the field. In this paper, we consider a recently developed foundation model for medical image segmentation, UniverSeg [6]. We conduct an empirical evaluation study in the context of prostate imaging and compare it against the conventional approach of training a task-specific segmentation model. Our results and discussion highlight several important factors that will likely be important in the development and adoption of foundation models for medical image segmentation.
Keywords:Foundation model Medical Image Segmentation Prostate MRI In-context Learning
## 1 Introduction
Foundation models (FMs) are general-purpose models trained on extensive amounts of data, typically in a self-supervised fashion [4]. These pre-trained models can serve as the 'foundation' from which to adapt to various downstream tasks with minimal or no supervision. From BERT [11] to GPT-4 [25], FMs have fueled ground-breaking advances in natural language tasks. The success of large language models inspired applications to different domains such as speech [1, 26], robotics [5, 31], and vision [20, 37].
Classical methods for medical image segmentation (MIS) implement carefully-customized pipelines (e.g., FreeSurfer [14]). Pipelines might include pre-selecting
images that include the region of interest (ROI), preprocessing the images to reduce artifacts and/or noise, and applying image-processing algorithms like thresholding and deformable-templates, with empirically chosen parameters. The introduction of deep learning models simplified and improved the performance of automatic segmentation tools [19, 27]. In deep learning, the common approach involves curating a set of labeled images and training a task-specific model on these data. These models can be brittle and not generalize well to new datasets. Moreover, they demand the creation of a relatively large labeled training set for each task. Importantly, training for each task often requires significant computational resources and expertise. Recent studies have proposed data augmentation and synthesis methods to address these problems but they are still early stage [3, 34].
Recently, several FMs for image segmentation tasks have been proposed. These include the Segment Anything Model (SAM) and Segment everything everywhere all at once model (SEEM), which demonstrate great performance in a variety of interactive segmentation tasks in natural images [20, 37]. Unlike task-specific models, these FMs are trained with prompt inputs like points and boxes that guide the segmentation tasks. Once trained, these methods solve new tasks without updating their weights (Figure 1). Another recent FM, UniverSeg [6], is specifically designed to generally solve _medical_ image segmentation tasks. The "prompt" for UniverSeg is a set of image-label pairs, also called a support set. The support set precisely defines the segmentation task. As one of the first FMs developed for medical image segmentation, UniverSeg demonstrated promising performance using limited number of image-label pairs compared to few-shot baseline methods.
A FM for MIS offers several benefits. This approach can minimize the need for labeled data, which can represent a significant reduction in cost for developing automatic segmentation tools. Since these models leverage commonalities across different annotation tasks, adapting a FM to a new task can be made to be computationally efficient and reduce the computational burden for creating task-specific solutions. Finally, adapting FMs to specific tasks can be made easy and user-friendly, which will help lower barriers for clinical practitioners to build on these technologies.
Although promising, studies have shown the limitations of the SAM FM for MIS tasks [8, 10, 16, 17, 18, 23, 24, 29, 35]. The inferior performance of SAM on MIS tasks is often attributed to the fact that SAM was trained with natural images. Some works propose possible remedies, such as prompt-engineering [30, 32] and fine-tuning [15, 22, 33] to improve the performance. In this paper, we report the potential and limitations of an MIS-specific FM, UniverSeg, by evaluating it for prostate MRI segmentation.
## 2 Related Works
### UniverSeg
UniverSeg [6] is a FM for MIS tasks that uses support sets of image-label pairs as a prompt to define new tasks. The architecture employs a Cross-Block mechanism leveraging information from the query image and support sets by averaging the feature maps. UniverSeg was built using MegaMedical, which contains 53 open-access medical segmentation datasets comprising over 22,000 scans, and achieves strong performance when generalizing to held-out datasets with unseen anatomies and tasks.
### Prostate MR Segmentation
Prostate MR scans have been increasingly acquired as an initial diagnostic tool. The ROI labels are manually segmented for the clinical workflow, for example, biopsy guidance, and surgical/treatment planning. High-quality segmentation labels can be beneficial but the label generation is time-consuming and demands expertise. Thus, automatic segmentation tools can have a large clinical impact.
## 3 Experiments
### Datasets
We consider three anatomical ROIs in the prostate that are defined in two datasets. For each dataset, we created five sets of support/test splits. Since obtaining high-quality ground-truth labels is a significant bottleneck for real-world MIS problems, we focus on the limited sample size scenario. We created support sets with randomly selected N=1, 2, 5, and 10 cases, while the other cases were used as test set. Since each training case is a 3D volume, we extracted 2D slices from these volumes to create the support or training sets. Unless specified otherwise, we used 2D slices that contained the ROI. All slices are resized to \(128\times 128\) and intensities are normalized to [0, 1].
Figure 1: **Traditional Approach vs. Foundational Model Approach.** Traditional segmentation models like nnUNet are trained first to predict the new images. FMs like UniverSeg and SAM use a trained model for inference of a new task. Instead of retraining, prompts like support sets are used for UniverSeg and points and masks for SAM (Image modified from [6])
**Prostate Gland Segmentation.** We used our in-house prostate MRI dataset (Prostate-Gland) for prostate gland segmentation, amounting to 859 anonymized MRI scans. T2-weighted prostate MRI scans are acquired as part of prostate cancer diagnosis.
**Transitional and Peripheral Zone Segmentation.** We used the publicly available zonal anatomy segmentation labels of 204 patients [9]. The transitional zone (TZ) and peripheral zone (PZ) labels are from the training dataset of the PROSTATEx challenge [21] and annotated by expert radiologists, with rigorous quality assessment [9]. We present two sets of results corresponding to two different labels: PROSTATEx-TZ and PROSTATEx-PZ.
### UniverSeg Inference
One of the crucial limitations of existing FMs for segmentation, including UniverSeg [6], is that they are all trained in 2D. However, most medical image segmentation tasks are in 3D, and the ROIs can be present in only a small portion of the entire volume. Thus, many 2D slices will not contain the segmentation label. Regular prompt-based FMs like SAM [20] struggle with this, as they are expected to return a non-zero result for a given query and prompt. Although UniverSeg is trained using 2D slices containing the label, UniverSeg can use images with missing ROIs in the support set, which can be critical for 3D segmentation tasks. Following the original paper, in all our experiments, we set the maximum support set size \(S\) to 64 2D image-label pairs. Furthermore, as previously demonstrated, the quality of the result obtained with UniverSeg heavily depends on the quality of the provided support set [6]. In our experiments, we implement different support set selection strategies, described below.
**Slice-index-aware Support Set Selection.** The anatomical field-of-view along the z-axis of prostate MR images is roughly similar across subjects. We leveraged this to implement a support set selection strategy that relies on the slice index \(Z\) of the query image. For a given query image \(I_{q}\), we computed weights for each of the available labeled slices \(I_{t}\) as follows: \(1/(|Z_{I_{t}}-Z_{I_{q}}|+1)\), where \(Z_{I}\) denotes the slice index in image \(I\). Then we randomly selected \(S\) annotated slices with a probability proportional to the pre-computed weights. This is our default support set selection strategy, which was used for the main results.
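For concreteness, the sketch below (illustrative only; the slice indices and query index are made-up values) draws one support slice with probability proportional to \(1/(|Z_{I_{t}}-Z_{I_{q}}|+1)\); selecting \(S\) slices would repeat this draw, removing the chosen slice each time.

```
#include <stdio.h>
#include <stdlib.h>

/* Weighted draw of one support slice: slice t gets weight
 * 1 / (|Z_t - Z_q| + 1), favouring slices near the query's z-index. */
static int pick_support_slice(const int *z, int n, int z_query) {
    double w[n], total = 0.0;
    for (int t = 0; t < n; t++) {
        w[t] = 1.0 / (abs(z[t] - z_query) + 1);
        total += w[t];
    }
    double r = total * rand() / (double)RAND_MAX, acc = 0.0;
    for (int t = 0; t < n; t++) {
        acc += w[t];
        if (r <= acc) return t;
    }
    return n - 1;
}

int main(void) {
    int z[] = {0, 3, 5, 8, 12};        /* z-indices of labeled slices */
    printf("picked slice %d\n", pick_support_slice(z, 5, 6));
    return 0;
}
```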
**Random Support Set Selection.** As an ablation, we ignore the z-index and randomly draw \(S\) support images from available labeled slices, where each of these images has the same (uniform) probability.
These support set selection techniques can be restricted to slices where the ROI is present ("ROI-inclusive"), or can consider all possible slices in the training volumes (i.e., be agnostic to whether the ROI is present or absent in the slice, which we refer to as "ROI-agnostic"). Because UniverSeg was trained with only "ROI-inclusive" slices, comparing the result with "ROI-agnostic" can serve as a good stress test of the released tool.
### nnUNet
As the baseline, we used the (2D) nnUNet, which trains the model from a random initialization on the given labeled data using heavy data augmentation, automatic network configuration, and ensembling (nnUNet-original) [19]. The nnUNet model is widely considered state-of-the-art for a wide range of task-specific segmentation tasks. For further comparison, we trained and tested the nnUNet model with a smaller network capacity that is similar to the size of the UniverSeg model, which we refer to as nnUNet-small (See Appendix for the details).
### Empirical Evaluation
Because high-performance machines are often unavailable in clinical and medical-research settings, understanding the required computational resources is important to utilize deep learning models for clinical use. As many FMs for segmentation are based on Vision Transformer [13] trained with large datasets, they involve a large number of parameters. Also, compared to classification problems, MIS models often involve higher memory requirements. We performed computational resource analysis on nnUNet and UniverSeg by comparing the number of parameters, training, and inference time.
As the main performance metric, we used the Dice score [12] that quantifies the overlap between an automatic and ground-truth segmentation, and is widely used in the field. We compare UniverSeg with nnUNet models, when different number (\(N\)) of training cases are available. We performed ablation studies to understand where the performance improvement occurs for the UniverSeg and nnUNet models. We compute Dice both in 2D and in 3D. The 2D Dice results are presented only for slices that contain the ROI, and aggregated over all slices in the test subjects. For these results, we implemented the ROI-inclusive support set strategy. We also present 3D Dice values, which are computed based on the volumetric overlap in each test subject, which is in turn averaged across subjects.
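For reference, the Dice score between a predicted and a ground-truth binary mask is \(2|A\cap B|/(|A|+|B|)\); a minimal implementation over flattened masks is sketched below.

```
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Dice score between two binary masks: 2*|A ∩ B| / (|A| + |B|). */
static double dice(const uint8_t *pred, const uint8_t *gt, size_t n) {
    size_t inter = 0, a = 0, b = 0;
    for (size_t i = 0; i < n; i++) {
        inter += pred[i] & gt[i];
        a += pred[i];
        b += gt[i];
    }
    return (a + b) ? 2.0 * inter / (double)(a + b) : 1.0;
}

int main(void) {
    uint8_t p[] = {1, 1, 0, 0, 1}, g[] = {1, 0, 0, 1, 1};
    printf("Dice = %.3f\n", dice(p, g, 5));   /* 2*2/(3+3) = 0.667 */
    return 0;
}
```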
## 4 Results
### Computational Resource
Table 1 shows the computational resources needed for nnUNet and UniverSeg. UniverSeg has a much smaller number of parameters and faster inference runtime. Importantly, UniverSeg does not require task-specific training, saving substantial computation and obviating the need for a GPU. This substantial saving makes it more applicable to clinical and clinical-research settings. nnUNet implements five-fold cross-validation, which it in turn uses to ensemble five models. This means that for each nnUNet, we store five models and run five inferences. For nnUNet-orig, the automatic configuration in our experiment yielded models with 20.6M parameters, which is 100 times larger than UniverSeg (1.2M). Our nnUNet-small implementation had 1.3M learnable parameters, yet we emphasize that ensembling over cross-validation runs means that the memory footprint of nnUNet-small is about five times that of UniverSeg. While the inference time for the nnUNet models will not depend on the training set size (\(N\)), UniverSeg's will, since we need to ensemble over various support sets when \(N>2\) for better performance. However, the support set size does not affect the number of parameters, as the Cross-Block of UniverSeg averages the representations of the interaction between query and support sets at each step in the network.
### Segmentation Performance
We first analyzed segmentation performance for 2D slices that contain the ROI. Table 2 and Figure 2 show quantitative and qualitative results. Models perform better when more training images are available. For Prostate-Gland segmentation, UniverSeg showed results overall comparable to the nnUNet models, particularly when compared with the size-matched version (nnUNet-small). Interestingly, UniverSeg achieved good performance given extremely limited annotated data, e.g., \(N=1\), outperforming the nnUNet models for all three tasks. The lower scores in TZ and PZ segmentation have been previously analyzed and are due to the small size and difficult shape of these ROIs. For example, prior zonal segmentation studies report varying scores ranging between 0.59 and 0.94, reflecting the difficulty and variability of the task [2, 7, 28, 36]. The nnUNet models outperform
\begin{table}
\begin{tabular}{c|c c c} \hline & nnUNet–orig. & nnUNet–small & UniverSeg \\ \hline \#Params & \(20.6\)\(\mathrm{M}\times 5\) folds & \(1.3\)\(\mathrm{M}\times 5\) folds & \(\mathbf{1.2}\)\(\mathbf{M}\) \\ Training time (ms) & \(1.6\times 10^{8}\) & \(1.2\times 10^{8}\) & – \\ Inference time (ms) & \(9.7\times 10^{3}\) & \(7.5\times 10^{3}\) & \(\mathbf{6.9}\times\mathbf{10^{2}}\) \\ \hline \end{tabular}
\end{table}
Table 1: Computational resource comparison. The values are averaged across ROIs and calculated for N=1 case for all methods. All models are tested on Nvidia TITAN Xp GPU (12 GB vRAM).
\begin{table}
\begin{tabular}{l l|c c c c} \hline ROI & Method & \(N=1\) & \(N=2\) & \(N=5\) & \(N=10\) \\ \hline \multirow{3}{*}{Prostate-Gland} & nnUNet-Orig & \(0.592\pm 0.088\) & \(0.714\pm 0.045\) & \(0.810\pm 0.007\) & \(0.817\pm 0.016\) \\ & nnUNet-Small & \(0.520\pm 0.076\) & \(0.698\pm 0.057\) & \(0.802\pm 0.008\) & \(0.808\pm 0.019\) \\ & UniverSeg & \(\mathbf{0.711\pm 0.008}\) & \(\mathbf{0.769\pm 0.009}\) & \(0.780\pm 0.003\) & \(0.802\pm 0.005\) \\ \hline \multirow{3}{*}{PROSTATEx-TZ} & nnUNet-Orig & \(0.614\pm 0.049\) & \(0.764\pm 0.034\) & \(0.803\pm 0.006\) & \(0.821\pm 0.010\) \\ & nnUNet-Small & \(0.599\pm 0.066\) & \(0.759\pm 0.033\) & \(0.800\pm 0.006\) & \(0.814\pm 0.011\) \\ & UniverSeg & \(\mathbf{0.632\pm 0.046}\) & \(0.717\pm 0.010\) & \(0.743\pm 0.012\) & \(0.754\pm 0.015\) \\ \hline \multirow{3}{*}{PROSTATEx-PZ} & nnUNet-Orig & \(0.368\pm 0.111\) & \(0.589\pm 0.041\) & \(0.644\pm 0.042\) & \(0.706\pm 0.018\) \\ & nnUNet-Small & \(0.333\pm 0.122\) & \(0.572\pm 0.048\) & \(0.633\pm 0.049\) & \(0.699\pm 0.016\) \\ \cline{1-1} & UniverSeg & \(\mathbf{0.478\pm 0.056}\) & \(0.570\pm 0.014\) & \(\mathbf{0.647\pm 0.018}\) & \(0.673\pm 0.015\) \\ \hline \end{tabular}
\end{table}
Table 2: 2D Dice scores for UniverSeg and nnUNet models. The scores are averaged across 5 support/test splits.
UniverSeg in TZ segmentation when \(N=5\) and \(N=10\) annotated examples are available. This difference is smaller for PZ and only becomes significant at \(N=10\). It is important to note that the nnUNet models use test-time augmentation while UniverSeg does not; applying a similar strategy may improve the UniverSeg performance.
Table 3 shows 3D Dice score values and compares the two support set selection methods. We observe that the ROI-agnostic support selection method, which includes slices that are missing the ROI, achieves significantly better results. This is because, in 3D, there will be many slices that do not include the ROI, and if all support examples include the ROI, the model will likely produce false positive labels for these slices. This highlights the importance of considering the possibility that the query image might lack the ROI.
**Ablation.** We conducted ablation studies for both the UniverSeg and nnUNet models to assess the impact of model configuration choices. The nnUNet with the default configuration includes ensembling and test-time augmentation: the predictions from the five cross-validation models are ensembled by averaging softmax probabilities, and test-time augmentation is applied by mirroring along all axes. As the post-processing step did not improve the accuracy on the validation sets, we did not post-process the predicted labels. We report the 2D Dice scores of the nnUNet models before ensembling and without test-time augmentation. For UniverSeg, we compared the different slice selection methods.
Figure 2: Representative results. UniverSeg results are comparable to the nnUNet baseline. When existing segmentation labels are limited, e.g., \(N=1\) and \(N=2\), UniverSeg shows superior performance to the nnUNet models (highlighted in yellow).
Table 4 shows the ablation results on the prostate gland segmentation task. Ensembling gave all models a boost. For the nnUNet models, test-time augmentation also slightly enhanced the scores. The results of the support set selection methods demonstrate the effect of support set quality. Ensembling 5 times with the slice-index-aware (z-weighted) selection method showed superior performance to using all images for the support set, for both \(N=5\) and \(N=10\). This, again, highlights the importance of the quality of support sets. The ablations for TZ and PZ achieved similar results (see Appendix Table 1).
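One plausible reading of the slice-index-aware ("z-weighted") selection is to favor support slices whose normalized through-plane position is close to that of the query slice. The sketch below encodes that reading with an exponential weighting; the kernel form and temperature are our own illustrative choices, not a specification of the actual implementation.

```python
import numpy as np

def z_weighted_support(support_pool, query_z_frac, support_size, temperature=0.1,
                       rng=None):
    """Sample support slices with probability decaying in |z_support - z_query|.

    support_pool: list of (image, label, z_frac) with z_frac = slice index / depth.
    """
    rng = rng or np.random.default_rng()
    z = np.array([s[2] for s in support_pool])
    w = np.exp(-np.abs(z - query_z_frac) / temperature)
    w = w / w.sum()
    idx = rng.choice(len(support_pool), size=min(support_size, len(support_pool)),
                     replace=False, p=w)
    return [support_pool[i][:2] for i in idx]
```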
## 5 Conclusion
Based on the successful employment of FMs in multiple domains, we believe FMs will instigate a paradigm shift for medical imaging. In this paper, we eval
\begin{table}
\begin{tabular}{l l|c c c c} \hline ROI & Method & \(N=1\) & \(N=2\) & \(N=5\) & \(N=10\) \\ \hline & w/o augmentation & \(0.590\pm 0.085\) & \(0.712\pm 0.046\) & \(0.809\pm 0.007\) & \(0.815\pm 0.016\) \\ & fold-1 & \(0.581\pm 0.086\) & \(0.681\pm 0.060\) & \(0.798\pm 0.011\) & \(0.808\pm 0.017\) \\ & fold-2 & \(0.564\pm 0.095\) & \(0.710\pm 0.039\) & \(0.797\pm 0.010\) & \(0.798\pm 0.023\) \\ nnUNet-Orig & fold-3 & \(0.590\pm 0.092\) & \(0.691\pm 0.044\) & \(0.795\pm 0.014\) & \(0.807\pm 0.025\) \\ & fold-4 & \(0.599\pm 0.088\) & \(0.708\pm 0.043\) & \(0.785\pm 0.006\) & \(0.804\pm 0.006\) \\ & fold-5 & \(0.553\pm 0.046\) & \(0.692\pm 0.046\) & \(0.790\pm 0.006\) & \(0.810\pm 0.008\) \\ & default & \(\mathbf{0.362\pm 0.088}\) & \(\mathbf{0.714\pm 0.045}\) & \(\mathbf{0.810\pm 0.007}\) & \(\mathbf{0.817\pm 0.016}\) \\ \hline & w/o augmentation & \(0.519\pm 0.072\) & \(0.696\pm 0.056\) & \(0.801\pm 0.007\) & \(0.807\pm 0.018\) \\ & fold-1 & \(0.537\pm 0.047\) & \(0.668\pm 0.074\) & \(0.784\pm 0.014\) & \(0.801\pm 0.021\) \\ & fold-2 & \(0.518\pm 0.068\) & \(0.686\pm 0.051\) & \(0.793\pm 0.012\) & \(0.792\pm 0.023\) \\ nnUNet-Small & fold-3 & \(0.512\pm 0.091\) & \(0.689\pm 0.057\) & \(0.784\pm 0.011\) & \(0.803\pm 0.011\) \\ & fold-4 & \(0.508\pm 0.076\) & \(0.705\pm 0.046\) & \(0.787\pm 0.015\) & \(0.792\pm 0.022\) \\ & fold-5 & \(0.530\pm 0.089\) & \(0.680\pm 0.045\) & \(0.782\pm 0.014\) & \(0.798\pm 0.020\) \\ & default & \(\mathbf{0.520\pm 0.076}\) & \(\mathbf{0.698\pm 0.057}\) & \(\mathbf{0.802\pm 0.008}\) & \(\mathbf{0.808\pm 0.019}\) \\ \hline & all & \(\mathbf{0.711\pm 0.008}\) & \(\mathbf{0.769\pm 0.009}\) & \(0.778\pm 0.006\) & \(0.799\pm 0.005\) \\ & random & – & – & \(0.772\pm 0.002\) & \(0.798\pm 0.005\) \\ UniverSeg & random+5 ensemble & – & – & \(0.779\pm 0.004\) & \(0.800\pm 0.006\) \\ & z-weighted & – & – & \(0.777\pm 0.002\) & \(0.798\pm 0.005\) \\ & z-weighted +5 ensemble & – & – & \(\mathbf{0.780\pm 0.003}\) & \(\mathbf{0.802\pm 0.005}\) \\ \hline Average \# of images available for support set & \(14.0\pm 2.1\) & \(31.4\pm 6.5\) & \(83.4\pm 2.9\) & \(148.0\pm 3.7\) \\ \hline \end{tabular}
\end{table}
Table 4: 2D Dice scores from the ablation study conducted for the prostate segmentation task.
\begin{table}
\begin{tabular}{l l|c c c} \hline Support Set Selection & N & Prostate & PROSTATEx-TZ & PROSTATEx-PZ \\ \hline & 1 & \(0.596\pm 0.047\) & \(0.610\pm 0.060\) & \(0.428\pm 0.070\) \\ ROI-agnostic & 2 & \(0.690\pm 0.035\) & \(0.706\pm 0.011\) & \(0.510\pm 0.031\) \\ & 5 & \(0.716\pm 0.006\) & \(0.740\pm 0.019\) & \(0.593\pm 0.014\) \\ & 10 & \(0.778\pm 0.006\) & \(0.751\pm 0.024\) & \(0.621\pm 0.009\) \\ \hline & 1 & \(0.481\pm 0.035\) & \(0.579\pm 0.066\) & \(0.349\pm 0.042\) \\ ROI-inclusive & 2 & \(0.488\pm 0.034\) & \(0.665\pm 0.009\) & \(0.393\pm 0.009\) \\ & 5 & \(0.513\pm 0.027\) & \(0.685\pm 0.016\) & \(0.487\pm 0.013\) \\ & 10 & \(0.543\pm 0.013\) & \(0.707\pm 0.019\) & \(0.493\pm 0.027\) \\ \hline \end{tabular}.
\end{table}
Table 3: 3D Dice scores for UniverSeg models with two different support set selection strategies
uated the FM for MIS, called UniverSeg, and discussed its performance and adaptability to prostate segmentation tasks.
As future directions, we see several limitations and opportunities for FMs in MIS. First, FMs for 3D MIS are needed and promise to be impactful. Much medical imaging data is acquired in 3D, while the existing FMs are based on 2D slices extracted from the 3D volumes. Previous studies have shown superior performance for models designed for 3D data compared to 2D. FMs like UniverSeg, where the model can account for images without ROI labels, should be further studied for 3D tasks. Second, the adaptation of FMs should be further studied. The prostate gland and TZ were comparably easier segmentation tasks than the PZ. Different approaches would include, but not be limited to, ensembling different models, e.g., ensembling nnUNet and UniverSeg results, prompt engineering, and finetuning. Third, clinical practitioners can easily adopt FMs into their workflows, as they obviate the need to fine-tune. For prostate MRI, some practitioners use an automated prostate gland segmentation tool from the software DynaCAD5. Even though the segmentation needs to be reviewed and edited, the software saves a lot of time over manual segmentation. An FM like UniverSeg can be used for various segmentation tasks even when limited labels are available.
Footnote 5: [https://www.usa.philips.com/healthcare/product/HC784029/dynacad-prostate](https://www.usa.philips.com/healthcare/product/HC784029/dynacad-prostate)
###### Acknowledgements.
This work was supported by NIH, United States grant R01AG053949 and 1R01AG064027, the NSF, United States NeuroNex grant 1707312, and the NSF, United States CAREER 1748377 grant.
|
2307.04509
|
First law of thermodynamics and entropy of FLRW universe in modified
gravity
|
We investigate the first law of thermodynamics and entropy associated to the
apparent horizon of (non-flat) FLRW space-time in different theories of
modified gravity and in the presence of a perfect fluid of matter. We pose our
attention on those theories which lead to second order differential field
equations on FLRW background. In this way, we observe that one may obtain a
formula for entropy in terms of the radius of the apparent horizon only. Thus,
when considering a modification to the area law of General Relativity, it is
possible to reconstruct the gravitational lagrangian consistent with the
corresponding first law.
|
Lorenzo Sebastiani
|
2023-07-10T12:06:38Z
|
http://arxiv.org/abs/2307.04509v1
|
# First law of thermodynamics and entropy of FLRW universe in modified gravity
###### Abstract
We investigate the first law of thermodynamics and entropy associated to the apparent horizon of (non-flat) FLRW space-time in different theories of modified gravity and in the presence of a perfect fluid of matter. We pose our attention on those theories which lead to second order differential field equations on FLRW background. In this way, we observe that one may obtain a formula for entropy in terms of the radius of the apparent horizon only. Thus, when considering a modification to the area law of General Relativity, it is possible to reconstruct the gravitational lagrangian consistent with the corresponding first law.
Dipartimento di Fisica, Universita di Trento, Via Sommarive 14, 38123 Povo (TN), Italy
## 1 Introduction
In General Relativity (GR), several thermodynamical quantities (energy, surface gravity, temperature, entropy...) may be introduced for black holes by using semiclassical approaches based on quantum mechanical methods in curved space-times. In particular, the Hawking radiation [1, 2, 3, 4, 5, 6] that takes place on the black hole event horizon implies that black holes have temperature \(T_{H}\) and the first law of thermodynamics holds true in the form \(dE=T_{H}dS\), once the notion of quasi-local Misner-Sharp gravitational energy \(E\) is assumed. Thus, the first law is consistent with the Bekenstein-Hawking entropy \(S=\frac{A_{H}}{4}\)[7, 8], also known as the area law, which is proportional to the area \(A_{H}\) of the black hole horizon. Moreover, it has been shown that it is also possible to derive the Einstein's field equations by starting from the first law of black hole thermodynamics [9].
The thermodynamical properties of the black hole horizon can be extended to generic space-time horizon and in the specific to the apparent horizon of Friedmann - Lemaitre - Robertson - Walker (FLRW) space-time describing our observable homogeneous and isotropic universe. The temperature associated to the apparent horizon can be inferred from the surface gravity with a covariant formalism, and if one assumes the validity of the entropy area law, it is possible to recast the Friedmann equations of GR in the form of the first law \(dE=T_{H}dS\), where \(dE\) is the amount of energy flux crossing the apparent horizon [10, 11, 12]. Further generalizations of this result have been investigated for generic \((n+1)\)-dimensional FLRW universe with any spatial curvature, Gauss-Bonnet and Lovelock gravity [13, 14] and for scalar-tensor gravity [15, 16] (see also Refs. [17, 18, 19, 20]).
However, in FLRW universe a source of perfect fluid is present and, differently from the black hole case, there are well-posed concepts of energy density \(\rho\) and pressure \(p\) of the fluid. Thus, in GR the Einstein's field equations evaluated on the apparent horizon may be also interpreted as a first law with the account of a working term, namely \(dE=WdV_{H}+T_{H}dS\), where \(E=\rho V_{H}\) and \(W=\frac{(\rho-p)}{2}\), \(V_{H}\) being the volume enclosed by apparent horizon. This procedure can be extended to Gauss-Bonnet and Lovelock gravity [21, 22] and in all this cases the results lead to the identification of entropy with the area law (see also Refs. [23, 24] for braneworld scenario and Ref. [25] for the issues related to thermodynamics of apparent horizon).
On the other hand, when one moves to the framework of modified theories of gravity, the field equations become quite involved and the area law is no longer valid (see for example Refs. [26, 27]). Therefore, the derivation of a first law from the field equations may furnish a way to define the entropy. In Ref. [28] the case of \(F(R)\)-gravity, where the gravitational action is expressed by a function \(F(R)\) of the Ricci scalar \(R\), has been investigated and it has been shown that the first law on FLRW space-time brings some additional terms to the expected Wald entropy. This result may be due to non-equilibrium thermodynamics of space-time [29, 30].
Here, we should mention that, since the area law of Bekenstein-Hawking is not an extensive measure, several modified entropy laws have been proposed in the last decades for gravitational systems, i.e. Tsallis entropy [31, 32], Renyi entropy [33], Kaniadakis entropy [34], logarithmic corrected entropy [35, 36, 37] and, more recently, Barrow entropy [38]. In Ref. [39] an attempt to rewrite the first law for \(F(R)\)-gravity by using the Barrow entropy has been carried out.
The fact that Einstein's field equations of GR are a system of partial differential equations which are at most of second order in the derivatives has profound mathematical and physical implications that cannot be transposed to a modified theory of gravity. Lovelock's theorem [40] states that in four dimensions the only lagrangian depending on the curvature invariants only and admitting second order differential equations is given by the Hilbert-Einstein action of GR (up to the cosmological constant). However, there are some classes of theories which preserve second order differential equations on four dimensional FLRW space-time, for example the \(F(R,P,Q)\)-models, where \(P\) is the square of the Ricci tensor and \(Q\) is the square of the Riemann tensor, derived in Ref. [41], or the so called non-polynomial models investigated in Refs. [42, 43]. In these frameworks, some thermodynamic issues on FLRW space-time are much more tractable. We also note that these models are inspired by Quantum Loop Cosmology (QLC) and their FLRW solutions are singularity free, admitting the bounce as an alternative scenario for the early-time universe (see Ref. [44] and references therein).
In this paper, we investigate the first law of thermodynamics associated to the apparent horizon of non-flat FLRW metric and in the presence of a perfect fluid by posing our attention on gravitational theories with Friedmann-like second-order differential field equations. The presence of perfect fluid gives rise to a working term in the first law and the entropy is derived consistently with the equations of motion. As a result, at least in the flat spatial case, we found that the entropy can be expressed in terms of the radius of the apparent horizon only and can be computed independently of the explicit form of the scale factor. We comment the results in the light of the various entropy scenarios.
The paper is organized as follows. In Sec.**2** we introduce the formalism. The case of \(F(R,G)\)-gravity is investigated in Sec.**3**, while in Sec.**4** we analyze two non-polynomial gravity models depending on the covariant derivatives of the Ricci scalar and of the Gauss-Bonnet. Sec.**5** and Sec.**6** are devoted to extended mimetic gravity models. Conclusions and final remarks are given in Sec.**7**.
We use the Newton's gravitational constant \(G_{N}=1\).
## 2 Formalism
We work in a non-flat four dimensional FLRW space-time whose metric is given by,
\[ds^{2}=-dt^{2}+a(t)^{2}\left(\frac{dr^{2}}{1-kr^{2}}+r^{2}d\Omega_{2}^{2} \right)\,, \tag{1}\]
where \(a\equiv a(t)\) is the scale factor of the universe depending on cosmological time only, \(d\Omega_{2}^{2}\equiv d\theta^{2}+\sin^{2}\theta d\phi^{2}\) is the metric of a two-dimensional sphere and the parameter \(k=0,\pm 1\) corresponds to the spatial curvature.
We introduce the relevant invariant scalar quantity,
\[\chi=\gamma^{ab}\partial_{a}r\partial_{b}r\,. \tag{2}\]
Here, \(\gamma_{ab}\) is the two-dimensional metric related to \(a,b=0,1\). On FLRW space-time we get
\[\chi=1-r^{2}J^{2}\,,\qquad J^{2}=\left(H^{2}+\frac{k}{a^{2}}\right)\,, \tag{3}\]
where \(H\equiv H(t)=\frac{\dot{a}}{a}\) is the Hubble parameter, the dot being the time derivative. We take as the physical boundary of the universe the apparent horizon, whose radius \(r_{H}\) is given by
\[\chi=0\qquad\rightarrow\qquad r_{H}=\frac{1}{\sqrt{H^{2}+k/a^{2}}}\,. \tag{4}\]
Now it is possible to associate to the apparent horizon a temperature \(T_{H}\) as,
\[T_{H}\equiv\frac{\kappa_{H}}{2\pi}=\frac{1}{2\pi r_{H}}\left(-1+\frac{\dot{r}_ {H}}{2Hr_{H}}\right)\,, \tag{5}\]
where \(\kappa_{H}\) is the Hayward surface gravity [10]. Note that the physical temperature must be assumed to be \(T_{H}=|\kappa_{H}|/(2\pi)\) in order to avoid negative values.
The work density is defined as [11],
\[W\equiv-\frac{1}{2}T^{ab}\gamma_{ab}=\frac{1}{2}\left(\rho-p\right)\,, \tag{6}\]
where \(T_{\mu\nu}\) is the stress-energy tensor of perfect matter-radiation fluid with energy density \(\rho\) and pressure \(p\).
Finally, the total amount of energy inside the apparent horizon is given by,
\[E\equiv\rho V_{H}=\frac{4\pi}{3}\rho r_{H}^{3}\,, \tag{7}\]
where \(V_{H}=\frac{4\pi}{3}r_{H}^{3}\) is the volume enclosed by apparent horizon. As a consequence, the differential of energy reads,
\[dE=\frac{4\pi}{3}r_{H}^{3}d\rho+4\pi\rho r_{H}^{2}dr_{H}\,, \tag{8}\]
and the following relation holds true,
\[dE=\frac{4\pi}{3}r_{H}^{3}d\rho+WdV_{H}+2\pi r_{H}^{2}(\rho+p)dr_{H}\,. \tag{9}\]
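The quantities introduced above are easy to reproduce symbolically. The following SymPy sketch (our own notation, not from the paper) encodes Eqs. (4)-(7) for a generic scale factor and checks the familiar de Sitter limit as a sanity test.

```python
import sympy as sp

t, k = sp.symbols('t k', real=True)
rho, p = sp.symbols('rho p', real=True)          # perfect-fluid energy density and pressure
a = sp.Function('a', positive=True)(t)           # scale factor, kept generic

H = sp.diff(a, t) / a                            # Hubble parameter
r_H = 1 / sp.sqrt(H**2 + k / a**2)               # apparent-horizon radius, Eq. (4)
kappa_H = (1 / r_H) * (-1 + sp.diff(r_H, t) / (2 * H * r_H))
T_H = kappa_H / (2 * sp.pi)                      # Hayward temperature, Eq. (5)
W = (rho - p) / 2                                # work density, Eq. (6)
E = sp.Rational(4, 3) * sp.pi * rho * r_H**3     # energy inside the horizon, Eq. (7)

# Sanity check: de Sitter expansion a(t) = exp(H0*t) with k = 0
H0 = sp.symbols('H0', positive=True)
desitter = {a: sp.exp(H0 * t), k: 0}
print(sp.simplify(r_H.subs(desitter).doit()))    # -> 1/H0
print(sp.simplify(T_H.subs(desitter).doit()))    # -> -H0/(2*pi); physically T_H = |kappa_H|/(2*pi)
```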
In the next sections we will consider different modified theories of gravity on FLRW space-time and in the presence of perfect fluid and we will try to recast the equations of motion in the form of a first law of thermodynamics. Therefore, we will infer a formula for the entropy which makes consistent the first law. We will restrict our analysis on specific classes of models which lead to second order differential equations of motion on FLRW space-time.
## 3 \(F(R,G)\)-gravity
Let us start by considering the following model of modified gravity [45, 46, 47, 48, 49, 50],
\[I=\frac{1}{16\pi}\int_{\cal M}d^{4}x\sqrt{-g}\,\left(F(R,G)\right)+I_{m}\,, \tag{10}\]
where \(g\) is the determinant of the metric tensor, \({\cal M}\) is a compact manifold and \(I_{m}\) is the usual action of matter. Here, \(F\equiv F(R,G)\) is a function of the Ricci scalar \(R\) and of the Gauss-Bonnet four dimensional topological invariant \(G\),
\[G=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\xi\sigma}R^{\mu\nu\xi\sigma}\,, \tag{11}\]
\(R_{\mu\nu}\) and \(R_{\mu\nu\xi\sigma}\) being the Ricci tensor and the Riemann tensor, respectively. When \(F(R,G)=R\) we recover the Hilbert-Einstein action of GR. On FLRW space-time (1) we obtain,
\[R = 6\left(\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}}+\frac{k}{a^{2}}\right)\equiv\left(12H^{2}+6\dot{H}+\frac{6k}{a^{2}}\right)\,, \tag{12}\] \[G = \frac{24}{a^{3}}\left(\dot{a}^{2}\ddot{a}+\ddot{a}k\right)\equiv 24\left(H^{2}+\frac{k}{a^{2}}\right)\left(H^{2}+\dot{H}\right)\,. \tag{13}\]
By assuming the matter contents of the universe as a perfect fluid, the field equations on FLRW can be written as,
\[16\pi\rho= 6\left(H^{2}+\frac{k}{a^{2}}\right)F_{R}+\left(F-RF_{R}-GF_{G}\right)+6H\left(\dot{F}_{R}+4\left(H^{2}+\frac{k}{a^{2}}\right)\dot{F}_{G}\right)\,, \tag{14}\] \[16\pi(\rho+p)= F_{R}\left(-4\dot{H}+\frac{4k}{a^{2}}\right)+2H\dot{F}_{R}+\dot{F}_{G}\left(8H^{3}+\frac{24Hk}{a^{2}}-16H\dot{H}\right)-2\ddot{F}_{R}\] \[+\ddot{F}_{G}\left(-8H^{2}-\frac{8k}{a^{2}}\right)\,, \tag{15}\]
where \(R\) and \(G\) are given by (12)-(13) and we use the notation
\[F_{R}=\frac{\partial F}{\partial R}\,,\qquad F_{G}=\frac{\partial F}{\partial G }\,. \tag{16}\]
In the equations above, \(\rho\equiv\rho(t)\) and \(p\equiv p(t)\) are the energy density and pressure of the matter-radiation contents of the universe and obey to the conservation law,
\[\dot{\rho}+3H(\rho+p)=0\,. \tag{17}\]
By taking the variation of Eq. (14) and by evaluating the result on the apparent horizon \(r_{H}\) (4) we get,
\[\frac{4\pi}{3}r_{H}^{3}d\rho=-F_{R}dr_{H}-\frac{H^{2}r_{H}^{3}}{2}dF_{R}+ \left(-2H^{2}r_{H}-4H\dot{r}_{H}\right)dF_{G}+\frac{Hr_{H}^{3}}{2}d\dot{F}_{R} +2Hr_{H}d\dot{F}_{G}\,, \tag{18}\]
where we have multiplied both sides of the equation by \(r_{H}^{3}/12\) and we have used the following relations:
\[\left(H^{2}+\frac{k}{a^{2}}\right)=\frac{1}{r_{H}^{2}}\,,\qquad\left(\dot{H}-\frac{k}{a^{2}}\right)=-\frac{\dot{r}_{H}}{Hr_{H}^{3}}\,. \tag{19}\]
Now, from Eq. (15) we derive
\[2\pi r_{H}^{2}(\rho+p)dr_{H}=\frac{\dot{r}_{H}}{2Hr_{H}}F_{R}dr_{H}+\frac{r_{H }^{2}H\dot{r}_{H}}{4}dF_{R}-\frac{r_{H}^{2}\dot{r}_{H}}{4}d\dot{F}_{R}+H\dot{r }_{H}dF_{G}+\frac{2\dot{r}_{H}^{2}}{r_{H}}dF_{G}-\dot{r}_{H}d\dot{F}_{G}\,. \tag{20}\]
As a consequence, from Eq. (18) and Eq. (20) together with relation (9) we get
\[dE = WdV_{H}+\left(F_{R}dr_{H}+\frac{r_{H}}{2}dF_{R}\right)\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)-\frac{Hr_{H}^{3}}{2}d\dot{F}_{R}\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right) \tag{21}\] \[+\frac{r_{H}}{2}dF_{R}\left(H^{2}r_{H}^{2}-1\right)\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)+\frac{2}{r_{H}}dF_{G}\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)\] \[+\left(\frac{2}{r_{H}}(H^{2}r_{H}^{2}-1)+4H\dot{r}_{H}\right)dF_{G}\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)\] \[-2Hr_{H}d\dot{F}_{G}\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)\,.\]
Therefore, by making use of the Wald entropy result for \(F(R,G)\)-gravity (see Appendix),
\[S_{W}=\frac{A_{H}}{4}\left(F_{R}+F_{G}\left(\frac{4}{r_{H}^{2}}\right)\right)\,, \tag{22}\]
with the area of the horizon \(A_{H}=4\pi r_{H}^{2}\), we can write
\[dE = WdV_{H}+T_{H}dS_{W}+\left(-\frac{Hr_{H}^{3}}{2}d\dot{F}_{R}+ \frac{r}{2}dF_{R}\left(H^{2}r_{H}^{2}-1\right)+\left(\frac{2}{r_{H}}(H^{2}r_{ H}^{2}-1)+4H\dot{r}_{H}\right)dF_{G}\right. \tag{23}\] \[\left.-2Hr_{H}d\dot{F}_{G}\right)2\pi r_{H}T_{H}\,,\]
where we have introduced the horizon temperature (5). For \(F(R)\)-gravity only we obtain,
\[dE=WdV_{H}+T_{H}dS_{W}+\frac{A_{H}T_{H}}{4}\left(-Hr_{H}^{2}d\dot{F}_{R}+H^{2}r_{H}^{2}dF_{R}-dF_{R}\right)\,, \tag{24}\]
with \(S_{W}=A_{H}F_{R}/4\). This result is in agreement with Refs. [28, 51, 52].
Thus, when the equations of motion of \(F(R,G)\)-gravity are arranged into a form of the first law of thermodynamics at the apparent horizon, the Wald entropy should be redefined as \(S_{W}\to S_{W}+\bar{S}\) such that
\[d\bar{S}=2\pi r_{H}\left(-\frac{Hr_{H}^{3}}{2}d\dot{F}_{R}+\frac{r}{2}dF_{R} \left(H^{2}r_{H}^{2}-1\right)+\left(\frac{2}{r_{H}}(H^{2}r_{H}^{2}-1)+4H\dot{r} _{H}\right)dF_{G}-2Hr_{H}d\dot{F}_{G}\right)\,. \tag{25}\]
In the literature, it has already been observed that this term can be a consequence of the non-equilibrium thermodynamics within the \(F(R,G)\)-gravity framework [29] (see the discussion in Ref. [28] about the case of \(F(R)\)-gravity).
In general, the theory under consideration is a higher derivative theory, where the equations of motion involve the presence of third and fourth order time derivatives of the scale factor \(a(t)\). However, there is a suitable choice of the function \(F(R,G)\) which makes the field equations (14)-(15) of second order, namely [41, 44],
\[F(R,G)=R+f(R,G)\,,\qquad f(R,G)=\frac{R+\sqrt{R^{2}-6G}}{12}\,. \tag{26}\]
It is easy to show that on FLRW metric (1) we get
\[J^{2}=\left(H^{2}+\frac{k}{a^{2}}\right)=\frac{R+\sqrt{R^{2}-6G}}{12}\,, \tag{27}\]
where \(J^{2}\) is the invariant in (3) and on the apparent horizon it reads \(J^{2}=\frac{1}{r_{H}^{2}}\). Note that this choice contains a non-analytic dependence on \(R\) and \(G\).
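The identification in (27) can be verified symbolically. The following SymPy sketch expresses \(R\) and \(G\) through (12)-(13) and checks that \(R^{2}-6G=36\left(k/a^{2}-\dot{H}\right)^{2}\), so that the positive root indeed yields \(J^{2}=H^{2}+k/a^{2}\) whenever \(k/a^{2}\geq\dot{H}\); the variable names are ours.

```python
import sympy as sp

t, k = sp.symbols('t k', real=True)
a = sp.Function('a', positive=True)(t)
H = sp.diff(a, t) / a
Hdot = sp.diff(H, t)

R = 12 * H**2 + 6 * Hdot + 6 * k / a**2               # Eq. (12)
G = 24 * (H**2 + k / a**2) * (H**2 + Hdot)             # Eq. (13)

# R^2 - 6G collapses to a perfect square, so sqrt(R^2 - 6G) = 6*(k/a^2 - Hdot)
print(sp.simplify(R**2 - 6 * G - 36 * (k / a**2 - Hdot)**2))   # -> 0
# hence (R + sqrt(R^2 - 6G))/12 = H^2 + k/a^2, i.e. Eq. (27)
```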
Therefore, by writing \(f(R,G)=f(J^{2})\), since \(f_{R}=-\frac{4}{r_{H}^{2}}f_{G}\) and \(f_{G}=-\frac{Hr_{H}^{3}}{24\dot{r}_{H}}f_{J^{2}}\), the first law (23) reads,
\[dE=WdV_{H}+T_{H}dS_{W}+\left(-\frac{\pi r_{H}^{4}f_{J^{2}}}{6}dJ^{2}+\frac{\pi r _{H}^{4}}{3}d\left(H^{2}f_{J^{2}}\right)\right)T_{H}\,, \tag{28}\]
where the Wald entropy \(S_{W}\) turns out to be the area law of GR, \(S_{W}=\frac{A_{H}}{4}\). The additional term \(d\bar{S}\) (25) is now given by
\[d\bar{S}=-\frac{\pi r_{H}^{4}f_{J^{2}}}{6}dJ^{2}+\frac{\pi r_{H}^{4}}{3}d \left(H^{2}f_{J^{2}}\right)\,. \tag{29}\]
This term can be explicitly computed for a generic FLRW space-time if we assume \(k=0\) and \(H^{2}=\frac{1}{r_{H}^{2}}\), namely
\[d\bar{S}=\frac{\pi r_{H}}{3}\left(-f_{H^{2}}dr_{H}+r_{H}df_{H^{2}}\right)\,. \tag{30}\]
Thus, given a specific model in the form of (26), \(\bar{S}\) can be derived via integration of (30) independently of the form of the scale factor \(a(t)\) and the result only depends on the radius of the apparent horizon. In particular, by making the choice \(f(J^{2})=\gamma\sqrt{J^{2}}=\gamma\sqrt{\frac{R+\sqrt{R^{2}-6G}}{12}}\), \(\gamma\) being a constant, \(d\bar{S}=0\) and we recover the first law of General Relativity. In this case, it is easy to see that the modification of gravity disappears from equations of motion on FLRW space-time.
Furthermore, we are able to reconstruct the specific gravitational lagrangians associated to some given entropy laws. For example, three years ago, Barrow proposed an interesting entropy corrected law inspired by the fractal structure of the black hole horizon surface [38],
\[S_{B}=\left(\frac{A_{H}}{4}\right)^{1+\frac{\Delta}{2}}\,,\qquad 0\leq\Delta\,. \tag{31}\]
The exponent \(\Delta\) quantifies quantum deformations and the Bekenstein entropy is recovered when \(\Delta=0\). In general, by assuming the Barrow entropy one may retrieve the gravitational field equations from the first law [53]. Here, we can furnish a model of \(F(R,G)\)-gravity which leads to such field equations in the flat spatial case, since by using (30) with \(d\bar{S}=dS_{B}-2\pi r_{H}dr_{H}\) one finds that the model (26) with
\[f(J^{2})=\frac{6\pi^{\frac{\Delta}{2}}}{(\Delta-1)}\frac{\left(1+\frac{\Delta}{2}\right)}{\left(1-\frac{\Delta}{2}\right)}\left(J^{2}\right)^{1-\frac{\Delta}{2}}+6J^{2}\,,\qquad J^{2}=\frac{R+\sqrt{R^{2}-6G}}{12}\,, \tag{32}\]
brings to Barrow entropy (31). Note that when \(\Delta=0\) the function \(f(J^{2})=0\) and at the leading order of \(\Delta\) one has
\[f(J^{2})=-3\Delta J^{2}=-\Delta\left(\frac{R+\sqrt{R^{2}-6G}}{4}\right)\,. \tag{33}\]
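As a consistency check of this reconstruction, one can build \(dS=2\pi r_{H}dr_{H}+d\bar{S}\) from (30) with the choice (32) and verify that it matches the differential of the Barrow entropy (31). The SymPy sketch below does this in the flat case, using the on-horizon identification \(J^{2}=H^{2}=1/r_{H}^{2}\); it is a verification we carried out ourselves, with our own notation.

```python
import sympy as sp

r, Delta = sp.symbols('r Delta', positive=True)       # r is the apparent-horizon radius
J2 = sp.symbols('J2', positive=True)

# Model (32) as a function of J^2
f = (6 * sp.pi**(Delta / 2) / (Delta - 1)) * ((1 + Delta / 2) / (1 - Delta / 2)) \
    * J2**(1 - Delta / 2) + 6 * J2

f_H2 = sp.diff(f, J2).subs(J2, 1 / r**2)              # f_{H^2} on the horizon (k = 0)

# dS/dr = 2*pi*r (area-law piece) + correction (30)
dS_dr = 2 * sp.pi * r + (sp.pi * r / 3) * (-f_H2 + r * sp.diff(f_H2, r))

S_Barrow = sp.pi**(1 + Delta / 2) * r**(2 + Delta)    # (A_H/4)^{1+Delta/2} with A_H = 4*pi*r^2
print(sp.simplify(dS_dr - sp.diff(S_Barrow, r)))      # -> 0, so S coincides with (31)
```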
In the following sections we still study the first law in some other classes of gravitational models with second order differential equations of motion on FLRW space-time. The cosmological applications of this models have been largely investigated in Ref. [44].
## 4 Non-polynomial gravity models
Let us now consider the following non-polynomial gravity model [54, 55],
\[I=\frac{1}{8\pi}\int_{\mathcal{M}}d^{4}x\sqrt{-g}\left[\frac{R}{2}+\frac{ \alpha}{6}\sqrt{(-\nabla_{\mu}R)^{2}}\right]+I_{m}\,, \tag{34}\]
with \(\alpha\) dimensional parameter and \(I_{m}\) the matter action. The Friedmann-like equations read,
\[16\pi\rho=6\left(H^{2}+\frac{k}{a^{2}}\right)-6\alpha H^{3}\,, \tag{35}\]
\[16\pi\left(\rho+p\right)=-4\left(\dot{H}-\frac{k}{a^{2}}\right)+6\alpha H\dot {H}\,. \tag{36}\]
Variation of Eq. (35) on the apparent horizon leads to
\[16\pi d\rho=-\frac{12}{r_{H}^{3}}dr_{H}-18\alpha H^{2}dH\,. \tag{37}\]
Thus, by introducing the temperature of the apparent horizon (5) the first law takes the form,
\[dE=WdV_{H}+T_{H}\left(2\pi r_{H}dr_{H}+3\pi\alpha H^{2}r_{H}^{4}dH\right)\,, \tag{38}\]
and in the case of \(k=0\) we get
\[dE=WdV_{H}+T_{H}\left(2\pi r_{H}dr_{H}-3\pi\alpha dr_{H}\right)\,, \tag{39}\]
such that
\[S=\frac{A_{H}}{4}\left(1-\frac{3\alpha}{r_{H}}\right)\,. \tag{40}\]
We will come back to this result at the end of the section.
A similar example can be found in the following model,
\[I=\frac{1}{8\pi}\int_{\mathcal{M}}d^{4}x\sqrt{-g}\left[\frac{R}{2}+\frac{ \alpha}{6}\sqrt{(-\nabla_{\mu}G)^{2}}\right]+I_{m}\,, \tag{41}\]
where \(\alpha\) is again a dimensional parameter and \(G\) is the Gauss-Bonnet four dimensional topological invariant introduced in Sec. SS3. The Friedmann-like equations read,
\[16\pi\rho=6\left(H^{2}+\frac{k}{a^{2}}\right)+2\alpha H\left(3H^{4}+\frac{2k} {a^{3}}\right)\,, \tag{42}\]
\[16\pi(\rho+p)=-4\left(\dot{H}-\frac{k}{a^{2}}\right)-10\alpha H^{3}\dot{H}+ \frac{4\alpha k}{3Ha^{3}}\left(-\dot{H}+3H^{2}\right)\,. \tag{43}\]
Thus, by following the procedure of the previous example we find,
\[dE=WdV_{H}+T_{H}\left(2\pi r_{H}dr_{H}-5\pi\alpha r_{H}^{4}H^{4}dr_{H}\right) +T_{H}\frac{2\pi r_{H}k\alpha}{a^{3}}\left(\frac{Hr_{H}^{3}da}{a}-\frac{r_{H}^ {3}dH}{3}\right)\,. \tag{44}\]
For \(k=0\) this expression simply reads,
\[dE=WdV_{H}+T_{H}\left(2\pi r_{H}dr_{H}-5\pi\alpha dr_{H}\right)\,, \tag{45}\]
\[S=\frac{A_{H}}{4}\left(1-\frac{5\alpha}{r_{H}}\right)\,. \tag{46}\]
As a result, in this class of models, when \(k=0\), power-law corrections to the area law assume the form,
\[S=\frac{A_{H}}{4}\left(1-\gamma\frac{1}{A_{H}^{\nu}}\right)\,, \tag{47}\]
where \(\gamma\) is a constant and \(1>\nu>0\). These corrections may emerge when dealing with the entanglement of quantum fields [56]. In (40) and (46) we have \(\nu=\frac{1}{2}\). We also observe that for \(\alpha\ll\frac{1}{H}\) the corrections to the area law are negligible in both of the models.
## 5 Extended mimetic gravity
Mimetic gravity was first introduced by Mukhanov and Chamseddine in Ref. [57] (see also Ref. [58] for a minimal extension of the model). Thanks to a (singular) disformal transformation of the metric which leads to the presence of a scalar "mimetic" field, the theory is able to reproduce the cosmological dark matter without invoking any exotic fluid, and the solutions of GR can be recovered as a special case. However, in the original formulation of mimetic gravity, scalar perturbations cannot propagate and some additional terms depending on the mimetic scalar field need to be added to the lagrangian. In this section we will work with an extended mimetic model [59] where the field equations on FLRW space-time are of second order. The action of the model is given by,
\[I=\frac{1}{8\pi}\int_{\mathcal{M}}d^{4}x\sqrt{-g}\left[\frac{R}{2}+\lambda \left(X-\frac{1}{2}\right)+f[\chi(\phi)]\right]+I_{m}\,, \tag{48}\]
where \(X=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\), \(\lambda\) is a Lagrange multiplier, \(\phi\) is a mimetic scalar field and \(I_{m}\) is the matter-radiation action of a perfect fluid. Moreover, \(f(\chi)\equiv f(\chi(\phi))\) is a generic function which depends on the higher order differential term in \(\phi\), namely \(\chi(\phi)=-\nabla^{\mu}\nabla_{\mu}\phi\,/3\). Thus, taking into account that on FLRW metric the Lagrange multiplier constrains the mimetic field to behave as \(\phi=t\), we simply get
\[\chi=H\qquad\rightarrow\qquad f(\chi)=f(H)\,. \tag{49}\]
The Friedmann-like equations can be written as
\[16\pi\rho=6\left(H^{2}+\frac{k}{a^{2}}\right)+f(H)-Hf_{H}\,, \tag{50}\]
\[16\pi(\rho+p)=-4\dot{H}+\frac{4k}{a^{2}}+\frac{1}{3}\dot{f}_{H}\,, \tag{51}\]
with the usual conservation law (17) for ordinary matter. Here, \(\rho\rightarrow\rho+\frac{C}{a^{3}}\), where \(C\) is a constant. The additional "dark matter" term comes from the mimetic scalar field, as in the original work of Mukhanov and Chamseddine.
From Eq. (50) on the apparent horizon we derive
\[16\pi d\rho=-\frac{12}{r_{H}^{3}}dr_{H}-Hdf_{H}\;. \tag{52}\]
Thus, if we use (9) together with (51) we get
\[dE=WdV_{H}+dr_{H}\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)+\frac{Hr_{H}^{3} }{12}df_{H}\left(-1+\frac{\dot{r}_{H}}{2Hr_{H}}\right)\,. \tag{53}\]
By introducing the temperature associated to the apparent horizon (5) we have
\[dE=WdV_{H}+\left(2\pi r_{H}dr_{H}+\frac{\pi Hr_{H}^{4}}{6}df_{H}\right)T_{H}\,. \tag{54}\]
As a result, the first law brings to the identification
\[dS=2\pi r_{H}dr_{H}+\frac{\pi Hr_{H}^{4}}{6}df_{H}=d\left(\pi r_{H}^{2} \right)+\frac{\pi}{6}H\dot{H}\left(\frac{a^{2}}{H^{2}a^{2}+k}\right)^{2}f_{H \,H}dt\,. \tag{55}\]
In fact, the area law of GR is corrected by a term depending on the mimetic scalar field through the function \(f_{HH}\). This result is valid on FLRW space-time only, where for a specific model one may compute the entropy of the apparent horizon once the form of the scale factor is fixed. However, as in the previous examples, a generalization is possible in the flat space with \(k=0\) such that
\[dS=\left(2\pi r_{H}dr_{H}+\frac{\pi}{6H^{3}}\frac{d^{2}f(H)}{dH^{2}}dH\right)\,, \tag{56}\]
and the correction to the area law can be computed by integrating with respect to the Hubble parameter. For example, in the Born-Infeld inflationary scenario considered in Ref. [59] we obtain (we omit some dimensional parameters),
\[f(\chi)=1+\frac{\chi^{2}}{2}-\chi\arcsin\chi-\sqrt{1-\chi^{2}}\,, \tag{57}\]
and the entropy related to the apparent horizon results to be,
\[S=\frac{A_{H}}{4}+\frac{\pi}{12}\left[-\ln\left[\sqrt{\frac{H+1}{H-1}}-1\right] +\ln\left[\sqrt{\frac{H+1}{H-1}}+1\right]+H^{2}\sqrt{\frac{H+1}{H-1}}-H\sqrt{ \frac{H+1}{H-1}}-\frac{2H+1}{H^{2}}\right]\,. \tag{58}\]
Alternatively, formula (56) can be written as
\[dS=2\pi r_{H}dr_{H}\left(1-\frac{f_{HH}}{12}\right)\,. \tag{59}\]
As in the case of \(F(R,G)\)-gravity with second order Friedmann-like equations, one may recover some interesting scenarios in the context of modified entropy law. For example, it is easy to see that corrections of the type
\[f(\chi)=-\frac{\gamma\chi^{4}}{\pi}\,, \tag{60}\]
with \(\gamma\) dimensional constant, lead to logarithmic-corrected entropy,
\[S=\frac{A_{H}}{4}+\gamma\ln\left(\frac{A_{H}}{4}\right)\,. \tag{61}\]
A large variety of scenarios predict logarithmic-corrected entropy as the leading order quantum gravitational correction to the Bekenstein-Hawking entropy [35, 36, 37], mainly motivated by the conformal anomaly [60] and quantum tunneling [61, 62] (see also Ref. [63] and references therein).
## 6 Mimetic Horndeski inspired gravity
Horndeski gravity is a class of scalar tensor theories where the scalar field interacts with gravity and the field equations are of second order as in General Relativity [64]. In the literature these theories have been well studied in different scenarios [65, 66, 67, 68, 69, 70]. Here, we remain in the context of mimetic gravity, where the scalar field plays the role of dark matter, and we recall a model [71] where the mimetic gravity action is implemented with higher-order terms that break the Horndeski structure of the lagrangian but still preserve second-order field equations on the FLRW background. In this model scalar perturbations can propagate and the gravitational wave speed is close enough to the speed of light, ensuring agreement with the latest cosmological data [72].
The action of the model is given by,
\[I=\frac{1}{8\pi}\int_{\mathcal{M}}d^{4}x\sqrt{-g}\left[\frac{R}{2}(1+2aX)-\frac{c}{4}\left(\Box\phi\right)^{2}+\frac{b}{4}\left(\nabla_{\mu}\nabla_{\nu}\phi\right)^{2}-\frac{\lambda}{4}\left(2X+1\right)\right]+I_{m}\,, \tag{62}\]
where, again, \(X=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi\), \(\lambda\) is a Lagrange multiplier, \(\phi\) is the mimetic scalar field and \(I_{m}\) is the matter-radiation action of a perfect fluid. When \(b=c=4a\) we recover a Horndeski mimetic model. Due to the identification of \(\phi\) with the cosmological time, \(\phi=t\), the equations of motion on FLRW space-time are extremely simple and read,
\[16\pi\rho=\left(\frac{4-b+3c-4a}{4}\right)6\left(H^{2}+\frac{k}{a^{2}}\right)\,, \tag{63}\]
\[16\pi(\rho+p)=\left(\frac{4-b+3c-4a}{4}\right)4\left(-\dot{H}+\frac{k}{a^{2}}\right)\,. \tag{64}\]
The mimetic scalar field contributes to the energy density as \(\rho\rightarrow\rho+\frac{C}{a^{3}}\), where \(C\) is a constant. Thus, when \(a=b=c=0\) we recover the Friedmann equations of GR with the additional contribution of dark matter.
Now the first law is given by,
\[dE=WdV_{H}+\left(\frac{4-b+3c-4a}{4}\right)(2\pi r_{H}dr_{H})T_{H}\,, \tag{65}\]
and the entropy of the apparent horizon reads,
\[S=\left(4-b+3c-4a\right)\frac{A_{H}}{16}\,, \tag{66}\]
and is proportional to the area of the horizon as in GR. In Ref. [72] it has been found that, in light of the latest cosmological data, the parameters \(b\) and \(c\) must be extremely close to zero, while the parameter \(a\) should satisfy \(0\leq a<1\) in order to avoid ghost instabilities. This means that, taking into account the observational constraints, this mimetic model predicts a (positive) entropy of the form,
\[S\simeq(1-a)\frac{A_{H}}{4}\,. \tag{67}\]
Thus, the value of the parameter \(a\) affects the deviation of entropy result with respect to the area law of GR.
## 7 Conclusions
In this paper, we have investigated the first law of thermodynamics on (non-flat) FLRW space-time in different theories of modified gravity. The thermodynamical properties of the black hole horizon are extended to the apparent horizon of FLRW space-time, where the temperature is derived from the metric through the surface gravity. Furthermore, one expects that the first law associated to the apparent horizon of FLRW space-time is satisfied and that the field equations of a gravitational theory should be consistent with its formulation.
In FLRW universe a matter-radiation source of perfect fluid is present and the energy and the working term of the first law can be described by energy density and pressure of the fluid. The formalism has been tested in General Relativity, Gauss-Bonnet and Lovelock gravity, where one recovers the entropy area law.
In modified gravity the entropy associated to the apparent horizon does not satisfy the area law. Several alternatives to the Bekenstein-Hawking entropy, mainly motivated by quantum effects, already exist, and in principle one may try to reconstruct the corresponding gravitational field equations by starting from the first law. On the other hand, the problem of formulating a gravitational theory which leads to a given entropy is much more complicated, due to the fact that when one works within a modified framework the field equations are no longer of second order and we lose many of the properties of GR.
For these reasons, it is interesting to see what can be established in those (higher derivative) theories where the field equations on the FLRW background remain of second order. In our examples, at least for the flat spatial case, we found a formula for the entropy in terms of the radius of the apparent horizon only, which allows one to reconstruct the gravitational lagrangians associated with some given entropy laws.
First of all, we derived the first law of thermodynamics of the non-flat FLRW universe in the framework of \(F(R,G)\)-gravity, where some additional terms to the Wald entropy emerge, and in the case of \(F(R)\)-gravity we recover the result of Ref. [28]. Then, we investigated a special class of \(F(R,G)\)-models where the field equations on FLRW space-time are of second order and we showed that in the flat space it is possible to derive a formula for the entropy which is independent of the explicit form of the scale factor. Similar results are found for a class of non-polynomial models depending on the covariant derivatives of the Ricci scalar and of the Gauss-Bonnet invariant, where the field equations on FLRW space-time are of second order. Some attempts to recover specific modified entropy laws are presented.
In the second part of the paper we considered two extended mimetic gravity models where a scalar field plays the role of dark matter on FLRW background. The dark matter contribution
enters in the first law by increasing the total energy density of matter-radiation contents of the universe, while the terms added to pure mimetic gravity modify the entropy area law. We remark that the viability of the model presented in SS6 has been investigated in the light of cosmological data in Ref. [72].
## Appendix
The entropy associated to \(F(R,G)\)-gravity can be calculated via Wald's method [73]. The explicit calculation of entropy \(S_{W}\) is given by the formula [74, 75],
\[S_{W}=-\frac{1}{8}\oint_{\tiny\begin{array}{c}r=r_{H}\\ t=\text{const}\end{array}}\,\left(\frac{\delta F(R,G)}{\delta R_{\mu\nu\xi \sigma}}\right)\,\,e_{\mu\nu}e_{\sigma\xi}r\,d\theta\,d\phi\,. \tag{68}\]
The antisymmetric variable \(e_{\mu\nu}=-e_{\nu\mu}\) is the binormal vector to the (bifurcate) horizon and it is normalized so that \(e_{\mu\nu}e^{\mu\nu}=-2\). Thus, by using the FLRW metric, it turns out to be
\[\epsilon_{\mu\nu}=\sqrt{\frac{a(t)^{2}}{(1+kr^{2})}}(\delta^{0}_{\mu}\delta^ {1}_{\nu}-\delta^{1}_{\mu}\delta^{0}_{\nu})\,, \tag{69}\]
\(\delta^{i}_{j}\) being the Kronecker delta.
By taking the variation of \(F(R,G)\) with respect to \(R_{\mu\nu\xi\sigma}\) as if \(R_{\mu\nu\xi\sigma}\) and the metric \(g_{\mu\nu}\) are independent, formula (68) leads to
\[S_{W} = -\frac{1}{2}A_{H}\,\left(\frac{a^{2}}{1+kr^{2}}\right)\,\left( \frac{\delta F(R,G)}{\delta R_{0101}}\right)\Big{|}_{H} \tag{70}\] \[= -\frac{1}{2}A_{H}\,\left(\frac{a^{2}}{1+kr^{2}}\right)\left(F_{R }\frac{\delta R}{\delta R_{0101}}+F_{G}\frac{\delta G}{\delta R_{0101}} \right)\Big{|}_{H}\,,\]
with \(A_{H}=4\pi r_{H}^{2}\). Since
\[\frac{\delta R}{\delta R_{\mu\nu\alpha\beta}}=\frac{1}{2}\left(g^{\alpha\mu} g^{\nu\beta}-g^{\nu\alpha}g^{\mu\beta}\right)\,, \tag{71}\]
\[\frac{\delta G}{\delta R_{\mu\nu\xi\sigma}}=\left[2R^{\mu\nu\xi\sigma}-2(g^{ \mu\xi}R^{\nu\sigma}+g^{\nu\sigma}R^{\mu\xi}-g^{\mu\sigma}R^{\nu\xi}-g^{\nu \xi}R^{\mu\sigma})+(g^{\mu\xi}g^{\nu\sigma}-g^{\mu\sigma}g^{\nu\xi})R\right]\,, \tag{72}\]
one obtains,
\[S_{W}=\frac{A_{H}}{4}\left(F_{R}+F_{G}\left(\frac{4}{r_{H}^{2}}\right)\right)\,. \tag{73}\]
In General Relativity \(F(R,G)=R\) and one recovers the usual area law, namely \(S_{W}=A_{H}/4\). The result is in agreement with the spherical case of Ref. [76], where static spherical symmetric black hole solutions and the associated Wald entropy in \(F(R,G)\)-gravity are investigated.
|
2305.12512
|
Central Limit Theorem for Gram-Schmidt Random Walk Design
|
We prove a central limit theorem for the Horvitz-Thompson estimator based on
the Gram-Schmidt Walk (GSW) design, recently developed in Harshaw et al.(2022).
In particular, we consider the version of the GSW design which uses randomized
pivot order, thereby answering an open question raised in the same article. We
deduce this under minimal and global assumptions involving only the problem
parameters such as the (sum) potential outcome vector and the covariate matrix.
As an interesting consequence of our analysis we also obtain the precise
limiting variance of the estimator in terms of these parameters which is
smaller than the previously known upper bound. The main ingredients are a
simplified skeletal process approximating the GSW design and concentration
phenomena for random matrices obtained from random sampling using the Stein's
method for exchangeable pairs.
|
Sabyasachi Chatterjee, Partha S. Dey, Subhajit Goswami
|
2023-05-21T17:06:55Z
|
http://arxiv.org/abs/2305.12512v2
|
# Central Limit Theorem for Gram-Schmidt Random Walk Design
###### Abstract
We prove a central limit theorem for the Horvitz-Thompson estimator based on the Gram-Schmidt Walk (GSW) design, recently developed in Harshaw et al. (2022). In particular, we consider the version of the GSW design, which uses _randomized pivot order_, thereby answering an open question raised in the same article. We deduce this under minimal and global assumptions involving _only_ the problem parameters, such as the (sum) potential outcome vector and the covariate matrix. As an interesting consequence of our analysis, we also obtain the precise limiting variance of the estimator in terms of these parameters, which is _smaller_ than the previously known upper bound. The main ingredients are a simplified _skeletal_ process approximating the GSW design and concentration phenomena for random matrices obtained from random sampling using Stein's method for exchangeable pairs.
Key words and phrases: Central limit theorem, causal inference, experimental design, discrepancy theory, exchangeable pairs.
2020 Mathematics Subject Classification: Primary: 60F05, 62K99; Secondary: 60G42, 62E20.
###### Contents
* 1 Introduction
* 1.1 Horvitz-Thompson Estimator
* 1.2 Gram-Schmidt Walk Design
* 1.3 Review of Gram-Schmidt walk algorithm
* 1.4 Central Limit Theorem
* 1.5 The main results
* 1.6 Main contributions and comparisons with literature
* 1.7 Proof sketch
* 1.8 Notations and convention for constants
* 2 Definition of the skeletal process and the relevant martingales
* 2.1 Definitions of relevant Martingales
* 3 Preliminaries
* 3.1 Some linear algebraic identities
* 3.2 Some results on random sampling without replacement
* 3.3 The events \(\mathcal{G}_{1,t}\) and \(\mathcal{G}_{2,t}\)
* 4 Equivalence of CLT for different processes
* 5 Asymptotic normality of \(M_{n}\) and the proof of Theorem 1.3
* 6 Outlook
## 1 Introduction
We are interested in the statistical problem of estimating the _average treatment effect_ (ATE). This is one of the canonical problems in causal inference, and provides valuable insights into the effectiveness or impact of a particular treatment or intervention. The setup is the following.
Suppose there are \(n\) units or individuals with each individual \(i\) having two _potential outcomes_\(a_{i}\) and \(b_{i}\) corresponding to two possible treatments. We can think of \(a_{i}\) and \(b_{i}\) as the responses of the individual \(i\) that we would have observed if we had administered treatment \(A\) or \(B\) (respectively) to that individual. We will denote the ATE by \(\tau\), defined as
\[\tau=\frac{1}{n}\sum_{i=1}^{n}(a_{i}-b_{i}). \tag{1.1}\]
Let us set up some notations to be used throughout. Let \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{n}\) denote the two potential outcome vectors and \(\mathbf{\mu}\coloneqq\mathbf{a}+\mathbf{b}\) be the _sum potential outcome vector_.
### Horvitz-Thompson Estimator
One of the classical methods to estimate the ATE is the _Horvitz-Thompson estimator_. Let \(z_{i}=\pm 1\) denote whether treatment \(A\) or \(B\) is administered. The vector \(\mathbf{z}=(z_{1},\ldots,z_{n})\in\{\pm 1\}^{n}\) is called the _design vector_. It then just estimates the ATE by taking the difference of the empirical means of the observed potential outcomes, namely,
\[\widehat{\tau}=\frac{1}{n}\left(\sum_{i:z_{i}=+1}\frac{a_{i}}{\mathbb{P}[z_{i }=1]}-\sum_{i:z_{i}=-1}\frac{b_{i}}{\mathbb{P}[z_{i}=-1]}\right).\]
It is well known (see Imbens and Rubin (2015)) that \(\widehat{\tau}\) is unbiased if \(\mathbb{P}[z_{i}=1]\in(0,1)\) for all units \(i\in[n]\). In this paper, we consider designs where each unit is equally likely to receive either treatment, _i.e.,_ \(\mathbb{P}[z_{i}=1]=\mathbb{P}[z_{i}=-1]=\frac{1}{2}\). The distribution of \(\widehat{\tau}\) clearly depends on the design \(\mathbf{z}\). One choice for \(\mathbf{z}\) is the i.i.d. design, where the variables \(z_{i}\) are i.i.d. Rademacher variables. It is not hard to check (see _e.g.,_ (Harshaw et al., 2022, Lemma 1.2)) that the Horvitz-Thompson estimator \(\widehat{\tau}_{\mathrm{Rad}}\) based on the i.i.d. design has variance given by
\[\mathds{E}[\widehat{\tau}_{\mathrm{Rad}}-\tau]^{2}=\mathrm{Var}[\widehat{\tau }_{\mathrm{Rad}}]=\frac{1}{n^{2}}\|\mathbf{\mu}\|^{2}. \tag{1.2}\]
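Before turning to covariate-aware designs, the baseline estimator and the identity (1.2) are easy to reproduce numerically. The snippet below is a small self-contained simulation with synthetic potential outcomes (not data from any study); it compares the Monte Carlo variance of \(\widehat{\tau}_{\mathrm{Rad}}\) with \(\|\mathbf{\mu}\|^{2}/n^{2}\).

```python
import numpy as np

def horvitz_thompson(a, b, z):
    """Horvitz-Thompson ATE estimate when P[z_i = +1] = P[z_i = -1] = 1/2."""
    n = len(a)
    return (np.sum(a[z == 1] / 0.5) - np.sum(b[z == -1] / 0.5)) / n

rng = np.random.default_rng(0)
n = 2000
a = rng.normal(1.0, 1.0, n)                 # synthetic potential outcomes under treatment A
b = rng.normal(0.0, 1.0, n)                 # ... and under treatment B
mu = a + b

# Monte Carlo over the i.i.d. (Rademacher) design
draws = [horvitz_thompson(a, b, rng.choice([-1, 1], size=n)) for _ in range(5000)]
print(np.var(draws), np.sum(mu**2) / n**2)  # the two numbers should be close, cf. Eq. (1.2)
```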
The setting that we are interested in here is when in addition to our observation of one of the potential outcomes \(a_{i}\) or \(b_{i}\) for the \(i\)-th unit, we also observe a covariate vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), for each unit \(i\in[n]\). A natural question that arises at this point is
_whether it is possible to use the information in the covariates to improve the Horvitz-Thompson Estimator?_
### Gram-Schmidt Walk Design
Recently, Harshaw et al. (2022) proposed a solution to the above-posed question by using the Gram-Schmidt Walk algorithm from Bansal et al. (2018) to change the sampling strategy of the design vector \(\mathbf{z}\). This algorithm is reviewed in Section 1.3. The authors obtain a random vector \(\mathbf{z}\in\{\pm 1\}^{n}\) (final output of the so-called Gram-Schmidt walk), which no longer consists of i.i.d. Rademacher entries. However, they consider the same exact form of the Horvitz-Thompson estimator as in (1.1). They show the following mean squared error bound for their estimator \(\widehat{\tau}_{\mathrm{gs}}\).
**Theorem 1.1**.: (Harshaw et al., 2022, Theorem 4.1) Let \(X\) be an \(n\times d\) covariate matrix with rows \(\{\mathbf{x}_{i}\}_{i=1}^{n}\). Let \(\xi:=\max_{i\in[n]}\|\mathbf{x}_{i}\|\) and \(\phi\in(0,1)\) be an algorithm parameter, called the robustness parameter, fixed beforehand. Then, for the Horvitz-Thompson estimator based on the GSW design with parameter \(\phi\), we have the following upper bound on the mean squared error:
\[n\,\mathds{E}[\widehat{\tau}_{\mathrm{gs}}-\tau]^{2}\leqslant\inf_{\mathbf{\beta }\in\mathbb{R}^{d}}\left(\frac{1}{\phi n}\|\mathbf{\mu}-X\mathbf{\beta}\|^{2}+\frac{ \xi^{2}\|\mathbf{\beta}\|^{2}}{(1-\phi)n}\right). \tag{1.3}\]
Suppose we choose \(\mathbf{\beta}=\mathbf{\beta}_{\text{ls}}\) such that \(X\mathbf{\beta}_{\text{ls}}=\operatorname{Proj}_{\operatorname{ColSp}(X)}(\mathbf{\mu})\), where \(\operatorname{Proj}_{\operatorname{ColSp}(X)}\) is the (orthogonal) projector onto the column space of \(X\). The first term on the right-hand side above will then scale like
\[\frac{1}{n}\|\mathbf{\mu}-\operatorname{Proj}_{\operatorname{ColSp}(X)}(\mathbf{\mu})\|^ {2}\]
which is typically \(\Theta(1)\) unless the (sum potential) outcome vector \(\mathbf{\mu}\) lies "too close" to \(\operatorname{ColSp}(X)\). Also, we can expect \(\xi^{2}\) and \(\|\mathbf{\beta}\|^{2}\) to be \(O(d)\) as they are squared norms of \(d\)-dimensional vectors. Therefore, the second term scales like \(O(d^{2}/n)\), which, if \(d=o(\sqrt{n})\), is of lower order than the first term. Therefore, in such a regime, the above theorem ensures that the Horvitz-Thompson estimator \(\widehat{\tau}_{\text{gs}}\) based on the Gram-Schmidt Walk design (setting \(\phi\) very close to \(1\)) satisfies the following two qualitative properties when compared to \(\widehat{\tau}_{\text{Rad}}\) (see (1.2) above).
1. If the covariates are predictive of the outcome vector \(\mathbf{\mu}\), _i.e.,_\(\|\mathbf{\mu}-\operatorname{Proj}_{\operatorname{ColSp}(X)}(\mathbf{\mu})\|^{2}\) is significantly smaller than \(\|\mathbf{\mu}\|^{2}\) then \(\widehat{\tau}_{\text{gs}}\) has significantly smaller mean squared error than \(\widehat{\tau}_{\text{Rad}}\).
2. However, even if the covariates are not predictive of \(\mathbf{\mu}\), since \(\|\mathbf{\mu}-\operatorname{Proj}_{\operatorname{ColSp}(X)}(\mathbf{\mu})\|^{2}\) is bounded by \(\|\mathbf{\mu}\|^{2}\) and the second term in the right-hand side of (1.3) is negligible compared to the first term, mean squared error of \(\widehat{\tau}_{\text{gs}}\) is never too much greater than the mean squared error of \(\widehat{\tau}_{\text{Rad}}\).
The first property has been termed as _Covariate Balance_ and the second property has been termed as _Robustness_ in Harshaw et al. (2022). In this way, the Horvitz-Thompson estimator using the Gram-Schmidt Walk (GSW) design achieves both covariate balance and robustness. It has also been argued in Harshaw et al. (2022) that the GSW design enjoys certain advantages over other existing design approaches in the causal inference literature such as rerandomization and designs based on matching pairs. Overall, it is probably fair to say that the GSW design, although a recent entrant to the causal inference design toolbox, has already become one of its prominent tools. In the next section, we review the GSW design algorithm; see Bansal et al. (2018), Harshaw et al. (2022) for more detailed discussions on the algorithm.
### Review of Gram-Schmidt walk algorithm
We briefly describe the Gram-Schmidt random walk design in this section using the so-called _randomized pivot ordering_. See Section 3 and Section A1.1 in Harshaw et al. (2022) for further details. Define
\[\mathbf{B}:=\begin{bmatrix}\sqrt{\phi}I_{n}\\ \xi^{-1}\sqrt{1-\phi}X^{\intercal}\end{bmatrix} \tag{1.4}\]
where \(\xi\coloneqq\max_{i\in[n]}\|\mathbf{x}_{i}\|\). We start with \(\mathcal{A}_{1}^{\text{gs}}\coloneqq[n]\) and \(\mathbf{z}_{0}^{\text{gs}}=\mathbf{0}\in\mathbb{R}^{n}\). The algorithm then proceeds as follows at each round \(t\in[n]\) (a code sketch of these steps is given after the list):
1. If \(p_{t-1}^{\text{gs}}\notin\mathcal{A}_{t}^{\text{gs}}\), choose a pivot \(p_{t}^{\text{gs}}\) uniformly at random from \(\mathcal{A}_{t}^{\text{gs}}\), otherwise set \(p_{t}^{\text{gs}}=p_{t-1}^{\text{gs}}\).
2. Compute a _step direction_\(\mathbf{u}_{t}^{\text{gs}}\in\mathbb{R}^{n}\) as \[\mathbf{u}_{t}^{\text{gs}}\leftarrow\operatorname{argmin}_{\mathbf{u}} \|\mathbf{B}\mathbf{u}\|^{2}\] subject to \[\mathbf{u}[i]=0\text{ for all }i\notin\mathcal{A}_{t}^{\text{gs}}\] \[\mathbf{u}[p_{t}^{\text{gs}}]=1.\]
3. Setting \[\Delta:=\{\delta\in\mathbb{R}:\mathbf{z}_{t-1}^{\text{gs}}+\delta\mathbf{u}_{t}^{\text {gs}}\in[-1,1]^{n}\},\]
let \(\delta^{+}:=|\sup\Delta|\) and \(\delta^{-}:=|\inf\Delta|\). Next define \(\delta_{t}^{\text{gs}}\) as follows:
\[\delta_{t}^{\text{gs}}=\begin{cases}\delta^{+}\text{ with probability }\frac{\delta^{-}}{\delta^{+}+\delta^{-}}\\ -\delta^{-}\text{ with probability }\frac{\delta^{+}}{\delta^{+}+\delta^{-}}\end{cases}\]
4. Now update \[\mathbf{z}_{t}^{\text{gs}}=\mathbf{z}_{t-1}^{\text{gs}}+\delta_{t}^{\text{gs}}\mathbf{u}_{t}^{\text{gs}} \tag{1.5}\] and set \(\mathcal{A}_{t+1}^{\text{gs}}=\mathcal{A}_{t}^{\text{gs}}\setminus\{i\in[n]:|\mathbf{z}_{t}^{\text{gs}}[i]|=1\}\), i.e., units whose assignments have been frozen at \(\pm 1\) are removed from the active set.
5. Increment the index \(t\gets t+1\) and go to 1, unless \(t=n\).
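The five steps above translate almost line by line into code. The sketch below is an illustrative NumPy re-implementation written by us, not the authors' reference code; the numerical tolerance and the use of a least-squares solve for the step direction are our own implementation choices.

```python
import numpy as np

def gsw_design(X, phi=0.5, rng=None):
    """Gram-Schmidt Walk design with randomized pivot order (illustrative sketch).

    X: (n, d) covariate matrix; phi: robustness parameter in (0, 1).
    Returns an assignment vector z in {-1, +1}^n.
    """
    rng = rng or np.random.default_rng()
    n, _ = X.shape
    xi = np.max(np.linalg.norm(X, axis=1))
    B = np.vstack([np.sqrt(phi) * np.eye(n),
                   np.sqrt(1 - phi) * X.T / xi])            # augmented matrix (1.4)
    z = np.zeros(n)
    alive = np.ones(n, dtype=bool)                          # the active set A_t
    pivot = rng.choice(np.flatnonzero(alive))
    while alive.any():
        if not alive[pivot]:                                # step 1: refresh the pivot
            pivot = rng.choice(np.flatnonzero(alive))
        # Step 2: u[pivot] = 1, u = 0 off the active set, minimizing ||B u||^2
        others = np.flatnonzero(alive & (np.arange(n) != pivot))
        u = np.zeros(n)
        u[pivot] = 1.0
        if others.size:
            w, *_ = np.linalg.lstsq(B[:, others], -B[:, pivot], rcond=None)
            u[others] = w
        # Step 3: largest steps keeping z + delta*u inside [-1, 1]^n
        with np.errstate(divide='ignore', invalid='ignore'):
            up = np.where(u > 0, (1 - z) / u, np.where(u < 0, (-1 - z) / u, np.inf))
            lo = np.where(u > 0, (-1 - z) / u, np.where(u < 0, (1 - z) / u, -np.inf))
        d_plus, d_minus = np.min(up[alive]), -np.max(lo[alive])
        # Randomized step with E[delta] = 0
        delta = d_plus if rng.random() < d_minus / (d_plus + d_minus) else -d_minus
        # Step 4: update z and freeze units that reached +-1
        z = np.clip(z + delta * u, -1.0, 1.0)
        alive = np.abs(z) < 1.0 - 1e-10
    return np.sign(z)
```

Plugging the resulting \(\mathbf{z}\) into the Horvitz-Thompson formula from Section 1.1 then gives \(\widehat{\tau}_{\mathrm{gs}}\).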
### Central Limit Theorem
Apart from estimating the ATE, it is also of interest to give confidence intervals for the ATE. Towards this end, Harshaw et al. (2022) also studied the distributional properties of \(\widehat{\tau}_{\text{gs}}-\tau\). In particular, they showed that \(\widehat{\tau}_{\text{gs}}-\tau\) is a subgaussian-\(\sigma\) random variable, where \(\sigma\) is given by the right-hand side in (1.3). As shown in Harshaw et al. (2022), this fact allows one to construct confidence intervals valid for any sample size. Moreover, the authors also derived a central limit theorem (CLT) for \(\widehat{\tau}_{\text{gs}}\) and showed how this theorem can be used (in the large sample case) to construct confidence intervals that are narrower than those based on concentration inequalities for a subgaussian random variable. The CLT in Harshaw et al. (2022) was proved for the version of the Gram-Schmidt Walk design which uses a _fixed_ ordering of the pivots. It is based on somewhat strong assumptions on the matrix \(X\), along with the requirement that \(n^{2}\operatorname{Var}[\widehat{\tau}_{\text{gs}}]\), _i.e.,_ the (scaled) variance of the estimator _itself_, be at least \(cn\) (as in the i.i.d. case), which is _unknown_ a priori in relation to the _parameters_ \(\mathbf{\mu}\) and \(X\). The authors there conjectured, see (Harshaw et al., 2022, Section 6.3), that the CLT should also hold for the version which uses _randomized pivot ordering_. The main focus of our article is to prove this conjecture under minimal and global assumptions involving _only_ the design matrix \(X\) and the outcome vector \(\mathbf{\mu}\).
### The main results
Recall that the covariate matrix \(X\) and the outcome vector \(\mathbf{\mu}\) both naturally depend on the number of units \(n\). Similarly the number of covariates \(d\) and the robustness parameter \(\phi\) may also depend on \(n\). In the sequel, we let
\[\mathbf{\mu}=X\mathbf{\beta}_{\text{ls}}+\mathbf{v}\text{ with }\mathbf{v}^{\intercal}X=\mathbf{0}, \tag{1.6}\]
_i.e.,_ \(\mathbf{v}\) is the orthogonal projection of \(\mathbf{\mu}\) onto \(\operatorname{ColSp}(X)^{\perp}\). We are now ready to state the three regularity conditions which we will use to show the asymptotic normality. These are "simplified" versions of much more _general_ assumptions under which we can derive the CLT in Theorem 1.3 below.
**Assumption 1.1** (Outcome regularity).: \(\frac{\|\mathbf{v}\|_{\infty}}{\|\mathbf{v}\|}\leqslant\frac{C}{\log^{3+c}(n)}\) _whereas \(\|\mathbf{\beta}_{\text{ls}}\|^{2}\leqslant C\log n\) for some positive constants \(c\) and \(C\)._
**Assumption 1.2** (Covariate regularity).: _The smallest singular value of the covariate matrix \(X\) satisfies \(\sigma_{\min}(X)\geqslant\sqrt{cn}\) for some positive constant \(c\). Also the maximum row norm of the covariate matrix, \(\xi=\max_{i\in[n]}\|\mathbf{x}_{i}\|\), is bounded as \(\xi^{2}\leqslant Cd\log n\) for some positive constant \(C\)._
**Assumption 1.3** (Non-degeneracy).: \(\|\mathbf{v}\|^{2}\geqslant\log^{2+c}(n)\) _for some positive constant \(c\)._
A detailed discussion of the necessity and minimality of these assumptions in comparison to their counterparts in Harshaw et al. (2022) is provided in the next subsection.
**Theorem 1.2**.: _Suppose that Assumptions 1.1-1.3 hold. Also, suppose that \(d\) is fixed and \(\phi\) is bounded away from \(0\) and \(1\). Then the limiting distribution of the Horvitz-Thompson estimator \(\widehat{\tau}_{\mathrm{gs}}\) under the Gram-Schmidt Walk design is normal:_
\[\frac{\widehat{\tau}_{\mathrm{gs}}-\tau}{\sqrt{\mathrm{Var}[\widehat{\tau}_{\mathrm{gs}}]}}\xrightarrow[n\to\infty]{\mathrm{law}}N(0,1). \tag{1.7}\]
_Furthermore, in this case, we have the following asymptotic formula for the variance:_
\[\lim_{n\to\infty}\frac{\mathrm{Var}[\widehat{\tau}_{\mathrm{gs}}]}{\|\mathbf{v}\|^{2}/n^{2}}=1 \tag{1.8}\]
_where \(\mathbf{v}=\mathrm{Proj}_{\mathrm{ColSp}(X)^{\perp}}(\mathbf{\mu})\) is the orthogonal projection of the outcome vector \(\mathbf{\mu}\) onto \(\mathrm{ColSp}(X)^{\perp}\)._
**Remark 1.1**.: _In Theorem 1.2, we assume conditions 1.1-1.3 to keep the exposition simpler. One can prove a CLT under other assumptions as long as the error term converges to zero. Also, one can apply a similar idea for the case when \(\mathbb{P}[z_{i}=1],\mathbb{P}[z_{i}=-1]\in(\varepsilon,1-\varepsilon)\) for some \(\varepsilon\in(0,1)\) fixed._
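As a sanity check on Theorem 1.2 and on the variance formula (1.8), one can simulate the design on synthetic data; the sketch below reuses the hypothetical `gsw_design` helper from the algorithm review above, builds an arbitrary \(X\) and \(\mathbf{\mu}=X\mathbf{\beta}_{\mathrm{ls}}+\mathbf{v}\) as in (1.6), and uses the identity \(\widehat{\tau}_{\mathrm{gs}}-\tau=\frac{1}{n}\langle\mathbf{z}_{n}^{\mathrm{gs}},\mathbf{\mu}\rangle\) recalled later in (1.13). It is only an illustration, not part of the formal development, and the sample sizes are kept small because the helper above is deliberately naive.

```python
import numpy as np

# assumes the illustrative gsw_design(X, phi, rng) sketch defined earlier is in scope
n, d, phi, reps = 100, 3, 0.5, 200
rng = np.random.default_rng(0)

X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
v = rng.normal(size=n)
v -= X @ np.linalg.lstsq(X, v, rcond=None)[0]        # project v onto ColSp(X)^perp, cf. (1.6)
mu = X @ beta + v

# each entry is tau_hat - tau = <z, mu> / n for one independent draw of the design
errors = np.array([gsw_design(X, phi=phi, rng=rng) @ mu / n for _ in range(reps)])

print("n^2 Var / ||v||^2 :", n**2 * errors.var() / (v @ v))     # roughly 1, cf. (1.8)
print("skewness of errors:", ((errors - errors.mean())**3).mean() / errors.std()**3)
```

With moderate `n` the empirical ratio is only roughly 1, since (1.8) is an asymptotic statement; the skewness should be close to 0, consistent with the limiting normality.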
### Main contributions and comparisons with literature
In this section, we briefly discuss the major contributions of the article.
1. **Generality of assumptions on \(\mathbf{\mu}\) and \(X\).** We compare our assumptions vis-a-vis the assumptions made in (Harshaw et al., 2022, Section 6.3) facilitating the CLT. 1.1 _Outcome Regularity._ The analogous assumption in Harshaw et al. (2022), namely Assumption 6.4, is expressed in terms of the \(\ell^{\infty}\)-norm of the outcome vector \(\mathbf{\mu}\): \[\|\mathbf{\mu}\|_{\infty}=O(\log^{c}(n)).\] If we assume that \(\|\mathbf{\mu}\|^{2}\geqslant cn\) in the "typical" scenario, we can reinterpret this condition as \[\frac{\|\mathbf{\mu}\|_{\infty}}{\|\mathbf{\mu}\|}=O\left(\frac{\log^{c}(n)}{\sqrt{n}}\right). \tag{1.9}\] On the other hand, we formulate our condition in terms of the "residual" outcome vector \(\mathbf{v}=\mathrm{Proj}_{\mathrm{ColSp}(X)^{\perp}}(\mathbf{\mu})\) which turns out to be the right object to look at based on our analysis. Note however that while the covariate matrix \(X\) is known to the experimenter prior to the treatment assignment, \(\mathbf{\mu}\) and hence \(\mathbf{v}\) is unavailable to her. Compared to (1.9), our assumption on \(\mathbf{v}\) as a vector satisfying \[\frac{\|\mathbf{v}\|_{\infty}}{\|\mathbf{v}\|}=O\left(\frac{1}{\log^{3+c}(n)}\right)\] is significantly weaker. This outcome regularity assumption essentially puts a density constraint on \(\mathbf{v}\); _i.e.,_\(\mathbf{v}\) cannot be too sparse. The assumption in Harshaw et al. (2022) interpreted as (1.9) above basically says that \(\mathbf{\mu}\) should have _effectively_\(\Omega\left(\frac{n}{\log^{c}(n)}\right)\) many non-zero entries of roughly equal size. In comparison, our assumption allows \(\mathbf{v}\) which is very sparse, in the sense that it can be supported on \(\log^{c}(n)\) many elements of roughly equal size. It is not difficult to see that the effective support of \(\mathbf{\mu}\) or \(\mathbf{v}\) should diverge to \(\infty\) at some rate for any CLT to hold. Our assumption on \(\mathbf{\beta}_{\mathrm{ls}}\) is very mild and can in fact be easily deduced from conditions like \(\|\mathbf{\mu}\|_{\infty}^{2}=O(\log n)\) and Assumption 1.2 on \(X\).
1.2 _Covariate Regularity._ Assumption 6.5 in Harshaw et al. (2022) stipulates that \(\sigma_{\min}(X_{m})\geqslant\sqrt{cm}\) for all \(m\geqslant n^{1/2-\epsilon}\) where \(X_{m}\) is the submatrix of dimensions \(m\times d\) given by the first \(m\) rows of \(X\) after ordering them according to the ordering of pivots. In our understanding, this assumption is necessary essentially because they work with a fixed pivot ordering. Since we work with the _randomized pivot ordering_, this randomization actually allows us to only require a _global_ condition on the full covariate matrix instead of its many submatrices. We believe that our Assumption 1.2 on the covariate matrix \(X\) is both general and minimal. Assumption 6.5 in Harshaw et al. (2022) also requires a logarithmic upper bound on the so-called _incoherence_ of the submatrix \(X_{m}\) for all \(m\geqslant n^{1/2-\epsilon}\) which we _do not need at all_. Finally, our assumption that the maximum squared \(\ell^{2}\)-norm of \(\mathbf{x}_{i}\)'s (\(\in\mathbb{R}^{d}\)) is at most \(Cd\log n\) is also present in (Harshaw et al., 2022, Assumption 6.5) and is motivated by similar reasons.
1.3 _Non-degeneracy._ The non-degeneracy criterion presented by (Harshaw et al., 2022, Assumption 6.6) is the blanket assumption that \(n^{2}\operatorname{Var}[\widehat{\tau}_{\text{gs}}]\) is at least \(cn\) like in the case of i.i.d. design. In particular, it is not clear what this assumption means in terms of the problem parameters \(\mathbf{\mu}\) and \(X\). In comparison, our analysis yields that \(n^{2}\operatorname{Var}[\widehat{\tau}_{\text{gs}}]\) is _asymptotically equivalent_ to \(\|\mathbf{v}\|^{2}\) (1.8). Consequently, our Assumption 1.3 suggests a non-degeneracy condition involving solely the super-logarithmic divergence of \(\|\mathbf{v}\|^{2}\) which is a known function of the problem parameters \(\mathbf{\mu}\) and \(X\). Also since \(n^{2}\operatorname{Var}[\widehat{\tau}_{\text{gs}}]\) is asymptotically equivalent to \(\|\mathbf{v}\|^{2}\), our condition is _significantly weaker_ compared to (Harshaw et al., 2022, Assumption 6.6) as it replaces the linear lower bound in the latter with only a poly-logarithmic bound.
2. **A precise and smaller asymptotic variance.** A very important byproduct of our analysis is that we obtain an asymptotically _exact_ variance control for the estimator \(\widehat{\tau}_{\text{gs}}\) as given by (1.8). It is not difficult to see that (see for instance the brief discussion following Theorem 1.1) the best possible upper bound on \(n^{2}\operatorname{Var}[\widehat{\tau}_{\text{gs}}]\) allowed by Theorem 1.1 is \(\frac{1}{\phi}\|\mathbf{v}\|^{2}\). Since \(\phi\in(0,1)\), our formula (1.8) shows that the _true_ variance is _smaller_ by the factor \(\phi\) and as such is _independent_ of the robustness parameter \(\phi\). This is particularly beneficial for statistical applications.
3. **Power of randomization.** As already mentioned, we prove our CLT for the GSW design with randomized pivot ordering as opposed to the fixed pivot ordering in Harshaw et al. (2022) where it was posed as a conjecture. This randomization plays a crucial role in our analysis and enables us to derive our result under such minimal conditions as discussed above. We harness its strength in this paper in three different (but related) ways. Firstly, randomization lies at the heart of our definition of the _skeletal_ process which carries most of our analysis and yields the correct formula for the asymptotic variance in (1.8) (see Section 2.1 below). Secondly, we use it to take advantage of the concentration phenomenon for the sum of a random sample without replacement of size \(k\) out of \(n\) unit-rank matrices when \(1\ll k\ll n\). We use the Stein's method for exchangeable pairs for this purpose (see Proposition 3.2). Finally, we use some classical moments formulae for the sample mean of a random sample without replacement (see Proposition 3.1) propelled by the need to exploit the orthogonality between \(\mathbf{v}\) and \(\operatorname{ColSp}(X)\). All of these point towards the fact that the randomized pivot ordering is perhaps the "natural" setup to perform the GSW design.
### Proof sketch.
We deduce Theorem 1.2 as a special instance of a more general result. Below we let \(Y\) denote the matrix \(\xi^{-1}\frac{\sqrt{1-\phi}}{\sqrt{\phi}}\,X\) (the motivation for choosing this matrix is given below (2.1) in the next section).
**Theorem 1.3**.: _Let \(\phi\in(0,1)\) be bounded away from \(0\) and \(d\leqslant n\). Also let_
\[\kappa\coloneqq\tfrac{n}{\sigma_{\min}(Y)^{2}}=\tfrac{n}{\lambda_{\min}(Y^{\intercal}Y)}\]
_(\(\lambda_{\min}(\cdot)\) being the smallest eigenvalue) and \(\mathbf{v}\in\mathbb{R}^{n}\) be any vector satisfying \(\mathbf{v}^{\intercal}X=0\), i.e., \(\mathbf{v}\) is orthogonal to \(\operatorname{ColSp}(X)\). Then we have, with \(\mathbf{z}_{n}^{\operatorname{gs}}\) as given by (1.5),_
\[\frac{\langle\mathbf{z}_{n}^{\operatorname{gs}},\mathbf{v}\rangle}{\sqrt{\operatorname {Var}\langle\mathbf{z}_{n}^{\operatorname{gs}},\mathbf{v}\rangle}}\xrightarrow[n\to \infty]{\text{law}}N(0,1) \tag{1.10}\]
_provided_
\[\lim_{n\to\infty}d^{1/2}\,\frac{\|\mathbf{v}\|_{\infty}}{\|\mathbf{v}\|}\,\kappa^{2} \log n=0. \tag{1.11}\]
_Furthermore, in this case, we have the following asymptotic formula for the variance:_
\[\lim_{n\to\infty}\frac{\operatorname{Var}\langle\mathbf{z}_{n}^{\operatorname{gs }},\mathbf{v}\rangle}{\|\mathbf{v}\|^{2}}=1. \tag{1.12}\]
Proof of Theorem 1.2.: We assume Theorem 1.3 and proceed to deduce Theorem 1.2 from it. Recall that
\[Y=\xi^{-1}\tfrac{\sqrt{1-\phi}}{\sqrt{\phi}}\,X.\]
Since \(d\) is bounded and \(\phi\) is bounded away from \(0\) and \(1\), Assumptions 1.1-1.2 imply condition (1.11) for \(\mathbf{v}=\operatorname{Proj}_{\operatorname{ColSp}(X)^{\perp}}(\mathbf{\mu})\) with \(\kappa\leqslant C\log(n)\) and hence (1.10) and (1.12) hold for this choice of \(\mathbf{v}\) by Theorem 1.3. On the other hand, from (Harshaw et al., 2022, Lemma 1.1) one can write
\[\widehat{\tau}_{\operatorname{gs}}-\tau=\frac{1}{n}\langle\mathbf{z}_{n}^{ \operatorname{gs}},\mathbf{\mu}\rangle=\frac{1}{n}\langle\mathbf{z}_{n}^{ \operatorname{gs}},X\mathbf{\beta}_{\operatorname{ls}}\rangle+\frac{1}{n} \langle\mathbf{z}_{n}^{\operatorname{gs}},\mathbf{v}\rangle. \tag{1.13}\]
However, in view of Theorem 1.1 applied to the case when \(\mathbf{\mu}=X\mathbf{\beta}_{\operatorname{ls}}\), we can bound
\[\operatorname{Var}\langle\mathbf{z}_{n}^{\operatorname{gs}},X\mathbf{\beta}_{ \operatorname{ls}}\rangle\leqslant\frac{\xi^{2}\|\mathbf{\beta}_{\operatorname{ ls}}\|^{2}}{(1-\phi)}.\]
But since \(\|\mathbf{\beta}_{\operatorname{ls}}\|^{2}\leqslant C\log n\) and \(\xi^{2}\leqslant C\log n\) by Assumptions 1.1 and 1.2 respectively (recall that \(d\) is bounded), we immediately obtain
\[\operatorname{Var}\langle\mathbf{z}_{n}^{\operatorname{gs}},X\mathbf{\beta}_{ \operatorname{ls}}\rangle\leqslant\frac{C\log^{2}(n)}{1-\phi}\]
for some constant \(C>0\). Therefore, from (1.12) and Assumption 1.3, we immediately deduce (1.8) as well as
\[\frac{\langle\mathbf{z}_{n}^{\operatorname{gs}},X\mathbf{\beta}_{\operatorname{ls}} \rangle}{\sqrt{\operatorname{Var}\langle\mathbf{z}_{n}^{\operatorname{gs}},\mathbf{v} \rangle}}\text{ converges to }0\text{ in probability.}\]
Together with (1.10), this yields the CLT for \(\widehat{\tau}_{\operatorname{gs}}\) in view of (1.13).
**Remark 1.2** (Relaxing the assumptions for Theorem 1.2).: _It is clear from the statement of Theorem 1.3 as well as the proof above that there are flexibilities for relaxing or modifying the Assumptions 1.1-1.3 as might be suitable in some situations._
The rest of the article is devoted to proving Theorem 1.3. It is not difficult to see that \(\{\langle\mathbf{z}_{t}^{\text{gs}},\mathbf{v}\rangle\}_{t=0}^{n}\) is a martingale sequence (see Section 2.1 below) and we will be using the martingale central limit theorem to prove Theorem 1.3. However, running this recipe for the process \(\{\langle\mathbf{z}_{t}^{\text{gs}},\mathbf{v}\rangle\}_{t=0}^{n}\) is far from being straightforward. The GSW design introduces smeared but irregular dependence across all \(t\in[n]\) which makes it very hard to control the martingale differences. Herein enters the _skeletal_ process \(\{M_{t}\}_{t=0}^{n}\) introduced in Section 2.1, which renders this dependence "well-behaved" and makes the explicit computation of the variance almost immediate. Roughly speaking, \(M_{n}\) is the projection along \(\mathbf{v}\) of an i.i.d. Rademacher linear combination of vectors obtained by applying the classical Gram-Schmidt process to a uniformly random permutation of the columns of a matrix \(B\). In Section 5 we prove the CLT for \(M_{n}\) and compute its variance in the process. This part involves, among other important ideas, careful application of the concentration for the sum of unit-rank matrices in a random sample using Stein's method for _exchangeable pairs_.
However, one still needs to transfer this result to our original random variable \(\langle\mathbf{z}_{n}^{\text{gs}},\mathbf{v}\rangle\). To this end, we first identify a time point \(t=n-m\) until which the two underlying processes stay very close to each other on some good event \(\mathcal{G}\) occurring with high probability. Then we control the tail ends of these two processes, _i.e.,_ the difference between time \(n\) and \(n-m\). To facilitate these steps, we introduce yet another intermediate process, namely \(\{\widetilde{M}_{t}\}_{t=0}^{n}\) in Section 2.1. This part of our analysis is the content of Section 4 and hinges on some of the ideas used in Section 5.
### Notations and convention for constants
We use boldface to denote vectors. For any natural number \(n\), \([n]\) denotes the set of integers \(\{1,\ldots,n\}\). For any \(m\times n\) matrix \(D\) and subsets \(\mathcal{A}_{1}\subset[m]\) and \(\mathcal{A}_{2}\subset[n]\), we use \(D[\mathcal{A}_{1},\mathcal{A}_{2}]\) to denote the submatrix of \(D\) formed by the rows and columns with indices in \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) respectively. If \(\mathcal{A}_{1}=[m]\) or \(\mathcal{A}_{2}=[n]\), we denote the corresponding submatrix as \(D[:,\,\mathcal{A}_{2}]\) and \(D[\mathcal{A}_{1},:]\) respectively. Similar notations are also used for vectors. \(D^{\intercal}\) denotes the transpose of the matrix \(D\). \(\operatorname{ColSp}(D)\) denotes the column space of \(D\), and \(\operatorname{Tr}(D)\) denotes its trace, _i.e.,_ the sum of diagonal elements of \(D\). The _Frobenius norm_ of \(D\), _i.e.,_ \(\sqrt{\operatorname{Tr}(D^{\intercal}D)}=\sqrt{\operatorname{Tr}(DD^{\intercal})}\) (\(D\) is always real in our case) is denoted by \(\|D\|_{\operatorname{Frob}}\) whereas its _operator norm_, _i.e.,_ the maximum singular value of \(D\) is denoted as \(\|D\|_{\operatorname{op}}\). For any linear subspace \(S\subset\mathbb{R}^{n}\), \(\operatorname{Proj}_{S}\) denotes the orthogonal projector onto \(S\). We use \(I_{n}\) to denote the \(n\times n\) identity matrix.
Our convention regarding constants is the following. Throughout, \(c,c^{\prime},C,C^{\prime},\ldots\) denote positive constants that may change from place to place. It might be helpful to think of upper and lower-case letters as denoting large and small constants respectively. Numbered constants are defined the first time they appear and remain fixed thereafter. All constants are assumed to be absolute unless explicitly mentioned otherwise. To avoid the cluttering of notations, we suppress the implicit dependence on \(n\) in the problem parameters \(\mathbf{\mu},X\), etc., as well as various stochastic processes that we define in the course of our analysis.
**Note added later:** After the article appeared online, by private communication, we came to know that the authors in Harshaw et al. (2022) are in the process of extending their CLT in a similar direction, albeit under possibly different assumptions.
## 2 Definition of the skeletal process and the relevant martingales
Recall from the introduction that the Gram-Schmidt Walk algorithm furnishes several stochastic processes defined on the same probability space \((\Omega,\mathcal{A},\mathbb{P})\) (say); these are the fractional assignments \(\{\mathbf{z}_{t}^{\text{gs}}\}_{t=0}^{n}\), the sequence of pivots \(\{p_{t}^{\text{gs}}\}_{t=0}^{n}\), the associated sequence of active sets \(\{\mathcal{A}_{t}^{\text{gs}}\}_{t=0}^{n}\), the sequence of direction vectors \(\{\mathbf{u}_{t}^{\text{gs}}\}_{t=0}^{n}\in\mathbb{R}^{n}\) and finally the sequence of step sizes
\(\{\delta_{t}^{\text{gs}}\}_{t=0}^{n}\). In this section we will define, on the same underlying probability space \((\Omega,\mathcal{A},\mathbb{P})\), a _parallel_ process for each of these processes. We will refer to these processes collectively and -- with a slight abuse of notation -- individually as the _skeletal (Gram-Schmidt) process(es)_. In fact, the only CLT we will prove _directly_ in this paper is for the random variables \(\langle\mathbf{z}_{n},\mathbf{v}\rangle\) where \(\{\mathbf{z}_{t}\}_{t=0}^{n}\) is the skeletal process parallel to \(\{\mathbf{z}_{t}^{\text{gs}}\}_{t=0}^{n}\). This is also the reason why we chose relatively heavier notations for the processes associated with the Gram-Schmidt algorithm in the introduction so that we can preserve the lighter notations for the skeletal processes.
To this end let us start with \(\mathbf{z}_{0}^{\text{gs}}=\mathbf{z}_{0}=\mathbf{0}\), \(\mathcal{A}_{1}^{\text{gs}}=\mathcal{A}_{1}=[n]\) and a sequence \(U_{1},\ldots,U_{n}\) of i.i.d. \(\operatorname{Unif}(0,1)\) random variables. In the sequel, we assume that _all_ the random variables are defined on a _common_ probability space \((\Omega,\mathcal{A},\mathbb{P})\). At each round \(t\in[n]\), we will create a number of "allied" pairs of random variables, namely the partial assignments \((\mathbf{z}_{t}^{\text{gs}},\mathbf{z}_{t})\) (vectors in \(\mathbb{R}^{n}\)), pivots \((p_{t}^{\text{gs}},p_{t})\) (elements of \([n]\)), the active sets \((\mathcal{A}_{t}^{\text{gs}},\mathcal{A}_{t})\) (subsets of \([n]\)), the directions \((\mathbf{u}_{t}^{\text{gs}},\mathbf{u}_{t})\) (vectors in \(\mathbb{R}^{n}\)) and the step sizes \((\delta_{t}^{\text{gs}},\delta_{t})\) (taking values in \([-2,2]\)). We slightly modify the matrix \(B\) from the introduction as follows (we use \(\mathbf{B}\) to denote the original definition from Harshaw et al. (2022))
\[B=\begin{bmatrix}I_{n}\\ Y\intercal\end{bmatrix} \tag{2.1}\]
where \(Y\coloneqq\xi^{-1}\frac{\sqrt{1-\phi}}{\sqrt{\phi}}\,X\) is an \(n\times d\) matrix with rank \(d\). Note that this matches the definition in (1.4) up to a rescaling by \(\frac{1}{\sqrt{\phi}}\) which does not affect the vectors \(\mathbf{z}_{t}^{\text{gs}}\) as they depend only on \(\operatorname{ColSp}(B)\) (revisit the algorithm in the introduction). Also since \(\xi=\max_{i\in[n]}\|\mathbf{x}_{i}\|\), it follows that
\[\max_{i\in[n]}\|\mathbf{y}_{i}\|\leqslant\frac{\sqrt{1-\phi}}{\sqrt{\phi}} \eqqcolon\zeta \tag{2.2}\]
Finally notice that
\[\mathbf{v}^{\intercal}Y=0 \tag{2.3}\]
as \(\mathbf{v}^{\intercal}X=0\) in view of (1.6).
Now suppose for some \(t\in[n]\), we have already defined these processes for all \(0\leqslant s<t\) (whenever appropriate) and that they are measurable relative to \(\mathcal{F}_{t-1}\subset\mathcal{A}\) where \(\mathcal{F}_{0}\) is the trivial \(\sigma\)-algebra. In order to define the processes at time \(t\), it will be helpful to introduce some new objects and notations.
For any non-empty \(\mathcal{A}\subset[n]\) and index \(p\in\mathcal{A}\), we define \(\mathbf{u}(p,\mathcal{A})\in\mathbb{R}^{n}\) to be a vector \(\mathbf{u}\) satisfying
\[\mathbf{u}\left[[n]\setminus\mathcal{A}\right] =\mathbf{0},\mathbf{u}[p]=1\text{ and } \tag{2.4}\] \[B[:,\,\mathcal{A}\setminus\{p\}]\,\mathbf{u}[\mathcal{A}\setminus\{p\}] =-\operatorname{Proj}_{\operatorname{ColSp}(B[:,\,\mathcal{A}\setminus\{p\}])}\,(\mathbf{b}_{p})\]
where \(\mathbf{b}_{p}\coloneqq B[:,p]\) and \(\operatorname{Proj}_{S}(\mathbf{v})\) denotes the orthogonal projection of \(\mathbf{v}\in\mathbb{R}^{n}\) onto the subspace \(S\). Since the matrix \(B\) has full column rank, it follows that the vector \(\mathbf{u}(p,\mathcal{A})\) is in fact unique. Notice that \(\mathbf{u}_{t}=\mathbf{u}(p_{t},\mathcal{A}_{t})\) in step 2 of our original Gram-Schmidt algorithm.
Next we introduce two matrices which we would need to state our augmented algorithm:
\[Y_{t}^{\text{gs}}\coloneqq Y[\mathcal{A}_{t}^{\text{gs}},:]\text{ and }Y_{t}\coloneqq Y[\mathcal{A}_{t},:]\text{ for }t\in[n]. \tag{2.5}\]
Notice that both \(\mathcal{A}_{t}^{\text{gs}}\) and \(\mathcal{A}_{t}\) are measurable relative to \(\mathcal{F}_{t-1}\) and therefore so are \(Y_{t}^{\text{gs}}\) and \(Y_{t}\). Let \(\{\varepsilon_{n}\}_{n=1}^{\infty}\) be a given decreasing sequence of numbers going to \(0\) with \(\varepsilon_{1}\leqslant 1/2\) (any number strictly less than \(1\) would do). Using these we now define two sequences of
events which will then be used below to demarcate the time until which the _superscripted_ and _unsuperscripted_ processes coincide. For any \(t\in[n]\), let
\[\begin{split}\mathcal{G}_{1,t}&\coloneqq\{\|\mathbf{z}_{t-1}[\mathcal{A}_{t}]\|_{\infty}<\varepsilon_{n}\}\text{ and }\\ \mathcal{G}_{2,t}&\coloneqq\left\{\left\|Y_{t}^{\intercal}Y_{t}-\frac{n-t+1}{n}Y^{\intercal}Y\right\|_{\mathrm{op}}\leqslant\frac{n-t+1}{2n}\lambda_{\min}\big{(}Y^{\intercal}Y\big{)}\right\}.\end{split} \tag{2.6}\]
Note that since \(\mathcal{A}_{t}\) is \(\mathcal{F}_{t-1}\) measurable, \(\mathcal{G}_{1,t}\) and \(\mathcal{G}_{2,t}\) are as well. Now we are ready to extend our processes into round \(t\). Below and in the remainder of the article, we let
\[C_{1}(\zeta)\coloneqq 1+\zeta^{2}(1+\zeta^{2}) \tag{2.7}\]
whose significance will be clear later (see, e.g., (3.2) in Lemma 3.1 and Lemma 3.3, both in Section 3).
**Case 1: If \(t\leqslant n-6C_{1}\zeta^{2}\kappa\) and for all \(s\in[t]\), both \(\mathcal{G}_{1,s}\) and \(\mathcal{G}_{2,s}\) occur and \(\mathcal{A}_{s}^{\mathrm{gs}}=\mathcal{A}_{s}\):**
* Choose a pivot \(p_{t}\) uniformly at random from \(\mathcal{A}_{t}^{\mathrm{gs}}\) and set \(p_{t}^{\mathrm{gs}}=p_{t}\).
* Define \[\mathbf{u}_{t}^{\mathrm{gs}}=\mathbf{u}_{t}=\mathbf{u}(p_{t}^{\mathrm{gs}},\mathcal{A}_{t}^{\mathrm{gs}})\] and set \[\Delta_{t}=\{\delta\in\mathbb{R}:\mathbf{z}_{t-1}^{\mathrm{gs}}+\delta\mathbf{u}_{t}^{\mathrm{gs}}\in[-1,1]^{n}\}. \tag{2.8}\] Now letting \(\delta_{t}^{+}=\sup\Delta_{t}\) and \(\delta_{t}^{-}=|\inf\Delta_{t}|\), define \(\delta_{t}^{\mathrm{gs}}=\delta_{t}\) as follows: \[\delta_{t}^{\mathrm{gs}}=\delta_{t}=\delta_{t}^{+}\,\mathbf{1}\left\{U_{t}\leqslant\frac{\delta_{t}^{-}}{\delta_{t}^{+}+\delta_{t}^{-}}\right\}-\delta_{t}^{-}\,\mathbf{1}\left\{U_{t}>\frac{\delta_{t}^{-}}{\delta_{t}^{+}+\delta_{t}^{-}}\right\}. \tag{2.9}\]
* Update \[\mathbf{z}_{t}^{\mathrm{gs}}=\mathbf{z}_{t}=\mathbf{z}_{t-1}^{\mathrm{gs}}+\delta_{t}^{\mathrm{gs}}\mathbf{u}_{t}^{\mathrm{gs}}=\mathbf{z}_{t-1}+\delta_{t}\mathbf{u}_{t}. \tag{2.10}\]
* Define \[\eta_{t}=\mathbf{1}\left\{U_{t}\leqslant 1/2\right\}-\mathbf{1}\left\{U_{t}>1/2\right\}. \tag{2.11}\]
* Update \[\mathcal{A}_{t+1}^{\mathrm{gs}}=\{i\in\mathcal{A}_{t}^{\mathrm{gs}}:|\mathbf{z}_{t}^{\mathrm{gs}}[i]|<1\}\text{ and }\mathcal{A}_{t+1}=\mathcal{A}_{t}-\{p_{t}\}. \tag{2.12}\]
**Case 2: Otherwise**:
* If \(\mathcal{A}_{t}^{\mathrm{gs}}=\emptyset\), we set \(\mathbf{z}_{t}^{\mathrm{gs}}=\mathbf{z}_{t},\mathcal{A}_{t+1}^{\mathrm{gs}}=\emptyset\), \(\mathbf{u}_{t}^{\mathrm{gs}}=\mathbf{0}\) and \(\delta_{t}^{\mathrm{gs}}=0\). Otherwise if \(p_{t-1}^{\mathrm{gs}}\in\mathcal{A}_{t}^{\mathrm{gs}}\) we set \(p_{t}^{\mathrm{gs}}=p_{t-1}^{\mathrm{gs}}\), or else choose a pivot \(p_{t}^{\mathrm{gs}}\) uniformly at random from \(\mathcal{A}_{t}^{\mathrm{gs}}\).
* With \(p_{t}^{\mathrm{gs}}\) already defined, update \(\mathbf{z}_{t}^{\mathrm{gs}},\mathcal{A}_{t+1}^{\mathrm{gs}},\mathbf{u}_{t}^{\mathrm{gs }},\delta_{t}^{\mathrm{gs}}\) in exactly the same way as in Case 1.
* Define \(\eta_{t}\) in exactly the same way as in Case 1.
* Choose a pivot \(p_{t}\) uniformly at random from \(\mathcal{A}_{t}\). Set \(\delta_{t}=\eta_{t}\), \(\mathbf{u}_{t}=\mathbf{u}(p_{t},\mathcal{A}_{t})\) and define \[\mathbf{z}_{t}=\mathbf{z}_{t-1}+\delta_{t}\mathbf{u}_{t}. \tag{2.13}\]
* Update \(\mathcal{A}_{t+1}=\mathcal{A}_{t}-\{p_{t}\}\).
Set,
\[\mathcal{F}_{t}=\sigma(\mathcal{F}_{t-1},U_{t},p_{t},p_{t}^{\mathrm{gs}})\text{ so that }p_{t}^{\mathrm{gs}},p_{t},\mathcal{A}_{t}^{\mathrm{gs}},\mathcal{A}_{t}\text{ etc. are all }\mathcal{F}_{t}\text{-measurable.} \tag{2.14}\]
**Observation 2.1**: _In view of the displays (2.8)-(2.10), it follows that \(\mathbf{z}_{t}\in[-1,1]^{n}\) for all \(t\in[n]\cup\{0\}\). Furthermore, from our rule of updating \(\mathcal{A}_{t+1}^{\mathrm{gs}}\) in (2.12) we have that \(\|\mathbf{z}_{t}^{\mathrm{gs}}[\mathcal{A}_{t+1}^{\mathrm{gs}}]\|_{\infty}<1\)._
**Observation 2.2**: _It is clear from the definition of \(\mathcal{A}_{t}\) that its elements are drawn at random without replacement from \([n]\). In other words, \(\mathcal{A}_{t}\) is distributed uniformly over all subsets of \([n]\) with cardinality \(n-t+1\)._
### Definitions of relevant Martingales
Since we want to prove the CLT for the random variable \(\langle\mathbf{z}_{n}^{\text{gs}},\mathbf{v}\rangle\), let us first consider the process \(\{M_{t}^{\text{gs}}\}_{t=0}^{n}\) defined as
\[M_{t}^{\text{gs}}=\langle\mathbf{z}_{t}^{\text{gs}},\mathbf{v}\rangle=\sum_{s\in[t]}\delta_{s}^{\text{gs}}\,\langle\mathbf{u}_{s}^{\text{gs}},\mathbf{v}\rangle, \tag{2.15}\]
which is a martingale adapted to the filtration \((\mathcal{F}_{t})_{t=0}^{n}\). Alongside it we define the analogous processes built from the skeletal quantities,
\[\widetilde{M}_{t}=\sum_{s\in[t]}\delta_{s}\,\langle\mathbf{u}_{s},\mathbf{v}\rangle\quad\text{and}\quad M_{t}=\sum_{s\in[t]}\eta_{s}\,\|B\mathbf{u}_{s}\|^{-1}\langle\mathbf{u}_{s},\mathbf{v}\rangle, \tag{2.16}\]
which are also martingales relative to \((\mathcal{F}_{t})_{t=0}^{n}\).

## 3 Some preliminary estimates

### Formulae for the step direction

For a non-empty set \(\mathcal{A}\subset[n]\) and a pivot \(p\in\mathcal{A}\), write \(\mathcal{A}^{-}\coloneqq\mathcal{A}\setminus\{p\}\), \(Y_{\star}\coloneqq Y[\mathcal{A},:]\), \(Y_{\star}^{-}\coloneqq Y[\mathcal{A}^{-},:]\) and \(D\coloneqq(I_{d}+Y_{\star}^{\intercal}Y_{\star})^{-1}\), and let \(\mathbf{u}=\mathbf{u}(p,\mathcal{A})\) be the step direction defined in (2.4).

**Lemma 3.1**.: _The step direction admits the explicit expression_
\[\mathbf{u}=\mathbf{e}_{p}-\frac{1}{1-\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}}\left(I_{n}[:,\mathcal{A}]\,Y_{\star}D\mathbf{y}_{p}-(\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p})\,\mathbf{e}_{p}\right) \tag{3.1}\]
_where \(\mathbf{e}_{p}\in\mathbb{R}^{n}\) is the unit vector whose \(p\)-th element equals 1. The norm of \(B\mathbf{u}\) also has a simple formula, namely,_
\[\|B\mathbf{u}\|^{2}=\frac{1}{1-\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}}\leqslant 1+\zeta^{2}(1+\zeta^{2})=C_{1} \tag{3.2}\]
_(recall (2.7)). Notice that \(\|B\mathbf{u}\|\geqslant 1\). The inner product \(\left\langle\mathbf{u},\mathbf{v}\right\rangle=\left\langle B\mathbf{u},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle\), on the other hand, admits the expression_
\[\left\langle\mathbf{u},\mathbf{v}\right\rangle=\|B\mathbf{u}\|^{2}\mathbf{v}[\mathcal{A}]^{\intercal}(I_{a}-Y_{\star}DY_{\star}^{\intercal})\mathbf{e}_{p}[\mathcal{A}] \tag{3.3}\]
_where \(a\coloneqq|\mathcal{A}|\). Finally, we would often need to deal with the normalized inner product in our analysis; hence an equivalent and convenient way to rewrite the above identity is the following._
\[\|B\mathbf{u}\|^{-2}\left\langle\mathbf{u},\mathbf{v}\right\rangle^{2} =\|B\mathbf{u}\|^{2}\left(\mathbf{v}[\mathcal{A}]^{\intercal}(I_{a}-Y_{\star}DY_{\star}^{\intercal})\mathbf{e}_{p}[\mathcal{A}]\right)^{2} \tag{3.4}\] \[=\mathcal{Q}+(\|B\mathbf{u}\|^{2}-1)\mathcal{Q}\]
_where_
\[\mathcal{Q}=\mathcal{Q}(\mathcal{A})\coloneqq(\mathbf{v}[\mathcal{A}]^{\intercal}(I_{a}-Y_{\star}DY_{\star}^{\intercal})\mathbf{e}_{p}[\mathcal{A}])^{2}=\|B\mathbf{u}\|^{-4}\left\langle\mathbf{u},\mathbf{v}\right\rangle^{2}. \tag{3.5}\]
Proof.: _Proof of (3.1)._ Within this proof, we will also use the notation
\[D^{-}=\left(I_{d}+Y_{\star}^{-\intercal}Y_{\star}^{-}\right)^{-1}\]
as well as the notation
\[B^{-}=B[:,\mathcal{A}^{-}]=\begin{bmatrix}I_{n}[:,\mathcal{A}^{-}]\\ Y_{\star}^{-\intercal}\end{bmatrix}.\]
By definition 2.4 of \(B\mathbf{u}\) and the standard formula for orthogonal projections, we can write
\[B\mathbf{u}=B\mathbf{e}_{p}-B^{-}\left(B^{-\intercal}B^{-}\right)^{-1}B^{-\intercal} B\mathbf{e}_{p}. \tag{24}\]
Now, we can write \(B^{-\intercal}B^{-}=I_{a-1}+Y_{\star}^{-}Y_{\star}^{-\intercal}\) (recall that \(a=|\mathcal{A}|\)) which leads to, by using the Sherman-Morrison-Woodbury formula (see, e.g., Higham (2002)),
\[\left(B^{-\intercal}B^{-}\right)^{-1}=I_{a-1}-Y_{\star}^{-}(I_{d}+Y_{\star}^{ -\intercal}Y_{\star}^{-)-1}Y_{\star}^{-\intercal}\overset{\eqref{eq:B}}{=}I_{ a-1}-Y_{\star}^{-}D^{-}Y_{\star}^{-\intercal}. \tag{25}\]
From this, we get,
\[\mathbf{e}_{p}-\mathbf{u} \overset{\eqref{eq:B}}{=}I_{n}[:,\mathcal{A}^{-}]\left(B^{- \intercal}B^{-}\right)^{-1}(B^{-})^{\intercal}\cdot B\mathbf{e}_{p}\overset{ \eqref{eq:B}}{=}I_{n}[:,\mathcal{A}^{-}]\left(B^{-\intercal}B^{-}\right)^{-1 }Y_{\star}^{-}\mathbf{y}_{p} \tag{26}\] \[\overset{\eqref{eq:B}}{=}I_{n}[:,\mathcal{A}^{-}]Y_{\star}^{-}(I _{d}-D^{-}((D^{-})^{-1}-I_{d}))\mathbf{y}_{p}\] \[=I_{n}[:,\mathcal{A}^{-}]Y_{\star}^{-}D^{-}\mathbf{y}_{p}=I_{n}[:, \mathcal{A}]Y_{\star}D^{-}\mathbf{y}_{p}-\mathbf{y}_{p}^{\intercal}D^{-}\mathbf{y}_{p}\, \mathbf{e}_{p}.\]
Also since
\[I_{d}+Y_{\star}^{-\intercal}Y_{\star}^{-}=I_{d}+Y_{\star}^{\intercal}Y_{\star} -\mathbf{y}_{p}\cdot\mathbf{y}_{p}^{\intercal},\]
we get from the Sherman-Morrison-Woodbury formula,
\[D^{-}=D+\frac{1}{1-\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}}\cdot D\mathbf{y}_{p}\mathbf{y}_{p }^{\intercal}D.\]
This immediately gives us
\[D^{-}\mathbf{y}_{p}=\frac{D\mathbf{y}_{p}}{1-\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}}. \tag{3.9}\]
Plugging this into the right-hand side of (3.7) yields (3.1).
Proof of (3.2).: In view of (2.1), we can write
\[\begin{split} B\mathbf{u}&=\begin{bmatrix}\mathbf{u}\\ \mathbf{Y}_{\star}^{\intercal}\mathbf{u}\end{bmatrix}\overset{\eqref{eq:B}}{=}\begin{bmatrix} \mathbf{e}_{p}-I_{n}[:,\mathcal{A}^{-}]Y_{\star}^{-}D^{-}\mathbf{y}_{p}\\ \mathbf{y}_{p}-Y_{\star}^{-\intercal}Y_{\star}^{-}D^{-}\mathbf{y}_{p}\end{bmatrix}= \begin{bmatrix}\mathbf{e}_{p}-I_{n}[:,\mathcal{A}^{-}]Y_{\star}^{-}D^{-}\mathbf{y}_{p} \\ \mathbf{y}_{p}-((D^{-})^{-1}-I_{d})D^{-}\mathbf{y}_{p}\end{bmatrix}\\ &=\begin{bmatrix}\mathbf{e}_{p}-I_{n}[:,\mathcal{A}^{-}]Y_{\star}^{-}D^{-}\mathbf{y}_{p }\\ D^{-}\mathbf{y}_{p}\end{bmatrix}.\end{split} \tag{3.10}\]
Thus we get
\[\begin{split}\|B\mathbf{u}\|^{2}&=1+\mathbf{y}_{p}^{ \intercal}D^{-}Y_{\star}^{-\intercal}Y_{\star}^{-}D^{-}\mathbf{y}_{p}+\mathbf{y}_{p}^ {\intercal}(D^{-})^{2}\mathbf{y}_{p}\\ &\overset{\eqref{eq:B}}{=}1+\mathbf{y}_{p}^{\intercal}D^{-}((D^{-})^{- 1}-I_{d})D^{-}\mathbf{y}_{p}+\mathbf{y}_{p}^{\intercal}(D^{-})^{2}\mathbf{y}_{p}=1+\mathbf{y}_ {p}^{\intercal}D^{-}\mathbf{y}_{p}(\geqslant 1).\end{split} \tag{3.11}\]
Now plugging (3.9) into the right-hand side of (3.11), we obtain
\[\|B\mathbf{u}\|^{2}=\frac{1}{1-\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}}. \tag{3.12}\]
All that remains is to bound the above. Let us start with \(\|B\mathbf{u}\|^{2}-1\).
\[\begin{split}\|B\mathbf{u}\|^{2}-1\overset{\eqref{eq:B}}{=}& \mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}\,\|B\mathbf{u}\|^{2}\overset{\eqref{eq:B}}{=} \mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}(1+\mathbf{y}_{p}^{\intercal}D^{-}\mathbf{y}_{p})\\ \leqslant&\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}(1+\|D^{-} \|_{\mathrm{op}}\|\mathbf{y}_{p}\|^{2})\overset{\eqref{eq:B}}{\leqslant}(1+\zeta^ {2})\,\mathbf{y}_{p}^{\intercal}D\mathbf{y}_{p}\leqslant\zeta^{2}(1+\zeta^{2}).\end{split} \tag{3.13}\]
where in the last two steps we also used the fact that \(\max_{i\in[n]}\|\mathbf{y}_{i}\|\leqslant\zeta\). This finishes the proof of (3.2).
Proofs of (3.3) and (3.4).: Plugging (3.9), (3.12) as well as the observation
\[I_{n}[:,\mathcal{A}]Y_{\star}=I_{n}[:,\mathcal{A}^{-}]Y_{\star}^{-}+\mathbf{e}_{p }\mathbf{y}_{p}^{\intercal}\]
into (3.10), we get
\[B\mathbf{u}=\|B\mathbf{u}\|^{2}\begin{bmatrix}\mathbf{e}_{p}-I_{n}[:,\mathcal{A}]Y_{\star }D\mathbf{y}_{p}\\ D\mathbf{y}_{p}\end{bmatrix}. \tag{3.14}\]
Now we are ready to evaluate the inner product
\[\begin{split}\left\langle B\mathbf{u},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle&\overset{\eqref{eq:B}}{=}\|B\mathbf{u}\|^{2}\,(\mathbf{v}[ \mathcal{A}]^{\intercal}\mathbf{e}_{p}-\mathbf{v}[\mathcal{A}]^{\intercal}Y_{\star}D \mathbf{y}_{p})\\ &=\|B\mathbf{u}\|^{2}\,\mathbf{v}[\mathcal{A}]^{\intercal}(I_{a}-Y_{\star} DY_{\star}^{\intercal})\mathbf{e}_{p}[\mathcal{A}]\end{split} \tag{3.15}\]
(recall that \(p\in\mathcal{A}\)), hence (3.3). As to (3.4), we can just expand
\[\begin{split}\|B\mathbf{u}\|^{-2}\left\langle B\mathbf{u},\begin{bmatrix} \mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}&=\|B\mathbf{u}\|^{2}\,\mathbf{v}[ \mathcal{A}]^{\intercal}(I_{a}-Y_{\star}DY_{\star}^{\intercal})\mathbf{e}_{p}[ \mathcal{A}]\mathbf{e}_{p}[\mathcal{A}]^{\intercal}(I_{a}-Y_{\star}DY_{\star}^{ \intercal})\mathbf{v}[\mathcal{A}]\\ &=\mathcal{Q}+(\|B\mathbf{u}\|^{2}-1)\mathcal{Q}\end{split} \tag{3.16}\]
where \(\mathcal{Q}\) is as defined in (3.5).
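Since the closed-form expressions above are used repeatedly in what follows, it may be reassuring to verify them numerically on a random instance. The sketch below compares the least-squares characterization (2.4) of \(\mathbf{u}\) against the entrywise formula implied by (3.1), the squared-norm formula (3.2) and the inner-product formula (3.3); all sizes and names are arbitrary choices made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 12, 3
Y = rng.normal(size=(n, d)) / np.sqrt(d)           # stand-in for Y = xi^{-1} sqrt((1-phi)/phi) X
B = np.vstack([np.eye(n), Y.T])                    # the matrix B of (2.1)
v = rng.normal(size=n)
v -= Y @ np.linalg.lstsq(Y, v, rcond=None)[0]      # make v orthogonal to ColSp(Y), cf. (2.3)

A = sorted(int(i) for i in rng.choice(n, size=8, replace=False))   # active set and pivot
p = A[0]
others = [i for i in A if i != p]

u = np.zeros(n)                                    # the least-squares definition (2.4)
u[p] = 1.0
u[others] = np.linalg.lstsq(B[:, others], -B[:, p], rcond=None)[0]

Ystar = Y[A, :]
D = np.linalg.inv(np.eye(d) + Ystar.T @ Ystar)
denom = 1.0 - Y[p] @ D @ Y[p]
ep_A = np.eye(n)[p][A]                             # e_p restricted to the active set

print(np.allclose(u[others], [-Y[i] @ D @ Y[p] / denom for i in others]))   # entries of u
print(np.isclose(np.linalg.norm(B @ u) ** 2, 1.0 / denom))                  # cf. (3.2)
print(np.isclose(u @ v, (v[A] @ (np.eye(len(A)) - Ystar @ D @ Ystar.T) @ ep_A) / denom))  # cf. (3.3)
```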
In the sequel, we will denote
\[Y_{t}=Y_{\star}(\mathcal{A}_{t})=Y[\mathcal{A}_{t},\,:],\ D_{t}=(I_{d}+Y_{t}^{ \intercal}Y_{t})^{-1} \tag{3.17}\]
and similarly for \(Y_{t}^{\text{gs}}\) and \(D_{t}^{\text{gs}}\) with \(\mathcal{A}_{t}\) replaced by \(\mathcal{A}_{t}^{\text{gs}}\).
### Some results on random sampling without replacement

Recall from Observation 2.2 that \(\mathcal{A}_{t}\) is a random sample of size \(n-t+1\) from \([n]\) without replacement. In the current subsection we will gather some results about the behavior of the sum of numbers or matrices indexed by \([n]\) evaluated over \(\mathcal{A}_{t}\). For clarity of presentation, we will state these results for a general family of objects \(\{x_{1},\ldots,x_{n}\}\) -- which would be either real numbers or matrices -- and a random sample \(\mathcal{A}\) of size \(a\) from \([n]\) without replacement. Denote by \(\mathds{P}\) and \(\mathds{E}\) the corresponding probability measure and expectation respectively and consider the random variable \(W=W(\mathcal{A})\) defined as
\[W= \sum_{i\in\mathcal{A}}x_{i}\,\,\,\text{so that}\,\,\,\mathds{E}[W]= \frac{a}{n}\sum_{i\in[n]}x_{i}.\]
Our first result concerns the concentration of \(W\) around \(\mathds{E}[W]\) when \(x_{i}\)'s are matrices with bounded operator norm.
**Proposition 3.2**.: _Let \(x_{1},\ldots,x_{n}\) denote \(d\times d\) symmetric matrices with \(\max_{i\in[n]}\|x_{i}\|_{\mathrm{op}}\leqslant 1\). Then for any \(a\in[n]\) and \(x\geqslant 0\), we have_
\[\mathds{P}\left[\left\|W-\mathds{E}[W]\right\|_{\mathrm{op}}\geqslant x\right]\leqslant 2d\cdot\exp\left(-\frac{nx^{2}}{2a(n-a)}\right). \tag{3.18}\]
Proof.: We use Stein's method for exchangeable pairs to prove the concentration inequality. Let \(\pi\) be a random uniform permutation of \([n]\). It is easy to see that \(\mathcal{A}\stackrel{{\mathrm{d}}}{{=}}\{\pi(i)\mid i\in[a]\}\). Thus, we can define
\[W=\sum_{i\in[a]}x_{\pi(i)}.\]
We create an exchangeable pair in the following way. Let \((I,J)\) be a uniform sample from the set \(\{(i,j)\mid 1\leqslant i<j\leqslant n\}\)_independent_ of \(\pi\). Define
\[\pi^{\prime}=\pi\circ(I,J)\,\,\text{and}\,\,W^{\prime}=\sum_{i \in[a]}x_{\pi^{\prime}(i)}\,.\]
One can check that
\[\Delta W:=W^{\prime}-W=(x_{\pi(J)}-x_{\pi(I)})\cdot\mathds{1}\{I \leqslant a<J\}.\]
In particular, we have
\[\mathds{E}[\Delta W\mid\pi] =\frac{2}{n(n-1)}\left(a\sum_{j>a}x_{\pi(j)}-(n-a)\sum_{i \leqslant a}x_{\pi(i)}\right)=-\alpha\cdot(W-\mathds{E}[W])\]
where \(\alpha=\frac{2}{n-1}\). Similarly, we have
\[\mathds{E}[\Delta W^{2}\mid\pi] =\mathds{E}[(x_{\pi(J)}^{2}+x_{\pi(I)}^{2}-x_{\pi(I)}x_{\pi(J)}-x_{\pi(J)}x_{\pi(I)})\cdot\mathds{1}\{I\leqslant a<J\}\mid\pi]\] \[\leqslant\mathds{E}[(x_{\pi(J)}^{2}+x_{\pi(I)}^{2})\cdot\mathds{1}\{I\leqslant a<J\}\mid\pi]\leqslant 2\,\mathds{P}\left[I\leqslant a<J\right]\cdot I_{d}\] \[=\frac{4a(n-a)}{n(n-1)}\cdot I_{d},\]
_i.e.,_
\[\frac{1}{2\alpha}\,\mathds{E}[\Delta W^{2}\mid\pi]\leqslant\frac {a(n-a)}{n}\cdot I_{d}.\]
Here, for symmetric matrices "\(\leqslant\)" means positive definite ordering. Using (Mackey et al., 2014, Theorem 4.1), we get for all \(x\geqslant 0\)
\[\mathds{P}\left[\lambda_{\min}(W-\mathds{E}[W])\leqslant-x\right]\vee\mathds{P }\left[\lambda_{\max}(W-\mathds{E}[W])\geqslant x\right]\leqslant d\cdot e^{- nx^{2}/(2a(n-a))}.\]
This completes the proof.
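The key linearity property \(\mathds{E}[\Delta W\mid\pi]=-\alpha(W-\mathds{E}[W])\) used in the proof can be checked exactly by averaging over all pairs \((I,J)\) for a fixed permutation. The short sketch below does this for random rank-one matrices of the kind the proposition is applied to later; the sizes are arbitrary and the snippet is purely illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, d, a = 20, 3, 8
ys = rng.normal(size=(n, d))
ys /= np.linalg.norm(ys, axis=1, keepdims=True)     # unit rows, so each y_i y_i^T has operator norm 1
mats = np.einsum('ij,ik->ijk', ys, ys)              # the symmetric matrices x_i

pi = rng.permutation(n)                             # a fixed permutation; W sums the first a entries
W = mats[pi[:a]].sum(axis=0)
EW = a / n * mats.sum(axis=0)

acc = np.zeros((d, d))                              # average of Delta W over all pairs I < J
for I, J in combinations(range(1, n + 1), 2):       # 1-based indices as in the proof
    if I <= a < J:
        acc += mats[pi[J - 1]] - mats[pi[I - 1]]
lhs = acc / (n * (n - 1) / 2)
rhs = -2.0 / (n - 1) * (W - EW)                     # -alpha (W - E[W]) with alpha = 2/(n-1)
print(np.allclose(lhs, rhs))                        # True
```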
The following result gives formulae for the moments of \(W\) when \(x_{i}\)'s are real numbers. Its proof is relatively standard; we provide it for the sake of completeness.
**Proposition 3.1**.: _Let \(x_{1},\ldots,x_{n}\) be real numbers satisfying \(\sum_{i\in[n]}x_{i}=0\). Then for any \(a\in[n]\), we have_
\[\mathds{E}\left[W\right] = 0,\,\mathds{E}[W^{2}]=\frac{a(n-a)}{(n)_{2}}\sum_{i\in[n]}x_{i}^ {2},\text{ and }\] \[\mathds{E}[W^{4}] = \frac{3(a)_{2}(n-a)_{2}}{(n)_{4}}\big{(}\sum_{i\in[n]}x_{i}^{2} \big{)}^{2}+\frac{a(n-a)}{(n)_{2}}\left(1-\frac{6(a-1)(n-a-1)}{(n-2)(n-3)} \right)\sum_{i\in[n]}x_{i}^{4}, \tag{3.19}\]
_where \((n)_{k}\coloneqq n(n-1)\ldots(n-k+1)\) is the \(k\)-th downward factorial of \(n\)._
Proof.: Let \(\pi\) be a uniform random permutation of the set \([n]\). It is easy to see that \(\{\pi(i)\mid 1\leqslant i\leqslant a\}\) has the same distribution as \(\mathcal{A}\). So we can take \(W=\sum_{i=1}^{a}x_{\pi(i)}\). The zero-mean result follows from the fact that \(\sum_{i\in[n]}x_{i}=0\). For the second moment result, we note that \(\mathds{E}[W^{2}]=a\,\mathds{E}[x_{\pi(1)}^{2}]+a(a-1)\,\mathds{E}[x_{\pi(1)}x_{\pi(2)}]\). Moreover, we have
\[\mathds{E}[x_{\pi(1)}x_{\pi(2)}]= \,\mathds{E}\left[x_{\pi(1)}\cdot\frac{1}{n-1}\sum_{i\neq\pi(1)}x _{i}\right]=-\frac{1}{n-1}\,\mathds{E}[x_{\pi(1)}^{2}]\]
and \(\mathds{E}[x_{\pi(1)}^{2}]=\frac{1}{n}\sum_{i\in[n]}x_{i}^{2}\). Similarly, for the fourth moment, we get
\[\mathds{E}[W^{4}] = a\,\mathds{E}[x_{\pi(1)}^{4}]+4(a)_{2}\,\mathds{E}[x_{\pi(1)}^{3}x_{\pi(2)}]+3(a)_{2}\,\mathds{E}[x_{\pi(1)}^{2}x_{\pi(2)}^{2}]\] \[+6(a)_{3}\,\mathds{E}[x_{\pi(1)}^{2}x_{\pi(2)}x_{\pi(3)}]+(a)_{4}\,\mathds{E}[x_{\pi(1)}x_{\pi(2)}x_{\pi(3)}x_{\pi(4)}].\]
Moreover, we have
\[\mathds{E}[x_{\pi(1)}x_{\pi(2)}x_{\pi(3)}x_{\pi(4)}] = -\frac{3}{n-3}\,\mathds{E}[x_{\pi(1)}^{2}x_{\pi(2)}x_{\pi(3)}],\] \[\mathds{E}[x_{\pi(1)}^{2}x_{\pi(2)}x_{\pi(3)}] = -\frac{1}{n-2}\,\mathds{E}[x_{\pi(1)}^{2}x_{\pi(2)}^{2}+x_{\pi(1) }^{3}x_{\pi(2)}],\] \[\mathds{E}[x_{\pi(1)}^{3}x_{\pi(2)}] = -\frac{1}{n-1}\,\mathds{E}[x_{\pi(1)}^{4}]\] \[\text{and }\,\mathds{E}[x_{\pi(1)}^{2}x_{\pi(2)}^{2}] = \frac{1}{n-1}\sum_{i\in[n]}x_{i}^{2}\cdot\mathds{E}[x_{\pi(1)}^{2 }]-\frac{1}{n-1}\,\mathds{E}[x_{\pi(1)}^{4}].\]
Simplifying, we get the result.
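The formulae in (3.19) are also easy to confirm by exact enumeration for small \(n\); the sketch below averages \(W^{2}\) and \(W^{4}\) over all \(\binom{n}{a}\) samples and compares them with the stated expressions. The particular numbers are arbitrary.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, a = 9, 4
x = rng.normal(size=n)
x -= x.mean()                                       # enforce sum_i x_i = 0

W = np.array([x[list(s)].sum() for s in combinations(range(n), a)])
m2, m4 = (W**2).mean(), (W**4).mean()               # exact moments: all subsets are equally likely

def falling(m, k):                                  # the k-th downward factorial (m)_k
    out = 1
    for j in range(k):
        out *= m - j
    return out

s2, s4 = (x**2).sum(), (x**4).sum()
f2 = a * (n - a) / falling(n, 2) * s2
f4 = (3 * falling(a, 2) * falling(n - a, 2) / falling(n, 4) * s2**2
      + a * (n - a) / falling(n, 2)
      * (1 - 6 * (a - 1) * (n - a - 1) / ((n - 2) * (n - 3))) * s4)
print(np.isclose(m2, f2), np.isclose(m4, f4))       # True True
```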
### The events \(\mathcal{G}_{1,t}\) and \(\mathcal{G}_{2,t}\)
Recall the events \(\mathcal{G}_{1,t}\) and \(\mathcal{G}_{2,t}\) defined in (2.6). One of the important implications of our next result is that the condition \(\mathcal{A}_{s}^{\rm gs}=\mathcal{A}_{s}\) required for Case 1 in Section 2 is _in fact_ redundant and hence the processes \(\{M_{s}^{\rm gs}\}_{s=0}^{t}\) and \(\{\widetilde{M}_{s}\}_{s=0}^{t}\) are identical (see (2.15) and (2.16) for definitions) whenever both \(\mathcal{G}_{1,s}\) and \(\mathcal{G}_{2,s}\) occur for all \(s\in[t]\). Also it turns out that for all such \(s\), \(\delta_{s}\) and \(\eta_{s}\) (see (2.11)) are "\(\varepsilon_{n}\)-close" to each other. These observations will help us in the next section to _reduce_ the CLT of \(M_{n}^{\rm gs}\) to that of \(M_{n}\) (Proposition 4.1).
We now proceed to state the lemma. To this end let us define, for any \(t\in[n]\),
\[\mathcal{E}_{1,t}=\bigcap_{s\in[t]}\mathcal{G}_{1,s}\text{ and }\mathcal{E}_{2,t}=\bigcap_{s\in[t]}\mathcal{G}_{2,s}. \tag{3.20}\]
**Lemma 3.3**.: _Suppose that \(t\in[n]\) satisfies_
\[t<n+1-6C_{1}\zeta^{2}\kappa \tag{3.21}\]
where \(\kappa=\frac{n}{\lambda_{\min}(Y^{\intercal}Y)}\) as already defined in Theorem 1.3. Then on the event \(\mathcal{E}_{1,t}\cap\mathcal{E}_{2,t}\), one has \(\mathcal{A}_{s}^{\rm gs}=\mathcal{A}_{s}\) and consequently \(\delta_{s}^{\rm gs}=\delta_{s}\) and \(M_{s}^{\rm gs}=\widetilde{M}_{s}\) for all \(s\in[t]\). Furthermore, we have
\[\max\{|\delta_{s}^{+}-1|,|\delta_{s}^{-}-1|\}\leqslant\varepsilon_{n} \tag{3.22}\]
for all \(s\in[t]\).
Proof.: We will show this via induction. Recall from Section 2 that \(\mathcal{A}_{1}^{\rm gs}=\mathcal{A}_{1}=[n]\) and \(\mathbf{z}_{0}^{\rm gs}=\mathbf{z}_{0}=\mathbf{0}\). Thus the events \(\mathcal{G}_{1,1}=\mathcal{E}_{1,1}\) and \(\mathcal{G}_{2,1}=\mathcal{E}_{2,1}\) occur almost surely in view of (2.6). On the other hand, we obtain from (2.9) that \(\delta_{1}^{+}=\delta_{1}^{-}=1\). Hence the base case of induction is covered. Now fix some positive integer \(t<n-6C_{1}\zeta^{2}\kappa\) and suppose that the event \(\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\) occurs. Since \(\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\subset\mathcal{E}_{1,t}\cap\mathcal{E}_{2,t}\), we have from our induction hypothesis
\[\mathcal{A}_{s}^{\rm gs}=\mathcal{A}_{s}\]
as well as
\[\max\{|\delta_{s}^{+}-1|,|\delta_{s}^{-}-1|\}\leqslant\varepsilon_{n}\;\; \forall\;\;s\in[t].\]
Therefore it suffices to show that \(\mathcal{A}_{t+1}^{\rm gs}=\mathcal{A}_{t+1}\) and \(\max\{|\delta_{t+1}^{+}-1|,|\delta_{t+1}^{-}-1|\}\leqslant\varepsilon_{n}\) on the event \(\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\).
Since we are in the purview of Case 1 on the event \(\mathcal{E}_{1,t}\cap\mathcal{E}_{2,t}\) (\(\supset\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\)) at round \(t\) by our induction hypothesis, we have \(p_{t}^{\rm gs}=p_{t}=p\) (say) and \(\mathbf{z}_{t}^{\rm gs}=\mathbf{z}_{t}\). Therefore in view of (12), it suffices to prove that
\[\{i\in\mathcal{A}_{t}^{\rm gs}:|\mathbf{z}_{t}^{\rm gs}[i]|=1\}=\{i\in\mathcal{A}_ {t}:|\mathbf{z}_{t}[i]|=1\}=\{p_{t}\}\]
(recall from Observation 2.1 that \(\|\mathbf{z}_{t}^{\rm gs}\|_{\infty}\leqslant 1\)). However, due to (2.8)-(2.10), this would follow from
\[\mathbf{z}_{t-1}[\mathcal{A}_{t}\setminus\{p\}]+(1-\mathbf{z}_{t-1}[p])\mathbf{u}_{t}[\mathcal{A}_{t}\setminus\{p\}] \in(-1,1)^{|\mathcal{A}_{t}\setminus\{p\}|}\;\;\text{and} \tag{3.23}\] \[\mathbf{z}_{t-1}[\mathcal{A}_{t}\setminus\{p\}]-(1+\mathbf{z}_{t-1}[p])\mathbf{u}_{t}[\mathcal{A}_{t}\setminus\{p\}] \in(-1,1)^{|\mathcal{A}_{t}\setminus\{p\}|}.\]
(recall that \(\mathbf{u}_{t}^{\rm gs}=\mathbf{u}_{t}\) since we are in Case 1) which we now proceed to show. To this end, we first bound for any \(i\in\mathcal{A}_{t}\setminus\{p\}\),
\[|\mathbf{z}_{t-1}[i]+(1-\mathbf{z}_{t-1}[p])\mathbf{u}_{t}[i]|\leqslant|\mathbf{z}_{t-1}[i]|+ |1-\mathbf{z}_{t-1}[p]|\,|\mathbf{u}_{t}(i)|\leqslant\frac{1}{2}+\frac{3}{2}|\mathbf{u}_{t} (i)|\]
where in the final step we used \(\|\mathbf{z}_{t-1}[\mathcal{A}_{t}]\|_{\infty}<\varepsilon_{n}\leqslant\frac{1}{2}\) as we are on the event \(\mathcal{G}_{1,t}\supset\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\) (recall (2.6)). The same bound also holds for \(|\mathbf{z}_{t-1}[i]-(1+\mathbf{z}_{t-1}[p])\mathbf{u}_{t}[i]|\). Hence it is enough to show that
\[\max_{i\in\mathcal{A}_{t}\setminus\{p\}}\left(\frac{1}{2}+\frac{3}{2}|\mathbf{u}_ {t}(i)|\right)<1. \tag{3.24}\]
Now recall the definitions of \(Y_{t}\) and \(D_{t}\) from (3.17) and also that \(\mathbf{u}_{t}=\mathbf{u}(p_{t},\mathcal{A}_{t})\). Using (3.1) we can write for any \(i\in\mathcal{A}_{t}\setminus\{p\}\),
\[|\mathbf{u}_{t}[i]|=\frac{\mathbf{y}_{i}^{\intercal}D_{t}\mathbf{y}_{p}}{1-\mathbf{y}_{p}^{\intercal}D_{t}\mathbf{y}_{p}}\leqslant C_{1}\,\mathbf{y}_{i}^{\intercal}D_{t}\mathbf{y}_{p}\leqslant C_{1}\|\mathbf{y}_{i}\|\|D_{t}\|_{\mathrm{op}}\|\mathbf{y}_{p}\|\leqslant\frac{C_{1}\,\zeta^{2}}{1+\lambda_{\min}\big{(}Y_{t}^{\intercal}Y_{t}\big{)}}\leqslant\frac{2nC_{1}\zeta^{2}}{(n-t+1)\lambda_{\min}\big{(}Y^{\intercal}Y\big{)}}=\frac{2\kappa C_{1}\zeta^{2}}{n-t+1}\quad\left(\because\ \kappa=\tfrac{n}{\lambda_{\min}(Y^{\intercal}Y)}\right) \tag{3.25}\]
where in the penultimate step we used the fact that we are on the event \(\mathcal{G}_{2,t}\supset\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\) so that by Weyl's inequality (see, e.g., Franklin (2012)):
\[\begin{split}\left|\lambda_{\min}(Y_{t}^{\intercal}Y_{t})-\frac{ n-t+1}{n}\lambda_{\min}(Y^{\intercal}Y)\right|&\leqslant\big{\|}Y_{t}^{ \intercal}Y_{t}-\frac{n-t+1}{n}Y^{\intercal}Y\big{\|}_{\mathrm{op}}\\ &\leqslant\frac{n-t+1}{2n}\lambda_{\min}\big{(}Y^{\intercal}Y \big{)}.\end{split} \tag{3.26}\]
However, the final term in (3.25) is bounded above by \(\frac{1}{3}\) in view of our assumption (3.21) yielding (3.24).
It remains to prove that \(\max\{|\delta_{t+1}^{+}-1|,|\delta_{t+1}^{-}-1|\}\leqslant\varepsilon_{n}\). To this end, notice that the same argument applied to round \(t+1\) (recall that we are on the event \(\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\)) gives us
\[\{i\in\mathcal{A}_{t+1}:|\mathbf{z}_{t+1}[i]|=1\}=\{p_{t+1}\}.\]
This in turn implies that the maximum and minimum of the set \(\Delta_{t+1}\) defined in (2.8) are achieved along the coordinate \(p_{t+1}\). Also let us recall that \(\mathbf{u}_{t+1}[p_{t+1}]=1\) as \(\mathbf{u}_{t+1}=\mathbf{u}(p_{t+1},\mathcal{A}_{t+1})\) (see (2.4)) and \(|\mathbf{z}_{t}[p_{t+1}]|<\varepsilon_{n}\) since we are on the event \(\mathcal{G}_{1,t+1}\supset\mathcal{E}_{1,t+1}\cap\mathcal{E}_{2,t+1}\). Together these imply that both \(|1-\delta_{t+1}^{+}|=|1-\sup\Delta_{t+1}|\) and \(|1-\delta_{t+1}^{-}|=|1+\inf\Delta_{t+1}|\) are bounded by \(\varepsilon_{n}\).
In our next two results, we give lower bounds on the probabilities of joint occurrence for \(\mathcal{G}_{1,t}\) and \(\mathcal{G}_{2,t}\)'s starting with the latter.
**Lemma 3.4**.: _For any \(t\in[n]\), we have_
\[\mathbb{P}\left[\mathcal{G}_{2,t}^{c}\right]\leqslant Cd\exp\left(-c\frac{n(n -t+1)}{\kappa^{2}\zeta^{4}t}\right). \tag{3.27}\]
_Moreover, the following bounds hold on the event \(\mathcal{G}_{2,t}\),_
\[\|D_{t}\|_{\mathrm{op}}\leqslant\frac{2\kappa}{n-t+1}\,\,\,\text{and}\,\,\,\|Y _{t}\|_{\mathrm{Frob}}^{2}\leqslant\frac{3}{2}\frac{n-t+1}{n}\|Y\|_{\mathrm{ Frob}}^{2}\,. \tag{3.28}\]
Proof.: Notice that
\[\frac{1}{\zeta^{2}}\,Y_{t}^{\intercal}Y_{t}=\frac{1}{\zeta^{2}}\sum_{i\in \mathcal{A}_{t}}\mathbf{y}_{i}\mathbf{y}_{i}^{\intercal}.\]
Now in view of Observation 2.2, \(\mathcal{A}_{t}\) is a random sample of size \(n-t+1\) from \([n]\) without replacement. Also since \(\max_{i\in[n]}\|\mathbf{y}_{i}\|\leqslant\zeta\), the operator norm of each \(\frac{1}{\zeta^{2}}\mathbf{y}_{i}\mathbf{y}_{i}^{\intercal}\) is at most \(1\). Therefore, we are exactly in the setting of Proposition 3.2 whereby we obtain for any \(x\geqslant 0\),
\[\mathbb{P}\left[\left\|Y_{t}^{\intercal}Y_{t}-\frac{n-t+1}{n}Y^{\intercal}Y \right\|_{\mathrm{op}}\geqslant\zeta^{2}x\right]\leqslant 2d\exp\left(-\frac{nx^{2} }{2(n-t+1)(t-1)}\right).\]
We plug \(x=\frac{n-t+1}{2n\zeta^{2}}\lambda_{\min}\bigl{(}Y^{\intercal}Y\bigr{)}\) into the above display to obtain
\[\mathbb{P}[\mathcal{G}_{2,t}^{c}]\leqslant 2d\exp\Big{(}-\frac{(n-t+1) \lambda_{\min}^{2}\bigl{(}Y^{\intercal}Y\bigr{)}}{8n(t-1)\zeta^{4}}\Big{)}\]
which leads to (3.27) by substituting \(\kappa\) for \(\frac{n}{\lambda_{\min}(Y^{\intercal}Y)}\).
Proof of the first bound in (3.28) has already been given in (3.25)-(3.26). For the second bound, we can write
\[\|Y_{t}\|_{\mathrm{Frob}}^{2}=\mathrm{Tr}(Y_{t}^{\intercal}Y_{t}) =\sum_{j\in[d]}\lambda_{j}(Y_{t}^{\intercal}Y_{t})\] \[\leqslant\frac{n-t+1}{n}\sum_{j\in[d]}\lambda_{j}(Y^{\intercal}Y)+\frac{n-t+1}{2n}\sum_{j\in[d]}\lambda_{\min}(Y^{\intercal}Y)\] \[\leqslant\frac{3}{2}\frac{n-t+1}{n}\cdot\mathrm{Tr}(Y^{\intercal}Y)=\frac{3}{2}\frac{n-t+1}{n}\|Y\|_{\mathrm{Frob}}^{2}\]
where \(\lambda_{j}(\cdot)\) denotes the \(j\)-th largest eigenvalue of the corresponding (hermitian) matrix and in the third step we used the Weyl's inequality (cf. (3.26)).
We will in fact use this result for the proof of our next lemma.
**Lemma 3.5**.: _We have for any \(t\in[n]\),_
\[\mathbb{P}\left[\Big{(}\bigcap_{s\in[t]}\mathcal{G}_{1,s}\Big{)}^{c}\right]\leqslant\frac{\mathbb{E}\left[\max_{s\in[t],\,i\in\mathcal{A}_{s}\setminus\{p_{s}\}}|\mathbf{z}_{s}[i]|^{2}\right]}{\varepsilon_{n}^{2}} \tag{3.29}\]
_where_
\[\mathbb{E}\left[\max_{s\in[t]}\,\max_{i\in\mathcal{A}_{s}\setminus\{p_{s}\}}|\mathbf{z}_{s}[i]|^{2}\right]\leqslant\frac{C\zeta^{2}\kappa^{2}}{n-t}+C\zeta^{2}nd\exp\left(-c\,\frac{n(n-t+1)}{\kappa^{2}\zeta^{4}t}\right). \tag{3.30}\]
Proof.: The bound (3.29) is just an application of Markov's inequality in view of the definition of \(\mathcal{G}_{1,s}\) in (2.6). Towards proving (3.30), let us first recall that we have
\[\mathbf{z}_{s}=\sum_{j\in[s]}\delta_{j}\mathbf{u}_{j}\]
and in view of (3.1) we can write for any \(i\in\mathcal{A}_{j}\setminus\{p_{j}\}\),
\[\mathbf{u}_{j}[i]=-\frac{\mathbf{y}_{i}^{\intercal}D_{j}\mathbf{y}_{p_{j}}}{1-\mathbf{y}_{p_{j }}^{\intercal}D_{j}\mathbf{y}_{p_{j}}}.\]
Therefore we have for any \(i\in\mathcal{A}_{s}\setminus\{p_{s}\}\),
\[\mathbf{z}_{s}[i]=-\mathbf{y}_{i}^{\intercal}\sum_{j\in[s]}\frac{D_{j}\mathbf{y}_{p_{j}}}{ 1-\mathbf{y}_{p_{j}}^{\intercal}D_{j}\mathbf{y}_{p_{j}}}\,\delta_{j}.\]
Within this proof let us denote
\[\mathbf{v}_{j}=\frac{D_{j}\mathbf{y}_{p_{j}}}{1-\mathbf{y}_{p_{j}}^{\intercal}D_{j}\mathbf{y}_{p_{ j}}}\text{ and }\mathbf{V}_{s}=\sum_{j\in[s]}\delta_{j}\mathbf{v}_{j}.\]
Just like the martingales in Section 2.1, it is routine to check that \(\{\mathbf{V}_{t}\}_{t=1}^{n}\) is a vector-valued martingale adapted to the filtration \((\mathcal{F}_{t})_{t=0}^{n}\). Equipped with the above notations we can write for any \(i\in\mathcal{A}_{s}\setminus\{p_{s}\}\),
\[|\mathbf{z}_{s}[i]|\leqslant\|\mathbf{V}_{s}\|\|y_{i}\|\leqslant\zeta\|\mathbf{V}_{s}\|\]
where we used the Cauchy-Schwarz inequality and hence,
\[\max_{i\in\mathcal{A}_{s}\setminus\{p_{s}\}}|\mathbf{z}_{s}[i]|^{2}\leqslant\zeta ^{2}\|\mathbf{V}_{s}\|^{2}.\]
Since \(\{\mathbf{V}_{t}\}_{t=1}^{n}\) is a martingale, \(\{\|\mathbf{V}_{t}\|^{2}\}_{t=1}^{n}\) is a submartingale sequence. Therefore, we can use Doob's maximal inequality to deduce
\[\mathbb{E}\left[\max_{s\in[t],\;i\in\mathcal{A}_{s}\setminus\{p_{s}\}}|\mathbf{z} _{s}[i]|^{2}\right]\leqslant\mathbb{E}\,\|\mathbf{V}_{t}\|^{2}. \tag{3.31}\]
Further we can write,
\[\mathbb{E}\left[\|\mathbf{V}_{t}\|^{2}\right] =\sum_{s\in[t]}\mathbb{E}\left[\|\delta_{s}\mathbf{v}_{s}\|^{2} \right]\leqslant C\sum_{s\in[t]}\mathbb{E}\left[\|\mathbf{v}_{s}\|^{2}\right]=C \sum_{s\in[t]}\mathbb{E}\left[\frac{\mathbf{y}_{p_{s}}^{\intercal}D_{s}^{2}\mathbf{y} _{p_{s}}}{(1-\mathbf{y}_{p_{s}}^{\intercal}D_{s}\mathbf{y}_{p_{s}})^{2}}\right]\] \[\overset{\eqref{eq:C}}{\leqslant}C\sum_{s\in[t]}\mathbb{E}\left[ \mathbf{y}_{p_{s}}^{\intercal}D_{s}^{2}\mathbf{y}_{p_{s}}\right]\]
where in the second step we used the fact that \(|\delta_{s}|\leqslant 2\) almost surely. We will now bound the expectations on the right by first partitioning into the events \(\mathcal{G}_{2,s}\) and \(\mathcal{G}_{2,s}^{c}\) and then bounding the corresponding terms separately.
\[\sum_{s\in[t]}\mathbb{E}\left[\mathbf{y}_{p_{s}}^{\intercal}D_{s}^{2}\mathbf{y}_{p_{s }}\mathcal{1}_{\mathcal{G}_{2,s}}\right]\leqslant\sum_{s\in[t]}\mathbb{E} \left[\|D_{s}\|_{\mathrm{op}}^{2}\|\mathbf{y}_{p_{s}}\|^{2}1_{\mathcal{G}_{2,s}} \right]\overset{\eqref{eq:C}}{\leqslant}C\sum_{s\in[t]}\frac{\kappa^{2}}{(n-s +1)^{2}}\,\mathbb{E}\left[\|\mathbf{y}_{p_{s}}\|^{2}\right]\]
\[\overset{\text{Obs. 2.2}}{=}C\kappa^{2}\sum_{s\in[t]}\frac{1}{(n-s+1)^{2}}\,\frac{1}{n}\sum_{i\in[n]}\|\mathbf{y}_{i}\|^{2}\overset{(2.2)}{\leqslant}C\zeta^{2}\kappa^{2}\sum_{s\in[t]}\frac{1}{(n-s+1)^{2}}\leqslant\frac{C\zeta^{2}\kappa^{2}}{n-t}.\]
On the other hand, we can write
\[\sum_{s\in[t]}\mathbb{E}\left[\mathbf{y}_{p_{s}}^{\intercal}D_{s}^{2}\mathbf{y}_{p_{s }}1_{\mathcal{G}_{2,s}^{c}}\right]\leqslant\zeta^{2}\sum_{s\in[t]}\mathbb{P} \left[\mathcal{G}_{2,s}^{c}\right]\overset{\eqref{eq:C}}{\leqslant}C\zeta^{2} nd\exp\left(-c\,\frac{n(n-t+1)}{\kappa^{2}\zeta^{4}t}\right)\]
where in the first inequality we used the fact that
\[\mathbf{y}_{p}^{\intercal}D_{s}^{2}\mathbf{y}_{p}\leqslant\|D_{s}\|_{\mathrm{op}}^{2}\|\mathbf{y}_{p}\|^{2}\overset{(2.2)}{\leqslant}\zeta^{2}.\]
Adding the previous two bounds yields (3.30).
## 4 Equivalence of CLT for different processes
The main result of this section is the equivalence of CLT between \(M_{n}^{\rm gs}\) and \(M_{n}\) (see Section 2.1 for the definitions of all the relevant martingales).
**Proposition 4.1**.: _Under the same assumptions as in Theorem 1.3, we have_
\[\frac{M_{n}^{\rm gs}}{\|\mathbf{v}\|}\xrightarrow[n\to\infty]{\rm law}N(0,1)\,\,\, \text{if and only if}\,\,\frac{M_{n}}{\|\mathbf{v}\|}\xrightarrow[n\to\infty]{\rm law}N(0,1). \tag{4.1}\]
We need some intermediate results capturing the closeness between different related random variables in order to prove Proposition 4.1. Our first result gives an upper bound on the probability that \(M_{n-m}^{\rm gs}\) and \(\widetilde{M}_{n-m}\) are different.
**Lemma 4.2**.: _For any \(m\in[n]\) satisfying \(m\geqslant 6C_{1}\zeta^{2}\kappa\), we have_
\[\mathbb{P}\left[|M_{n-m}^{\rm gs}-\widetilde{M}_{n-m}|>0\right]\leqslant\frac {C\zeta^{2}\kappa^{2}}{m}+C\,nd\exp\left(-c(\kappa\zeta^{2})^{-2}m\right). \tag{4.2}\]
Proof.: We know from Lemma 3.3 that \(M_{n-m}^{\rm gs}\) and \(\widetilde{M}_{n-m}\) are identical on the event \(\mathcal{E}_{1,n-m}\cap\mathcal{E}_{2,n-m}\)_provided \(n-m<n+1-6C_{1}\zeta^{2}\kappa\), i.e., \(m\geqslant 6C_{1}\zeta^{2}\kappa\)_. Hence we only need to show that the probability of the event \((\mathcal{E}_{1,n-m}\cap\mathcal{E}_{2,n-m})^{c}\) is bounded by the right-hand side in (4.2). But this follows from Lemmas 3.4 and 3.5 along with a union bound for bounding the probability of the event \(\mathcal{E}_{2,n-m}^{c}\).
Our next lemma gives an upper bound on the \(\ell^{2}\)-distance between \(M_{n}^{\rm gs}\) and \(M_{n-m}^{\rm gs}\) as well as between \(\widetilde{M}_{n}\) and \(\widetilde{M}_{n-m}\).
**Lemma 4.3**.: _For any \(m\in[n]\), we have_
\[\max\left(\mathbb{E}\left[(M_{n}^{\rm gs}-M_{n-m}^{\rm gs})^{2}\right],\, \mathbb{E}\left[(\widetilde{M}_{n}-\widetilde{M}_{n-m})^{2}\right]\right) \leqslant Cd\max(1,\zeta^{2})\|\mathbf{v}\|_{\infty}^{2}m^{3}. \tag{4.3}\]
Proof.: Let us examine the difference
\[\Delta_{m,n}\coloneqq M_{n}^{\rm gs}-M_{n-m}^{\rm gs}. \tag{4.4}\]
In view of the first item under Case 2 in our definition of the relevant processes, we can write
\[\Delta_{m,n}=\sum_{t=n-m+1}^{n}\delta_{t}^{\rm gs}\left\langle B\mathbf{u}_{t}^{ \rm gs},\genfrac{[}{]}{0.0pt}{}{\mathbf{v}}{0}\right\rangle\mathds{1}_{\{\tau>t \}} \tag{4.5}\]
with
\[\tau\coloneqq\min\{t>n-m:\mathcal{A}_{t}^{\rm gs}=\emptyset\}\]
where we adopt the convention that the minimum of an empty set is \(+\infty\), _i.e.,_ in that case none of the terms in the above summation is truncated. Henceforth, in this proof we will drop the superscript "\({\rm gs}\)" in order to avoid the notational clutter. This is particularly appropriate because our argument relies only on properties shared by \(\{M_{t}^{\rm gs}\}_{t=0}^{n}\) and \(\{\widetilde{M}_{t}\}_{t=0}^{n}\) and hence works _mutatis mutandis_ for \(\{\widetilde{M}_{t}\}_{t=0}^{n}\).
Since the event \(\{\tau>t\}\) is \(\mathcal{F}_{t-1}\) measurable (2.14), it follows from the definitions of the terms involved (see Section 2) that the partial sums of \(\Delta_{m,n}\) form a martingale sequence
relative to \((\mathcal{F}_{t})_{t=0}^{n}\). Hence, in view of orthogonality of martingale differences, we can write:
\[\mathbb{E}[\Delta_{m,n}^{2}]=\sum_{t=n-m+1}^{n}\mathbb{E}\left[\delta_{t}^{2}\left\langle B\boldsymbol{u}_{t},\begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle^{2}\mathds{1}_{\{\tau>t\}}\right]\leqslant C\sum_{t=n-m+1}^{n}\mathbb{E}\left[\left\langle B\boldsymbol{u}_{t},\begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle^{2}\mathds{1}_{\{\tau>t\}}\right] \tag{4.6}\]
As for the first term in (4.8), we can similarly write
\[\sum_{t=n-m+1}^{n}\mathbb{E}\left[\mathbf{v}[\mathcal{A}_{t}]^{\intercal}\mathbf{e}_{p}[\mathcal{A}_{t}]\mathbf{e}_{p}[\mathcal{A}_{t}]^{\intercal}\mathbf{v}[\mathcal{A}_{t}]\mathds{1}_{\{\tau>t\}}\right]\leqslant\sum_{t=n-m+1}^{n}\mathbb{E}\left[\mathbf{v}[\mathcal{A}_{t}]^{\intercal}\mathbf{v}[\mathcal{A}_{t}]\right]\] \[\leqslant \|\mathbf{v}\|_{\infty}^{2}\sum_{t=n-m+1}^{n}\mathbb{E}|\mathcal{A}_{t}|\leqslant\|\mathbf{v}\|_{\infty}^{2}m^{2}.\]
Therefore, combining the bounds from the previous two displays and plugging them into (4.8) and subsequently into (4.6), we obtain
\[\mathbb{E}\left[\Delta_{m,n}^{2}\right]\leqslant Cd\max(1,\zeta^{2})\|\mathbf{v} \|_{\infty}^{2}m^{3}.\]
This gives us the required bound in (4.3) for both \(\{M_{t}^{\rm gs}\}_{t=0}^{n}\) and \(\{\widetilde{M}_{t}\}_{t=0}^{n}\) (see the discussion following the definition of \(\Delta_{m,n}\) in (4.5)).
Finally we need the following result on the \(\ell^{2}\)-distance between \(M_{n}\) and \(\widetilde{M}_{n}\).
**Lemma 4.4**: _For any integer \(m\in[n]\) we have the following bound,_
\[\mathbb{E}[(\widetilde{M}_{n}-M_{n})^{2}] \tag{4.9}\] \[\leqslant C\cdot C_{1}^{3}\left(\left(m+(d+\kappa)\log en\right) \|\mathbf{v}\|_{\infty}^{2}+Cdn\zeta^{2}\exp\left(-c\frac{m}{\kappa^{2}\zeta^{4}} \right)\right)+C\varepsilon_{n}^{2}\|\mathbf{v}\|^{2}.\]
Proof: Recalling the definitions of \(M_{t}\) and \(\widetilde{M}_{t}\) from (2.16), we can write
\[\widetilde{M}_{t}-M_{t}\] \[=\underbrace{\sum_{s\in[t]}\delta_{s}\left(1-\|B\mathbf{u}_{s}\|^{-1 }\right)\left\langle B\mathbf{u}_{s},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle}_{A_{t}}+\underbrace{\sum_{s\in[t]}(\delta_{s}- \eta_{s})\|B\mathbf{u}_{s}\|^{-1}\left\langle B\mathbf{u}_{s},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle}_{B_{t}}.\]
Consequently,
\[\mathbb{E}[(\widetilde{M}_{n}-M_{n})^{2}]\leqslant 2\mathbb{E}\left[A_{n}^{2} \right]+2\mathbb{E}\left[B_{n}^{2}\right].\]
We now observe that both \(A_{t}\) and \(B_{t}\) are \((\mathcal{F}_{t})_{t=0}^{n}\)-martingales with mean \(0\). Hence by the orthogonality of martingale differences, we can write
\[\mathbb{E}\left[A_{n}^{2}\right]=\sum_{t\in[n]} \mathbb{E}\left[\delta_{t}^{2}\left(1-\|B\mathbf{u}_{t}\|^{-1}\right)^{2} \left\langle B\mathbf{u}_{t},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}\right]\] \[\leqslant C\sum_{t\in[n]} \mathbb{E}\left[\left(1-\|B\mathbf{u}_{t}\|^{-2}\right)^{2}\left\langle B \mathbf{u}_{t},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}\right]\] \[= C\sum_{t\in[n]} \mathbb{E}\left[\left(\|B\mathbf{u}_{t}\|^{2}-1\right)^{2}\frac{1}{\|B \mathbf{u}_{t}\|^{4}}\left\langle B\mathbf{u}_{t},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}\right]\] \[\overset{\eqref{eq:C_1}}{=}C\sum_{t\in[n]} \mathbb{E}\left[(\|B\mathbf{u}_{t}\|^{2}-1)^{2}\mathcal{Q}_{t}\right]\overset{ \eqref{eq:C_1}}{\leqslant}C\cdot C_{1}\sum_{t\in[n]}\mathbb{E}\left[(\|B\mathbf{u} _{t}\|^{2}-1)\mathcal{Q}_{t}\right]\]
where in the first inequality we used the facts that \(|\delta_{t}|\leqslant 2\) almost surely and that \(\|B\mathbf{u}_{t}\|\geqslant 1\) which we noted after (3.2).
We now proceed to bound the expectations in the last line above. In the sequel, we will use the notation \(a_{t}=n-t+1=|\mathcal{A}_{t}|\) (recall Observation 2.2) and also use \(p\) instead of \(p_{t}\). For any \(t\in[n]\), we obtain from (5.24) proved in Section 5 below that
\[\mathbb{E}\left[\left(\|B\boldsymbol{u}_{t}\|^{2}-1\right)\mathcal{Q}_{t}\,| \,\mathcal{F}_{t-1}\right]\leqslant\frac{2C_{1}}{n-t+1}\sum_{j\in\mathcal{A}_ {t}}v_{j}^{2}\boldsymbol{y}_{j}^{\intercal}D_{t}\boldsymbol{y}_{j}+\frac{2C_ {1}^{2}}{n-t+1}\,\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{ \intercal}\boldsymbol{v}[\mathcal{A}_{t}].\]
The expectation of the quantity on the right-hand side above, summed over \(t\in[n]\), is controlled by the _delicate_ bound (5.20), proved as part of Lemma 5.4 in the next section, which finally yields
\[\mathbb{E}\left[A_{n}^{2}\right]\leqslant C\cdot C_{1}^{3}\left((m+\kappa\log en)\|\boldsymbol{v}\|_{\infty}^{2}+d\log en\|\boldsymbol{v}\|_{\infty}^{2}+Cdn\zeta^{2}\exp(-c(\kappa\zeta^{2})^{-2}m)\right). \tag{4.10}\]
Next we will bound \(\mathbb{E}\left[B_{n}^{2}\right]\). By the martingale property, we can write
\[\mathbb{E}\left[B_{n}^{2}\right]=\sum_{t\in[n]}\mathbb{E}\left[(\delta_{t}-\eta_{t})^{2}\|B\boldsymbol{u}_{t}\|^{-2}\left\langle B\boldsymbol{u}_{t},\begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle^{2}\right]\leqslant C\varepsilon_{n}^{2}\sum_{t\in[n]}\mathbb{E}\left[\eta_{t}^{2}\|B\boldsymbol{u}_{t}\|^{-2}\left\langle B\boldsymbol{u}_{t},\begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle^{2}\right]\leqslant C\varepsilon_{n}^{2}\|\boldsymbol{v}\|^{2}.\]
(recall that \(d\leqslant n\)) which follows from the first part of assumption (1.11) in Theorem 1.3. We also need to show that \(m\) as chosen above is at most \(n\) for all \(n\) large enough. But this follows from assumption (1.11) in view of the facts that \(\frac{1}{\sqrt{n}}\leqslant\frac{\|\mathbf{v}\|_{\infty}}{\|\mathbf{v}\|}\) and
\[\kappa=\frac{n}{\lambda_{\min}(Y^{\intercal}Y)}\geqslant\frac{dn}{\|Y\|_{\mathrm{Frob}}^{2}}\overset{\eqref{eq:m}}{\geqslant}\frac{dn}{n\zeta^{2}}=d\zeta^{-2}\geqslant\zeta^{-2} \tag{4.14}\]
(notice that \(\zeta^{-1}\) is bounded (2.2) since \(\phi\) is bounded away from \(0\) by our assumption).
In order to show (4.12), we again set
\[m=C\kappa^{2}\zeta^{4}\log en\]
for some suitably large constant \(C\) and write
\[M_{n}^{\mathrm{gs}}-\widetilde{M}_{n}=\underbrace{M_{n}^{\mathrm{gs}}-M_{n-m}^ {\mathrm{gs}}}_{T_{1}}+\underbrace{M_{n-m}^{\mathrm{gs}}-\widetilde{M}_{n-m}}_ {T_{2}}+\underbrace{\widetilde{M}_{n-m}-\widetilde{M}_{n}}_{T_{3}}. \tag{4.15}\]
We can use Lemma 4.3 and a similar reasoning as before to argue that both \(\|\mathbf{v}\|^{-2}\mathbb{E}\left[T_{1}^{2}\right]\) and \(\|\mathbf{v}\|^{-2}\mathbb{E}\left[T_{3}^{2}\right]\) converge to \(0\) under assumption (1.11). This implies that \(T_{1}/\|\mathbf{v}\|\) and \(T_{3}/\|\mathbf{v}\|\) converge to \(0\) in probability. On the other hand, we have \(m\geqslant 6C_{1}\zeta^{2}\kappa\) for large enough \(n\) in view of (4.14). Hence we can use Lemma 4.2 to show that \(\mathbb{P}[|T_{2}|>0]\to 0\), which implies that \(T_{2}/\|\mathbf{v}\|\) converges to \(0\) in probability as \(n\to\infty\).
Together these yield (4.12) in view of (4.15) completing the proof of Proposition 4.1.
## 5 Asymptotic normality of \(M_{n}\) and the proof of Theorem 1.3
In this section, we conclude the proof of Theorem 1.3 by combining Proposition 4.1 from Section 4 and the CLT for \(M_{n}\) proved in Proposition 5.1 below. The latter is the main result of this section.
**Proposition 5.1**: _Under the same assumptions as in Theorem 1.3, we have_
\[\frac{M_{n}}{\|\mathbf{v}\|}\overset{\mathrm{law}}{\underset{n\to\infty}{ \longrightarrow}}\mathrm{N}(0,1).\]
The proof of Proposition 5.1 occupies the majority of the remainder of this article. But before that let us finish the proof of Theorem 1.3 assuming this result.
Proof of Theorem 1.3.: We obtain from Propositions 4.1 and 5.1 that,
\[\frac{M_{n}^{\mathrm{gs}}}{\|\mathbf{v}\|}\overset{\mathrm{law}}{\underset{n\to \infty}{\longrightarrow}}\mathrm{N}(0,1) \tag{5.1}\]
(recall that \(\langle\mathbf{z}_{n}^{\mathrm{gs}},\mathbf{v}\rangle=M_{n}^{\mathrm{gs}}\)). Now, since \(\phi\) is bounded away from \(0\) and the matrix \(B\) in (2.1) and in Harshaw et al. (2022) (see (1.4)) only differ by a factor \(1/\sqrt{\phi}\), it follows from (Harshaw et al., 2022, Theorem 6.1) that \(\frac{M_{n}^{\mathrm{gs}}}{\|\mathbf{v}\|}\) is a subgaussian random variable with variance parameter \(1/\phi\). In particular, the random variables \(\frac{(M_{n}^{\mathrm{gs}})^{2}}{\|\mathbf{v}\|^{2}}\) are _uniformly integrable_ in \(n\). Hence, by classical results on uniform integrability, together with (5.1) this implies:
\[\lim_{n\to\infty}\frac{\mathrm{Var}\left[M_{n}^{\mathrm{gs}}\right]}{\|\mathbf{v} \|^{2}}=1\text{ and hence }\frac{M_{n}^{\mathrm{gs}}}{\sqrt{\mathrm{Var}\left[M_{n}^{\mathrm{gs}} \right]}}\overset{\mathrm{law}}{\underset{n\to\infty}{\longrightarrow}} \mathrm{N}(0,1)\]
which finishes the proof.
We now return to the
Proof of Proposition 5.1.: We will verify Lindeberg's conditions for the CLT of the triangular array of martingales \(\{M_{t}\}_{t=0}^{n}\) (recall that we keep the dependence on \(n\) implicit). To this end, let us define the _conditional quadratic variation_\(Q_{n}\) of \(M_{n}\) as
\[Q_{n}=\sum_{t\in[n]}\mathbb{E}[\Delta_{t}^{2}\mid\mathcal{F}_{t-1}]\text{ where }\Delta_{t}\coloneqq M_{t}-M_{t-1}=\eta_{t}\left\langle\frac{B\mathbf{u}_{t}}{\|B\mathbf{u}_{t }\|},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle \tag{5.2}\]
(recall (2.16) as well as the recursive definition of the \(\sigma\)-algebra \(\mathcal{F}_{t}\) from (2.14) in Section 2). Notice that \(\mathbb{E}[Q_{n}]=\mathrm{Var}[M_{n}]\eqqcolon\sigma_{n}^{2}\). Lindeberg's conditions are implied by Lyapunov's moment condition; the conditions we need to verify in our case are summarized below (see, e.g., Hall and Heyde (1980)):
(i) \(\frac{\sigma_{n}}{\|\mathbf{v}\|}\to 1\),
(ii) \(\frac{1}{\|\mathbf{v}\|^{4}}\sum_{t\in[n]}\mathbb{E}[\Delta_{t}^{4}]\to 0\), and
(iii) \(\frac{Q_{n}}{\|\mathbf{v}\|^{2}}\to 1\) in probability
as \(n\to\infty\). We now verify each of these three conditions.
#### Verifying condition (i)
The following observation is crucial for our proof:
\[\sum_{t\in[n]}\left\langle\frac{B\mathbf{u}_{t}}{\|B\mathbf{u}_{t}\|}, \begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}=\|\mathbf{v}\|^{2}. \tag{5.3}\]
Let us first show (i) using this claim. In view of the description of the underlying stochastic processes in Section 2, observe that \(|\eta_{t}|=1\) for all \(t\in[n]\) and hence
\[\mathbb{E}\left[\Delta_{t}^{2}\right]=\mathbb{E}\left[\eta_{t}^{2}\left\langle \frac{B\mathbf{u}_{t}}{\|B\mathbf{u}_{t}\|},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}\right]=\mathbb{E}\left[\left\langle\frac{B\bm {u}_{t}}{\|B\mathbf{u}_{t}\|},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}\right].\]
Therefore, by (5.3) and the orthogonality of martingale differences, we get
\[\sigma_{n}^{2}=\mathbb{E}[M_{n}^{2}]=\sum_{t\in[n]}\mathbb{E}[ \Delta_{t}^{2}]=\mathbb{E}\left[\|\mathbf{v}\|^{2}\right]=\|\mathbf{v}\|^{2} \tag{5.4}\]
which finishes the verification of condition (i).
It remains to verify (5.3). Recall from the discussion after (2.16) that the vectors \(\left(\frac{B\mathbf{u}_{1}}{\|B\mathbf{u}_{1}\|},\ldots,\frac{B\mathbf{u}_{n}}{\|B\mathbf{u}_{n}\|}\right)\) form an ONB for \(\mathrm{ColSp}(B)\) where
\[B=\begin{bmatrix}I_{n}\\ Y^{\intercal}\end{bmatrix}\]
(recall (2.1)) and hence
\[\sum_{t\in[n]}\left\langle\frac{B\mathbf{u}_{t}}{\|B\mathbf{u}_{t}\|},\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right\rangle^{2}=\left\|\mathrm{Proj}_{\mathrm{ColSp}(B)}\left(\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}\right)\right\|^{2}.\]
However, since
\[\begin{bmatrix}\mathbf{v}\\ 0\end{bmatrix}=B\mathbf{v}\]
(recall that \(\mathbf{v}^{\intercal}Y=\mathbf{0}\)), (5.3) follows.
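The identity (5.3) uses only two facts: the normalized vectors \(\frac{B\mathbf{u}_{t}}{\|B\mathbf{u}_{t}\|}\) form an ONB of \(\mathrm{ColSp}(B)\), and \([\mathbf{v};0]=B\mathbf{v}\) lies in \(\mathrm{ColSp}(B)\). The following minimal numerical sanity check illustrates the projection identity using an arbitrary orthonormal basis of \(\mathrm{ColSp}(B)\) obtained via a QR factorization (not the particular basis produced by the algorithm); the dimensions and random draws are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 12, 3
Y = rng.normal(size=(n, d))

# construct a v with v^T Y = 0 by projecting out ColSp(Y)
v = rng.normal(size=n)
v -= Y @ np.linalg.lstsq(Y, v, rcond=None)[0]

B = np.vstack([np.eye(n), Y.T])            # B = [I_n; Y^T], shape (n + d, n)
Q, _ = np.linalg.qr(B)                     # columns of Q: an ONB of ColSp(B)
target = np.concatenate([v, np.zeros(d)])  # the vector [v; 0] = B v

lhs = np.sum((Q.T @ target) ** 2)          # sum_t <b_t, [v; 0]>^2
print(np.isclose(lhs, v @ v))              # True: the sum equals ||v||^2
```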
#### Verifying condition (ii)
We need the following moment bound.
**Lemma 5.2**.: _For any \(m\in[n]\), we have_
\[\sum_{t\in[n]}\mathbb{E}\left[\Delta_{t}^{4}\right]\] \[\leqslant CC_{1}^{2}\|\boldsymbol{v}\|_{\infty}^{2}\|\boldsymbol {v}\|^{2}+CdC_{1}^{2}\zeta^{8}\left(\|\boldsymbol{v}\|_{\infty}^{4}\ m^{3}+\| \boldsymbol{v}\|_{\infty}^{4}\frac{\kappa^{4}}{m}+n^{3}\|\boldsymbol{v}\|^{4} \exp\left(-c(\kappa\zeta^{2})^{-2}m\right)\right).\]
One can now check that by setting
\[m=C\kappa^{2}\zeta^{4}\log en\]
for a suitably large constant \(C\), the right-hand side above, divided by \(\|\boldsymbol{v}\|^{4}\), converges to \(0\) as soon as
\[\lim_{n\to\infty}d^{1/4}\frac{\|\boldsymbol{v}\|_{\infty}}{\|\boldsymbol{v}\|} \kappa^{3/2}(\log n)^{3/4}=0 \tag{5.5}\]
(recall that \(d\leqslant n\) and \(\kappa\geqslant\zeta^{-2}\) by (4.14)) which is implied by the first part of assumption (1.11) in Theorem 1.3. However, we also need to ensure that \(m\), as chosen above, is at most \(n\) for all large enough \(n\). But this follows from (5.5) in view of the facts that \(\frac{1}{\sqrt{n}}\leqslant\frac{\|\boldsymbol{v}\|_{\infty}}{\|\boldsymbol{v}\|}\) and (4.14). We now proceed to the
Proof of Lemma 5.2.: Since \(\boldsymbol{u}_{t}=\boldsymbol{u}(p_{t},\mathcal{A}_{t})\), we have in view of (3.3) and (3.17),
\[\left\langle B\boldsymbol{u}_{t},\begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle=\|B\boldsymbol{u}_{t}\|^{2}\,\boldsymbol{v}[ \mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t}D_{t}Y_{t}^{\intercal}) \boldsymbol{e}_{p}[\mathcal{A}_{t}]. \tag{5.6}\]
where, like in the previous sections, we use the shorthands \(a_{t}=|\mathcal{A}_{t}|=n-t+1\) (Observation 2.2) and \(p=p_{t}\). Therefore,
\[\Delta_{t}^{2} \leqslant C\,\|B\boldsymbol{u}_{t}\|^{-2}\,\left\langle B \boldsymbol{u}_{t},\begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle^{2}\overset{\eqref{eq:C_1}}{=}C\,\|B\boldsymbol{u }_{t}\|^{2}\,(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t}D_{t} Y_{t}^{\intercal})\boldsymbol{e}_{p}[\mathcal{A}_{t}])^{2} \tag{5.7}\] \[\overset{\eqref{eq:C_1}}{=}C\,\|B\boldsymbol{u}_{t}\|^{2}\, \mathcal{Q}_{t}\]
where in the first step we used the fact that \(|\eta_{t}|=1\) almost surely and in the final step we set
\[\mathcal{Q}_{t}\coloneqq(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t }}-Y_{t}D_{t}Y_{t}^{\intercal})\boldsymbol{e}_{p}[\mathcal{A}_{t}])^{2}\,.\]
Consequently,
\[\sum_{t\in[n]}\mathbb{E}\left[\Delta_{t}^{4}\right]\leqslant C\sum_{t\in[n]} \mathbb{E}\left[\|B\boldsymbol{u}_{t}\|^{4}\,\mathcal{Q}_{t}^{2}\right] \overset{\eqref{eq:C_1}}{\leqslant}C\cdot C_{1}^{2}\sum_{t\in[n]}\mathbb{E }\left[\mathcal{Q}_{t}^{2}\right]. \tag{5.8}\]
Let us now expand \(\mathcal{Q}_{t}^{2}\).
\[\mathcal{Q}_{t}^{2}\] \[=\left(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t }D_{t}Y_{t}^{\intercal})\boldsymbol{e}_{p}[\mathcal{A}_{t}]\right)^{4} \leqslant C\left((\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}e_{p}[\mathcal{A }_{t}])^{4}+(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{ \intercal}e_{p}[\mathcal{A}_{t}])^{4}\right)\] \[=C\left(v_{p}^{4}+(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_ {t}D_{t}\boldsymbol{y}_{p})^{4}\right)=C\left(v_{p}^{4}+\left(\boldsymbol{y}_ {p}^{\intercal}D_{t}Y_{t}^{\intercal}\boldsymbol{v}[\mathcal{A}_{t}] \boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}\boldsymbol{y}_{p}\right) ^{2}\right).\]
We will estimate the expected values of each of the two terms separately. The first term can be dealt with in a simple manner.
\[\mathbb{E}\left[v_{p}^{4}\right] =\mathbb{E}\left[\mathbb{E}\left[v_{p}^{4}\,|\,\mathcal{F}_{t-1} \right]\right]=\mathbb{E}\left[\frac{1}{n-t+1}\sum_{i\in\mathcal{A}_{t}}v_{i}^ {4}\right]\leqslant\|\boldsymbol{v}\|_{\infty}^{2}\ \ \mathbb{E}\left[\frac{1}{n-t+1}\sum_{i\in \mathcal{A}_{t}}v_{i}^{2}\right]\] \[\stackrel{{\text{Obs.\ref{eq:1}}}}{{=}}\| \boldsymbol{v}\|_{\infty}^{2}\,\frac{\|\boldsymbol{v}\|^{2}}{n}.\]
The second term, on the other hand, requires a more delicate treatment. In particular, we will take advantage of the _cancellations_ inherent in the relation \(\boldsymbol{v}^{\intercal}Y=0\) (see (2.3)) in order to obtain _favorable_ bounds on the moments of \(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\). This is needed because we stipulated only a very weak upper bound on the density of the vector \(\boldsymbol{v}\) in (1.11), as already discussed in the introduction. To see this, consider the "good" scenario when \(d=1\) and the \(y_{i}\)'s are all equal (observe that the \(y_{i}\)'s are scalars in this case). Simple algebra then reveals that the best possible bound one can obtain in this generality is the following:
\[\mathbb{E}\left[\left(\boldsymbol{y}_{p}^{\intercal}D_{t}Y_{t}^{\intercal} \boldsymbol{v}[\mathcal{A}_{t}]\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y _{t}D_{t}\boldsymbol{y}_{p}\right)^{2}\right]\leqslant C(\zeta)\,\|\boldsymbol {v}\|_{\infty}^{4}\,\frac{|\mathcal{A}_{t}|^{4}}{|\mathcal{A}_{t}|^{4}}=C( \zeta)\,\|\boldsymbol{v}\|_{\infty}^{4}.\]
Summing this bound over \(t\in[n]\) and dividing by \(\|\boldsymbol{v}\|^{4}\), condition (ii) would then require
\[\lim_{n\to\infty}n\,\frac{\|\boldsymbol{v}\|_{\infty}^{4}}{\|\boldsymbol{v}\|^{4}}=0,\]
which is a _much_ stronger requirement than the one we stipulated in (1.11).
In order to exploit the relation \(\boldsymbol{v}^{\intercal}Y=0\), we will use Proposition 3.1 to derive the following lemma.
**Lemma 5.3**: _For any \(t\in[n]\), we have_
\[\mathbb{E}\left[\|\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{ t}\|^{4}\right] \leqslant Cd\|\boldsymbol{v}\|_{\infty}^{4}\left(\frac{(n-t+1)^{2}(t-1)^{2}}{ n^{4}}\|Y\|_{\mathrm{Frob}}^{4}+\frac{(n-t+1)(t-1)}{n^{2}}\zeta^{2}\|Y\|_{ \mathrm{Frob}}^{2}\right)\] \[\mathbb{E}\left[\|\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{ t}\|^{2}\right] \leqslant\frac{(n-t+1)(t-1)}{n(n-1)}\sum_{j\in[d]}\sum_{i\in[n]}v_{i}^{2}y_{ij}^ {2}\leqslant\frac{(n-t+1)(t-1)}{n(n-1)}\|\boldsymbol{v}\|_{\infty}^{2}\cdot \|Y\|_{\mathrm{Frob}}^{2}.\]
Proof.: First, we can bound
\[\|\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\|^{4}=\left( \sum_{j\in[d]}\left(\sum_{i\in\mathcal{A}_{t}}v_{i}y_{ij}\right)^{2}\right)^{2 }\leqslant d\sum_{j\in[d]}\left(\sum_{i\in\mathcal{A}_{t}}v_{i}y_{ij}\right)^ {4}\]
where we used the Cauchy-Schwarz inequality. Now taking expectation on both sides and using \(\boldsymbol{v}^{\intercal}Y=0\), we obtain from the second formula given in (3.19):
\[\mathbb{E}\left[\|\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{ t}\|^{4}\right]\] \[\leqslant Cd\sum_{j\in[d]}\left(\frac{(n-t+1)^{2}(t-1)^{2}}{n^{4} }\left(\sum_{i\in[n]}v_{i}^{2}y_{ij}^{2}\right)^{2}+\frac{(n-t+1)(t-1)}{n^{2}} \sum_{i\in[n]}v_{i}^{4}y_{ij}^{4}\right).\]
Next, we note that
\[\sum_{j\in[d]}\left(\sum_{i\in[n]}v_{i}^{2}y_{ij}^{2}\right)^{2} \leqslant\|\mathbf{v}\|_{\infty}^{4}\sum_{j\in[d]}\left(\sum_{i\in[n]}y _{ij}^{2}\right)^{2}\] \[\leqslant\|\mathbf{v}\|_{\infty}^{4}\left(\sum_{j\in[d]}\sum_{i\in[n] }y_{ij}^{2}\right)^{2}=\|\mathbf{v}\|_{\infty}^{4}\|Y\|_{\mathrm{Frob}}^{4}.\]
Also,
(5.12) \[\sum_{j\in[d]}\sum_{i\in[n]}v_{i}^{4}y_{ij}^{4}\leqslant\|\mathbf{v}\|_{\infty}^{4}\sum_{j\in[d]}\sum_{i\in[n]}y_{ij}^{4}\leqslant\zeta^{2}\|\mathbf{v}\|_{\infty}^{4}\sum_{j\in[d]}\sum_{i\in[n]}y_{ij}^{2}=\zeta^{2}\|\mathbf{v}\|_{\infty}^{4}\|Y\|_{\mathrm{Frob}}^{2}\leqslant n\zeta^{4}\|\mathbf{v}\|_{\infty}^{4}\]
where in the final step we used the fact that \(\|Y\|_{\mathrm{Frob}}^{2}\leqslant n\zeta^{2}\) (2.2).
Next we deal with the sum over \(t\in[n-m]\) which requires additional work. To this end, let us recall the event \(\mathcal{G}_{2,t}\) from (2.6):
\[\mathcal{G}_{2,t}=\left\{\|Y_{t}^{\intercal}Y_{t}-\frac{n-t+1}{n}Y^{\intercal}Y\|_{\mathrm{op}}\leqslant\frac{n-t+1}{2n}\lambda_{\min}\big{(}Y^{\intercal}Y\big{)}\right\}.\]
We can bound the expectation _on_ the event \(\mathcal{G}_{2,t}\) as follows:
\[\sum_{t=1}^{n-m}\mathbb{E}\left[\|\mathbf{y}_{p}\|^{4}\left(\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}^{2}Y_{t}^{\intercal}\mathbf{v}[\mathcal{A}_{t}]\right)^{2}\mathds{1}_{\mathcal{G}_{2,t}}\right]\] \[=\sum_{t=1}^{n-m}\mathbb{E}\left[\mathbb{E}\left[\|\mathbf{y}_{p}\|^{4}\left(\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}^{2}Y_{t}^{\intercal}\mathbf{v}[\mathcal{A}_{t}]\right)^{2}\,|\,\mathcal{F}_{t-1}\right]\mathds{1}_{\mathcal{G}_{2,t}}\right]\]
where in the last step we again used the fact that \(\|Y\|_{\mathrm{Frob}}^{2}\leqslant n\zeta^{2}\) (2.2). Bounding the same expectation _on_ the _unlikely_ event \(\mathcal{G}_{2,t}^{c}\) is relatively simpler in view of Lemma 3.4.
(5.18) \[\sum_{t=1}^{n-m}\mathbb{E}\left[\|\mathbf{y}_{p}\|^{4}\left(\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}^{2}Y_{t}^{\intercal}\mathbf{v}[\mathcal{A}_{t}]\right)^{2}\mathds{1}_{\mathcal{G}_{2,t}^{c}}\right]\]
_Furthermore, we have the following expressions or bounds for the mean and variance of \(T_{n}\):_
\[\mathbb{E}[T_{n}]=\|\boldsymbol{v}\|^{2} \tag{5.21}\]
_and_
\[\frac{\operatorname{Var}\left[T_{n}\right]}{\|\boldsymbol{v}\|^{4}}\leqslant \frac{\|\boldsymbol{v}\|_{\infty}^{2}}{\|\boldsymbol{v}\|^{2}}(\log en)^{2}+ \frac{1}{n-1}. \tag{5.22}\]
Now let us set
\[m=C\kappa^{2}\zeta^{4}\log n\]
for an appropriately large enough constant \(C\) so that, by (5.20),
\[\lim_{n\to\infty}\frac{\mathbb{E}\left[|Q_{n}-T_{n}|\right]}{\|\boldsymbol{v}\|^{2}}=0\text{ as soon as }\lim_{n\to\infty}\max(m^{1/2},d^{1/2}(\log n)^{1/2},\kappa^{1/2}(\log n)^{1/2})\frac{\|\boldsymbol{v}\|_{\infty}}{\|\boldsymbol{v}\|}=0\]
which follows from the first part of assumption (1.11) in Theorem 1.3. Therefore, under the same assumption, \(|Q_{n}-T_{n}|/\|\boldsymbol{v}\|^{2}\) converges to \(0\) in probability as \(n\to\infty\). Similarly, by (5.21) and (5.22), the assumption implies that \(\frac{T_{n}}{\|\boldsymbol{v}\|^{2}}\) converges to \(1\) in probability. Together they imply condition (iii). We also need to check that \(m\), as defined above, is at most \(n\) for all large enough \(n\), which has already been verified under assumption (1.11) in the previous part.
Proof of Lemma 5.4.: _Proof of (5.19)._ To this end, let us recall from (3.4):
\[\|B\boldsymbol{u}_{t}\|^{-2}\left\langle B\boldsymbol{u}_{t}, \begin{bmatrix}\boldsymbol{v}\\ 0\end{bmatrix}\right\rangle^{2} =\|B\boldsymbol{u}_{t}\|^{2}\left(\boldsymbol{v}[\mathcal{A}_{t} ]^{\intercal}(I_{a_{t}}-Y_{t}D_{t}Y_{t}^{\intercal})\boldsymbol{e}_{p}[ \mathcal{A}_{t}]\right)^{2} \tag{5.23}\] \[=\|B\boldsymbol{u}_{t}\|^{2}\,\mathcal{Q}_{t}=\mathcal{Q}_{t}+( \|B\boldsymbol{u}_{t}\|^{2}-1)\mathcal{Q}_{t}\]
with \(\mathcal{Q}_{t}=(\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t} D_{t}Y_{t}^{\intercal})\boldsymbol{e}_{p}[\mathcal{A}_{t}])^{2}\). Let us deal with the second term first.
\[(\|B\boldsymbol{u}_{t}\|^{2}-1)\mathcal{Q}_{t}\overset{\eqref{eq:B \boldsymbol{u}_{t}}}{\leqslant}C_{1}\boldsymbol{y}_{p}^{\intercal}D_{t} \boldsymbol{y}_{p}\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t }D_{t}Y_{t}^{\intercal})\boldsymbol{e}_{p}[\mathcal{A}_{t}]\boldsymbol{e}_{p} [\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t}D_{t}Y_{t}^{\intercal}) \boldsymbol{v}[\mathcal{A}_{t}]\] \[\overset{\eqref{eq:B\boldsymbol{u}_{t}}}{\leqslant}2C_{1} \boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}\boldsymbol{e}_{p}[\mathcal{A}_{t} ]\boldsymbol{y}_{p}^{\intercal}D_{t}\boldsymbol{y}_{p}\boldsymbol{e}_{p}[ \mathcal{A}_{t}]^{\intercal}\boldsymbol{v}[\mathcal{A}_{t}]+2C_{1}^{2} \boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{\intercal} \boldsymbol{e}_{p}[\mathcal{A}_{t}]\boldsymbol{e}_{p}[\mathcal{A}_{t}]^{ \intercal}Y_{t}D_{t}Y_{t}^{\intercal}\boldsymbol{v}[\mathcal{A}_{t}].\]
We now take conditional expectations on both sides with respect to \(\mathcal{F}_{t-1}\) for each of the two terms separately (recall that \(|\mathcal{A}_{t}|=a_{t}=n-t+1\)).
\[\mathbb{E}\left[\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal} \boldsymbol{e}_{p}[\mathcal{A}_{t}]\boldsymbol{y}_{p}^{\intercal}D_{t} \boldsymbol{y}_{p}\boldsymbol{e}_{p}[\mathcal{A}_{t}]^{\intercal}\boldsymbol {v}[\mathcal{A}_{t}]\,|\,\mathcal{F}_{t-1}\right]=\frac{1}{n-t+1}\,\sum_{j \in\mathcal{A}_{t}}v_{j}^{2}\boldsymbol{y}_{j}^{\intercal}D_{t}\boldsymbol{y }_{j},\text{ whereas}\] \[\mathbb{E}\left[\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D _{t}Y_{t}^{\intercal}\boldsymbol{e}_{p}[\mathcal{A}_{t}]\boldsymbol{e}_{p}[ \mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{\intercal}\boldsymbol{v}[ \mathcal{A}_{t}]\,|\,\mathcal{F}_{t-1}\right]=\frac{1}{n-t+1}\,\boldsymbol{v}[ \mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{\intercal}Y_{t}D_{t}Y_{t}^{ \intercal}\boldsymbol{v}[\mathcal{A}_{t}]\] \[\overset{\eqref{eq:B\boldsymbol{u}_{t}}}{=}\frac{1}{n-t+1} \,\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}(D_{t}^{-1}-I_{d})D_{t }Y_{t}^{\intercal}\boldsymbol{v}[\mathcal{A}_{t}]\leqslant\frac{1}{n-t+1}\, \boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{\intercal} \boldsymbol{v}[\mathcal{A}_{t}].\]
where in the last step we used the fact that \(D_{t}-D_{t}^{2}\) is at most \(D_{t}\) in Loewner order. Putting together the last two displays, we obtain
\[\mathbb{E}\left[(\|B\boldsymbol{u}_{t}\|^{2}-1)\mathcal{Q}_{t}\,|\, \mathcal{F}_{t-1}\right]\leqslant \frac{2C_{1}}{n-t+1}\,\sum_{j\in\mathcal{A}_{t}}v_{j}^{2}\boldsymbol{y}_{j}^ {\intercal}D_{t}\boldsymbol{y}_{j}+\frac{2C_{1}^{2}}{n-t+1}\,\boldsymbol{v}[ \mathcal{A}_{t}]^{\intercal}Y_{t}D_{t}Y_{t}^{\intercal}\boldsymbol{v}[ \mathcal{A}_{t}].\]
As to the first term on the right hand side of (5.23), we can write
\[\mathbb{E}\left[\mathcal{Q}_{t}\,|\,\mathcal{F}_{t-1}\right]=\frac{1}{n-t+1}\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-Y_{t}D_{t}Y_{t}^{\intercal})^{2}\boldsymbol{v}[\mathcal{A}_{t}]\] \[=\frac{1}{n-t+1}\boldsymbol{v}[\mathcal{A}_{t}]^{\intercal}(I_{a_{t}}-2Y_{t}D_{t}Y_{t}^{\intercal}+Y_{t}D_{t}Y_{t}^{\intercal}Y_{t}D_{t}Y_{t}^{\intercal})\boldsymbol{v}[\mathcal{A}_{t}]\]
Using the second moment bound in Lemma 5.3,
\[\begin{split}&\mathbb{E}\left[\sum_{t=n-m+1}^{n}\frac{1}{n-t+1}\, \|\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\|^{2}\right]\\ &=\sum_{t=n-m+1}^{n}\frac{1}{n-t+1}\frac{(n-t+1)(t-1)}{n(n-1)}\| \mathbf{v}\|_{\infty}^{2}\cdot\|Y\|_{\mathrm{Frob}}^{2}\leqslant\frac{\|\mathbf{v}\|_ {\infty}^{2}\cdot\|Y\|_{\mathrm{Frob}}^{2}}{n(n-1)}\,nm\\ &=\frac{\|\mathbf{v}\|_{\infty}^{2}\cdot\|Y\|_{\mathrm{Frob}}^{2} \cdot m}{n-1}\leqslant C\zeta^{2}\|\mathbf{v}\|_{\infty}^{2}m\end{split} \tag{5.27}\]
where in the last step we used the fact that \(\|Y\|_{\mathrm{Frob}}^{2}\leqslant n\zeta^{2}\).
Next we bound the expectation of \(S_{n-m}\) towards which we will once again use the event \(\mathcal{G}_{2,t}\) from (2.6) and its consequences as in (3.28). We can write
\[\begin{split} S_{n-m}&\leqslant\sum_{t=1}^{n-m}\frac{1}{n-t+1}\,\|D_{t}\|_{\mathrm{op}}\|\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\|^{2}\\ &\overset{\eqref{eq:S_n-m}}{\leqslant}\sum_{t=1}^{n-m}\frac{1}{n-t+1}\,\frac{2\kappa}{n-t+1}\,\|\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\|^{2}+\sum_{t=1}^{n-m}\frac{1}{n-t+1}\,\|\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\|^{2}\mathds{1}_{\mathcal{G}_{2,t}^{c}}\\ &=:S_{n-m}^{\star}+\mathrm{Rem}.\end{split} \tag{5.28}\]
Using the second moment bound in Lemma 5.3, we then obtain
\[\begin{split}\mathbb{E}\left[S_{n-m}^{\star}\right]& \leqslant\frac{C\|\mathbf{v}\|_{\infty}^{2}\kappa}{n(n-1)}\cdot\|Y\|_{ \mathrm{Frob}}^{2}\sum_{t=1}^{n-m}\,\frac{t-1}{n-t+1}\leqslant\frac{C\|\mathbf{v} \|_{\infty}^{2}\kappa\cdot\log en}{n-1}\|Y\|_{\mathrm{Frob}}^{2}\\ &\leqslant C\zeta^{2}\|\mathbf{v}\|_{\infty}^{2}\kappa\log en.\end{split} \tag{5.29}\]
where again in the last step we used the fact that \(\|Y\|_{\mathrm{Frob}}^{2}\leqslant n\zeta^{2}\).
As to \(\mathbb{E}\left[\mathrm{Rem}\right]\), first notice that for _any possible choice_ of \(\mathcal{A}_{t}\subset[n]\),
\[\begin{split}\|\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}\|^{2}& =\sum_{j\in[d]}\|\mathbf{v}[\mathcal{A}_{t}]^{\intercal}Y_{t}[\,:j]\|^ {2}\leqslant\sum_{j\in[d]}\|\mathbf{v}[\mathcal{A}_{t}]\|^{2}\|Y_{t}[\,:j]\|^{2} \leqslant\|\mathbf{v}\|^{2}\sum_{j\in[d]}\|Y_{t}[\,:j]\|^{2}\\ &=\|\mathbf{v}\|^{2}\|Y_{t}\|_{\mathrm{Frob}}^{2}\overset{\eqref{eq:S_n -m}}{\leqslant}(n-t+1)\zeta^{2}\|\mathbf{v}\|^{2},\end{split}\]
where in the second step we used the Cauchy-Schwarz inequality. Plugging this bound into the expression of \(\mathrm{Rem}\), we get
\[\mathbb{E}\left[\mathrm{Rem}\right]\leqslant\zeta^{2}\|\mathbf{v}\|^{2}\sum_{t=1} ^{n-m}\mathbb{P}[\mathcal{G}_{2,t}^{c}]\overset{\eqref{eq:S_n-m}}{\leqslant }Cdn\zeta^{2}\|\mathbf{v}\|^{2}\exp\left(-c\,(\kappa\zeta^{2})^{-2}m\right).\]
Combining this with (5.29) and plugging the resulting bound into (5.28) we obtain after taking expectations on both sides,
\[\mathbb{E}\left[S_{n-m}\right]\leqslant C\zeta^{2}\|\mathbf{v}\|_{\infty}^{2} \kappa\log en+Cdn\zeta^{2}\|\mathbf{v}\|^{2}\exp\left(-c\,(\kappa\zeta^{2})^{-2}m \right).\]
Together with (5.27) and subsequently (5.26), this implies
\[\mathbb{E}\left[S_{n}\right]\leqslant C\zeta^{2}\|\mathbf{v}\|_{\infty}^{2}(m+ \kappa\log en)+Cdn\zeta^{2}\|\mathbf{v}\|^{2}\exp\left(-c\,(\kappa\zeta^{2})^{-2}m \right).\]
Along with (5.25), the above bound yields (5.20).
Proof of (5.21).: Finally, it remains to establish the stated facts about \(T_{n}\). Clearly,
\[T_{n}=\sum_{t\in[n]}v_{p_{t}}^{2}\cdot\sum_{s\in[t]}\frac{1}{n-s+1}=\sum_{t\in[n ]}v_{p_{t}}^{2}\cdot H(t)\]
where
\[H(t)=\sum_{s\in[t]}\frac{1}{n-s+1}=\sum_{s\in[n]}\frac{1}{s}\cdot\mathds{1}_{t +s>n}\leqslant H(n)\text{ for all }t.\]
Hence,
\[T_{n}=\sum_{j\in[n]}v_{j}^{2}\cdot H(\theta_{j}) \tag{5.30}\]
where \(\theta=\pi^{-1}\) is the inverse permutation of \(\pi\) defined as \(\pi(t)=p_{t}\). Since \(\theta\) is also a uniformly random permutation, note that,
\[\mathbb{E}\left[H(\theta_{1})\right]=\frac{1}{n}\sum_{t\in[n]}\sum_{s\in[n]} \frac{1}{s}\cdot\mathds{1}_{t+s>n}=\frac{1}{n}\sum_{s\in[n]}\frac{1}{s}\cdot \sum_{t\in[n]}\mathds{1}_{t+s>n}=1. \tag{5.31}\]
Also,
\[\mathbb{E}\left[H(\theta_{1})^{2}\right]=\frac{1}{n}\sum_{t\in[n]}H(t)^{2} \leqslant H(n)^{2} \tag{5.32}\]
and
\[\begin{split}\mathbb{E}\left[H(\theta_{1})H(\theta_{2})\right]&=\frac{1}{n(n-1)}\sum_{t\in[n]}H(t)\cdot\sum_{s\neq t}H(s)=\frac{1}{n(n-1)}\sum_{t\in[n]}H(t)(n-H(t))\\ &=\frac{1}{n-1}(n-\mathbb{E}\left[H(\theta_{1})^{2}\right])\leqslant\frac{n}{n-1}.\end{split} \tag{5.33}\]
Thus \(\mathbb{E}\left[T_{n}\right]=\left\|\mathbf{v}\right\|^{2}\) in view of (5.30) and (5.31). Also,
\[\begin{split}\operatorname{Var}[T_{n}]&=\mathbb{E}\left[T_{n}^{2}\right]-\left\|\mathbf{v}\right\|^{4}=\mathbb{E}\Big{[}\Big{(}\sum_{j\in[n]}v_{j}^{2}H(\theta_{j})\Big{)}^{2}\Big{]}-\left\|\mathbf{v}\right\|^{4}\\ &=\Big{(}\sum_{t\in[n]}v_{t}^{4}\Big{)}\mathbb{E}\left[H(\theta_{1})^{2}\right]+\Big{(}\sum_{1\leqslant s\neq t\leqslant n}v_{s}^{2}v_{t}^{2}\Big{)}\cdot\mathbb{E}\left[H(\theta_{1})H(\theta_{2})\right]-\left\|\mathbf{v}\right\|^{4}\\ &\overset{(5.32),(5.33)}{\leqslant}\|\mathbf{v}\|_{\infty}^{2}\|\mathbf{v}\|^{2}H(n)^{2}+\Big{(}\frac{n}{n-1}-1\Big{)}\|\mathbf{v}\|^{4}\\ &\leqslant\|\mathbf{v}\|^{4}\Big{(}\frac{\|\mathbf{v}\|_{\infty}^{2}}{\|\mathbf{v}\|^{2}}(\log en)^{2}+\frac{1}{n-1}\Big{)}.\end{split}\]
This finishes the proof of this lemma.
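As a quick numerical illustration of the combinatorial identities used above (for an arbitrary, illustrative value of \(n\)), one can check directly that \(\frac{1}{n}\sum_{t\in[n]}H(t)=1\), i.e., (5.31), and that \(H(n)\leqslant\log en\):

```python
import numpy as np

n = 50
# H(t) = sum_{s=1}^{t} 1/(n - s + 1), computed for t = 1, ..., n
H = np.cumsum(1.0 / np.arange(n, 0, -1))

print(np.isclose(H.mean(), 1.0))    # (1/n) sum_t H(t) = E[H(theta_1)] = 1
print(H[-1] <= np.log(np.e * n))    # H(n) is the harmonic sum, at most log(en)
```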
## 6 Outlook
Our analysis in this paper raises a few questions, which we mention here as possible directions to pursue for the future.
1. It will be interesting to get a sub-Gaussian tail probability bound with the improved limiting variance that we get here.
2. Incorporating the simplification from the skeletal process in the algorithm itself might reduce the running time.
3. In this article, we studied the offline version of the GSW design algorithm, and pivot randomization played a crucial role in the analysis. Incorporating randomization into an online version of the algorithm might help analyze the online process under _milder_ assumptions, but how to do so is not immediately apparent.
4. The GSW design algorithm can be thought of as a discrete localization process where we want to sample a random assignment vector from a distribution "minimizing" a given cost function. In the present case, the GSW design satisfies \[\max\left\{\phi\cdot\left\|\mathrm{Cov}(\mathbf{z})\right\|,(1-\phi)\cdot\left\| \mathrm{Cov}(\xi^{-1}X^{\intercal}\mathbf{z})\right\|\right\}\leqslant 1,\] and the algorithm behaves as a discrete coordinate-wise localization process. In general, the distribution of the random assignment vector minimizing the cost function will be unknown; it will depend on the covariate matrix \(X\) and the robustness parameter \(\phi\). It will be interesting to incorporate the techniques from localization literature to develop an algorithmic construction for a general cost function.
5. Our approach to finding and analyzing the skeletal process is general. This can be used to analyze the Horvitz-Thompson estimator based on the GSW design in non-uniform probability assignment cases (see Remark 1.1). It is, therefore, natural to ask if one can generalize the strategy in different directions, e.g., in handling non-linear constraints.
**Acknowledgments.** SC was supported by the NSF Grant DMS-1916375. PD was supported partially by the Campus Research Board Grant RB23016. SG's research was supported by the SERB grant SRG/2021/000032, a grant from the Department of Atomic Energy, Government of India, under project 12-R&D-TFR-5.01-0500 and in part by a grant from the Infosys Foundation as a member of the Infosys-Chandrasekharan virtual center for Random Geometry. A significant part of this research was accomplished when SC visited the School of Mathematics at the Tata Institute of Fundamental Research (TIFR), Mumbai. SG and SC are also grateful to the International Centre for Theoretical Sciences (ICTS), Bengaluru, for their kind hospitality during the early phase of the project. The authors thank Dan Spielman, Fredrik Savje, Christopher Harshaw and Peng Zhang for their encouraging and valuable comments on the first version of the manuscript.
|
2303.01186
|
Discrete-time Competing-Risks Regression with or without Penalization
|
Many studies employ the analysis of time-to-event data that incorporates
competing risks and right censoring. Most methods and software packages are
geared towards analyzing data that comes from a continuous failure time
distribution. However, failure-time data may sometimes be discrete either
because time is inherently discrete or due to imprecise measurement. This paper
introduces a novel estimation procedure for discrete-time survival analysis
with competing events. The proposed approach offers two key advantages over
existing procedures: first, it expedites the estimation process for a large
number of unique failure time points; second, it allows for straightforward
integration and application of widely used regularized regression and screening
methods. We illustrate the benefits of our proposed approach by conducting a
comprehensive simulation study. Additionally, we showcase the utility of our
procedure by estimating a survival model for the length of stay of patients
hospitalized in the intensive care unit, considering three competing events:
discharge to home, transfer to another medical facility, and in-hospital death.
|
Tomer Meir, Malka Gorfine
|
2023-03-02T11:57:10Z
|
http://arxiv.org/abs/2303.01186v2
|
# Discrete-time Competing-Risks Regression with or without Penalization
###### Abstract
Many studies employ the analysis of time-to-event data that incorporates competing risks and right censoring. Most methods and software packages are geared towards analyzing data that comes from a continuous failure time distribution. However, failure-time data may sometimes be discrete either because time is inherently discrete or due to imprecise measurement. This paper introduces a novel estimation procedure for discrete-time survival analysis with competing events. The proposed approach offers two key advantages over existing procedures: first, it accelerates the estimation process; second, it allows for straightforward integration and application of widely used regularized regression and screening methods. We illustrate the benefits of our proposed approach by conducting a comprehensive simulation study. Additionally, we showcase the utility of our procedure by estimating a survival model for the length of stay of patients hospitalized in the intensive care unit, considering three competing events: discharge to home, transfer to another medical facility, and in-hospital death.
Competing events; Regularized Regression; Penalized Regression; Sure Independent Screening; Survival Analysis
## 1 Introduction
Most survival analysis methods and software are designed for data with continuous failure time distributions. However, there are cases where failure times are discrete, either because the time unit is discrete or due to measurement inaccuracies. For instance, in the US, the shift in presidential party control only happens every four years in January [1]. In some cases, events can happen at any point in time, but only the time interval in which each event occurred is recorded in available data. For instance, death from cancer recorded in months
since diagnosis [2]. It is commonly recognized that using standard continuous-time models on discrete-time data without proper adjustments can lead to biased estimators for discrete-time models [2, 3].
Competing events occur when individuals are susceptible to several types of events but can only experience at most one event at a time. If multiple events can happen simultaneously, they can be treated as a separate event type [4]. For instance, competing risks in a study of hospital length of stay could be discharge and in-hospital death, where the occurrence of one of these events prevents observation of the other event for the same patient. Another classic example of competing risks is cause-specific mortality, such as death from heart disease, cancer, or other causes [4, 5].
The motivation for this project is to analyze data of length of stay (LOS) of patients in healthcare facilities. LOS typically refers to the number of days a patient stays in the hospital during a single admission [6, 7]. Accurate prediction of LOS is crucial for hospital management and planning of bed capacity, as it affects healthcare delivery access, quality, and efficiency [6]. In particular, hospitalizations in intensive care units (ICU) consume a significant amount of hospital resources per patient [8]. In this study, we use the publicly available Medical Information Mart for Intensive Care (MIMIC) - IV (version 2.0) data [9, 10] to develop a model for predicting LOS in ICU based on patients' characteristics upon arrival in ICU. The study involves 25,170 ICU admissions from 2014 to 2020 with only 28 unique times, resulting in many tied events at each time point. The three competing events analyzed were: discharge to home (69.0%), transfer to another medical facility (21.4%), and in-hospital death (6.1%). Patients who left the ICU against medical advice (1.0%) were considered censored, and administrative censoring was imposed for patients hospitalized for more than 28 days (2.5%).
Regression analysis of continuous-time survival data with competing risks can be performed using standard non-competing events tools because the likelihood function for the continuous-time setting can be factored into likelihoods for each cause-specific hazard function [4]. However, this is not the case for discrete-time data with competing risks (see Lee et al. [2] and references therein). Limited work has been done on discrete-time data with competing risks. Most existing works are based on simultaneously estimating all the parameters via the full likelihood function, which are computationally time consuming. In contrast, Lee et al. [2] showed that if one naively treats competing events as censoring in the discrete-time likelihood, separate estimation of cause-specific hazard models for different event types may be accomplished using a collapsed likelihood which is equivalent to fitting a generalized linear model to repeated binary outcomes. Moreover, the maximum collapsed-likelihood estimators are consistent and asymptotically normal under standard regularity conditions, which gives rise to Wald confidence intervals and likelihood-ratio tests for the effects of covariates. Wu et al. [3] focused on two competing events and used a different approach than that of Lee et al. [2]. However, they noted that it leads to the same estimators. The contribution of Wu et al. [3] is mainly by allowing an additional fixed effect of medical center in the model.
In this work we provide a new estimation procedure for analysing discrete survival time data with competing events. We simplify and speed up the estimation process based on the collapsed-likelihood approach of Lee et al. [2]. Our approach allows for the use of common penalized regression methods like LASSO and elastic net among others [11] and enables easy implementation of screening methods for high-dimensional data, such
as sure independent screening [12, 13]. Our Python software, PyDTS [14], implements our method, the method of Lee et al. [2], and additional tools for discrete-time survival analysis.
The rest of the article is structured as follows. Section 2 summarizes the collapsed likelihood of Lee et al. [2] and thoroughly explains the proposed estimation approach. Section 3 presents the results of a comprehensive simulation study, demonstrating the superiority of our method in terms of computational efficiency. Section 4 demonstrates the use of our method on the ICU LOS data of MIMIC. Finally, Section 5 concludes with a discussion.
## 2 Methods
### Notation and Models
Let \(T\) be a discrete event time that can take on only the values \(1,2,\ldots,d\), and let \(J\) represent the type of event, with \(J\in\{1,\ldots,M\}\). Also, consider a \(p\times 1\) vector of time-independent covariates \(Z\). The setting of time-dependent covariates will be discussed later. A general discrete cause-specific hazard function is of the form
\[\lambda_{j}(t|Z)=\Pr(T=t,J=j|T\geq t,Z)\,\ \ \ t=1,2,\ldots,d\,\ \ \ j=1,\ldots,M\,.\]
As described by Allison [1], the semi-parametric models for the hazard functions, based on a regression transformation model, can be represented as
\[h(\lambda_{j}(t|Z))=\alpha_{jt}+Z^{T}\beta_{j}\,\ \ t=1,2,\ldots,d\,\ \ j=1, \ldots,M\,,\]
where \(h\) is a known function. The total number of unknown parameters is \(M(d+p)\). Having a shared \(Z\) among the \(M\) models does not necessitate the use of identical covariates in all the models. Because the regression coefficients \(\beta_{j}\) are specific to each event type, any coefficient can be zeroed out to exclude its associated covariate. We adopt the popular logit function \(h(a)=\log\{a/(1-a)\}\) and get
\[\lambda_{j}(t|Z)=\frac{\exp(\alpha_{jt}+Z^{T}\beta_{j})}{1+\exp(\alpha_{jt}+Z^ {T}\beta_{j})}\,. \tag{1}\]
Leaving \(\alpha_{jt}\) unspecified is similar to an unspecified baseline hazard function in the Cox proportional hazard model [15]. Thus, the model described above is considered a semi-parametric model in discrete time.
Let \(S(t|Z)=\Pr(T>t|Z)\) be the overall survival given \(Z\). Then, the probability of experiencing event of type \(j\) at time \(t\), \(t=1,\ldots,d\), \(j=1,\ldots,M\), equals
\[\Pr(T=t,J=j|Z)=\lambda_{j}(t|Z)S(t-1|Z)=\lambda_{j}(t|Z)\prod_{k=1}^{t-1} \left\{1-\sum_{j^{\prime}=1}^{M}\lambda_{j^{\prime}}(k|Z)\right\}\]
and the probability of event type \(j\) by time \(t\) given \(Z\), also known as the cumulative incidence function (CIF) of cause \(j\), is given by
\[F_{j}(t|Z)=\sum_{k=1}^{t}\lambda_{j}(k|Z)\prod_{l=1}^{k-1}\left\{1-\sum_{j^{ \prime}=1}^{M}\lambda_{j^{\prime}}(l|Z)\right\}\,.\]
Finally, the marginal probability of event type \(j\), given \(Z\), equals
\[\Pr(J=j|Z)=\sum_{t=1}^{d}\lambda_{j}(t|Z)\prod_{k=1}^{t-1}\left\{1-\sum_{j^{ \prime}=1}^{M}\lambda_{j^{\prime}}(k|Z)\right\}\,.\]
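To make the mapping from model parameters to the quantities above concrete, the short sketch below computes the cause-specific hazards of Eq. (1) and the resulting CIFs \(F_{j}(t|Z)\) for a single covariate vector. All numerical values (number of causes \(M\), number of time points \(d\), coefficients) are illustrative only and are not taken from the data analysis in this paper.

```python
import numpy as np

def cause_specific_hazard(alpha_j, beta_j, Z):
    """lambda_j(t|Z) for t = 1, ..., d under the logit link of Eq. (1)."""
    eta = alpha_j + Z @ beta_j                    # linear predictor, one entry per t
    return np.exp(eta) / (1.0 + np.exp(eta))

def cumulative_incidence(alphas, betas, Z):
    """F_j(t|Z) for all causes j and times t; alphas is (M, d), betas is (M, p)."""
    lam = np.stack([cause_specific_hazard(a, b, Z) for a, b in zip(alphas, betas)])
    overall = 1.0 - lam.sum(axis=0)               # 1 - sum_j lambda_j(t|Z)
    surv_prev = np.concatenate([[1.0], np.cumprod(overall)[:-1]])   # S(t-1|Z)
    return np.cumsum(lam * surv_prev, axis=1)

# toy example: M = 2 causes, d = 3 time points, p = 2 covariates
alphas = np.array([[-2.0, -1.5, -1.0], [-3.0, -2.5, -2.0]])
betas = np.array([[0.5, -0.3], [0.2, 0.4]])
print(cumulative_incidence(alphas, betas, Z=np.array([1.0, 0.5])))
```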
Our goal is estimating the parameters
\[\Omega=(\alpha_{11},\ldots,\alpha_{1d},\beta_{1}^{T},\ldots,\alpha_{M1}, \ldots,\alpha_{Md},\beta_{M}^{T})\,.\]
### The Collapsed Log-Likelihood Approach of Lee et al.
For clarity, the estimation method of Lee et al. [2] that employs a collapsed log-likelihood approach is briefly summarized. For simplicity, we temporarily assume two competing events, i.e., \(M=2\), with the aim of estimating \(\Omega=(\alpha_{11},\ldots,\alpha_{1d},\beta_{1}^{T},\alpha_{21},\ldots, \alpha_{2d},\beta_{2}^{T})\). The data consist of \(n\) independent observations, each with \((X_{i},\delta_{i},J_{i},Z_{i})\) where \(X_{i}=\min(C_{i},T_{i})\), \(C_{i}\) is a right-censoring time, \(\delta_{i}=I(T_{i}\leq C_{i})\) is the event indicator and \(J_{i}\in\{0,1,2\}\), where \(J_{i}=0\) if and only if \(\delta_{i}=0\), \(i=1,\ldots,n\). It is assumed that given the covariates, the censoring and failure times are independent and non-informative. Then, the likelihood function is proportional to
\[L = \prod_{i=1}^{n}\left\{\frac{\lambda_{1}(X_{i}|Z_{i})}{1-\lambda_{ 1}(X_{i}|Z_{i})-\lambda_{2}(X_{i}|Z_{i})}\right\}^{I(J_{i}=1)}\left\{\frac{ \lambda_{2}(X_{i}|Z_{i})}{1-\lambda_{1}(X_{i}|Z_{i})-\lambda_{2}(X_{i}|Z_{i}) }\right\}^{I(J_{i}=2)}\prod_{t=1}^{X_{i}}\{1-\lambda_{1}(t|Z_{i})-\lambda_{2} (t|Z_{i})\}\,.\]
Equivalently,
\[L = \prod_{i=1}^{n}\left[\prod_{j=1}^{2}\prod_{t=1}^{X_{i}}\left\{ \frac{\lambda_{j}(t|Z_{i})}{1-\lambda_{1}(t|Z_{i})-\lambda_{2}(t|Z_{i})} \right\}^{\delta_{jit}}\right]\prod_{t=1}^{X_{i}}\{1-\lambda_{1}(t|Z_{i})- \lambda_{2}(t|Z_{i})\}\]
and the log-likelihood (up to a constant) becomes
\[\log L = \sum_{i=1}^{n}\left[\sum_{j=1}^{2}\sum_{t=1}^{X_{i}}\left[\delta_{jit}\log\lambda_{j}(t|Z_{i})-\delta_{jit}\log\{1-\lambda_{1}(t|Z_{i})-\lambda_{2}(t|Z_{i})\}\right]+\sum_{t=1}^{X_{i}}\log\{1-\lambda_{1}(t|Z_{i})-\lambda_{2}(t|Z_{i})\}\right]\] \[= \sum_{i=1}^{n}\sum_{t=1}^{X_{i}}\left[\delta_{1it}\log\lambda_{1}(t|Z_{i})+\delta_{2it}\log\lambda_{2}(t|Z_{i})+\{1-\delta_{1it}-\delta_{2it}\}\log\{1-\lambda_{1}(t|Z_{i})-\lambda_{2}(t|Z_{i})\}\right]\,,\]
where \(\delta_{jit}\) equals one if subject \(i\) experienced event of type \(j\) at time \(t\); and 0 otherwise. Evidently, in contrast to the continuous-time setting with competing events, \(L\) cannot be decomposed into separate likelihoods for each cause-specific hazard function \(\lambda_{j}\). Estimating the vector of parameters \(\Omega\) through maximizing \(\log L\)
would entail maximizing with respect to \(M(d+p)\) parameters simultaneously, leading to a time-consuming process.
Alternatively, Lee et al. [2] suggested the following collapsed log-likelihood approach. The dataset is expanded such that for each observation \(i\) the expanded dataset includes \(X_{i}\) rows, i.e., pseudo observations, one row for each time \(t\), \(t\leq X_{i}\); see Table 1. At each time point \(t\), the pseudo observations can be considered as random variables from a conditional multinomial distribution with one of three possible outcomes \(\{\delta_{1it},\delta_{2it},1-\delta_{1it}-\delta_{2it}\}\). Then, estimation of \((\alpha_{11},\ldots,\alpha_{1d},\beta_{1}^{T})\) is based on a collapsed log-likelihood such that \(\delta_{2it}\) and \(1-\delta_{1it}-\delta_{2it}\) are combined. The collapsed log-likelihood for cause \(j=1\) based on a binary regression model with \(\delta_{1it}\) as the outcome is given by
\[\log L_{1}=\sum_{i=1}^{n}\sum_{t=1}^{X_{i}}\left[\delta_{1it}\log\lambda_{1}(t |Z_{i})+(1-\delta_{1it})\log\{1-\lambda_{1}(t|Z_{i})\}\right]\,.\]
Similarly, the collapsed log-likelihood for cause \(j=2\) with \(\delta_{2it}\) as the outcome becomes
\[\log L_{2}=\sum_{i=1}^{n}\sum_{t=1}^{X_{i}}\left[\delta_{2it}\log\lambda_{2}(t |Z_{i})+(1-\delta_{2it})\log\{1-\lambda_{2}(t|Z_{i})\}\right]\,,\]
and one can fit the two models, separately.
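The expansion into pseudo observations described above (cf. Table 1) can be illustrated in a few lines; the column names and the two toy records below are purely illustrative and not part of the original data.

```python
import pandas as pd

def expand_to_person_period(df):
    """One row per subject i and time t <= X_i, with binary outcomes delta_{jit}."""
    rows = []
    for _, r in df.iterrows():
        for t in range(1, int(r["X"]) + 1):
            rows.append({
                "id": r["id"], "t": t,
                "delta1": int(t == r["X"] and r["J"] == 1),
                "delta2": int(t == r["X"] and r["J"] == 2),
                "Z1": r["Z1"], "Z2": r["Z2"],
            })
    return pd.DataFrame(rows)

# subject 1 fails from cause 1 at time 3; subject 2 is censored at time 2
toy = pd.DataFrame({"id": [1, 2], "X": [3, 2], "J": [1, 0], "Z1": [0.2, -1.0], "Z2": [1, 0]})
print(expand_to_person_period(toy))
```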
In general, for \(M\) competing events, the estimators of \((\alpha_{j1},\ldots,\alpha_{jd},\beta_{j}^{T})\), \(j=1,\ldots,M\), are the respective values that maximize
\[\log L_{j}=\sum_{i=1}^{n}\sum_{t=1}^{X_{i}}\left[\delta_{jit}\log\lambda_{j}(t |Z_{i})+(1-\delta_{jit})\log\{1-\lambda_{j}(t|Z_{i})\}\right]\quad j=1,\ldots,M\,. \tag{2}\]
Namely, each maximization \(j\) consists of \(d+p\) parameters. Lee et al. showed that the estimators are asymptotically multivariate normally distributed and the covariance matrix can be consistently estimated.
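For concreteness, a direct (and, for large \(d\), slow) way to maximize the collapsed log-likelihood in Eq. (2) for a single cause \(j\) is to optimize over all \(d+p\) parameters jointly with a generic optimizer, as sketched below on simulated pseudo observations; the data and dimensions are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def neg_collapsed_loglik(params, t_idx, Z, delta_j, d):
    """Negative collapsed log-likelihood of Eq. (2) for one cause on expanded data."""
    alpha, beta = params[:d], params[d:]
    eta = alpha[t_idx] + Z @ beta                 # linear predictor per pseudo-row
    # delta*log(lam) + (1 - delta)*log(1 - lam) with lam = expit(eta)
    return -np.sum(delta_j * eta - np.logaddexp(0.0, eta))

rng = np.random.default_rng(0)
N, d, p = 300, 4, 2                               # pseudo-rows, time points, covariates
t_idx = rng.integers(0, d, size=N)                # zero-based time index of each pseudo-row
Z = rng.normal(size=(N, p))
delta_j = rng.binomial(1, 0.15, size=N)

fit = minimize(neg_collapsed_loglik, np.zeros(d + p), args=(t_idx, Z, delta_j, d))
print(fit.x)                                      # all d + p parameters, estimated jointly
```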
### The Proposed Approach
We use the collapsed log-likelihood approach and simplify the estimation process further. The proposed method offers two improvements over Lee et al. [2]: (1) substantial reduction in computation time, especially for high values of \(d\); (2) easy integration of penalized regression techniques (ridge, LASSO, elastic net, among others) and screening methods.
We denote by \(\tilde{X}\) the new column of times of the expanded dataset (see Table 1). For each event type \(j\), a conditional logistic regression approach [16, 17] replaces Eq. (2), stratifying the expanded dataset according to \(\tilde{X}\) and conditioning on the number of events within each stratum. This allows estimating each vector \(\beta_{j}\) separately from \(\alpha_{jt}\). The conditional likelihoods of the expanded data are given by
\[L_{j}^{\mathcal{C}}(\beta_{j})=\prod_{t=1}^{d}\frac{\exp(\sum_{i\in\mathcal{C}_{t}}\delta_{jit}Z_{i}^{T}\beta_{j})}{\sum_{d_{jt}\in\mathcal{S}_{t}}\exp(\sum_{i\in\mathcal{C}_{t}}d_{jit}Z_{i}^{T}\beta_{j})}\,\,\,\,,\,\,\,j=1,\ldots,M\,, \tag{3}\]
where \(\mathcal{C}_{t}\) is the set of all pseudo observations with \(\tilde{X}\) equal to \(t\), \(\mathcal{S}_{t}\) is the set of all possible combinations of \(\sum_{i=1}^{n}\delta_{jit}\) ones and \(\sum_{i=1}^{n}(1-\delta_{jit})\) zeros, \(d_{jt}\) is a vector in \(\mathcal{S}_{t}\), and \(d_{jit}\), a component of \(d_{jt}\), equals 0 or 1 with \(\sum_{i}\delta_{jit}=\sum_{i}d_{jit}\). Since Eq. (3) has the form of the partial likelihood of a Cox regression model when ties are present (see, for example, Eq. (8.4.3) of Klein [5]), any available Cox model routine can be used for estimating \(\beta_{j}\), \(j=1,\ldots,M\). The clogit function of R uses this trick and estimates a logistic regression model by maximizing the conditional likelihood: it creates the necessary dummy variables for the times and the strata, and then calls coxph. The clogit function uses the Breslow approximation for the conditional likelihood as a default, but the exact form and other common approximations for ties are also available.
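The same fit can also be obtained without a dedicated survival routine. Below is a minimal numpy/scipy sketch that maximizes the Breslow-approximated version of Eq. (3) directly on the original (non-expanded) data; the array names `X`, `delta_j` and `Z` are illustrative assumptions, and this sketch is not the PyDTS implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def neg_breslow_loglik(beta, X, delta_j, Z):
    """Negative Breslow-approximated conditional log-likelihood for one event type j.

    X       : (n,) integer observed times
    delta_j : (n,) indicator of an event of type j at time X_i
    Z       : (n, p) covariate matrix
    """
    eta = Z @ beta                            # linear predictors Z_i^T beta
    loglik = 0.0
    for t in np.unique(X[delta_j == 1]):
        at_risk = X >= t                      # the stratum C_t of pseudo-observations at time t
        events = (X == t) & (delta_j == 1)
        loglik += eta[events].sum()           # sum of linear predictors of the tied events
        loglik -= events.sum() * logsumexp(eta[at_risk])
    return -loglik

def fit_beta(X, delta_j, Z):
    """Step 1: estimate beta_j by maximizing the (Breslow) conditional likelihood."""
    res = minimize(neg_breslow_loglik, np.zeros(Z.shape[1]),
                   args=(X, delta_j, Z), method="BFGS")
    return res.x
```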
Using the estimators of \(\beta_{j}\), \(\widehat{\beta}_{j}\), \(j=1,\ldots,M\), we suggest estimating \(\alpha_{jt}\), \(j=1,\ldots,M\), \(t=1,\ldots,d\), through a series of \(Md\) single-dimensional optimization algorithms applied to the original (i.e., non-expanded) dataset such that
\[\widehat{\alpha}_{jt}=\text{argmin}_{a}\left\{\frac{1}{Y.(t)}\sum_{i=1}^{n}I( X_{i}\geq t)\frac{\exp(a+Z_{i}^{T}\widehat{\beta}_{j})}{1+\exp(a+Z_{i}^{T} \widehat{\beta}_{j})}-\frac{N_{j}(t)}{Y.(t)}\right\}^{2}\ \,\ \ j=1,\ldots,M\,\ \ t=1, \ldots,d\ \, \tag{4}\]
where \(Y.(t)=\sum_{i=1}^{n}I(X_{i}\geq t)\) and \(N_{j}(t)=\sum_{i=1}^{n}I(X_{i}=t,J_{i}=j)\). Eq. (4) involves minimizing the squared difference between the observed proportion of failures of type \(j\) at time \(t\), i.e., \(N_{j}(t)/Y.(t)\), and the expected proportion of failures, as determined by Model (1) and \(\widehat{\beta}_{j}\).
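Step 2 then amounts to \(d\) independent one-dimensional minimizations of Eq. (4). A minimal scipy sketch (with the same illustrative array names as above, and `beta_hat` standing for \(\widehat{\beta}_{j}\)) could read:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_alpha(X, J, Z, beta_hat, j, d):
    """Step 2: estimate alpha_{jt}, t = 1..d, by the squared-difference criterion of Eq. (4)."""
    eta = Z @ beta_hat
    alphas = np.full(d, np.nan)
    for t in range(1, d + 1):
        at_risk = X >= t
        if not at_risk.any():
            continue
        y_t = at_risk.sum()                     # Y.(t), the number at risk at time t
        n_jt = ((X == t) & (J == j)).sum()      # N_j(t), observed type-j events at time t

        def objective(a):
            p = np.exp(a + eta[at_risk]) / (1.0 + np.exp(a + eta[at_risk]))
            return (p.sum() / y_t - n_jt / y_t) ** 2

        # bounded search keeps the minimizer finite when N_j(t) = 0
        alphas[t - 1] = minimize_scalar(objective, bounds=(-20.0, 20.0), method="bounded").x
    return alphas
```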
In summary, the proposed estimation procedure consists of the following two speedy steps:
1. Using the expanded dataset, estimate each vector \(\beta_{j}\) individually, \(j=1,\ldots,M\), by maximizing Eq. (3) using a stratified Cox routine, such as the clogit function in the survival R package.
2. Using \(\widehat{\beta}_{j}\), \(j=1,\ldots,M\), of Step 1 and the original non-expanded dataset, estimate each \(\alpha_{jt}\), \(j=1,\ldots,M\), \(t=1,\ldots,d\), separately, by Eq. (4).
The simulation results in Section 3 show that the above two-step procedure performs well in terms of bias and provides similar standard errors to those of Lee et al. However, for large values of \(d\), the two-step procedure leads to a substantial gain in computation time compared to estimating \(p+d\) parameters simultaneously. Estimating \(d\) single-dimensional parameters and one \(p\)-dimensional parameter is often faster than estimating \(p+d\) parameters together.
Consistency and asymptotic normality of conditional maximum likelihood estimators are given by Andersen [18], but under a different setting in which the dataset has a group structure, observations between groups are independent, and the \(\alpha\)'s represent the group-specific parameters. Then, the asymptotic results hold under some regularity conditions, as the number of groups goes to infinity. Moreover, the inverse of the information matrix based on the conditional-likelihood function provides an asymptotic covariance matrix for the conditional maximum likelihood estimators. In our setting, since the groups are defined by the time \(t\), \(t=1,\ldots,d\), the pseudo-observations between groups are not independent. However, our conjecture is that in our case as well, the above conditional maximum likelihood estimators are consistent and asymptotically normal as \(n\) goes to infinity, with a consistent variance estimator based on the inverse of the information
matrix of the conditional likelihood function. This speculation is supported by a comprehensive simulation study summarized in Section 3.
The advancement in data collection technologies has resulted in a significant increase in the number of potential predictors. Separating the estimation of \(\beta_{j}\) and \(\alpha_{jt}\) is highly relevant in dimension reduction or model selection regression problems, as, for instance, applying methods that keep a subset of predictors and discard the rest would only involve working with \(\beta_{j}\). Here are two examples:
1. Regularized regression [11]. Penalized regression methods (e.g., LASSO, adaptive LASSO, elastic net) place a constraint on the size of the regression coefficients. We propose to apply penalized regression methods in Lagrangian form based on Eq. (3) by minimizing \[-\log L_{j}^{\mathcal{C}}(\beta_{j})+\eta_{j}P(\beta_{j})\;\;,\;\;j=1,\ldots,M\,,\] (5) where \(P\) is a penalty function and \(\eta_{j}>0\) is a shrinkage tuning parameter. The parameters \(\alpha_{jt}\) are estimated once the regularization step is completed and a model is selected. Clearly, any routine of regularized Cox regression model can be used for estimating \(\beta_{j}\), \(j=1,\ldots,M\), based on Eq. (5) (e.g., glmnet of R or CoxPHFitter of Python).
2. Sure independent screening. Under ultra-high dimension settings, most of the regularized methods suffer from the curse of dimensionality, high variance and over-fitting [11, 19]. To overcome these issues, the marginal screening technique, sure independent screening (SIS) has been shown to filter out many uninformative variables under an ordinary linear model with normal errors [12]. Subsequently, penalized variable selection methods are often applied to the remaining variables. The key idea of the SIS procedure is to rank all predictors by using a utility measure between the response and each predictor and then to retain the top variables. The SIS procedure has been extended to various models and data types such as generalized linear models [20], additive models [21], and Cox regression models [13] among others. We propose to adopt the screening method for Cox regression of Zhao and Li [13] based on Eq. (3) (since it has a form of partial likelihood of a Cox model with a particular data structure). Namely, the objective is to maximize \(L_{j}^{\mathcal{C}}\) for each covariate, one at a time. The final screened model for event type \(j\) is the set of covariates whose absolute standardized estimated coefficients exceed a pre-determined threshold. We recommend using a random permutation of the observations to obtain a data-driven threshold [22].
The above proposed estimation procedure can easily handle covariates or coefficients that change over time, \(Z(t)\) and \(\beta_{j}(t)\), respectively. Similarly to the Cox model with continuous survival time, the simplest way to code time-dependent covariates uses intervals of time. Then, the data is encoded by breaking the individual's time into multiple time intervals, with one row of data for each interval. Hence combining this data expansion step with the expansion described in Table 1 is straightforward. For time-dependent coefficients, \(\beta_{j}(t)\), Eq. (3) is replaced by
\[L_{j}^{\mathcal{C}}(\beta_{j}(t))=\frac{\exp\{\sum_{i\in\mathcal{C}_{t}}\delta_{jit}Z_{i}^{T}\beta_{j}(t)\}}{\sum_{d_{jt}\in\mathcal{S}_{t}}\exp\{\sum_{i\in\mathcal{C}_{t}}d_{jit}Z_{i}^{T}\beta_{j}(t)\}}\;\;,\;\;j=1,\ldots,M\,,\,t=1,\ldots,d\,.\]
Clearly, one can easily combine time-dependent covariate with time-dependent coefficients.
## 3 Simulation Study
To evaluate and demonstrate the utility of our proposed approach, we performed a comprehensive simulation study and compared it with Lee et al. [2]. Both methods were implemented in Python using the PyDTS package [14]. Our simulation study included an evaluation of the estimation performance for two competing events, followed by a setting with three competing events. We also compared the computation time of both estimation methods and concluded with a simulation setting including a LASSO regularization.
We considered sample sizes of \(n=5,000,10,000,15,000\) and \(20,000\). The vector of covariates \(Z\) is of dimension \(p=5\), and each covariate was sampled from a standard uniform distribution. For each observation, based on the sampled covariate \(Z\) and the true model of Eq. (1), the event type was sampled, and then the failure time, with \(d=30\). The parameters' values were set to be \(\alpha_{1t}=-2.0-0.2\log t\), \(\alpha_{2t}=-2.2-0.2\log t\), \(t=1,\ldots,30\), \(\beta_{1}=-(\log 0.8,\log 3,\log 3,\log 2.5,\log 2)\), and \(\beta_{2}=-(\log 1,\log 3,\log 4,\log 3,\log 2)\). Finally, the censoring times were sampled from a discrete uniform distribution with probability 0.01 at each \(t\leq 30\). The simulation results are based on 200 repetitions of each setting. Table 2 and Fig. 1 summarise the results of \(\beta_{j}\) and \(\alpha_{jt}\), respectively, for two competing risks, in terms of mean and standard errors. The respective results with three competing risks are provided in Tables 3 and 4 and Figure 2. Evidently, both methods perform similarly in terms of bias and standard errors. In addition, the empirical coverage rates of 95% Wald-type confidence intervals for each regression coefficient, based on the proposed approach, are reasonably close to 95%.
For demonstrating the reduction in computation time as a function of \(d\), a sample size of \(n=20,000\) observations was considered with \(p=10\) covariates, two competing events and \(d=25,50,75,100,125\) and \(150\). Furthermore, \(\alpha_{1t}=-2.5-0.3\log t\), \(\alpha_{2t}=-2.8-0.3\log t\), \(t=1,\ldots,d\), \(\beta_{1}=-0.5(\log 0.8,\log 3,\log 3,\log 2.5,\log 4,\log 1,\log 3,\log 2,\log 2, \log 3)\), and \(\beta_{2}=-0.5(\log 1,\log 3,\log 2,\log 1,\log 4,\log 3,\log 4,\log 3,\log 3,\log 2)\). Sampling and estimation were repeated 10 times for each value of \(d\). The median and the interquartile range of the fitting times are presented in Figure 3. These results are based on a single CPU out of 40 CPUs server of type Intel Xeon Silver 4114 CPU @ 2.20GHz X 2 and 377GB RAM. Evidently, under low values of \(d\), the computation times of the two approaches are comparable. However, as \(d\) increases, the advantage of the proposed approach, in terms of computation time, increases as well. Moreover, when running this analysis using hardware with 16GB RAM, the estimation procedure of Lee et al. reached a memory error at a low value of \(d\), while the two-step procedure was completed successfully.
Finally, we provide simulation results of the proposed two-step approach with LASSO regularization. A sample size of \(n=10,000\) observations with \(p=100\) covariates were considered. Two settings of zero-mean normally distributed covariates were considered: (i) independent covariates, each with variance 0.4; (ii) the following covariances were updated in setting (i) \(Cov(Z_{1},Z_{9})=0.1\), \(Cov(Z_{2},Z_{10})=0.3\), \(Cov(Z_{4},Z_{8})=-0.3\), and \(Cov(Z_{5},Z_{12})=-0.1\). In order to get appropriate survival probabilities based on Eq. (1), covariates were truncated to be within \([-1.5,1.5]\). The parameters of the model were set to be
\(\alpha_{1t}=-3.4-0.1\log t\), \(\alpha_{2t}=-3.4-0.2\log t\), \(t=1,\ldots,15\). The first five components of \(\beta_{1}\) and \(\beta_{2}\) were set to be \((1.2,1.5,-1,-0.3,-1.2)\) and \((-1.2,1,1,-1,1.4)\), respectively, and the rest of the coefficients were set to zero.
The tuning parameters \(\eta_{j}\), \(j=1,\ldots,M\), control the amount of regularization, hence their values play a crucial role. In this work (and in our Python package PyDTS) the values of \(\eta_{j}\), \(j=1,\ldots,M\), are selected by K-fold cross validation, where the criterion is to maximize the out-of-sample global area under the receiver operating characteristic curve (AUC). Appendix A provides the definitions and estimators of the area under the receiver operating characteristic curve and Brier score for discrete-survival data with competing risks and right censoring. This includes the cause-specific AUC and Brier score at each time \(t\), AUC\({}_{j}(t)\) and BS\({}_{j}(t)\), respectively; integrated cause-specific AUC and Brier score, AUC\({}_{j}\) and BS\({}_{j}\); and global AUC and Brier score, AUC and BS.
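This selection can be organized as a plain grid search over \(\log\eta_{j}\). The sketch below uses scikit-learn only for the fold splitting; `fit_penalized` and `global_auc` are hypothetical placeholders for a penalized two-step fit of Eq. (5) and for the global AUC estimator of Appendix A, not functions of any particular package.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_log_eta(df, log_eta_grid, fit_penalized, global_auc, n_splits=5, seed=0):
    """Return the log(eta) value maximizing the mean out-of-sample global AUC."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    mean_auc = []
    for log_eta in log_eta_grid:
        fold_scores = []
        for train_idx, test_idx in kf.split(df):
            model = fit_penalized(df.iloc[train_idx], eta=np.exp(log_eta))  # placeholder fit
            fold_scores.append(global_auc(model, df.iloc[test_idx]))        # placeholder score
        mean_auc.append(np.mean(fold_scores))
    return log_eta_grid[int(np.argmax(mean_auc))]
```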
Fig. 4 demonstrates the results of one simulated dataset under setting (i) of independent covariates, with 5-fold cross validation and \(\log\eta_{j}\) varying between -8 and -2.5 with a step size of 0.25. The selected values \(\log\eta_{1}=-5.75\) and \(\log\eta_{2}=-5.5\) are shown by vertical dashed lines on panels **a-d**. Panels **a-b** show the mean number of non-zero coefficients for events 1 and 2, respectively, over the 5 folds, and the true value 5 is shown as a grey horizontal dashed line. The values of the estimated \(\beta_{j}\) as a function of \(\eta_{j}\) are shown in panels **c** and **d**. Panel **e** shows the mean (and SD bars) of the 5 folds' \(\widehat{\text{AUC}}_{1}(t)\) under the selected values of \(\eta_{j}\). The numbers of observed events of type 1 are shown as blue bars. Finally, panel **f** shows the respective results for event 2. The SD bars attached to the mean of the 5 folds' \(\widehat{\text{AUC}}_{j}(t)\) do not take into account the variability in the model estimation. The respective results of setting (ii) are given in Fig. 5.
Based on the one simulated dataset of setting (i) (depicted in Fig. 4) and the selected values of \(\eta_{j}\), \(j=1,2\), the means and standard deviations (SD) of the 5 folds' integrated cause-specific \(\widehat{\text{AUC}}_{j}\) were \(\widehat{\text{AUC}}_{1}=0.796\) (SD=0.007) and \(\widehat{\text{AUC}}_{2}=0.803\) (SD=0.007), with a mean global \(\widehat{\text{AUC}}=0.8\) (SD=0.003). The mean global AUC of the non-regularized procedure was \(\widehat{\text{AUC}}=0.795\) (SD=0.002). Looking at this specific example, we observe a substantial reduction in the number of covariates selected by the LASSO penalty, without a significant change in the discrimination performance as measured by the AUC. The mean integrated cause-specific Brier Scores were \(\widehat{\text{BS}}_{1}=0.045\) (SD=0.002) and \(\widehat{\text{BS}}_{2}=0.044\) (SD=0.003), with a mean global Brier Score \(\widehat{\text{BS}}=0.044\) (SD=0.002). Similar results were observed for the one simulated dataset of setting (ii) (depicted in Fig. 5): \(\widehat{\text{AUC}}_{1}=0.796\) (SD=0.007), \(\widehat{\text{AUC}}_{2}=0.801\) (SD=0.008), and a global \(\widehat{\text{AUC}}=0.799\) (SD=0.005) under the selected \(\eta_{j}\), with \(\widehat{\text{AUC}}=0.794\) (SD=0.005) for the non-regularized procedure. The mean Brier Scores were \(\widehat{\text{BS}}_{1}=0.046\) (SD=0.002), \(\widehat{\text{BS}}_{2}=0.043\) (SD=0.003), and \(\widehat{\text{BS}}=0.045\) (SD=0.001).
Based on 100 repetitions of setting (i), the mean and empirical standard errors of the selected values of the tuning parameters \(\eta_{1}\) and \(\eta_{2}\), in \(\log\) scale, were -5.68 (SD=0.174) and -5.49 (SD=0.150). The mean of true- and false-positive discoveries for each event type, TP\({}_{j}\) and FP\({}_{j}\), \(j=1,2\), under the selected values of \(\eta_{j}\) were TP\({}_{1}=4.99\), FP\({}_{1}=0.01\), TP\({}_{2}=5\), and FP\({}_{2}=0\). The results indicate that the correct model was selected in all the 100 repetitions, except for a single run for \(j=1\).
## 4 MIMIC Data Analysis - Length of Hospital Stay in ICU
Although the MIMIC dataset records admission and discharge times in minute-level resolution, it is advisable to conduct survival analysis in discrete units of days. This is because the times of admission and discharge within a day are heavily influenced by hospital staff and regulations, and are less indicative of the patients' health status. The present study examines 25,170 ICU admissions (48.8% female) between 2014 and 2020, with a total of 28 unique days. The study considers three competing events: discharge to home (\(J=1\), 69.0%), transfer to another medical facility (\(J=2\), 21.4%), and in-hospital death (\(J=3\), 6.1%). Patients who left the ICU against medical advice (1.0%) were treated as censored data, and administrative censoring was applied for patients who were hospitalized for more than 28 days (2.5%).
The following analysis is restricted to admissions classified as "emergency", with a distinction between direct emergency and emergency ward (EW). In case of multiple admissions per patient, the latest one is included. Emergency admission history is represented by two covariates: the number of previous emergency admissions (named Admissions Number), and a dummy variable indicating whether the previous admission ended within 30 days prior to the last one (named Recent Admission). Additional features included in the analysis are: year of admission (available in resolution of three years); standardized age at admission; a binary variable indicating night admission (between 20:00 and 8:00); ethnicity (Asian, Black, Hispanic, White, Other); and lab test results (normal or abnormal) performed upon arrival and with results within the first 24 hours of admission. The analysis includes 36 covariates in total. Tables 5 and 6 summarize the covariates distribution.
The data were analysed by three methods: Lee et al., the proposed two-step approach, and the proposed two-step approach with LASSO. For the latter, the selection of \(\eta_{j}\), \(j=1,2,3\), was carried out using 4-fold cross validation, by maximizing the out-of-sample global AUC. In this application, \(\log\eta_{j}\) was allowed to vary from -12 to -1, in steps of 1. The resulting selected values of \(\log\eta_{j}\), \(j=1,2,3\), were -5, -9 and -11. The results of the three procedures are presented in Tables 7-9 and Fig. 6. The parameters' estimators were similar between Lee et al.'s approach and the two-step procedure without regularization, as expected.
The global AUCs of the proposed approach without and with LASSO penalty were highly similar, \(\widehat{\text{AUC}}=0.649\) (SD=0.003) and \(\widehat{\text{AUC}}=0.651\) (SD=0.003) (the SDs in parentheses are based on the 4 folds and do not take into account the variability due to the model estimation). By adding LASSO regularization, the number of predictors for each event type was reduced, but the corresponding estimators for \(\alpha_{jt}\) remained highly similar. It is worth noting that the estimators of \(\alpha_{jt}\) tend to increase as the number of observed events of type \(j\) at time \(t\) increases.
The estimated coefficients for lab tests in the discharge-to-home (\(j=1\)) model were all negative, consistent with the expected result that abnormal test results at admission reduce the hazard of home discharge. Older age and recent admission were also found to reduce this hazard, while being married and having Medicare or "other" insurance increased it. Female gender, admission number, direct emergency admission, and night admission had a relatively small impact on this hazard. LASSO regularization excluded several features from the model, including admissions number, night admission, direct emergency admission, ethnicity, Medicare insurance, single or widowed status, sex, and certain lab tests (Anion Gap, MCH, MCV, Magnesium, Phosphate, Platelet count, and Potassium).
The hazard of being discharged for further treatment (\(j=2\)) is primarily increased by admissions number, White ethnicity, Medicare insurance, single or widowed marital status, and older age. Direct emergency admission and being married decrease the hazard. Most lab test results had a minor impact on the hazard, except for white blood cell count, RDW, platelet count, glucose, creatinine, and bicarbonate, which reduced the hazard of being discharged for further treatment when abnormal. LASSO regularization excluded only a few lab tests (Anion Gap, Chloride, MCHC, and MCV) and recent admission. The main factors that increased the hazard and were included in the model were admissions number, single or widowed marital status, Medicare insurance, and older age, while direct emergency admission, being married, and abnormal results of bicarbonate, creatinine, glucose, and platelet count decreased the hazard.
The hazard of in-hospital death (\(j=3\)) had the lowest number of observed events, resulting in noisier estimators, especially in later times. The LASSO penalty had only a minor effect in terms of the number of excluded features. Lab test results that increased the hazard of in-hospital death were abnormal Anion Gap, Bicarbonate, Creatinine, Magnesium, White Blood Cells, RDW, and Sodium. Some of these lab test results had already been identified as predictors of in-hospital mortality in previous studies [23, 24, 25, 26]. Other lab test results that increased the hazard of in-hospital death were abnormal Calcium total, Chloride, Glucose, Phosphate, Platelet Count, Potassium, Urea Nitrogen, and Red Blood Cells. Admissions number, "other" ethnicity, married status, recent admission, and older age also increased the hazard of in-hospital death. Direct emergency admission, black, Hispanic, or white ethnicity, and Medicare or "other" insurance types decreased the hazard of in-hospital death.
In this work, we applied the standard LASSO regularization. Alternatively, one may apply group-LASSO [27], for example, for the features that includes multiple dummy variables.
For the first 14 days of hospitalization the values of \(\text{AUC}_{j}(t)\) were higher than for later days. This may be due to several reasons. First, the number of observed events for these days is higher. Second, a short length of stay can be a consequence of the severity of illness, with short-term in-hospital death occurring for the severe cases and short-term discharge for the mild cases, making them easier to identify. Lastly, as treatment progresses, the effect of the initial condition may decrease while the treatment effect increases, making it difficult to distinguish between events occurring during days 14-28 based on the covariates measured upon admission. The integrated cause-specific AUCs were \(\widehat{\text{AUC}}_{1}=0.642\) (SD=0.002), \(\widehat{\text{AUC}}_{2}=0.655\) (SD=0.012), and \(\widehat{\text{AUC}}_{3}=0.740\) (SD=0.006), with a global \(\widehat{\text{AUC}}=0.651\) (SD=0.003). The integrated cause-specific Brier Scores were \(\widehat{\text{BS}}_{1}=0.105\) (SD=0.002), \(\widehat{\text{BS}}_{2}=0.042\) (SD=0.001), and \(\widehat{\text{BS}}_{3}=0.010\) (SD=0.001), with a global Brier Score of \(\widehat{\text{BS}}=0.085\) (SD=0.001).
## 5 Discussion
This work provides a new estimation procedure for a semi-parametric logit-link survival model of discrete time with competing events. The proposed two-step estimation procedure simplifies the collapsed log-likelihood approach of Lee et al. [2] by separating the estimation of \(\beta_{j}\) and \(\alpha_{jt}\). Our procedure has two main advantages. It is significantly faster than existing methods and requires a smaller amount of memory resources. Additionally, it allows the inclusion of modern machine-learning model-selection procedures, such as
regularization and screening. These additions can be highly useful in datasets that include a large set of covariates. Extensive simulation study demonstrated that the proposed approach performs well in terms of empirical bias and coverage rates.
## Data and Code Availability Statement
Code is available under the GNU GPLv3 at the PyDTS package repository ([https://github.com/tomer1812/pydts/](https://github.com/tomer1812/pydts/)) and at the repository ([https://github.com/tomer1812/DiscreteTimeSurvivalPenalization](https://github.com/tomer1812/DiscreteTimeSurvivalPenalization)). The MIMIC dataset is accessible at [https://physionet.org/content/mimiciv/2.0/](https://physionet.org/content/mimiciv/2.0/) and subjected to PhysioNet credentials.
## Competing Interests Statement
The authors declare no competing interests.
## Acknowledgements
M.G.'s work was supported by the ISF 767/21 grant and a Malag competitive grant in data science (DS).
|
2310.20070
|
Beliaev damping in Bose gas
|
According to the Bogoliubov theory the low energy behaviour of the Bose gas
at zero temperature can be described by non-interacting bosonic quasiparticles
called phonons. In this work the damping rate of phonons at low momenta, the
so-called Beliaev damping, is explained and computed with simple arguments
involving the Fermi Golden Rule and Bogoliubov's quasiparticles.
|
Jan Dereziński, Ben Li, Marcin Napiórkowski
|
2023-10-30T22:50:18Z
|
http://arxiv.org/abs/2310.20070v2
|
# Beliaev damping in Bose gas
###### Abstract.
According to the Bogoliubov theory the low energy behaviour of the Bose gas at zero temperature can be described by non-interacting bosonic quasiparticles called phonons. In this work the damping rate of phonons at low momenta, the so-called Beliaev damping, is explained and computed with simple arguments involving the Fermi Golden Rule and Bogoliubov's quasiparticles.
## 1. Introduction
The Bose gas near zero temperature has curious properties that can be partly explained from first principles by a beautiful argument that goes back to Bogoliubov [5]. In Bogoliubov's approach the Bose gas at zero temperature can be approximately described by a gas of weakly interacting quasiparticles. The dispersion relation of these quasiparticles, that is, their energy as a function of the momentum, is described by a function \(\mathbf{k}\mapsto e_{\mathbf{k}}\) with an interesting shape. At low momenta these quasiparticles are called phonons and \(e_{\mathbf{k}}\approx ck\), where \(c>0\) and \(k:=|\mathbf{k}|\). Thus the low-energy dispersion relation is very different from the non-interacting, quadratic one. It is responsible for superfluidity of the Bose gas.
It is easy to see that phonons could be metastable, because energy-momentum conservation may not prohibit them from decaying into two or more phonons. This decay rate was first computed in perturbation theory by Beliaev [2], hence the name _Beliaev damping_. According to his computation, the imaginary part of the dispersion relation behaves for small momenta as \(-c_{\mathrm{Bel}}k^{5}\). This implies the exponential decay of phonons with the decay rate \(2c_{\mathrm{Bel}}k^{5}\). The Beliaev damping has been observed in experiments, and appears to be consistent with its theoretical predictions [25, 22].
In our paper we present a systematic derivation of Beliaev damping. Our presentation differs in several points from similar accounts found in the physics literature. We try to make all the arguments as transparent as possible, without hiding the less rigorous steps. We avoid using diagrammatic techniques, in favor of a mathematically much clearer picture involving a Bogoliubov transformation and the 2nd order perturbation computation (the so-called Fermi Golden Rule) applied to what we call the effective Friedrichs Hamiltonian. We use the grand-canonical picture instead of the canonical one found in a part of the literature. This is a minor difference; on this level both pictures should lead to the same final result. We believe that the derivation of Beliaev damping is a beautiful illustration of the methods of many-body quantum physics, which is quite convincing even if not fully rigorous.
In the remaining part of the introduction we provide a brief sketch of the main steps of Beliaev's argument. In the main body of our article we discuss these steps in more detail, indicating which parts can be easily made rigorous.
Let \(v\) be a real function satisfying \(v(x)=v(-x)\) that decays fast at infinity. (Later we will need more assumptions on \(v\).) The homogeneous Bose gas of \(N\) particles interacting through the pair potential \(v\), and its grand-canonical Hamiltonian, are introduced in detail in Section 4. In the Bogoliubov approximation the excitations of this gas are quasiparticles with the dispersion relation
\[e_{\mathbf{k}} := \sqrt{\frac{1}{4}|\mathbf{k}|^{4}+\frac{\hat{v}(\mathbf{k})}{\hat{v}( 0)}\mu|\mathbf{k}|^{2}}. \tag{6}\]
Thus, the Bogoliubov approximation states that
\[H\approx E_{\mathrm{Bog}}+H_{\mathrm{Bog}} \tag{7}\]
where \(E_{\mathrm{Bog}}\) is a constant, which will not be relevant for our analysis. The operator \(b_{\mathbf{k}}^{*}\) is the creation operator of the _quasiparticle_ with momentum \(\mathbf{k}\). It is a linear combination of \(a_{\mathbf{k}}^{*},a_{-\mathbf{k}}\). The operator \(H_{\mathrm{Bog}}=\sum_{\mathbf{k}\neq 0}e_{\mathbf{k}}b_{\mathbf{k}}^{*}b_{\mathbf{k}}\) is sometimes called a _Bogoliubov Hamiltonian_. It describes independent quasiparticles with the _dispersion relation_ \(e_{\mathbf{k}}\). The _Bogoliubov vacuum_, annihilated by \(b_{\mathbf{k}}\) and denoted \(\Omega_{\mathrm{Bog}}\), is its ground state, and can be treated as an approximate ground state of the many-body system. The Bogoliubov Hamiltonian is still translation invariant: in fact, it commutes with the total momentum, described (without any approximation) by
\[P=\sum_{\mathbf{k}\neq 0}\mathbf{k}b_{\mathbf{k}}^{*}b_{\mathbf{k}}. \tag{8}\]
It is easy to describe the thermodynamic limit of (5) and (8): we simply replace the summation by integration, without changing the dispersion relation:
\[H_{\mathrm{Bog}} =\int e_{\mathbf{k}}b_{\mathbf{k}}^{*}b_{\mathbf{k}}\,\mathrm{d} \mathbf{k}, \tag{9}\] \[P =\int\mathbf{k}b_{\mathbf{k}}^{*}b_{\mathbf{k}}\,\mathrm{d} \mathbf{k}. \tag{10}\]
It is interesting to visualize possible energy-momentum values of the system or, in a more precise mathematical language, the joint spectrum of the total momentum \(P\) and the Bogoliubov Hamiltonian \(H_{\mathrm{Bog}}\). On the 1-quasiparticle space this joint spectrum is given by the graph of the function \(\mathbf{k}\mapsto e_{\mathbf{k}}\). On fig. 1 we show a typical form of the dispersion relation in the low momentum region, marked with the black line. The green line denotes the bottom of the 2-quasiparticle spectrum, that is the joint spectrum of \((H_{\mathrm{Bog}},P)\) in the 2-quasiparticle sector. The bottom of the full joint spectrum of \((H_{\mathrm{Bog}},P)\) is marked with a red dashed line.
One can perform an additional step in the Bogoliubov approach. If the potential \(v\) has a very small support, one can argue that \(\frac{\hat{v}(\mathbf{k})}{\hat{v}(0)}\) can be approximated by 1. One then usually says that the interaction is given by contact potentials, which are presented in the position representation as \(v(x)=a\delta(x)\), where \(a\) is a constant, called the
Figure 1. Joint spectrum of \((H_{\mathrm{Bog}},P)\) for generic potentials
however, strictly speaking is not correct. The delta function needs a renormalization to become a well-defined interaction in the two-body case; in the \(N\)-body case the situation is even more problematic. Anyway, in this approximation, which is valid in the dilute case, we obtain a simpler dispersion relation
\[e_{\mathbf{k}}=\sqrt{\frac{1}{4}|\mathbf{k}|^{4}+\mu|\mathbf{k}|^{2}}. \tag{11}\]
On fig. 2 we show the energy-momentum spectrum corresponding to (11).
The Hamiltonian \(H_{\mathrm{Bog}}\), both with the dispersion relation (6) and (11) has remarkable physical consequences. Note first that the dispersion relation \(\mathbf{k}\mapsto e_{\mathbf{k}}\) has a linear cusp at the bottom. It also has a positive critical velocity, that is,
\[c_{\mathrm{crit}}:=\sup\{c\mid e_{\mathbf{k}}\geq ck,\quad\mathbf{k}\in\mathbb{R}^{3}\}>0. \tag{12}\]
In other words, the graph \(\mathbf{k}\mapsto e_{\mathbf{k}}\) is above \(\mathbf{k}\mapsto c_{\mathrm{crit}}k\). The full joint spectrum \(\sigma(P,H_{\mathrm{Bog}})\) is still above \(\mathbf{k}\mapsto c_{\mathrm{crit}}k\). This is interpreted as one of the most important properties of superfluidity: a droplet of the Bose gas travelling with speed less than \(c_{\mathrm{crit}}\) has negligible friction (see e.g. [11]).
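For instance, for the contact dispersion relation (11) one can check directly that

\[\frac{e_{\mathbf{k}}}{k}=\sqrt{\frac{1}{4}k^{2}+\mu}\geq\sqrt{\mu},\qquad e_{\mathbf{k}}=\sqrt{\mu}\,k+\frac{k^{3}}{8\sqrt{\mu}}+O(k^{5}),\]

so that \(c_{\mathrm{crit}}=\sqrt{\mu}\), and the slope of the phonon branch at \(\mathbf{k}=0\) coincides with the critical velocity.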
Of course, \(H_{\mathrm{Bog}}\) yields only an approximate description of the Bose gas. In reality, one cannot treat the quasiparticles given by \(b_{\mathbf{k}}^{*},b_{\mathbf{k}}\) as fully independent. In the derivation of the Bogoliubov Hamiltonian various terms were neglected. In particular, terms of the third and fourth degree in \(b_{\mathbf{k}}^{*},b_{\mathbf{k}}\) were dropped. Replacing \(v\) by \(\kappa v\) we obtain an (artificial) coupling constant, to be set to \(1\) at the end. The third order terms are multiplied by \(\sqrt{\kappa}\) and the quartic terms by \(\kappa\). We argue that the quartic terms are of lower order and can be dropped. The third order terms have the form
\[\frac{1}{\sqrt{L^{3}}}\sum_{\mathbf{k},\mathbf{p},\mathbf{k}+\mathbf{p}\neq 0}u_{\mathbf{k},\mathbf{p}}b_{\mathbf{k}}^{*}b_{\mathbf{p}}^{*}b_{\mathbf{k}+\mathbf{p}}+\overline{u_{\mathbf{k},\mathbf{p}}}b_{\mathbf{k}+\mathbf{p}}^{*}b_{\mathbf{k}}b_{\mathbf{p}} \tag{13}\] \[+\frac{1}{\sqrt{L^{3}}}\sum_{\mathbf{k},\mathbf{p},\mathbf{k}+\mathbf{p}\neq 0}w_{\mathbf{k},\mathbf{p}}b_{\mathbf{k}}^{*}b_{\mathbf{p}}^{*}b_{-\mathbf{k}-\mathbf{p}}^{*}+\overline{w_{\mathbf{k},\mathbf{p}}}b_{-\mathbf{k}-\mathbf{p}}b_{\mathbf{k}}b_{\mathbf{p}}. \tag{14}\]
We will argue (see Section 6) that triple creation and triple annihilation terms do not contribute to the decay of phonons. Thus we drop also (14).
Let us investigate what happens with the quasiparticle state \(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}\) under the perturbation (13). The state \(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}\) couples only to the \(2\)-quasiparticle sector. By taking
Figure 2. Joint spectrum of \((H_{\mathrm{Bog}},P)\) for contact potentials
thermodynamic limit we can assume that the variable \(\mathbf{k}\) is continuous. Thus the perturbed quasiparticle can be described by the space \(\mathbb{C}\oplus L^{2}(\mathbb{R}^{3})\) with the Hamiltonian
\[H_{\mathrm{Fried}}(\mathbf{k}):=\begin{bmatrix}e_{\mathbf{k}}&(h_{\mathbf{k}} |\\ |h_{\mathbf{k}})&e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\end{bmatrix}, \tag{15}\]
and \(h_{\mathbf{k}}\) can be derived from (13). Hamiltonians similar to this one are well understood. They are often used as toy models in quantum physics and are sometimes called _Friedrichs Hamiltonians_.
It is important to notice that, if we set \(\hat{v}=0\), so that the off-diagonal terms in (15) disappear, the unperturbed quasiparticle energy \(e_{\mathbf{k}}\) lies inside the continuous spectrum of \(2\)-quasiparticle excitations \(\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\mid\mathbf{p}\in\mathbb{R}^{3}\}\), at least for small momenta. (To be able to say this we need thermodynamic limit which makes the momentum continuous.) To see this, note that if \(\mathbf{k}\mapsto e_{\mathbf{k}}\) is convex we have a particularly simple expression (cf. Lemma 1) for the infimum of the \(2\)-quasiparticle spectrum:
\[\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\}=2e_{\mathbf{k}/ 2}. \tag{16}\]
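Indeed, for a convex dispersion relation the midpoint convexity inequality gives

\[\frac{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}}{2}\geq e_{\frac{1}{2}(\mathbf{p}+(\mathbf{k}-\mathbf{p}))}=e_{\mathbf{k}/2},\]

with equality at \(\mathbf{p}=\mathbf{k}/2\), so the infimum in (16) is attained at the symmetric splitting of the momentum.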
Now (11) is strictly convex, hence \(e_{\mathbf{k}}\) lies inside the continuous spectrum of \(2\)-quasiparticle excitations. The generic dispersion relation (6) is convex for small momenta, hence this property is true at least for small momenta.
Because of that, one can expect that the position of the singularity of the resolvent of (15) becomes complex; it then describes a resonance and not a bound state. This is interpreted as the instability of the quasiparticle: its decay rate is twice the absolute value of the imaginary part of the resonance.
The second order perturbation theory, often called the _Fermi Golden Rule_, says that in order to compute the (complex) energy shift of an eigenvalue we need to find the so-called self-energy \(\Sigma_{\mathbf{k}}(z)\), which for \(z\not\in\mathbb{R}\) in our case is given by the integral
\[\Sigma_{\mathbf{k}}(z)=\frac{1}{(2\pi)^{3}}\int\frac{h_{\mathbf{k}}^{2}( \mathbf{p})\,\mathrm{d}\mathbf{p}}{(z-e_{\mathbf{p}}-e_{\mathbf{k}-\mathbf{p}} )}. \tag{17}\]
Then \(\Sigma_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)\) should give the energy shift of the eigenvalue \(e_{\mathbf{k}}\).
The imaginary part of this shift is much easier to compute. In fact, applying the Sochocki-Plemelj formula \(\frac{1}{x+\mathrm{i}0}=\mathcal{P}\frac{1}{x}-\mathrm{i}\pi\delta(x)\) we obtain
\[\mathrm{Im}\Sigma_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)=-\frac{1}{8\pi^{2}}\int h_{\mathbf{k}}^{2}(\mathbf{p})\delta(e_{\mathbf{k}}-e_{\mathbf{p}}-e_{\mathbf{k}-\mathbf{p}})\,\mathrm{d}\mathbf{p}. \tag{18}\]
In Theorem 2 we prove that if \(e_{\mathbf{k}}\) is given by (11), then
\[\mathrm{Im}\Sigma_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)=-c_{\mathrm{Bel}}k^{5}+O(k^{6})\qquad\text{as}\qquad k\to 0,\qquad c_{\mathrm{Bel}}=\frac{3\hat{v}(0)}{640\pi^{2}\mu}. \tag{19}\]
In fact, our result could be also extended to the case of (6), but for the sake of clarity of the presentation we present the proof only for (11). Physically (19) means that quasiparticles are almost stable for small \(k\) with the lifetime proportional to \(k^{-5}\). (19) is the main result of our paper.
We remark that our analysis is based on the grand-canonical approach where \(\mu\) is the chemical potential. One can go back to the canonical picture. To this end one determines the chemical potential as a function of the density. In the Bogoliubov approximation one obtains to leading order that \(\rho\approx\mu/\hat{v}(0)\). Furthermore, also \(\rho\approx\rho_{0}\), where \(\rho_{0}\) is the condensate density, holds to leading order and thus the proportionality constant can be written as
\[c_{\mathrm{Bel}}=\frac{3}{640\pi^{2}\rho_{0}}, \tag{20}\]
which is the form of this result which is usually stated in the physics literature ([36, 19, 28, 13]).
One could naively expect that the same method gives the correction to the real part of the dispersion relation. Unfortunately, \(\operatorname{Re}\Sigma_{\mathbf{k}}(z)\) obtained from (17) is ill defined because of the divergence of the integral at large momenta. One can impose a cut-off and try to renormalize. For instance, one can replace \(h_{\mathbf{k}}(\mathbf{p})\) by
\[h_{\mathbf{k}}^{\Lambda}(\mathbf{p}):=h_{\mathbf{k}}(\mathbf{p})\theta(\Lambda-|\mathbf{p}|-|\mathbf{k}-\mathbf{p}|), \tag{21}\]
where \(\theta\) is the Heaviside function. (Note that the details of the cutoff are not physically relevant; (21) is especially convenient for computations, because it respects the natural symmetry of the problem). The cut-off self-energy
\[\Sigma_{\mathbf{k}}^{\Lambda}(z)=\frac{1}{(2\pi)^{3}}\int\frac{(h_{\mathbf{k} }^{\Lambda}(\mathbf{p}))^{2}\operatorname{d}\!\mathbf{p}}{(z-e_{\mathbf{p}}-e _{\mathbf{k}-\mathbf{p}})} \tag{22}\]
is well defined.
Let us now try to remove the dependence of the self-energy on the cutoff. The most satisfactory renormalization scenario would be to find a counterterm \(c^{\Lambda}\) independent of \(\mathbf{k}\) so that
\[\lim_{\Lambda\to\infty}\big{(}\Sigma_{\mathbf{k}}^{\Lambda}(z)-c^{\Lambda} \big{)}\quad\text{ exists.} \tag{23}\]
An initial positive result suggests that one can hope for a removal of the ultraviolet cutoff in the self-energy: there exists the limit
\[\lim_{\Lambda\to\infty}\big{(}\Sigma_{\mathbf{k}}^{\Lambda}(z)-\Sigma_{ \mathbf{k}}^{\Lambda}(0)\big{)}. \tag{24}\]
Unfortunately, \(\lim_{\mathbf{k}\to 0}\Sigma_{\mathbf{k}}^{\Lambda}(0)=\infty\), which implies that finding a \(c^{\Lambda}\) such that (23) is true is impossible. This is the content of Theorem 3. Thus the physical meaning of the quantity (24) is dubious, because the counterterm \(\Sigma_{\mathbf{k}}^{\Lambda}(0)\) depends on the momentum \(\mathbf{k}\). We leave the interpretation of this result open.
One can conclude that perturbation theory around the Bogoliubov Hamiltonian provides a reasonable method to find the second order imaginary correction to the dispersion relation. However, by this method we seem unable to compute its real part. This is not very surprising. It is a general property of Friedrichs Hamiltonians with singular off-diagonal terms: the imaginary part of the perturbed eigenvalue can be computed much more reliably than its real part. We describe this briefly in Sections 2 and 3.
The above problem is an indication of the crudeness of the Bogoliubov approximation. Throwing out the zero mode from the picture (or, which is essentially the same, treating it as a classical quantity), as well as throwing out higher order terms, is a very violent act and we should not be surprised by a punishment. By the way, one expects that the true dispersion relation of phonons goes to zero as \(\mathbf{k}\to 0\). This is the content of the so-called "Hugenholtz-Pines Theorem" [24], which is a (non-rigorous) argument based on gauge invariance. Perturbation theory around the Bogoliubov Hamiltonian is compatible with this theorem when it comes to the imaginary part. For the real part it fails.
Let us now make a few remarks about the literature. The original paper of Bogoliubov [5] was heuristic; however, in recent years there have been many rigorous papers justifying Bogoliubov's approximation in several cases. The first result justifying (7) was obtained in the mean-field scaling by Seiringer in [35] (see also [26, 17, 20, 32] for related results). Recently, corresponding results have been obtained in the Gross-Pitaevskii regime [3, 10, 33] and even beyond [9]. A time-dependent version of Bogoliubov theory has been successful in describing the dynamics of Bose-Einstein condensates and excitations thereof (see [30, 34] for reviews).
As explained above, to describe damping one has to go beyond Bogoliubov theory. In the mean-field regime this has been done for the ground state energy expansion in [31, 8] and for the dynamics in [7]. Very recently, the extension of [8] to singular interactions has been obtained in [6].
None of the above rigorous papers, with the exception of [17], addressed the energy-momentum spectrum. In fact, it is very difficult to study rigorously the dispersion relation in the thermodynamic limit, which is essentially necessary to analyze phonon damping.
The quasiparticle picture of the Bose gas at low temperatures has been confirmed in experiments. The dispersion relation of \({}^{4}\)He can be observed in neutron scattering experiments, and is remarkably sharp. It has been measured within a large range of wave numbers covering not only phonons, but also the so-called maxons and rotons, see e.g. [21]. In particular, one can see that the dispersion relation is slightly higher than the 2-quasiparticle spectrum for low wave numbers. The quasiparticle picture has also been confirmed by experiments on Bose Einstein condensates involving alkali atoms. The Beliaev damping has been observed in experiments on Bose Einstein condensates. The results are consistent with theoretical predictions [25, 22]. Note, however, that the precise prediction (18) is difficult to verify experimentally. Bose-Einstein condensates created in labs are not very large, so it is difficult to probe the large wavelength region.
Let us mention that there exists another phenomenon found in Bose-Einstein condensates, the so-called Landau damping, which involves instability of quasiparticles due to thermal excitations. The Landau damping is absent at zero temperature and becomes dominant at higher temperatures. The Beliaev damping occurs at zero temperature, and for very small temperatures it is still stronger than the Landau damping.
In the physics literature, the damping of phonons was first computed by Beliaev [2]. Landau damping was first computed by Hohenberg and Martin in [23] (see also [29]). Both these results have been reproduced in [36], also using the formalism of Feynman diagrams and many-body Green's functions. In [28] the damping rate was derived starting from an effective action in the spirit of Popov's hydrodynamical approach. [19] repeated the same computation in the time-dependent mean-field approach. In [13] the mean-field and hydrodynamic approaches were applied to the 2D case. Our derivation is consistent with the above works; however, in our opinion, it avoids some unnecessary elements obscuring the simple mechanism of the Beliaev damping.
The plan of the paper is as follows. Sections 2 and 3 concern general well-known facts about 2nd order perturbation theory of embedded eigenvalues. In Section 4 we define the Bose gas Hamiltonian and describe the Bogoliubov approach in the grand-canonical setting. In Section 5 we derive heuristically the effective model that we consider. Then, in Section 6 we discuss the shape of the energy-momentum spectrum and explain why the contribution from term (14) is irrelevant for the damping rate computation. The main result of the paper is proven in Section 8 as Theorem 2. The analysis of why computing the real part of the self-energy fails by the method of this paper is described in Section 9.
**Acknowledgements.** The work of all authors was supported by the Polish-German NCN-DFG grant Beethoven Classic 3 (project no. 2018/31/G/ST1/01166).
## 2. Friedrichs Hamiltonian
Suppose that \(\mathcal{H}\) is a Hilbert space with a self-adjoint operator \(H\). Let \(\Psi\in\mathcal{H}\) be a normalized vector. We can write \(\mathcal{H}\simeq\mathbb{C}\oplus\mathcal{K}\), where \(\mathbb{C}\simeq\mathbb{C}\Psi\) and \(\mathcal{K}:=\{\Psi\}^{\perp}\). First assume that \(\Psi\) belongs to the domain of \(H\) and set
\[h:=H\Psi,\quad E_{0}:=(\Psi|H\Psi). \tag{25}\]
Let \(K\) denote \(H\) compressed to \(\mathcal{K}\). That means, if \(I:\mathcal{K}\to\mathcal{H}\) is the embedding, then \(K:=I^{*}HI\). Then in terms of \(\mathbb{C}\oplus\mathcal{K}\) we can write
\[H=\begin{bmatrix}E_{0}&(h|\\ |h)&K\end{bmatrix}. \tag{26}\]
Operators of this form were studied by Friedrichs in [18]. Therefore, sometimes they are referred to as _Friedrichs Hamiltonians_.
Let \(z\in\mathbb{C}\). The following identity is a special case of the so-called _Feshbach-Schur formula_:
\[(\Psi|(H-z)^{-1}\Psi) =\frac{1}{E_{0}+\Sigma(z)-z}, \tag{27}\] \[\Sigma(z) =-(h|(K-z)^{-1}h). \tag{28}\]
Following a part of the physics literature, we will call \(\Sigma(z)\) the _self-energy_. For further reference let us rewrite (27) as
\[\Sigma(z)=\frac{1}{(\Psi|(H-z)^{-1}\Psi)}+z-E_{0}, \tag{29}\]
and let us describe the full resolvent:
\[(H-z)^{-1} =\begin{bmatrix}0&0\\ 0&(K-z)^{-1}\end{bmatrix} \tag{30}\] \[+\begin{bmatrix}1\\ (K-z)^{-1}|h\end{bmatrix}\frac{1}{E_{0}+\Sigma(z)-z}\begin{bmatrix}1&(h|(K-z)^ {-1}\end{bmatrix}.\]
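For completeness, (27) and (30) are simply the block (Schur complement) inversion of (26): the upper-left entry of the inverse of the block operator \(H-z\) is the reciprocal of the Schur complement of \(K-z\),

\[\Bigl(E_{0}-z-(h|(K-z)^{-1}h)\Bigr)^{-1}=\frac{1}{E_{0}+\Sigma(z)-z},\]

which is exactly (27).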
We can apply the above formulas also if \(\Psi\) does not belong to the domain of \(H\), but belongs to its form domain, so that \((\Psi|H\Psi)\) is well defined. Note that \(E_{0}\) and \(\Sigma(z)\) are then uniquely defined by (25) and (29).
If \(\Psi\) does not belong to the form domain of \(H\), then strictly speaking the self-energy is ill defined. In practice in such situations one often introduces a cutoff Hamiltonian \(H^{\Lambda}\), which in some sense approximates \(H\). Then, setting \(h^{\Lambda}:=H^{\Lambda}\Psi\), \(E_{0}^{\Lambda}:=(\Psi|H^{\Lambda}\Psi)\), and denoting by \(K^{\Lambda}\) the operator \(H^{\Lambda}\) compressed to \(\mathcal{K}\), one can use the cutoff version of the Feshbach-Schur formula:
\[(\Psi|(H^{\Lambda}-z)^{-1}\Psi) =\frac{1}{E_{0}^{\Lambda}+\Sigma^{\Lambda}(z)-z}, \tag{31}\] \[\Sigma^{\Lambda}(z) =-(h^{\Lambda}|(K^{\Lambda}-z)^{-1}h^{\Lambda}). \tag{32}\]
The resolvent of the original Hamiltonian \(H\) can be retrieved [16] in the limit \(\Lambda\to\infty\):
\[(H-z)^{-1}=\lim_{\Lambda\to\infty}(H^{\Lambda}-z)^{-1}. \tag{33}\]
Note that \(E_{0}^{\Lambda}\) is a sequence of real numbers, typically converging to \(\infty\). They can be treated as _counterterms_ renormalizing the self-energy \(\Sigma^{\Lambda}(z)\).
## 3. Fermi Golden Rule
The meaning of the self-energy is especially clear in perturbation theory. Again, let \(\Psi\) be a normalized vector in \(\mathcal{H}\). Consider a family of self-adjoint operators \(H_{\lambda}=H_{0}+\lambda V\) such that \(H_{0}\Psi=E_{0}\Psi\) and \((\Psi|V\Psi)=0\). Let \(h:=V\Psi\) and \(K_{\lambda}\) be \(H_{\lambda}\) compressed to \(\mathcal{K}\). Thus we can rewrite (26) as
\[H_{\lambda}=\begin{bmatrix}E_{0}&\lambda(h|\\ \lambda|h)&K_{\lambda}\end{bmatrix}. \tag{34}\]
We extract \(\lambda^{2}\) from the definition of the self-energy, so that (27) and (28) are rewritten as
\[(\Psi|(H_{\lambda}-z)^{-1}\Psi) =\big{(}E_{0}+\lambda^{2}\Sigma_{\lambda}(z)-z\big{)}^{-1}, \tag{35}\] \[\Sigma_{\lambda}(z) :=-(h|(K_{\lambda}-z)^{-1}h)=\Sigma_{0}(z)+O(\lambda). \tag{36}\]
Now (35) has a pole at
\[E_{0}+\lambda^{2}\Sigma_{0}(E_{0}+\mathrm{i}0)+O(\lambda^{3}). \tag{37}\]
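The location (37) follows by solving the pole condition of (35) iteratively in \(\lambda\): inserting the unperturbed value \(z=E_{0}\) into the self-energy and using (36),

\[z=E_{0}+\lambda^{2}\Sigma_{\lambda}(z)\quad\Longrightarrow\quad z=E_{0}+\lambda^{2}\Sigma_{0}(E_{0}+\mathrm{i}0)+O(\lambda^{3}).\]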
This is often formulated as the _Fermi Golden Rule_: the pole of the resolvent, originally at an eigenvalue \(E_{0}\), is shifted in the second order by \(\lambda^{2}\Sigma_{0}(E_{0}+\mathrm{i}0)\). This shift can have a negative imaginary part, and then the eigenvalue disappears. A singularity of the resolvent with a negative imaginary part is usually called a _resonance_.
Resonances describe metastable states. A rigorous meaning of a resonance is provided by the following version of the _weak coupling limit_ ([12], see also [14, 15])
\[\lim_{\lambda\to 0}\big{(}\Psi\big{|}\exp\big{(}-\mathrm{i}\tfrac{t}{\lambda^{2}} (H_{\lambda}-E_{0})\big{)}\big{|}\Psi\big{)}=\mathrm{e}^{-\mathrm{i}t\Sigma_{0 }(E_{0}+\mathrm{i}0)}. \tag{38}\]
If the perturbation is singular, so that \(\Psi\) does not belong to the domain of \(V\), then \(\Sigma_{0}(z)\) is in general ill defined and (37) may lose its meaning. Strictly speaking, one then needs to introduce a cutoff on the perturbation and a counterterm, and only then to apply the appropriately modified Fermi Golden Rule.
Note that it is enough to consider real counterterms. Therefore, if we know that the renormalized energy is close to \(E_{0}\), then we can still expect that (37) gives a correct prediction for the imaginary part of the resonance. In other words, the imaginary part of the singularity of the resolvent \((H_{\lambda}-z)^{-1}\) is
\[\lambda^{2}\mathrm{Im}\Sigma_{0}(E_{0}+\mathrm{i}0)+O(\lambda^{3}), \tag{39}\]
where we do not need to cut off the perturbation.
In practice, we start from a singular expression of the form (34). To make it well-defined we need to choose a cutoff and counterterms. These choices will not affect the imaginary part of the resonance; however, in principle, one can add an arbitrary real constant to a counterterm, which will affect the real part of the resonance. Therefore, for singular perturbations it may be more difficult to predict the real part of the resonance.
## 4. Bose gas and Bogoliubov ansatz
We consider a homogeneous Bose gas of \(N\) particles with a two-body potential described by a function \(v:\mathbb{R}^{3}\to\mathbb{R}\) whose Fourier transform \(\hat{v}(\mathbf{k})=\int_{\mathbb{R}^{3}}v(x)\mathrm{e}^{-\mathrm{i}\mathbf{k }\cdot\mathbf{x}}\,\mathrm{d}\mathbf{x}\) is non-negative and rotation invariant. In the grand canonical setting and the momentum representation such a system is governed by the (second quantized) Hamiltonian
\[H=\int\left(\frac{\mathbf{k}^{2}}{2}-\mu\right)a_{\mathbf{k}}^{*}a_{\mathbf{k} }\,\mathrm{d}\mathbf{k}+\frac{\kappa}{2(2\pi)^{3}}\int\,\mathrm{d}\mathbf{p} \int\,\mathrm{d}\mathbf{q}\int\,\mathrm{d}\mathbf{k}\hat{v}(\mathbf{k})a_{ \mathbf{p}-\mathbf{k}}^{*}a_{\mathbf{q}+\mathbf{k}}^{*}a_{\mathbf{p}}a_{ \mathbf{q}}, \tag{40}\]
where \(\mu\geq 0\) is the chemical potential and \(a_{\mathbf{k}}^{*}/a_{\mathbf{k}}\) the creation/annihilation operators for particles of mode \(\mathbf{k}\). It acts on the bosonic Fock space \(\mathcal{F}=\Gamma_{\mathrm{s}}\big{(}L^{2}(\mathbb{R}^{3})\big{)}\), and for each \(N\) it leaves invariant its \(N\)-particle sector \(L^{2}_{\mathrm{s}}\big{(}(\mathbb{R}^{3})^{N}\big{)}\). Recall that the creation and annihilation operators satisfy the canonical commutation relation (CCR):
\[[a_{\mathbf{p}},a_{\mathbf{q}}]=0=[a_{\mathbf{p}}^{*},a_{\mathbf{q}}^{*}],\ \ [a_{ \mathbf{p}},a_{\mathbf{q}}^{*}]=\delta(\mathbf{p}-\mathbf{q}), \tag{41}\]
where \([\,\ ]\) is the usual commutator. We introduce the coupling constant \(\kappa>0\) mostly for bookkeeping purposes; note that in the introduction we set \(\kappa=1\).
For the reasons explained in the introduction, we replace the infinite space \(\mathbb{R}^{3}\) by the torus \([-L/2,L/2]^{3}\) with periodic boundary conditions. In the momentum representation the Hamiltonian becomes
\[H=\sum_{\mathbf{k}\in 2\pi\mathbb{Z}^{3}/L}\left(\frac{\mathbf{k}^{2}}{2}-\mu \right)a_{\mathbf{k}}^{*}a_{\mathbf{k}}+\frac{\kappa}{2L^{3}}\sum_{\mathbf{p}, \mathbf{q},\mathbf{k}\in 2\pi\mathbb{Z}^{3}/L}\hat{v}(\mathbf{k})a_{\mathbf{p}- \mathbf{k}}^{*}a_{\mathbf{q}+\mathbf{k}}^{*}a_{\mathbf{p}}a_{\mathbf{q}}. \tag{42}\]
Note that \(\hat{v}\) is the same function as in (40), however it is now sampled only on the lattice \(2\pi\mathbb{Z}^{3}/L\). The commutation relations now involve the Kronecker delta:
\[[a_{\mathbf{p}},a_{\mathbf{q}}]=0=[a_{\mathbf{p}}^{*},a_{\mathbf{q}}^{*}],\ \ [a_{ \mathbf{p}},a_{\mathbf{q}}^{*}]=\delta_{\mathbf{p},\mathbf{q}}. \tag{43}\]
Let us now pass to the quasiparticle representation. To this end we follow the well-known grand-canonical version of the Bogoliubov approach (see e.g. [11]). It involves two unitary transformations.
The first one is a Weyl transformation that introduces a macroscopic occupation of the zero-momentum mode, the Bose-Einstein condensate. (In the canonical version of the Bogoliubov approach this corresponds to the c-number substitution [27].) To this end, for \(\alpha\in\mathbb{C}\), we introduce the Weyl operator of the mode \(\mathbf{k}=0\)
\[W_{\alpha}=\exp(-\alpha a_{0}^{*}+\bar{\alpha}a_{0}). \tag{44}\]
Then
\[W_{\alpha}^{*}a_{\mathbf{k}}^{*}W_{\alpha}=a_{\mathbf{k}}^{*}-\bar{\alpha} \delta_{\mathbf{k},0}=:\tilde{a}_{\mathbf{k}}^{*}.\]
The new annihilation operators with tildes kill the "new vacuum" \(\Omega_{\alpha}=W_{\alpha}^{*}\Omega\). We express our Hamiltonian in terms of \(\tilde{a}_{\mathbf{k}}^{*},\tilde{a}_{\mathbf{k}}\). To simplify the notation, in what follows we drop the tildes and we obtain
\[H = -\mu|\alpha|^{2}+\frac{\kappa\hat{v}(0)}{2L^{3}}|\alpha|^{4}+ \left(\frac{\kappa\hat{v}(0)}{L^{3}}|\alpha|^{2}-\mu\right)(\alpha a_{0}^{*}+ \bar{\alpha}a_{0})\] \[+ \sum_{\mathbf{k}}\left(\frac{\mathbf{k}^{2}}{2}-\mu+\frac{\kappa (\hat{v}(\mathbf{k})+\hat{v}(0))}{L^{3}}|\alpha|^{2}\right)a_{\mathbf{k}}^{*} a_{\mathbf{k}}+\sum_{\mathbf{k}}\frac{\kappa\hat{v}(\mathbf{k})}{2L^{3}}\left( \alpha^{2}a_{\mathbf{k}}^{*}a_{-\mathbf{k}}^{*}+\bar{\alpha}^{2}a_{\mathbf{k} }a_{-\mathbf{k}}\right)\] \[+ \frac{\kappa}{L^{3}}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}}\hat{v} (\mathbf{k}_{1})\left(\bar{\alpha}a_{\mathbf{k}_{1}+\mathbf{k}_{2}}^{*}a_{ \mathbf{k}_{1}}a_{\mathbf{k}_{2}}+\alpha a_{\mathbf{k}_{1}}^{*}a_{\mathbf{k}_{ 2}}^{*}a_{\mathbf{k}_{1}+\mathbf{k}_{2}}\right)\] \[+ \frac{\kappa}{2L^{3}}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}, \mathbf{k}_{3},\mathbf{k}_{4}}\delta(\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{k }_{3}-\mathbf{k}_{4})\hat{v}(\mathbf{k}_{2}-\mathbf{k}_{3})a_{\mathbf{k}_{1}} ^{*}a_{\mathbf{k}_{2}}^{*}a_{\mathbf{k}_{3}}a_{\mathbf{k}_{4}}.\]
Note that we have
\[(\Omega_{\alpha}|H\Omega_{\alpha})=-\mu|\alpha|^{2}+\frac{\kappa\hat{v}(0)}{2L ^{3}}|\alpha|^{4},\]
and we choose \(\alpha=\sqrt{\frac{\mu L^{3}}{\kappa\hat{v}(0)}}\), so that \(\Omega_{\alpha}\) minimizes this expectation value. This leads to
\[H=\kappa^{-1}H_{0}+H_{2}+\sqrt{\kappa}H_{3}+\kappa H_{4}, \tag{45}\] \[H_{0}:=-\frac{\mu^{2}L^{3}}{2\hat{v}(0)},\] \[H_{2}:=\sum_{\mathbf{k}}\left(\frac{\mathbf{k}^{2}}{2}+\frac{\mu \hat{v}(\mathbf{k})}{\hat{v}(0)}\right)a_{\mathbf{k}}^{*}a_{\mathbf{k}}+\sum_ {\mathbf{k}}\frac{\mu\hat{v}(\mathbf{k})}{2\hat{v}(0)}\left(a_{\mathbf{k}}^{* }a_{-\mathbf{k}}^{*}+a_{\mathbf{k}}a_{-\mathbf{k}}\right),\] \[H_{3}:=\frac{1}{L^{3/2}}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}}\frac {\hat{v}(\mathbf{k}_{1})\sqrt{\mu}}{\sqrt{\hat{v}(0)}}\left(a_{\mathbf{k}_{1}+ \mathbf{k}_{2}}^{*}a_{\mathbf{k}_{1}}a_{\mathbf{k}_{2}}+a_{\mathbf{k}_{1}}^{* }a_{\mathbf{k}_{2}}^{*}a_{\mathbf{k}_{1}+\mathbf{k}_{2}}\right),\] \[H_{4}:=\frac{1}{2L^{3}}\sum_{\mathbf{k}_{1},\mathbf{k}_{2}, \mathbf{k}_{3},\mathbf{k}_{4}}\delta(\mathbf{k}_{1}+\mathbf{k}_{2}-\mathbf{k }_{3}-\mathbf{k}_{4})\hat{v}(\mathbf{k}_{2}-\mathbf{k}_{3})a_{\mathbf{k}_{1} }^{*}a_{\mathbf{k}_{2}}^{*}a_{\mathbf{k}_{3}}a_{\mathbf{k}_{4}}.\]
We extract from the above Hamiltonian all terms containing only non-zero modes:
\[H_{2}=\frac{\mu}{2}(a_{0}^{*2}+a_{0}^{2}+2a_{0}^{*}a_{0})+H_{2}^{ \text{exc}},\] \[H_{2}^{\text{exc}}:=\sum_{\mathbf{k}\neq 0}\left(\frac{\mathbf{k}^{2}}{2 }+\frac{\mu\hat{v}(\mathbf{k})}{\hat{v}(0)}\right)a_{\mathbf{k}}^{*}a_{\mathbf{ k}}+\sum_{\mathbf{k}\neq 0}\frac{\mu\hat{v}(\mathbf{k})}{2\hat{v}(0)}\left(a_{ \mathbf{k}}^{*}a_{-\mathbf{k}}^{*}+a_{\mathbf{k}}a_{-\mathbf{k}}\right); \tag{46}\] \[H_{3}=\frac{1}{L^{3/2}}\sum_{\mathbf{k}}\sqrt{\mu\hat{v}(0)}(a_{ 0}^{*}+a_{0})a_{\mathbf{k}}^{*}a_{\mathbf{k}}\] \[+\frac{1}{L^{3/2}}\sum_{\mathbf{k}\neq 0}\frac{\sqrt{\mu}\hat{v}( \mathbf{k})}{\sqrt{\hat{v}(0)}}\big{(}(a_{0}^{*}+a_{0})a_{\mathbf{k}}^{*}a_{ \mathbf{k}}+a_{0}a_{\mathbf{k}}^{*}a_{-\mathbf{k}}^{*}+a_{0}^{*}a_{\mathbf{k} }a_{-\mathbf{k}}\big{)}+H_{3}^{\text{exc}},\]
\[H_{3}^{\rm exc} :=\frac{1}{L^{3/2}}\sum_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{1}+{\bf k}_ {2}\neq 0}\frac{\hat{v}({\bf k}_{1})\sqrt{\mu}}{\sqrt{\hat{v}(0)}}\left(a_{{\bf k }_{1}+{\bf k}_{2}}^{*}a_{{\bf k}_{1}}a_{{\bf k}_{2}}+a_{{\bf k}_{1}}^{*}a_{{\bf k }_{2}}^{*}a_{{\bf k}_{1}+{\bf k}_{2}}\right); \tag{47}\] \[H_{4} =\frac{1}{2L^{3}}\hat{v}(0)\Big{(}a_{0}^{*}a_{0}^{*}a_{0}a_{0}a_{ 0}+2\sum_{{\bf k}\neq 0}a_{0}^{*}a_{0}a_{{\bf k}}^{*}a_{{\bf k}}\Big{)}\] \[+\frac{1}{2L^{3}}\sum_{{\bf k}\neq 0}\hat{v}({\bf k})(a_{0}^{*}a_{ 0}^{*}a_{{\bf k}}a_{-{\bf k}}+a_{0}a_{0}a_{{\bf k}}^{*}a_{-{\bf k}}^{*}+2a_{0}^ {*}a_{0}a_{{\bf k}}^{*}a_{{\bf k}})\] \[+\frac{1}{L^{3}}\sum_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{1}+{\bf k} _{2}\neq 0}\hat{v}(k_{1})\big{(}a_{0}^{*}a_{{\bf k}_{1}+{\bf k}_{2}}^{*}a_{{ \bf k}_{1}}a_{{\bf k}_{2}}+a_{0}a_{{\bf k}_{1}}^{*}a_{{\bf k}_{2}}^{*}a_{{\bf k }_{1}+{\bf k}_{2}}\big{)}+H_{4}^{\rm exc},\] \[H_{4}^{\rm exc} :=\frac{1}{2L^{3}}\sum_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{3},{\bf k }_{4}\neq 0}\delta({\bf k}_{1}+{\bf k}_{2}-{\bf k}_{3}-{\bf k}_{4})\hat{v}({ \bf k}_{2}-{\bf k}_{3})a_{{\bf k}_{1}}^{*}a_{{\bf k}_{2}}^{*}a_{{\bf k}_{3}}a _{{\bf k}_{4}}. \tag{48}\]
We are going to apply a Bogoliubov transformation
\[U_{\rm Bog}:=\exp\Bigg{(}\sum_{{\bf k}\neq 0}\beta_{{\bf k}}(a_{{\bf k}}^{*}a_ {-{\bf k}}^{*}-a_{{\bf k}}a_{-{\bf k}})\Bigg{)}, \tag{49}\]
which transforms non-zero mode operators \(a_{{\bf k}}^{*},a_{{\bf k}}\) into quasi-particle operators \(b_{{\bf k}}^{*},b_{{\bf k}}\):
\[b_{{\bf k}} := U_{\rm Bog}a_{{\bf k}}U_{\rm Bog}^{*}=\sigma_{{\bf k}}a_{{\bf k} }+\gamma_{{\bf k}}a_{-{\bf k}}^{*},\] \[b_{{\bf k}}^{*} := U_{\rm Bog}a_{{\bf k}}^{*}U_{\rm Bog}^{*}=\sigma_{{\bf k}}a_{{\bf k }}^{*}+\gamma_{{\bf k}}a_{-{\bf k}}, \tag{50}\]
where
\[\sigma_{{\bf k}}=\frac{\sqrt{\sqrt{e_{{\bf k}}^{2}+B_{{\bf k}}^ {2}+e_{{\bf k}}}}}{\sqrt{2e_{{\bf k}}}},\quad\gamma_{{\bf k}}=\frac{\sqrt{ \sqrt{e_{{\bf k}}^{2}+B_{{\bf k}}^{2}-e_{{\bf k}}}}}{\sqrt{2e_{{\bf k}}}},\] \[e_{{\bf k}}:=\sqrt{\frac{1}{4}|{\bf k}|^{4}+B_{{\bf k}}|{\bf k}| ^{2}},\quad B_{{\bf k}}:=\frac{\hat{v}({\bf k})}{\hat{v}(0)}\mu.\]
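Note that these coefficients satisfy the normalization required for (50) to define a Bogoliubov transformation,
\[\sigma_{\mathbf{k}}^{2}-\gamma_{\mathbf{k}}^{2}=\frac{\big(\sqrt{e_{\mathbf{k}}^{2}+B_{\mathbf{k}}^{2}}+e_{\mathbf{k}}\big)-\big(\sqrt{e_{\mathbf{k}}^{2}+B_{\mathbf{k}}^{2}}-e_{\mathbf{k}}\big)}{2e_{\mathbf{k}}}=1,\]
an identity that will be used again in Section 9.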
The inverse relation is
\[a_{{\bf k}} = \sigma_{{\bf k}}b_{{\bf k}}-\gamma_{{\bf k}}b_{-{\bf k}}^{*},\] \[a_{{\bf k}}^{*} = \sigma_{{\bf k}}b_{{\bf k}}^{*}-\gamma_{{\bf k}}b_{-{\bf k}}.\]
It is well known that (50) diagonalizes \(H_{2}^{\rm exc}\) in terms of the quasi-particle operators:
\[H_{2}^{\rm exc} = E_{\rm Bog}+H_{\rm Bog}, \tag{51}\]
where
\[E_{\rm Bog}:=-\frac{1}{2}\sum_{{\bf k}\neq 0}\left(\frac{1}{2}|{\bf k }|^{2}+\frac{\hat{v}({\bf k})}{\hat{v}(0)}\mu-e_{{\bf k}}\right), \tag{52}\] \[H_{\rm Bog}:=\sum_{{\bf k}\neq 0}e_{{\bf k}}b_{{\bf k}}^{*}b_{{\bf k }}. \tag{53}\]
We also express \(H_{3}^{\rm exc}\) in terms of quasiparticles:
\[H_{3}^{\rm exc} =\frac{1}{L^{3/2}}\sum_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{1}+{\bf k }_{2}\neq 0}\frac{\sqrt{\mu}\hat{v}({\bf k}_{1})}{\sqrt{\hat{v}(0)}}(A({\bf k }_{1},{\bf k}_{2})+A^{*}({\bf k}_{1},{\bf k}_{2}),\] \[A({\bf k}_{1},{\bf k}_{2})= \sigma_{{\bf k}_{1}}\sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k }_{2}}b_{{\bf k}_{1}}^{*}b_{{\bf k}_{2}}^{*}b_{{\bf k}_{1}+{\bf k}_{2}}- \gamma_{{\bf k}_{1}}\sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k }_{2}}^{*}b_{-{\bf k}_{1}}b_{{\bf k}_{1}+{\bf k}_{2}}\] \[-\gamma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k }_{2}}b_{{\bf k}_{1}}^{*}b_{-{\bf k}_{2}}b_{{\bf k}_{1}+{\bf k}_{2}}+\gamma_{{ \bf k}_{1}}\gamma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1}}b_ {-{\bf k}_{2}}b_{{\bf k}_{1}+{\bf k}_{2}}\] \[-\sigma_{{\bf k}_{1}}\sigma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k }_{2}}b_{{\bf k}_{1}}^{*}b_{{\bf k}_{2}}^{*}b_{-{\bf k}_{1}-{\bf k}_{2}}^{*}- \gamma_{{\bf k}_{1}}\sigma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k }_{2}}^{*}b_{-{\bf k}_{1}-{\bf k}_{2}}^{*}b_{-{\bf k}_{1}}\] \[+\sigma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k }_{2}}b_{{\bf k}_{1}}^{*}b_{-{\bf k}_{1}-{\bf k}_{2}}b_{-{\bf k}_{2}}-\gamma_{{ \bf k}_{1}}\gamma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1}}^{*}b_{-{ \bf k}_{1}}b_{-{\bf k}_{2}}.\]
Thus
\[H_{3}^{\rm exc}=H_{3,1}^{\rm exc}+H_{3,2}^{\rm exc}, \tag{54}\]
\[H_{3,1}^{\rm exc}=\sum_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{1}+{\bf k}_{2}\neq 0 }\frac{\sqrt{\mu}\hat{v}({\bf k}_{1})}{L^{3/2}\sqrt{\hat{v}(0)}}\Big{(}\sigma_{{ \bf k}_{1}}\sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k}_{1}}^ {*}b_{{\bf k}_{2}}^{*}b_{{\bf k}_{1}+{\bf k}_{2}}-\gamma_{{\bf k}_{1}}\sigma_{{ \bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k}_{2}}^{*}b_{-{\bf k}_{1}}b_ {{\bf k}_{1}+{\bf k}_{2}}\]
\[-\gamma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{{ \bf k}_{1}}^{*}b_{-{\bf k}_{2}}b_{{\bf k}_{1}+{\bf k}_{2}}-\gamma_{{\bf k}_{1}} \sigma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k}_{2}}^{*}b_{-{ \bf k}_{1}-{\bf k}_{2}}^{*}b_{-{\bf k}_{1}}\]
\[+\sigma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{{ \bf k}_{1}}^{*}b_{-{\bf k}_{1}-{\bf k}_{2}}b_{-{\bf k}_{2}}-\gamma_{{\bf k}_{1} }\gamma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1}-{\bf k}_{ 2}}^{*}b_{-{\bf k}_{1}}b_{-{\bf k}_{2}}\]
\[+\sigma_{{\bf k}_{1}}\sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{{ \bf k}_{1}+{\bf k}_{2}}^{*}b_{{\bf k}_{1}}b_{{\bf k}_{2}}-\gamma_{{\bf k}_{1}} \sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1}}^{*}b_{{\bf k }_{1}+{\bf k}_{2}}^{*}b_{{\bf k}_{2}}\]
\[-\gamma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{{ \bf k}_{2}}^{*}b_{{\bf k}_{1}+{\bf k}_{2}}^{*}b_{{\bf k}_{1}}-\gamma_{{\bf k}_{ 1}}\sigma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1}}b_{{\bf k }_{2}}b_{-{\bf k}_{1}-{\bf k}_{2}}\]
\[+\sigma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{ \bf k}_{2}}^{*}b_{{\bf k}_{1}}b_{-{\bf k}_{1}-{\bf k}_{2}}-\gamma_{{\bf k}_{1} }\gamma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1}}b_{{\bf k }_{2}}^{*}b_{-{\bf k}_{1}-{\bf k}_{2}}\Big{)},\]
\[H_{3,2}^{\rm exc}=\sum_{{\bf k}_{1},{\bf k}_{2},{\bf k}_{1}+{\bf k}_{2}\neq 0 }\frac{\sqrt{\mu}\hat{v}({\bf k}_{1})}{L^{3/2}\sqrt{\hat{v}(0)}}\Big{(}\gamma_{ {\bf k}_{1}}\gamma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{\bf k}_{1 }}b_{-{\bf k}_{2}}b_{{\bf k}_{1}+{\bf k}_{2}}-\sigma_{{\bf k}_{1}}\sigma_{{\bf k }_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k}_{1}}^{*}b_{{\bf k}_{2}}^{*}b_{- {\bf k}_{1}-{\bf k}_{2}}^{*}\]
\[+\gamma_{{\bf k}_{1}}\gamma_{{\bf k}_{2}}\sigma_{{\bf k}_{1}+{\bf k}_{2}}b_{-{ \bf k}_{1}}^{*}b_{-{\bf k}_{2}}^{*}b_{{\bf k}_{1}+{\bf k}_{2}}^{*}-\sigma_{{\bf k }_{1}}\sigma_{{\bf k}_{2}}\gamma_{{\bf k}_{1}+{\bf k}_{2}}b_{{\bf k}_{1}}b_{{ \bf k}_{2}}b_{-{\bf k}_{1}-{\bf k}_{2}}\Big{)}.\]
We could also compute \(H_{4}\), but we will not need it.
## 5. Effective Friedrichs Hamiltonian
Let \(\Omega_{\rm Bog}:=U_{\rm Bog}^{*}\Omega_{\alpha}\) be the quasiparticle vacuum. Introduce the space \(\mathcal{F}^{\rm exc}\) consisting of the Bogoliubov vacuum and quasiparticle excitations, and its \(n\)-quasiparticle sector:
\[\mathcal{F}^{\rm exc}:={\rm Span}^{\rm cl}\{b_{{\bf k}_{1}}^{*}\cdots b_{{\bf k }_{n}}^{*}\Omega_{\rm Bog}\ |\ {\bf k}_{1},\ldots,{\bf k}_{n}\neq 0,\quad n=0,1,\ldots\},\]
\[\mathcal{F}^{\rm exc}_{n}:={\rm Span}^{\rm cl}\{b_{{\bf k}_{1}}^{*}\cdots b_{{ \bf k}_{n}}^{*}\Omega_{\rm Bog}\ |\ {\bf k}_{1},\ldots,{\bf k}_{n}\neq 0\}.\]
The most "violent" approximation that we are going to make is compressing the Hamiltonian \(H\) into the space \(\mathcal{F}^{\rm exc}\). We also drop the uninteresting constant \(\kappa^{-1}H_{0}\) and the (somewhat more interesting) constant \(E_{\rm Bog}\). Thus we introduce the _excitation Hamiltonian_
\[H^{\rm exc}:=I^{\rm exc*}\big{(}H-\kappa^{-1}H_{0}-E_{\rm Bog}\big{)}I^{\rm exc},\]
where \(I^{\rm exc}\) denotes the embedding of \(\mathcal{F}^{\rm exc}\) in \(\mathcal{F}\). Thus \(H^{\rm exc}\) is an operator on \(\mathcal{F}^{\rm exc}\) and
\[H^{\rm exc}=H_{\rm Bog}+\sqrt{\kappa}H_{3}^{\rm exc}+\kappa H_{4}^{\rm exc}, \tag{55}\]
where \(H_{3}^{\rm exc}\) and \(H_{4}^{\rm exc}\) are defined in (47) and (48).
We make two more approximations. We drop \(\kappa H_{4}^{\rm exc}\), which is of higher order in \(\kappa\) than \(\sqrt{\kappa}H_{3}^{\rm exc}\). We also drop \(\sqrt{\kappa}H_{3,2}^{\rm exc}\), which involves 3-quasiparticle creation/annihilation operators and does not contribute to the damping rate (see Section 6 for a justification). Thus \(H^{\rm exc}\) is replaced with
\[H^{\rm eff}:=H_{\rm Bog}+\sqrt{\kappa}H_{3,1}^{\rm exc}. \tag{56}\]
To make our following discussion consistent with Sect. 3 about the Fermi Golden Rule, we introduce a new coupling constant
\[\lambda:=\sqrt{\kappa}. \tag{57}\]
Let \({\bf k}\neq 0\). Clearly, \(b_{{\bf k}}^{*}\Omega_{\rm Bog}\) is an eigenstate of \(H^{\rm eff}\) for \(\lambda=0\). We would like to compute the self-energy for the vector \(b_{{\bf k}}^{*}\Omega_{\rm Bog}\) and the Hamiltonian \(H^{\rm eff}\):
\[\lambda^{2}\Sigma^{\rm eff}_{{\bf k}}(z):=\frac{-1}{(b_{{\bf k}}^{*}\Omega_{\rm Bog}|(z-H^{\rm eff})^{-1}b_{{\bf k}}^{*}\Omega_{\rm Bog})}+z-e_{{\bf k}}. \tag{58}\]
Introduce the subspaces of \(\mathcal{F}^{\rm exc}\) and \(\mathcal{F}^{\rm exc}_{n}\) with the total momentum \({\bf k}\):
\[\mathcal{F}^{\rm exc}({\bf k}):={\rm Span}^{\rm cl}\{b_{{\bf k}_{1}}^{*}\cdots b_{{\bf k}_{n}}^{*}\Omega_{\rm Bog}\ |\ {\bf k}_{1}+\cdots+{\bf k}_{n}={\bf k},\ {\bf k}_{1},\ldots,{\bf k}_{n}\neq 0,\quad n=0,1,\ldots\},\]
\[\mathcal{F}^{\rm exc}_{n}({\bf k}):={\rm Span}^{\rm cl}\{b_{{\bf k}_{1}}^{*}\cdots b_{{\bf k}_{n}}^{*}\Omega_{\rm Bog}\ |\ {\bf k}_{1}+\cdots+{\bf k}_{n}={\bf k},\ {\bf k}_{1},\ldots,{\bf k}_{n}\neq 0\}.\]
\(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}\) is contained in the space \(\mathcal{F}^{\mathrm{exc}}(\mathbf{k})\), which is preserved by \(H^{\mathrm{eff}}\). Let \(H^{\mathrm{eff}}(\mathbf{k})\) denote the operator \(H^{\mathrm{eff}}\) restricted to \(\mathcal{F}^{\mathrm{exc}}(\mathbf{k})\). Thus we can restrict ourselves to the fiber space \(\mathcal{F}^{\mathrm{exc}}(\mathbf{k})\) and the fiber Hamiltonian \(H^{\mathrm{eff}}(\mathbf{k})\). In particular, in (58) we can replace \(H^{\mathrm{eff}}\) with \(H^{\mathrm{eff}}(\mathbf{k})\).
For our analysis it is enough to know only \(H^{\mathrm{eff}}\) (or \(H^{\mathrm{eff}}(\mathbf{k})\)) compressed to \(\mathcal{F}^{\mathrm{exc}}_{1}(\mathbf{k})\oplus\mathcal{F}^{\mathrm{exc}}_{2}(\mathbf{k})\). Note that the one-quasiparticle state \(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}\) spans \(\mathcal{F}^{\mathrm{exc}}_{1}(\mathbf{k})\), and \(\mathcal{F}^{\mathrm{exc}}_{2}(\mathbf{k})\) is spanned by \(b_{\mathbf{p}}^{*}b_{\mathbf{k}-\mathbf{p}}^{*}\Omega_{\mathrm{Bog}}\) with \(\mathbf{p},\mathbf{k}-\mathbf{p}\neq 0\). We compute
\[(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}|H^{\mathrm{eff}}b_{ \mathbf{k}}^{*}\Omega_{\mathrm{Bog}}) =e_{\mathbf{k}},\] \[(b_{\mathbf{p}}^{*}b_{\mathbf{k}-\mathbf{p}}^{*}\Omega_{\mathrm{ Bog}}|H^{\mathrm{eff}}b_{\mathbf{p}}^{*}b_{\mathbf{k}-\mathbf{p}}^{*}\Omega_{ \mathrm{Bog}}) =e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}},\] \[(b_{\mathbf{p}}^{*}b_{\mathbf{k}-\mathbf{p}}^{*}\Omega_{\mathrm{ Bog}}|H^{\mathrm{eff}}b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}) =\frac{\lambda}{L^{3/2}}h_{\mathbf{k}}(\mathbf{p}), \tag{59}\] \[(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}|H^{\mathrm{eff}}b_{ \mathbf{p}}^{*}b_{\mathbf{k}-\mathbf{p}}^{*}\Omega_{\mathrm{Bog}}) =\frac{\lambda}{L^{3/2}}h_{\mathbf{k}}(\mathbf{p}) \tag{60}\]
with
\[h_{\mathbf{k}}(\mathbf{p}) =2\sqrt{\frac{\mu\hat{v}^{2}(\mathbf{k})}{\hat{v}(0)}}\Big{(} \sigma_{\mathbf{p}}\gamma_{-\mathbf{k}}\gamma_{\mathbf{p}-\mathbf{k}}+\sigma_ {\mathbf{k}-\mathbf{p}}\gamma_{-\mathbf{k}}\gamma_{\mathbf{p}}+\sigma_{ \mathbf{p}}\sigma_{\mathbf{k}-\mathbf{p}}\sigma_{\mathbf{k}} \tag{61}\] \[\qquad-\gamma_{\mathbf{p}}\sigma_{-\mathbf{k}}\sigma_{\mathbf{p}- \mathbf{k}}-\gamma_{\mathbf{k}-\mathbf{p}}\sigma_{-\mathbf{k}}\sigma_{\mathbf{ p}}-\gamma_{\mathbf{p}}\gamma_{\mathbf{k}-\mathbf{p}}\gamma_{\mathbf{k}}\Big{)}.\]
The Hamiltonian \(H^{\mathrm{eff}}\) compressed to \(\mathcal{F}^{\mathrm{exc}}_{1}(\mathbf{k})\oplus\mathcal{F}^{\mathrm{exc}} _{2}(\mathbf{k})\) will be called the _effective Friedrichs Hamiltonian_ (for volume \(L^{3}\) and momentum \(\mathbf{k}\)). It is denoted \(H^{L}_{\mathrm{Fried}}(\mathbf{k})\) and given by
\[H^{L}_{\mathrm{Fried}}(\mathbf{k}) :=\begin{bmatrix}e_{\mathbf{k}}&\frac{\lambda}{L^{3/2}}(h_{ \mathbf{k}}|\\ \frac{\lambda}{L^{3/2}}|h_{\mathbf{k}})&e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p} }\end{bmatrix}, \tag{62}\] \[\text{on}\quad\mathcal{F}^{\mathrm{exc}}_{1}(\mathbf{k})\oplus \mathcal{F}^{\mathrm{exc}}_{2}(\mathbf{k}) \simeq\mathbb{C}\oplus l^{2}\Big{(}\frac{2\pi}{L}\mathbb{Z}^{3} \setminus\{0,\mathbf{k}\}\Big{)}, \tag{63}\]
where we explicitly introduced a reference to the volume \(L^{3}\) in the notation. Thus we end up in a situation described in Section 3. According to the Fermi Golden Rule (37) we want to compute
\[\Sigma^{L}_{\mathbf{k}}(z) =\frac{1}{L^{3}}\sum_{\mathbf{p},\mathbf{k}-\mathbf{p}\neq 0}\frac{h_{ \mathbf{k}}^{2}(\mathbf{p})}{(z-e_{\mathbf{p}}-e_{\mathbf{k}-\mathbf{p}})}, \tag{64}\]
Unfortunately, the sum (64) is divergent. To cure the divergence we can introduce a cut-off. The cut-off is to a large extent arbitrary. It is convenient to use \(|\mathbf{p}|+|\mathbf{k}-\mathbf{p}|<\Lambda\). Thus we replace (62), (61) and (64) with
\[H^{L,\Lambda}_{\mathrm{Fried}}(\mathbf{k}) :=\begin{bmatrix}e_{\mathbf{k}}&\frac{\lambda}{L^{3/2}}(h_{ \mathbf{k}}^{\Lambda}|\\ \frac{\lambda}{L^{3/2}}|h_{\mathbf{k}}^{\Lambda})&e_{\mathbf{p}}+e_{\mathbf{ k}-\mathbf{p}}\end{bmatrix}, \tag{65}\] \[h_{\mathbf{k}}^{\Lambda}(\mathbf{p}) :=h(\mathbf{p})\mathds{1}_{\{|\mathbf{p}|+|\mathbf{k}-\mathbf{p}| <\Lambda\}}(\mathbf{p}),\] (66) \[\Sigma^{L,\Lambda}_{\mathbf{k}}(z) :=\frac{1}{L^{3}}\sum_{\mathbf{p},\mathbf{k}-\mathbf{p}\neq 0} \frac{h_{\mathbf{k}}^{\Lambda}(\mathbf{p})^{2}}{(z-e_{\mathbf{p}}-e_{\mathbf{k}- \mathbf{p}})}. \tag{67}\]
The functions \(\mathbf{p}\mapsto e_{\mathbf{p}},h_{\mathbf{k}}(\mathbf{p}),h_{\mathbf{k}}^{ \Lambda}(\mathbf{p})\) are well defined for all \(\mathbf{p}\in\mathbb{R}^{3}\setminus\{0\}\), and not only for \(\frac{2\pi}{L}\mathbb{Z}^{3}\setminus\{0,\mathbf{k}\}\). The expression (67) can be interpreted as the Riemann sum converging as \(L\to\infty\) to the integral
\[\Sigma^{\Lambda}_{\mathbf{k}}(z) =\frac{1}{(2\pi)^{3}}\int\frac{h_{\mathbf{k}}^{\Lambda}(\mathbf{ p})^{2}\,\mathrm{d}\mathbf{p}}{(z-e_{\mathbf{p}}-e_{\mathbf{k}-\mathbf{p}})}. \tag{68}\]
We can also introduce the infinite volume effective Friedrichs Hamiltonian
\[H^{\Lambda}_{\mathrm{Fried}}(\mathbf{k}):=\begin{bmatrix}e_{\mathbf{k}}&\lambda( h^{\Lambda}_{\mathbf{k}}|\\ \lambda|h^{\Lambda}_{\mathbf{k}})&e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}} \end{bmatrix}, \tag{69}\]
\[\text{on}\quad\mathbb{C}\oplus L^{2}(\mathbb{R}^{3}),\]
The Fermi Golden Rule predicts that \(\Sigma^{\Lambda}_{\mathbf{k}}(e_{\mathbf{k}}+i0)\) describes the energy shift of the eigenvalue of the infinite volume cut-off Hamiltonian \(H^{\Lambda}_{\mathrm{Fried}}(\mathbf{k})\). Unfortunately, in our case \(\lim\limits_{\Lambda\to\infty}\mathrm{Re}\Sigma^{\Lambda}_{\mathbf{k}}(e_{ \mathbf{k}}+\mathrm{i}0)\) is infinite. However, we will see that \(\mathrm{Im}\Sigma^{\Lambda}_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)\) is finite and for large \(\Lambda\) is independent of \(\Lambda\). Physically it describes the decay of the quasiparticle at momentum \(\mathbf{k}\).
## 6. The shape of the quasiparticle spectrum
If \(\mathbf{k}\mapsto e_{\mathbf{k}}\) is a dispersion relation of quasiparticles, then the infimum of the \(n\)-quasiparticle spectrum is
\[\inf\{e_{\mathbf{p}_{1}}+\cdots e_{\mathbf{p}_{n}}\mid\mathbf{p}_{1}+\cdots+ \mathbf{p}_{n}=\mathbf{k}\}. \tag{70}\]
Sometimes, it is possible to compute (70) exactly, as shown in the following lemma.
**Lemma 1**.: _Let \(\mathbf{k}\mapsto e_{\mathbf{k}}\) be a convex function. Then_
\[\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\}=2e_{\mathbf{k}/ 2}. \tag{71}\]
_In particular,_
\[\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\}\leq e_{\mathbf{k}}. \tag{72}\]
_If in addition \(\mathbf{k}\mapsto e_{\mathbf{k}}\) is a strictly convex function, then_
\[\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\}<e_{\mathbf{k}}, \quad\mathbf{k}\neq 0. \tag{73}\]
Proof.: The left hand side of (71) is called the infimal convolution and is often denoted as
\[e\Box e(\mathbf{k}):=\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p }}\}. \tag{74}\]
Since \(e_{\mathbf{k}}\) is a convex function so is \(e\Box e(\mathbf{k})\)[1, Chapter 12] and it satisfies
\[(e\Box e)^{*}=e^{*}+e^{*}=2e^{*} \tag{75}\]
where \(e^{*}\) denotes the Legendre-Fenchel transform of \(e\). Hence
\[\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\}=e\Box e( \mathbf{k})=(e\Box e)^{**}(\mathbf{k})=(2e^{*})^{*}(\mathbf{k})=2e_{\mathbf{k} /2}\]
which proves (71); in the last equality we used that \((2e^{*})^{*}(\mathbf{k})=2e^{**}(\mathbf{k}/2)=2e_{\mathbf{k}/2}\), since \(e\) is convex and closed, so \(e^{**}=e\). Now (72) follows from convexity. Indeed,
\[2e_{\mathbf{p}/2}=2e_{\mathbf{p}/2+0/2}\leq e_{\mathbf{p}}.\]
If \(\mathbf{k}\mapsto e_{\mathbf{k}}\) is strictly convex, the inequality above becomes strict for \(\mathbf{k}\neq 0\) (since \(e_{0}=0\)), which proves (73). In particular, \(e_{\mathbf{k}}\) given by (11) is strictly convex, and so the dispersion relation is embedded inside the \(2\)-quasiparticle spectrum.
If \(e_{\mathbf{k}}\) is given by (53), then it is strictly convex for small \(\mathbf{k}\). Therefore, the dispersion relation is embedded inside the \(2\)-quasiparticle spectrum at least for small momenta. The same is true for the cutoff effective Friedrichs Hamiltonian \(H^{\Lambda}_{\mathrm{Fried}}\) for large enough \(\Lambda\).
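For instance, in the case of constant \(\frac{\hat{v}(\mathbf{k})}{\hat{v}(0)}\), where \(e_{k}=k\sqrt{\mu+\frac{k^{2}}{4}}\) is convex (cf. (88) below), Lemma 1 gives explicitly
\[\inf_{\mathbf{p}}\{e_{\mathbf{p}}+e_{\mathbf{k}-\mathbf{p}}\}=2e_{k/2}=k\sqrt{\mu+\tfrac{k^{2}}{16}}<k\sqrt{\mu+\tfrac{k^{2}}{4}}=e_{k},\qquad\mathbf{k}\neq 0,\]
so in this case the dispersion relation lies strictly inside the \(2\)-quasiparticle continuum for all momenta.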
The Hamiltonian \(H^{\mathrm{exc}}\) couples \(b^{*}_{\mathbf{k}}\Omega_{\mathrm{Bog}}\) with \(4\)-quasiparticle states through \(H^{\mathrm{exc}}_{3,2}\). The bottom of the \(4\)-quasiparticle spectrum lies below the dispersion relation (in fact, if it is given by (11), it is equal to \(4e_{\mathbf{k}/4}<e_{\mathbf{k}}\)). However, \(H^{\mathrm{exc}}_{3,2}\) does not couple \(b^{*}_{\mathbf{k}}\Omega_{\mathrm{Bog}}\) to all possible \(4\)-quasiparticle states with the total momentum \(\mathbf{k}\), but only to states of the form \(b^{*}_{\mathbf{p}_{1}}b^{*}_{\mathbf{p}_{2}}b^{*}_{\mathbf{p}_{3}}b^{*}_{\mathbf{k}}\Omega_{\mathrm{Bog}}\) with \(\mathbf{p}_{1}+\mathbf{p}_{2}+\mathbf{p}_{3}=0\). Their energy is
\[e_{\mathbf{k}}+e_{\mathbf{p}_{1}}+e_{\mathbf{p}_{2}}+e_{\mathbf{p}_{3}}\geq e _{\mathbf{k}}. \tag{76}\]
Thus the state \(b_{\mathbf{k}}^{*}\Omega_{\mathrm{Bog}}\) is situated at the boundary of the energy-momentum spectrum and the only coupling is through \(\mathbf{p}_{1}=\mathbf{p}_{2}=\mathbf{p}_{3}=0\). Before going to the thermodynamic limit this is excluded, because on the excited space all momenta are different from zero. Assuming that this effect survives the thermodynamic limit, we expect that the term \(H_{3,2}^{\mathrm{exc}}\) does not lead to damping and we therefore drop it from \(H_{\mathrm{Fried}}^{\Lambda}\), even though in terms of the coupling parameter \(\kappa\) this term is of the same order as \(H_{3,1}^{\mathrm{exc}}\), which we keep in our analysis.
## 7. Computing the self-energy
In the remaining part of our paper, the main goal will be to compute approximately the 3-dimensional integral (68). To do this efficiently it is important to choose a convenient coordinate system.
Let us introduce the notation \(k=|\mathbf{k}|\), \(p=|\mathbf{p}|\), \(l=|\mathbf{l}|\), where \(\mathbf{l}=\mathbf{k}-\mathbf{p}\). One could try to compute (68) using the spherical coordinates for \(\mathbf{p}\) with respect to the axis determined by \(\mathbf{k}\). This means using \(p=|\mathbf{p}|,w=\cos\theta,\phi\), so that \(\mathbf{p}=(p\sqrt{1-w^{2}}\cos\phi,p\sqrt{1-w^{2}}\sin\phi,pw)\). The self-energy in these coordinates is
\[\Sigma_{\mathbf{k}}^{\Lambda}(z)=\frac{1}{(2\pi)^{3}}\int_{0}^{\infty}\int_{- 1}^{1}\int_{0}^{2\pi}\frac{h_{\mathbf{k}}^{\Lambda}(p,w)^{2}p^{2}\,\mathrm{d}p \,\mathrm{d}w\,\mathrm{d}\phi}{(z-e_{p}-e_{l(p,w)})} \tag{77}\]
where, with abuse of notation, \(h_{\mathbf{k}}^{\Lambda}(p,w)\) is the function \(h_{\mathbf{k}}^{\Lambda}(\mathbf{p})\) in the variables \(p,w,\phi\). The variable \(\phi\) can be easily integrated out. \(h_{\mathbf{k}}^{\Lambda}(\mathbf{p})\) depends only on \(k,p,l\) and (77) can be rewritten as
\[\Sigma_{\mathbf{k}}^{\Lambda}(z)=\frac{1}{(2\pi)^{2}}\int_{0}^{\infty}\int_{- 1}^{1}\frac{(h_{k}^{\Lambda}(p,l(p,w)))^{2}p^{2}\,\mathrm{d}p\,\mathrm{d}w}{( z-e_{p}-e_{l(p,w)})},\]
The coordinates \(p,w\) are not convenient because they break the natural symmetry \(\mathbf{p}\to\mathbf{k}-\mathbf{p}\) of the system. Instead of \(p,w\) it is much better to use the variables \(p,l\). Note the constraints
\[|p-l| \leq k, \tag{78}\] \[k \leq p+l, \tag{79}\]
that follow from the triangle inequality. We have \(w=\frac{k^{2}+p^{2}-l^{2}}{2kp}\). The Jacobian is easily computed:
\[p^{2}\,\mathrm{d}p\,\mathrm{d}w=\frac{pl}{k}\,\mathrm{d}p\,\mathrm{d}l=\frac{ 1}{4k}\,\mathrm{d}p^{2}\,\mathrm{d}l^{2}. \tag{80}\]
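For the record, (80) follows by differentiating \(w=\frac{k^{2}+p^{2}-l^{2}}{2kp}\) at fixed \(p\),
\[\frac{\partial w}{\partial l}\Big|_{p}=-\frac{l}{kp}\quad\Longrightarrow\quad p^{2}\,\mathrm{d}p\,\mathrm{d}w=\frac{pl}{k}\,\mathrm{d}p\,\mathrm{d}l,\qquad\mathrm{d}p^{2}\,\mathrm{d}l^{2}=4\,pl\,\mathrm{d}p\,\mathrm{d}l.\]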
Let us make another change of variables:
\[t=p+l,\quad s=p-l;\qquad p=\frac{t+s}{2},\quad l=\frac{t-s}{2}; \tag{81}\]
\[\mathrm{d}p^{2}\,\mathrm{d}l^{2}=\frac{t^{2}-s^{2}}{2}\,\mathrm{d}t\,\mathrm{ d}s. \tag{82}\]
The limits of integration following from the constraints (78) and (79) are very easy to impose:
\[\Sigma_{\mathbf{k}}^{\Lambda}(z)=\frac{1}{(2\pi)^{2}}\int_{k}^{\Lambda}\, \mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\frac{(h_{k}^{\Lambda}(t,s))^{2}(t^{2}-s ^{2})}{8k(z-e_{\frac{t+s}{2}}-e_{\frac{t-s}{2}})}, \tag{83}\]
Another choice of variables can also be useful. If \(k\mapsto e_{k}\) is an increasing function, which is always the case for small \(k\), but also for the important case of constant \(\frac{\hat{v}(\mathbf{k})}{\hat{v}(0)}\), we can use the variables \(u:=e_{p}\) and \(w:=e_{l}\). Set
\[f(e_{k}):=\frac{\mathrm{d}k^{2}}{\mathrm{d}e_{k}^{2}}. \tag{84}\]
Thus we change the variables
\[\frac{1}{4k}\,\mathrm{d}p^{2}\,\mathrm{d}l^{2}=\frac{1}{4k}f(u)f(w)\,\mathrm{d}u^{2}\,\mathrm{d}w^{2}. \tag{85}\]
\[\Sigma^{\Lambda}_{\mathbf{k}}(z)=\frac{1}{(2\pi)^{2}}\int\frac{(h^{\Lambda}_{k} (u,w))^{2}f(u)f(w)\,\mathrm{d}u^{2}\,\mathrm{d}w^{2}}{4k(z-u-w)},\]
We then perform a further change of variable
\[x=u+w,\quad y=u-w;\qquad u=\frac{x+y}{2},\quad w=\frac{x-y}{2}; \tag{86}\]
\[\mathrm{d}u^{2}\,\mathrm{d}w^{2}=\frac{x^{2}-y^{2}}{2}\,\mathrm{d}x\,\mathrm{d }y. \tag{87}\]
Now we can write
\[\Sigma^{\Lambda}_{\mathbf{k}}(z)=\frac{1}{8\pi^{2}k}\iint\frac{(h^{\Lambda}_{ k}(x,y))^{2}f(\frac{x+y}{2})f(\frac{x-y}{2})(x^{2}-y^{2})\,\mathrm{d}y\, \mathrm{d}x}{4(z-x)},\]
where the limits of integration are somewhat more difficult to describe.
When \(\frac{\hat{v}(\mathbf{k})}{\hat{v}(0)}\) is a constant, so that
\[e_{k}=k\sqrt{\mu+\frac{k^{2}}{4}},\qquad k^{2}=2\big{(}\sqrt{e_{k}^{2}+\mu^{2} }-\mu\big{)}, \tag{88}\]
we can compute the function \(f\):
\[f(u)=\frac{1}{\sqrt{u^{2}+\mu^{2}}}. \tag{89}\]
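Indeed, differentiating the second relation in (88) with respect to \(e_{k}^{2}\),
\[f(e_{k})=\frac{\mathrm{d}k^{2}}{\mathrm{d}e_{k}^{2}}=\frac{\mathrm{d}}{\mathrm{d}e_{k}^{2}}\,2\Big(\sqrt{e_{k}^{2}+\mu^{2}}-\mu\Big)=\frac{1}{\sqrt{e_{k}^{2}+\mu^{2}}}.\]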
We also have
\[\sigma_{k}=\sqrt{\frac{\frac{k^{2}}{2}+\mu+\sqrt{\frac{k^{4}}{4}+\mu k^{2}}}{2 \sqrt{\frac{k^{4}}{4}+\mu k^{2}}}},\quad\gamma_{k}=\sqrt{\frac{\frac{k^{2}}{2 }+\mu-\sqrt{\frac{k^{4}}{4}+\mu k^{2}}}{2\sqrt{\frac{k^{4}}{4}+\mu k^{2}}}}. \tag{90}\]
## 8. Damping rate
The following theorem is the main result of this paper.
**Theorem 2**.: _Suppose that the dispersion relation is given by (11). Then \(\mathrm{Im}\Sigma^{\Lambda}_{\mathbf{k}}\) does not depend on \(\Lambda\) for large \(\Lambda\) and we have_
\[\lim_{\Lambda\to\infty}\mathrm{Im}\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0)=-c_{\mathrm{Bel}}k^{5}+O(k^{6})\qquad\text{as}\qquad k\to 0,\qquad c_{\mathrm{Bel}}=\frac{3\hat{v}(0)}{640\pi^{2}\mu}. \tag{91}\]
Proof of Theorem 2.: To prove Theorem 2 we will use the variables \(x,y\):
\[\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0)=\frac{1}{8\pi^{2}k}\iint \frac{(h^{\Lambda}_{\mathbf{k}}(x,y))^{2}(x^{2}-y^{2})\,\mathrm{d}y\,\mathrm{ d}x}{(e_{k}-x+\mathrm{i}0)\sqrt{(x+y)^{2}+4\mu^{2}}\sqrt{(x-y)^{2}+4\mu^{2}}}. \tag{92}\]
It follows from (92) and the Sochocki-Plemelj formula that
\[\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0) =\mathrm{Re}\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0)+\mathrm{i}\,\mathrm{Im}\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0),\] \[\mathrm{Re}\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0) =\frac{1}{8\pi^{2}k}\iint\frac{(h^{\Lambda}_{\mathbf{k}}(x,y))^{2}(x^{2}-y^{2})\,\mathrm{d}y\,\mathrm{d}x}{(e_{k}-x)\sqrt{(x+y)^{2}+4\mu^{2}}\sqrt{(x-y)^{2}+4\mu^{2}}} \tag{93}\] \[\mathrm{Im}\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0) =-\frac{\pi}{8\pi^{2}k}\iint\frac{(h^{\Lambda}_{\mathbf{k}}(x,y))^{2}(x^{2}-y^{2})\delta(e_{k}-x)\,\mathrm{d}y\,\mathrm{d}x}{\sqrt{(x+y)^{2}+4\mu^{2}}\sqrt{(x-y)^{2}+4\mu^{2}}}\] (94) \[=-\frac{\pi}{8\pi^{2}k}\int\frac{(h^{\Lambda}_{\mathbf{k}}(e_{k},y))^{2}(e_{k}^{2}-y^{2})\,\mathrm{d}y}{\sqrt{(e_{k}+y)^{2}+4\mu^{2}}\sqrt{(e_{k}-y)^{2}+4\mu^{2}}}. \tag{95}\]
Our starting point is the expression (95). Obviously, we first need to establish the integration limits in \(y\). Recall that \(y=e_{p}-e_{l}\) but under the additional constraint that \(e_{k}=e_{p}+e_{l}\) which comes from the constraint \(\delta(x-e_{k})\) in (94). It follows immediately that \(-e_{k}\leq y\leq e_{k}\). Thus, for \(\Lambda\) large enough, \(\operatorname{Im}\Sigma^{\Lambda}_{\mathbf{k}}(e_{k}+\mathrm{i}0)\) will not depend on \(\Lambda\).
Let us first compute \((h_{\mathbf{k}}(x,y))^{2}\). For further reference we will keep \(x\) as a variable. Recall we assume \(\hat{v}(\mathbf{k})=\hat{v}(0)\). From the definition of \(h_{\mathbf{k}}(\mathbf{p})\) we get
\[\frac{h_{\mathbf{k}}(\mathbf{p})}{2\sqrt{\mu\hat{v}(0)}} =\sigma_{k}(\sigma_{p}\sigma_{l}-\sigma_{l}\gamma_{p}-\sigma_{p} \gamma_{l})+\gamma_{k}(\sigma_{p}\gamma_{l}+\sigma_{l}\gamma_{p}-\gamma_{p} \gamma_{l}).\] \[=\frac{\sigma_{k}}{2\sqrt{uw}}\bigg{(}\sqrt{\sqrt{u^{2}+\mu^{2}} +u}\sqrt{\sqrt{w^{2}+\mu^{2}}+w}-\sqrt{\sqrt{w^{2}+\mu^{2}}+w}\sqrt{\sqrt{u^{2 }+\mu^{2}}-u}\] \[-\sqrt{\sqrt{u^{2}+\mu^{2}}+u}\sqrt{\sqrt{w^{2}+\mu^{2}}-w}\bigg{)}\] \[+\frac{\gamma_{k}}{2\sqrt{uw}}\bigg{(}\sqrt{\sqrt{u^{2}+\mu^{2}} +u}\sqrt{\sqrt{w^{2}+\mu^{2}}-w}+\sqrt{\sqrt{w^{2}+\mu^{2}}+w}\sqrt{\sqrt{u^{2 }+\mu^{2}}-u}\] \[-\sqrt{\sqrt{u^{2}+\mu^{2}}-u}\sqrt{\sqrt{w^{2}+\mu^{2}}-w}\bigg{)} \tag{96}\] \[=\frac{1}{2\sqrt{x^{2}-y^{2}}}\bigg{(}\sigma_{k}\sqrt{(A_{1}+x+y) (A_{2}+x-y))}-\gamma_{k}\sqrt{(A_{1}-x-y)(A_{2}-x+y)}\] \[+(\gamma_{k}-\sigma_{k})\sqrt{(A_{1}-x-y)(A_{2}+x-y))}+(\gamma_{k }-\sigma_{k})\sqrt{(A_{1}+x+y)(A_{2}-x+y))}\bigg{)}, \tag{97}\]
where
\[A_{1}:=A_{1}(x,y)=\sqrt{(x+y)^{2}+4\mu^{2}},\qquad A_{2}:=A_{2}(x,y)=\sqrt{(x- y)^{2}+4\mu^{2}}. \tag{98}\]
Therefore the integrand in (92) becomes
\[\frac{(h_{\mathbf{k}}(x,y))^{2}(x^{2}-y^{2})}{\sqrt{(x+y)^{2}+4 \mu^{2}}\sqrt{(x-y)^{2}+4\mu^{2}}} \tag{99}\] \[=\frac{\mu\hat{v}(0)}{A_{1}A_{2}}\bigg{(}\sigma_{k}\sqrt{(A_{1}+x +y)(A_{2}+x-y)})-\gamma_{k}\sqrt{(A_{1}-x-y)(A_{2}-x+y)}\] \[+(\gamma_{k}-\sigma_{k})\sqrt{(A_{1}-x-y)(A_{2}+x-y))}+(\gamma_{k }-\sigma_{k})\sqrt{(A_{1}+x+y)(A_{2}-x+y))}\bigg{)}^{2}.\] \[=\frac{\mu\hat{v}(0)}{A_{1}A_{2}}\bigg{(}\sigma_{k}^{2}\left(3A_{ 1}A_{2}+(x+y)A_{2}+(x-y)A_{1}-(x^{2}-y^{2})-4\mu(A_{1}+A_{2}+2x)+8\mu^{2}\right)\] \[+\gamma_{k}^{2}\left(3A_{1}A_{2}-(x+y)A_{2}-(x-y)A_{1}-(x^{2}-y^{ 2})-4\mu(A_{1}+A_{2}-2x)+8\mu^{2}\right)\] \[+2\sigma_{k}\gamma_{k}\left(4\mu A_{1}+4\mu A_{2}-2A_{1}A_{2}+2(x ^{2}-y^{2})-12\mu^{2}\right)\bigg{)}. \tag{100}\]
Thus
\[\int_{-e_{k}}^{e_{k}}\mathrm{d}y\frac{h_{\mathbf{k}}^{2}(x,y)(x^{2 }-y^{2})}{\sqrt{(x+y)^{2}+4\mu^{2}}\sqrt{(x-y)^{2}+4\mu^{2}}} \tag{101}\] \[=\mu\hat{v}(0)\int_{-e_{k}}^{e_{k}}\mathrm{d}y\bigg{(}\left(3 \sigma_{k}^{2}+3\gamma_{k}^{2}-4\sigma_{k}\gamma_{k}\right)+(\sigma_{k}^{2}- \gamma_{k}^{2})\left(\frac{x-y}{A_{2}}+\frac{x+y}{A_{1}}-\frac{8\mu x}{A_{1}A_{2 }}\right)\] \[+(-\sigma_{k}^{2}-\gamma_{k}^{2}+4\sigma_{k}\gamma_{k})\frac{x^{2 }-y^{2}}{A_{1}A_{2}}-4\mu(\sigma_{k}-\gamma_{k})^{2}\frac{A_{1}+A_{2}}{A_{1}A_{ 2}}+8\mu^{2}(\sigma_{k}^{2}+\gamma_{k}^{2}-3\sigma_{k}\gamma_{k})\frac{1}{A_{1} A_{2}}\bigg{)}. \tag{102}\]
The integrals involving \(\frac{x\pm y}{A_{\pm}}\) and \(\frac{1}{A_{\pm}}\) (where \(A_{+}=A_{1}\) and \(A_{-}=A_{2}\)) can be computed explicitly. Setting \(x=e_{k}\) this implies
\[\int_{-e_{k}}^{e_{k}}\,\mathrm{d}y\frac{e_{k}\pm y}{A_{\pm}(e_{k}, y)} =\int_{-e_{k}}^{e_{k}}\,\mathrm{d}y\left(\frac{e_{k}\pm y}{\sqrt{(e_{k}\pm y)^{2}+4 \mu^{2}}}\right)=2\sqrt{\mu^{2}+e_{k}^{2}}-2\mu, \tag{103}\] \[\int_{-e_{k}}^{e_{k}}\,\mathrm{d}y\frac{1}{A_{\pm}(e_{k},y)} =\int_{-e_{k}}^{e_{k}}\,\mathrm{d}y\left(\frac{1}{\sqrt{(e_{k}\pm y )^{2}+4\mu^{2}}}\right)=\log\left(\frac{e_{k}}{\mu}+\sqrt{1+\frac{e_{k}^{2}}{ \mu^{2}}}\right). \tag{104}\]
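For the reader's convenience, (103) and (104) follow from the elementary antiderivatives
\[\int\frac{(e_{k}\pm y)\,\mathrm{d}y}{\sqrt{(e_{k}\pm y)^{2}+4\mu^{2}}}=\pm\sqrt{(e_{k}\pm y)^{2}+4\mu^{2}},\qquad\int\frac{\mathrm{d}y}{\sqrt{(e_{k}\pm y)^{2}+4\mu^{2}}}=\pm\operatorname{arsinh}\frac{e_{k}\pm y}{2\mu},\]
evaluated between \(-e_{k}\) and \(e_{k}\), together with \(\operatorname{arsinh}t=\log\big(t+\sqrt{1+t^{2}}\big)\).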
This yields
\[\int_{-e_{k}}^{e_{k}}\,\mathrm{d}y\left(\frac{(h_{\mathbf{k}}^{ \Lambda}(x,y))^{2}(x^{2}-y^{2})}{\sqrt{(x+y)^{2}+4\mu^{2}}\sqrt{(x-y)^{2}+4\mu ^{2}}}\right) \tag{105}\] \[=\mu\hat{v}(0)\left(2\left(3\sigma_{k}^{2}+3\gamma_{k}^{2}-4 \sigma_{k}\gamma_{k}\right)e_{k}+4\sqrt{\mu^{2}+e_{k}^{2}}-4\mu-8\mu(\sigma_{k }-\gamma_{k})^{2}\log\left(\frac{e_{k}}{\mu}+\sqrt{1+\frac{e_{k}^{2}}{\mu^{2} }}\right)\right)\] \[+\mu\hat{v}(0)\int_{-e_{k}}^{e_{k}}\,\mathrm{d}y\bigg{(}\frac{-( \sigma_{k}^{2}-4\sigma_{k}\gamma_{k}+\gamma_{k}^{2})(e_{k}^{2}-y^{2})-8\mu e_{ k}+8\mu^{2}(\sigma_{k}^{2}+\gamma_{k}^{2}-3\sigma_{k}\gamma_{k})}{A_{1}A_{2}} \bigg{)}. \tag{106}\]
where two types of integrals, namely \(\int\frac{-y^{2}}{A_{1}A_{2}}\,\mathrm{d}y\) and \(\int\frac{\mathrm{d}y}{A_{1}A_{2}}\), remain to be evaluated. One of the resulting terms is
\[-\mu\hat{v}(0)\frac{\sqrt{(e_{k}/\mu)^{2}+1}-2}{e_{k}/\mu}\int_{-e_{k}}^{e_{k}} \,\mathrm{d}y\left(\frac{e_{k}^{2}-y^{2}}{A_{1}A_{2}}\right) \tag{111}\]
We expand (110) up to order \(O(e_{k}^{8})\). A tedious computation yields
\[(110)=\mu\hat{v}(0)\left(2\mu+\frac{e_{k}^{2}}{\mu}+\frac{5e_{k}^{4}}{12\mu^{3}}-\frac{41e_{k}^{6}}{120\mu^{5}}+O(e_{k}^{8})\right). \tag{113}\]
We shall now deal with the terms (111) and (112). To this end we write
\[A_{1}A_{2} =\sqrt{4\mu^{2}+(e_{k}+y)^{2}}\sqrt{4\mu^{2}+(e_{k}-y)^{2}} \tag{114}\] \[=4\mu^{2}\sqrt{1+\left(\frac{e_{k}+y}{2\mu}\right)^{2}}\sqrt{1+ \left(\frac{e_{k}-y}{2\mu}\right)^{2}}\] (115) \[=4\mu^{2}\sqrt{1+\frac{e_{k}^{2}+y^{2}}{2\mu^{2}}+\left(\frac{e_ {k}^{2}-y^{2}}{4\mu^{2}}\right)^{2}}\] (116) \[=4\mu^{2}\sqrt{1+Q_{1}}\] (117) \[=4\mu^{2}\left(1+\frac{1}{2}Q_{1}-\frac{1}{8}Q_{1}^{2}+\frac{1}{ 16}Q_{1}^{3}\right)+O(Q_{1}^{4}). \tag{118}\]
where
\[Q_{1}:=\frac{e_{k}^{2}+y^{2}}{2\mu^{2}}+\left(\frac{e_{k}^{2}-y^{2}}{4\mu^{2}} \right)^{2} \tag{119}\]
Then
\[\frac{1}{A_{1}A_{2}}=\frac{1}{4\mu^{2}(1+Q_{2})}=\frac{1}{4\mu^{2}}(1-Q_{2}+Q _{2}^{2}-Q_{2}^{3})+O(Q_{2}^{4}) \tag{120}\]
where
\[Q_{2}:=\frac{1}{2}Q_{1}-\frac{1}{8}Q_{1}^{2}+\frac{1}{16}Q_{1}^{3}. \tag{121}\]
This leads to
\[\frac{1}{A_{1}A_{2}}=\frac{1}{4\mu^{2}}-\frac{e_{k}^{2}}{16\mu^{4}}+\frac{e_{ k}^{4}}{64\mu^{6}}-\frac{e_{k}^{6}}{256\mu^{8}}-\frac{y^{2}}{16\mu^{4}}+\frac{e_ {k}^{2}y^{2}}{16\mu^{6}}-\frac{9e_{k}^{4}y^{2}}{256\mu^{8}}+\frac{y^{4}}{64\mu ^{6}}-\frac{9e_{k}^{2}y^{4}}{256\mu^{8}}-\frac{y^{6}}{256\mu^{8}}+O(e_{k}^{ \iota_{1}}y^{\iota_{2}}) \tag{122}\]
where \(\iota_{1}+\iota_{2}=7\). In turn
\[\int_{-e_{k}}^{e_{k}}\frac{1}{A_{1}A_{2}}\,\mathrm{d}y=\frac{e_{k}}{2\mu^{2}}- \frac{e_{k}^{3}}{6\mu^{4}}+\frac{19e_{k}^{5}}{240\mu^{6}}-\frac{13e_{k}^{7}}{2 80\mu^{8}}+O(e_{k}^{8}) \tag{123}\]
and
\[\int_{-e_{k}}^{e_{k}}\frac{e_{k}^{2}-y^{2}}{A_{1}A_{2}}\,\mathrm{d}y=\frac{e_ {k}^{3}}{3\mu^{2}}-\frac{e_{k}^{5}}{10\mu^{4}}+\frac{11e_{k}^{7}}{280\mu^{6}}+ O(e_{k}^{8}). \tag{124}\]
This implies
\[(111)=-\mu\hat{v}(0)\left(-\frac{e_{k}^{2}}{3\mu}+\frac{4e_{k}^{4}}{15\mu^{3}}-\frac{11e_{k}^{6}}{84\mu^{5}}\right)+O(e_{k}^{8}), \tag{125}\]
and
\[(112)=-\mu\hat{v}(0)\left(2\mu+\frac{4e_{k}^{2}}{3\mu}+\frac{3e_{k}^{4}}{20\mu^{3}}-\frac{2e_{k}^{6}}{7\mu^{5}}\right)+O(e_{k}^{8}). \tag{126}\]
Combining (125), (126) and (113) we obtain
\[-\frac{1}{8\pi k}\int_{-e_{k}}^{e_{k}}\frac{(h_{\mathbf{k}}^{\Lambda}(e_{k},y))^{2}(e_{k}^{2}-y^{2})}{\sqrt{(e_{k}+y)^{2}+4\mu^{2}}\sqrt{(e_{k}-y)^{2}+4\mu^{2}}}\,\mathrm{d}y\]
\[=-\frac{\mu\hat{v}(0)}{16\pi k}\left(\frac{5}{12}-\frac{41}{120}\right)\frac{e_{k}^{6}}{\mu^{5}}=-\frac{3\hat{v}(0)}{640\pi^{2}\mu^{4}}\frac{e_{k}^{6}}{k}. \tag{127}\]
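Finally, since \(e_{k}^{2}=k^{2}\big(\mu+\frac{k^{2}}{4}\big)\) by (88), we have
\[\frac{e_{k}^{6}}{k}=k^{5}\Big(\mu+\frac{k^{2}}{4}\Big)^{3}=\mu^{3}k^{5}+O(k^{7}),\qquad\text{so}\qquad-\frac{3\hat{v}(0)}{640\pi^{2}\mu^{4}}\,\frac{e_{k}^{6}}{k}=-\frac{3\hat{v}(0)}{640\pi^{2}\mu}\,k^{5}+O(k^{7}).\]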
This yields (91).
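As a side remark, the expansions (123) and (124) entering the above computation can be double-checked symbolically. The following is a minimal sketch (assuming SymPy is available): after rescaling \(y=e_{k}s\) the integration range becomes \([-1,1]\), and the integrand can be expanded in powers of \(e_{k}\) before integrating term by term.

```python
# Minimal symbolic sanity check of the expansions (123)-(124); e stands for e_k.
import sympy as sp

e, mu, s = sp.symbols('e mu s', positive=True)
y = e * s                                  # rescale y = e*s, so s runs over [-1, 1]
A1 = sp.sqrt((e + y)**2 + 4*mu**2)
A2 = sp.sqrt((e - y)**2 + 4*mu**2)

def expanded_integral(numerator, order=8):
    """Expand numerator/(A1*A2) in powers of e and integrate over y in [-e, e]."""
    integrand = sp.series(numerator / (A1 * A2) * e, e, 0, order).removeO()
    return sp.expand(sp.integrate(sp.expand(integrand), (s, -1, 1)))

print(expanded_integral(1))            # compare with the right-hand side of (123)
print(expanded_integral(e**2 - y**2))  # compare with the right-hand side of (124)
```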
## 9. Renormalization of the full self-energy
In this section we will try to make sense of the real part of the energy shift. We will see that it is much more problematic. Actually, our result will be negative: The Fermi Golden Rule starting from the Bogoliubov approximation does not allow us to compute the energy shift of the dispersion relation.
We start with a seemingly positive result, which may suggest that one can hope for a removal of the ultraviolet cutoff in the self-energy:
**Theorem 3**.: _For \(\mathbf{k}\neq 0\), the cutoff self-energy at \(z=0\), that is \(\Sigma^{\Lambda}_{\mathbf{k}}(0)\), is finite. Moreover, for \(\mathrm{Im}z>0\) there exists the limit_
\[\tilde{\Sigma}_{\mathbf{k}}(z):=\lim_{\Lambda\to\infty}\big{(}\Sigma^{\Lambda }_{\mathbf{k}}(z)-\Sigma^{\Lambda}_{\mathbf{k}}(0)\big{)}. \tag{128}\]
_One can also take the limit of (128) on the real line:_
\[\tilde{\Sigma}_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0):=\lim_{\Lambda\to \infty}\big{(}\Sigma^{\Lambda}_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)- \Sigma^{\Lambda}_{\mathbf{k}}(0)\big{)}=\lim_{\varepsilon\searrow 0}\tilde{ \Sigma}_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}\varepsilon). \tag{129}\]
What is the physical meaning of \(\tilde{\Sigma}_{\mathbf{k}}(z)\) and \(\tilde{\Sigma}_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)\)? Probably none. The counterterm \(\Sigma^{\Lambda}_{\mathbf{k}}(0)\) depends on \(k\). We conclude that the quantity \(\mathrm{Re}\Sigma^{\mathrm{ren}}_{\mathbf{k}}(e_{\mathbf{k}}+\mathrm{i}0)\) probably has little to do with the real energy shift, as we do not see how one can justify that we are using the "right" counterterm. Indeed, in principle, one could add to this counterterm an arbitrary function of \(k\).
If one could find a \(k\)-independent counterterm \(c^{\Lambda}\) such that
\[\Sigma^{\mathrm{ren}}_{\mathbf{k}}(z):=\lim_{\Lambda\to\infty}\big{(}\Sigma^{ \Lambda}_{\mathbf{k}}(z)-c^{\Lambda}\big{)} \tag{130}\]
exists, then imposing \(\Sigma^{\mathrm{ren}}_{0}(0+\mathrm{i}0)=0\) one could hope that \(\Sigma^{\mathrm{ren}}_{\mathbf{k}}(e_{k}+\mathrm{i}0)\) yields the real part of the energy shift. Unfortunately, the next theorem excludes this possibility.
**Theorem 4**.: _We have_
\[\lim_{k\to 0}\Sigma^{\Lambda}_{\mathbf{k}}(0)=-\infty. \tag{131}\]
Proof of Theorem 3.: In this section we will use the variables \(t:=p+l\) and \(s:=p-l\) for integration. Recall from (83) that in these variables
\[\Sigma^{\Lambda}_{\mathbf{k}}(z)=\frac{1}{(2\pi)^{2}}\int_{k}^{\Lambda}\, \mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\frac{(h_{k}^{\Lambda}(p,l))^{2}pl}{8k(z- e_{p}-e_{l})}, \tag{132}\]
Hence,
\[\Sigma^{\Lambda}_{k}(0)=-\frac{1}{(2\pi)^{2}}\int_{k}^{\Lambda}\,\mathrm{d}t \int_{-k}^{k}\,\mathrm{d}s\frac{h_{k}^{2}(p,l)pl}{8k(e_{p}+e_{l})}. \tag{133}\]
Note that, since \(e_{p}=p\sqrt{\mu+\frac{p^{2}}{4}}\geq\sqrt{\mu}\,p\), we have, with \(c=\sqrt{\mu}>0\),
\[e_{p}+e_{l}\geq c(p+l)=ct. \tag{134}\]
Let \(k\neq 0\). Using (134) we see that (133) is an integral of a continuous function over a compact region, hence finite.
Subtracting (133) from (132) we obtain
\[\Sigma^{\Lambda}_{k}(z)-\Sigma^{\Lambda}_{k}(0)=\frac{1}{(2\pi)^{2}}\int_{k}^{ \Lambda}\,\mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\frac{zh_{k}^{2}(p,l)pl}{8k(z- e_{p}-e_{l})(e_{p}+e_{l})}, \tag{135}\]
For small \(t\) the integrand is bounded, using again (134). For large \(t\) we have \(p\simeq l\simeq\frac{t}{2}\), hence \(e_{p}\simeq e_{l}\simeq\frac{t^{2}}{8}\). Moreover, \(h_{k}(p,l)\) is bounded. Therefore, the integrand of (135) behaves as \(t^{-2}\). Hence it is integrable for large \(t\) and we can take the limit \(\Lambda\to\infty\), obtaining
\[\Sigma_{k}^{\mathrm{ren}}(z): =\lim_{\Lambda\to\infty}\big{(}\Sigma_{k}^{\Lambda}(z)-\Sigma_{k }^{\Lambda}(0)\big{)} \tag{136}\] \[=\frac{1}{(2\pi)^{2}}\int_{k}^{\infty}\,\mathrm{d}t\int_{-k}^{k} \,\mathrm{d}s\frac{zh_{k}^{2}(p,l)pl}{8k(z-e_{p}-e_{l})(e_{p}+e_{l})}. \tag{137}\]
This ends the proof of the theorem.
Before we show Theorem 4 we prove some lemmas.
**Lemma 5**.: _For small \(p,l\), we have_
\[\frac{e_{\frac{t}{2}}}{e_{p}+e_{l}}-\frac{1}{2} =O(s^{2}), \tag{138}\] \[\frac{pl}{e_{p}e_{l}}-\frac{t^{2}}{4e_{\frac{t}{2}}^{2}} =O(s^{2}),\] (139) \[\sigma_{p}\sigma_{l}\sqrt{e_{p}e_{l}}-\sigma_{\frac{t}{2}}^{2}e_ {\frac{t}{2}} =O(s^{2}),\] (140) \[\gamma_{p}\gamma_{l}\sqrt{e_{p}e_{l}}-\gamma_{\frac{t}{2}}^{2}e_ {\frac{t}{2}} =O(s^{2}). \tag{141}\]
Proof.: We can assume that \(s\geq 0\).
\[e_{p}^{\prime}=\big{(}\tfrac{p^{2}}{2}+\mu\big{)}\big{(}\tfrac{p^{2}}{4}+\mu \big{)}^{-\frac{1}{2}},\quad e_{p}^{\prime\prime}=p\big{(}\tfrac{p^{2}}{8}+ \tfrac{3\mu}{4}\big{)}\big{(}\tfrac{p^{2}}{4}+\mu\big{)}^{-\frac{3}{2}}=O(p). \tag{142}\]
Therefore, using the identity \(f(a+\frac{s}{2})+f(a-\frac{s}{2})-2f(a)=\int_{-\frac{s}{2}}^{\frac{s}{2}}\big(\frac{s}{2}-|v|\big)f^{\prime\prime}(a+v)\,\mathrm{d}v\) with \(f=e\) and \(a=\frac{t}{2}\),
\[2e_{\frac{t}{2}}-e_{p}-e_{l}=-\int_{-\frac{s}{2}}^{\frac{s}{2}}\big{(}\tfrac{s}{2}-|v|\big{)}e_{\frac{t}{2}+v}^{\prime\prime}\,\mathrm{d}v=O(ts^{2}),\]
and hence
\[\frac{e_{\frac{t}{2}}}{e_{p}+e_{l}}-\frac{1}{2}=\frac{2e_{\frac{t}{2}}-e_{p}- e_{l}}{2(e_{p}+e_{l})}\]
is \(O(s^{2})\), which proves (138).
Next, set \(f(p):=\frac{p}{e_{p}}\). We have
\[\frac{\mathrm{d}}{\mathrm{d}p}f(p)=\frac{-2p}{(p^{2}+4\mu)^{\frac{3}{2}}}=O(p),\qquad\frac{\mathrm{d}^{2}}{\mathrm{d}p^{2}}f(p)=\frac{4(p^{2}-2\mu)}{(p^{2} +4\mu)^{\frac{5}{2}}}=O(1). \tag{143}\]
Hence
\[\frac{pl}{e_{p}e_{l}}-\frac{t^{2}}{4e_{\frac{t}{2}}^{2}}=f(p)f(l)-f\big{(}\tfrac{t}{2}\big{)}^{2}, \tag{144}\]
which is \(O(s^{2})\) by (143); this proves (139).
We check that the 0th, 1st and 2nd derivatives of
\[\sigma_{p}\sqrt{e_{p}}=\frac{1}{\sqrt{2}}\sqrt{\tfrac{p^{2}}{2}+\mu+\sqrt{ \tfrac{p^{4}}{4}+\mu p^{2}}}, \tag{145}\]
\[\gamma_{p}\sqrt{e_{p}}=\frac{1}{\sqrt{2}}\sqrt{\tfrac{p^{2}}{2}+\mu-\sqrt{ \tfrac{p^{4}}{4}+\mu p^{2}}} \tag{146}\]
are bounded. Then we argue as in (144), proving (140) and (141).
**Lemma 6**.: \[\lim_{k\to 0}\int_{k}^{\Lambda}\,\mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\frac{( \sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})^{2}pl}{8k(e_{p}+e_{l})}=\int_{0}^{ \Lambda}\,\mathrm{d}t\frac{t^{2}}{64e_{\frac{t}{2}}},\] (147)
_where the right hand side is a finite positive number._
Proof.: We have
\[\frac{(\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})^{2}pl}{8k(e_{p} +e_{l})}-\frac{t^{2}}{8\cdot 8ke_{\frac{t}{2}}} \tag{148}\] \[= \frac{\big{(}(\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})\sqrt{e_ {p}e_{l}}+e_{\frac{t}{2}}\big{)}pl}{8k(e_{p}+e_{l})e_{p}e_{l}}\Big{(}(\sigma_{ p}\sigma_{l}-\gamma_{p}\gamma_{l})\sqrt{e_{p}e_{l}}-e_{\frac{t}{2}}\big{)}\Big{)}\] (149) \[+\frac{e_{\frac{t}{2}}^{2}}{8k(e_{p}+e_{l})}\Big{(}\frac{pl}{e_ {p}e_{l}}-\frac{t^{2}}{4e_{\frac{t}{2}}^{2}}\Big{)}\] (150) \[+\frac{t^{2}}{32ke_{\frac{t}{2}}}\Big{(}\frac{e_{\frac{t}{2}}}{e _{p}+e_{l}}-\frac{1}{2}\Big{)}. \tag{151}\]
By Lemma 5 the terms in the big brackets on the right of (149), (150) and (151) are \(O(s^{2})\). The terms in (150), (151) on the left are all \(\frac{1}{k}O(t)\). The most singular in \(t\) term is the one on the left of (149) and it is of order \(\frac{1}{k}O(t^{-1})\). Therefore,
\[\int_{k}^{\Lambda}\,\mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\Bigg{(} \frac{(\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})^{2}pl}{8k(e_{p}+e_{l})}- \frac{t^{2}}{64e_{\frac{t}{2}}}\Bigg{)} \tag{152}\] \[= \int_{k}^{\Lambda}\,\mathrm{d}t\int_{-k}^{k}\,\mathrm{d}sO(t^{-1 })\frac{O(s^{2})}{k}=\int_{k}^{\Lambda}\,\mathrm{d}tO(t^{-1}k^{2})=O(k^{2}\ln k )\to 0. \tag{153}\]
Proof of Theorem 4.: Recall (61). We have
\[\frac{h_{\mathbf{k}}(\mathbf{p})}{2\sqrt{\mu\hat{v}(0)}}=\frac{1}{2}(\sigma_ {k}+\gamma_{k})(\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})+\frac{1}{2}(\sigma_ {k}-\gamma_{k})(\sigma_{p}\sigma_{l}+\gamma_{p}\gamma_{l}-2\sigma_{p}\gamma_{l }-2\gamma_{p}\sigma_{l}). \tag{154}\]
Thus, using (83), we obtain
\[-\frac{(2\pi)^{2}}{\mu\hat{v}(0)}\Sigma_{k}^{\Lambda}(0) \tag{155}\] \[= (\sigma_{k}+\gamma_{k})^{2}\int_{k}^{\Lambda}\,\mathrm{d}t\int_{ -k}^{k}\,\mathrm{d}s\frac{(\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})^{2}pl}{ 2k(e_{p}+e_{l})}\] (156) \[+2\int_{k}^{\Lambda}\,\mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\frac{ (\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})(\sigma_{p}\sigma_{l}+\gamma_{p} \gamma_{l}-2\sigma_{p}\gamma_{l}-2\gamma_{p}\sigma_{l})pl}{2k(e_{p}+e_{l})}\] (157) \[+(\sigma_{k}-\gamma_{k})^{2}\int_{k}^{\Lambda}\,\mathrm{d}t\int_{ -k}^{k}\,\mathrm{d}s\frac{(\sigma_{p}\sigma_{l}+\gamma_{p}\gamma_{l}-2\sigma_ {p}\gamma_{l}-2\gamma_{p}\sigma_{l})^{2}pl}{2k(e_{p}+e_{l})} \tag{158}\]
where we used that \(\sigma_{k}^{2}-\gamma_{k}^{2}=1\). Since \(\Lambda\) is fixed we are only interested in the small \(t\) region. Since \(k\) is small too, this implies also \(p\) and \(l\) are small. For such we have
\[(\sigma_{k}+\gamma_{k})^{2} \geq Ck^{-1},\quad C>0 \tag{159}\] \[(\sigma_{k}-\gamma_{k})^{2} =O(k),\] (160) \[(\sigma_{p}\sigma_{l}-\gamma_{p}\gamma_{l})\sqrt{pl} =O(p)+O(l)=O(t),\] (161) \[(\sigma_{p}\sigma_{l}+\gamma_{p}\gamma_{l}-2\sigma_{p}\gamma_{l}- 2\gamma_{p}\sigma_{l})\sqrt{pl} =O(1), \tag{162}\]
\[\frac{1}{e_{p}+e_{l}}=O(t^{-1}). \tag{163}\]
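The first two estimates can be read off from the explicit formulas (90): since \(\big(\frac{k^{2}}{2}+\mu\big)^{2}-\big(\frac{k^{4}}{4}+\mu k^{2}\big)=\mu^{2}\), one has \(2\sigma_{k}\gamma_{k}=\frac{\mu}{e_{k}}\) with \(e_{k}=\sqrt{\frac{k^{4}}{4}+\mu k^{2}}\), hence
\[(\sigma_{k}+\gamma_{k})^{2}=\frac{\frac{k^{2}}{2}+2\mu}{e_{k}}=\frac{2\sqrt{\mu}}{k}\big(1+O(k)\big),\qquad(\sigma_{k}-\gamma_{k})^{2}=\frac{k^{2}}{2e_{k}}=\frac{k}{2\sqrt{\mu}}\big(1+O(k)\big),\]
which give (159) and (160) for small \(k\). The remaining estimates follow in the same way from (90) and from \(e_{p}\geq\sqrt{\mu}\,p\).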
By Lemma 6 and (159),
\[|(156)|\geq C_{1}k^{-1}\to+\infty. \tag{164}\]
By (161), (162) and (163),
\[|(157)|\leq C\int_{k}^{\Lambda}\,\mathrm{d}t\int_{-k}^{k}\mathrm{d}s\frac{1}{k }\to C_{\Lambda}\qquad\text{as}\;\;k\to 0. \tag{165}\]
Here \(C_{\Lambda}\) is a constant depending on \(\Lambda\) (which is fixed). By (160), (162) and (163),
\[|(158)|\leq Ck\int_{k}^{\Lambda}\,\mathrm{d}t\int_{-k}^{k}\,\mathrm{d}s\frac{1 }{kt}\leq Ck|\ln(k)|\to 0, \tag{166}\]
Hence (155) converges to \(+\infty\) as \(k\to 0\), i.e., \(\Sigma_{k}^{\Lambda}(0)\to-\infty\), which proves (131).
|
2308.15281
|
Back to the Future: From Microservice to Monolith
|
Recently the trend of companies switching from microservice back to monolith
has increased, leading to intense debate in the industry. We conduct a
multivocal literature review, to investigate reasons for the phenomenon and key
aspects to pay attention to during the switching back and analyze the opinions
of other practitioners. The results pave the way for further research and
provide guidance for industrial companies switching from microservice back to
monolith.
|
Ruoyu Su, Xiaozhou Li, Davide Taibi
|
2023-08-29T13:12:23Z
|
http://arxiv.org/abs/2308.15281v1
|
# Back to the Future: From Microservice to Monolith
###### Abstract
Recently the trend of companies switching from microservice back to monolith has increased, leading to intense debate in the industry. We conduct a multivocal literature review, to investigate reasons for the phenomenon and key aspects to pay attention to during the switching back and analyze the opinions of other practitioners. The results pave the way for further research and provide guidance for industrial companies switching from microservice back to monolith.
**Keywords.**_Microservice \(\cdot\) Monolith \(\cdot\) Multivocal literature review \(\cdot\) Practitioner \(\cdot\) Trend_
## 1 Introduction
Microservice has become an important style of architecture due to its decomposable and decentralized nature [8]. In recent years, microservice has become increasingly popular, especially in industry [6]. Big companies like Netflix, Amazon and Spotify have also adopted microservice architecture and more and more companies are following this trend and migrating their systems to microservice [7]. They want to utilize the benefits of microservice, such as independent development, deployment and scaling to help the system solve the problem at hand, improve the quality of the system or facilitate software maintenance [3, 9].
However, while several companies have made significant improvements in velocity and team independence, others did not achieve the benefits expected after migrating to microservice. With the increasing number of companies migrating from monolith to microservice, the drawbacks of microservice architectures are enhanced [4, 5].
Recently, there has been a trend towards switching from microservice back to monolith. One example is Amazon Prime Video, which is one of the world's largest streaming services, serving millions of customers worldwide. It claims that the switch from a distributed microservice architecture to a monolithic application helps achieve greater scale, resilience and lower costs [2]. This report caused a heated debate among practitioners; after all, even big companies like Amazon have made rollbacks from microservice.
This paper aims to investigate the cases that switch from microservice back to the monolith with a multivocal literature review. Based on our goals, we define the following research questions: _RQ\({}_{1}\) What are the reasons for switching back to monolith? RQ\({}_{2}\) What are the key aspects to pay attention to during the switching back? RQ\({}_{3}\) What are the opinions of the other practitioners regarding such "switch-back"?_.
The results show that there are four cases of companies switching from microservice back to monolith: Istio 1, Amazon Prime Video2, Segment 3 and InVision 4. The five main reasons for switching back to monolith are: cost, complexity, scalability, performance and organization. During the process, there are six key aspects to be aware of: (1) stop developing more services, (2) consolidate and test paths, (3) unify data storage, (4) implement the message bus principle, (5) give up diverse techniques and (6) learn to use modular design principles. Opinions of other practitioners are mixed, but most still believe that the decision to switch back to monolith requires careful consideration of the actual system situation and principles.
Footnote 1: [https://istio.io/](https://istio.io/)
Footnote 2: [https://www.primevideo.com/offers/nonprimehomepage/ref=dv_web_force_root](https://www.primevideo.com/offers/nonprimehomepage/ref=dv_web_force_root)
Footnote 3: [https://segment.com/](https://segment.com/)
Footnote 4: [https://www.invisionapp.com/](https://www.invisionapp.com/)
## 2 Methodology
Here we aim to understand the state of the arts regarding the methods, techniques, and tools facilitating the shift from microservice back to the monolith, as well as the practitioners' opinions and advice towards such practices. To such an end, we conducted a multivocal literature review (MLR) based on the guidelines defined by [1]. An MLR is a combination of two parts, including 1) a Systematic Literature Review (SLR) on the academic literature (white) published in journals or conferences, and 2) that on the grey literature, e.g., blog posts, social media posts, and videos [1]. Herein, we used the search query _(microservice* OR micro-service* OR "micro service*") AND monolith* AND (back OR return* OR refactor* OR rearchitect* OR migrant* OR re-architect*")_ in both white and grey literature search. From the search results, we aimed to select the articles (white and grey) that propose _change back from microservice to monolith_ and provide factual evidence and/or practical advice related to real industrial cases. By following the traditional SLR process, we obtained only one academic paper from four sources (Scopus, IEEE, ACM and Web of Science). Furthermore, by searching on Google, Reddit, Quora and Stack Overflow, we obtained 19 useful articles and 9 extra from snowballing 5. We extracted the data to answer the RQs by adapting and merging the categories provided by the selected articles.
Footnote 5: All selected articles are listed in Appendix which is saved in Arxiv.org. The link will be shared when the paper is accepted.
## 3 Results
According to the review, there are four cases that switch from microservice back to monolith: Istio [1], Amazon Prime Video [1], Segment [2] and InVision [1]. Among them, the case of Amazon Prime Video is the most discussed with nine articles. These sources were first published in 2018 and did not attract much attention at first. With the case of Segment in 2020, some discussion among practitioners was generated. The case of Amazon Prime Video in 2023 brought the heated discussion of switching back to monolith.
### RQ1: What are the reasons for switching back to monolith?
From the review, we identified five main reasons that cases switch back to monolith.
**Cost**. Cost is the most common reason why companies switch from microservice back to monolith. In Istio, marginal costs and operational costs are high due to the microservice architecture [1][2][3][4]. Amazon Prime Video has the most serious cost problem of the four cases. Its use of serverless components resulted in an overall cost of the building blocks that did not allow large-scale adoption of the solution, and the way video frames (images) are passed between the different components requires a large number of Tier-1 calls to S3 buckets, which is expensive [2][3][4][5][6][7][8][9][10][11][12][13]. The cost-benefit of Prime Video's switch back to monolith is also the most significant. According to official reports, moving the service from microservice back to a monolith reduced the infrastructure cost by over 90% [2]. Segment also has a cost problem: the operational costs of supporting microservices are unaffordable for it [1][10]. Finally, in the case of InVision, it makes the comment that "Microservices Also Have a Dollars-And-Cents Cost" [1][11]. Each service runs on a server, talks to a database, reports metrics, and generates log entries, all of which have a very real dollars-and-cents cost. Therefore, it is evident that microservices incur expenses, particularly when accounting for the necessity of maintaining redundancy for establishing a highly available system [1].
**Complexity**. Complexity is one of the most important reasons why companies switch from microservice back to monolith. In the case of Istio, microservice architecture leads to greater complexity. Firstly, different planes in Istio are written in different programming languages [1]. Secondly, different teams are responsible for different services separately, but the reality is that this approach makes for increased complexity and has a bad impact on user usability, rather than making it simpler for the development team to manage [1][10]. In addition, all components in Istio's control plane are always released in the same version at the same time, while the functionality of the decoupled version of microservices complicates it [1]. Finally, Istio has only limited isolation, making full isolation of microservices difficult [1][1][10]. However, some other factors lead to greater complexity in the case Segment except the nature of the microservice architecture itself: managing multiple repositories and divergence of shared libraries [1][1][11][12]. Initially, each destination was divided into separate services but the same repository, but this caused frustration and inefficiency. A single broken test affected all destinations, and deploying changes required fixing unrelated tests [1][10][11]. Breaking out the code for each destination into separate repositories increased complexity and maintenance effort. To ease the burden of developing and maintaining these codebases, it created shared libraries to make common transforms and functionality [1]. The complexity of the problem InVision encountered is very similar to Segment. As time went on, InVision had more repositories, more programming languages, more databases, more monitoring dashboards, etc. that became too much for the development team to bear [1][10][11].
**Scalability**. The advantage of microservice scalability becomes a disadvantage in these cases. The control plane costs in Istio are mainly determined by the individual feature (XDS) [1][1][10]. In contrast, all other functions have marginal costs and the value of isolation is very small [10][11]. Amazon Prime Video ran into a scaling bottleneck: due to the microservices architecture, it hit a hard scaling limit at around 5% of the expected load, resulting from the orchestration management implemented using AWS Step Functions [2][1][10][11][12]. Unlike Prime Video, Segment's scaling challenges stem from its auto-scaling configuration. As the number of destinations grows, managing and scaling each microservice becomes a significant operational overhead [1]. Each service has a specific load pattern that requires manual scaling to cope with unexpected spikes [1]. Tuning the auto-scaling configuration becomes more challenging due to the different resource requirements of each service [1][10].
**Performance**. Performance is the most important reason for Segment's switch back to monolith. The most serious issue is head-of-line blocking: the microservice architecture causes head-of-line blocking, which delays all destinations [1][1]. This affects the timeliness of event delivery and also customer satisfaction [1]. In addition, high complexity is also a factor that causes system performance to decline. The high complexity caused by microservice puts development teams in a difficult situation where the benefits of modularity and autonomy become burdensome, slowing them down and reducing productivity, which leads to poorer performance [1].
**Organization**. The suboptimal management of teams is also a headache, especially in Istio and InVision. In the case of Istio, although microservices allow different teams to manage services individually, in practice this creates a mess for development teams who want simpler management [1]. In contrast, InVision has the most serious people problem. InVision had a legacy team with fewer people but more repositories, databases, programming languages, etc. [1]. As time went on, the benefits of Conway's Law became a burden on the legacy team because of this unsuitable 'size', so it was necessary to merge microservices back into a monolith [1][1].
### RQ2: What are the key aspects to pay attention to during the switching back?
We analyzed six key aspects during the process that switching from microservice back to monolith.
**Stop developing more services**. This means new services cannot be introduced. Switching from microservice back to monolith requires an existing microservice to be used as the "center" of the future monolith to host the new functionality [1]. All other services will eventually be merged into this center. Continuing to create new services during or after the switch back would only leave the whole system in a messy, half-migrated state.
**Consolidate and test paths**. Microservices sometimes have a single, coherent flow between multiple systems. Consolidation paths are necessary when systems are merged from microservices to a monolith [1]. After the merger, it is also important to test the path to ensure that the new system runs smoothly. This process ensures that the new monolithic architecture works properly and meets all requirements [1].
**Unify data storage**. Shanea proposed there are two main options: move the data to a single database or keep the data separate [1]. The former can reduce costs and improve performance while reducing the complexity of the system. The latter can help maintain the autonomy and isolation of separate components, while still moving towards a more homogeneous structure [1]. The choice of data storage is critical and development teams need to choose carefully based on actual requirements.
**Implement the message bus principle**. Implementing a message bus, like Kafka, can be a layer of indirection while transitioning [1]. This strategy enables a gradual consolidation of microservice into monolith without any interruptions to the current system. By utilizing a message bus, smooth communication between various components is ensured, facilitating the decomposition and recombination of services as required.
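As a purely illustrative sketch (not drawn from the reviewed cases), the consolidated monolith can simply subscribe to the topics that the retired services used to consume, so upstream producers keep publishing unchanged while the services are folded in one by one. The snippet below assumes the kafka-python client and hypothetical topic and handler names.

```python
# Minimal sketch: the monolith consumes the topics that the old microservices
# listened to, so upstream producers keep working unchanged while services are
# merged. Topic names and handler bodies are hypothetical placeholders.
import json
from kafka import KafkaConsumer

TOPICS = ["billing-events", "email-events"]  # formerly handled by separate services

def handle_billing(event):   # logic merged in from the old billing service
    ...

def handle_email(event):     # logic merged in from the old email service
    ...

HANDLERS = {"billing-events": handle_billing, "email-events": handle_email}

consumer = KafkaConsumer(
    *TOPICS,
    bootstrap_servers="localhost:9092",
    group_id="monolith",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    HANDLERS[message.topic](message.value)  # dispatch each event to the merged module
```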
**Give up diverse techniques**. A defining feature of microservices is that different services can use different languages, frameworks, etc. After the system has switched back to a monolith, this diversity needs to be given up. For example, most systems should use no more than two back-end languages at any one time [1].
**Learn to use modular design principles**. We need to maintain a modular design when switching back to a monolith. Modular design organizes the code into distinct modules with clear boundaries, which promotes separation of concerns and maintainability [1]. It also lets systems retain the flexibility and modularity benefits of microservices together with the simplicity and ease of use of monoliths [1].
### RQ3: What are the opinions of the other practitioners regarding such "switch-back"?
Other practitioners have mixed opinions about this 'switch-back' behavior. Some argue that it is the right move: they think microservices are not the "utopian application architecture" [1], and David Heinemeier Hansson scoffs that microservices are a zombie architecture [1]. Monoliths do have an advantage over microservices because they are easier to code, scale, deploy, test, and handle cross-domain issues with [1][1]. Others still disagree, believing that microservices remain one of the most popular architectures. Angel posts that monoliths are not the solution, and that organizations need to think better and proactively support communication channels that can fill the gaps between teams [1]. However, most practitioners still believe that the decision to switch back to a monolithic architecture must consider the actual system situation and principles. Such a switch back requires an assessment of whether a monolith is really the best fit for the company's team size, structure, skills, and operational capabilities [1][1]. Moreover, most of the disadvantages of microservices are well known [1], so Itiel believes that the recommended architecture depends on the type of project [1].
## 4 Conclusions
There are ongoing discussions among practitioners about switching from microservices back to a monolith, especially now that some companies have already taken action. Though it is still too early to claim it as a trend, the practitioners' opinions are certainly worth noticing. In this work, we performed a preliminary investigation into the reasons companies decide to switch back to a monolith and the key aspects to pay attention to during the process. At the same time, we analyzed the opinions of other practitioners regarding this trend. By systematically analyzing 29 white and grey literature sources in the field, our findings reveal that cost is the most important reason why companies switch from microservices back to a monolith. Furthermore, complexity, scalability, performance and organization are also main reasons for this trend. For the switching-back process itself, we summarized six key aspects that need attention: (1) stop developing more services, (2) consolidate and test paths, (3) unify data storage, (4) implement the message bus principle, (5) give up diverse techniques and (6) learn to use modular design principles. The results show that practitioners have started seriously considering the benefits of and motivation for switching back. Though academic studies and industrial applications of microservices are obviously in their prime, the pains of microservices and the benefits of adopting a monolith can still provide insights into their improvement. In future studies, we shall further investigate the in-depth opinions of the industry on this topic via surveys and interviews. We shall also conduct comparative case studies on the performance of microservice and reverted monolith systems.
|
2306.12042
|
Block-Wise Index Modulation and Receiver Design for High-Mobility OTFS
Communications
|
As a promising technique for high-mobility wireless communications,
orthogonal time frequency space (OTFS) has been proved to enjoy excellent
advantages with respect to traditional orthogonal frequency division
multiplexing (OFDM). Although multiple studies have considered index modulation
(IM) based OTFS (IM-OTFS) schemes to further improve system performance, a
challenging and open problem is the development of effective IM schemes and
efficient receivers for practical OTFS systems that must operate in the
presence of channel delays and Doppler shifts. In this paper, we propose two
novel block-wise IM schemes for OTFS systems, named delay-IM with OTFS
(DeIM-OTFS) and Doppler-IM with OTFS (DoIM-OTFS), where a block of
delay/Doppler resource bins are activated simultaneously. Based on a maximum
likelihood (ML) detector, we analyze upper bounds on the average bit error
rates for the proposed DeIM-OTFS and DoIM-OTFS schemes, and verify their
performance advantages over the existing IM-OTFS systems. We also develop a
multi-layer joint symbol and activation pattern detection (MLJSAPD) algorithm
and a customized message passing detection (CMPD) algorithm for our proposed
DeIM-OTFS and DoIM-OTFS systems with low complexity. Simulation results
demonstrate that our proposed MLJSAPD and CMPD algorithms can achieve desired
performance with robustness to the imperfect channel state information (CSI).
|
Mi Qian, Fei Ji, Yao Ge, Miaowen Wen, Xiang Cheng, H. Vincent Poor
|
2023-06-21T06:18:40Z
|
http://arxiv.org/abs/2306.12042v1
|
# Block-Wise Index Modulation and Receiver Design for High-Mobility OTFS Communications
###### Abstract
As a promising technique for high-mobility wireless communications, orthogonal time frequency space (OTFS) has been proved to enjoy excellent advantages with respect to traditional orthogonal frequency division multiplexing (OFDM). Although multiple studies have considered index modulation (IM) based OTFS (IM-OTFS) schemes to further improve system performance, a challenging and open problem is the development of effective IM schemes and efficient receivers for practical OTFS systems that must operate in the presence of channel delays and Doppler shifts. In this paper, we propose two novel block-wise IM schemes for OTFS systems, named delay-IM with OTFS (DeIM-OTFS) and Doppler-IM with OTFS (DoIM-OTFS), where a block of delay/Doppler resource bins are activated simultaneously. Based on a maximum likelihood (ML) detector, we analyze upper bounds on the average bit error rates for the proposed DeIM-OTFS and DoIM-OTFS schemes, and verify their performance advantages over the existing IM-OTFS systems. We also develop a multi-layer joint symbol and activation pattern detection (MLJSAPD) algorithm and a customized message passing detection (CMPD) algorithm for our proposed DeIM-OTFS and DoIM-OTFS systems with low complexity. Simulation results demonstrate that our proposed MLJSAPD and CMPD algorithms can achieve desired performance with robustness to the imperfect channel state information (CSI).
OTFS modulation, index modulation, layered message passing algorithm, performance analysis.
## I Introduction
Nowadays, a large number of wireless applications such as communication with high-speed trains and unmanned autonomous vehicles are emerging. Accordingly, it is important to have high data rate and low latency communications to satisfy the fast-growing requirements expected in the future. Orthogonal frequency division multiplexing (OFDM) modulation is prevalent in today's wireless systems as it is able to provide high spectral efficiency and is easy to implement [1, 2, 3]. However, for time-varying channels with large Doppler spread, OFDM can suffer significant performance degradation due to the loss of orthogonality or inter-carrier-interference (ICI).
To cope with high-mobility scenarios, a new modulation scheme referred to as orthogonal time frequency space (OTFS) has been proposed [4, 5, 6], which can achieve significant performance improvement over OFDM modulation. OTFS can exploit the diversity gain from both the delay and Doppler dimensions of a mobile wireless channel since all transmitted symbols can be multiplexed in the delay-Doppler domain and spread over the time-frequency domain [7, 8, 9, 10, 11, 12]. Furthermore, OTFS can convert the time-varying channel into a two-dimensional (2D) quasi-time-invariant channel in the delay-Doppler domain, which significantly reduces the complexity of channel estimation [13, 14, 15] and symbol detection [16, 17, 18, 19, 20, 21, 22] at the receiver. Attracted by its advantages, a number of studies of OTFS have examined it in concert with non-orthogonal multiple access (NOMA) [23], millimeter wave (mmWave) communication systems [24], and integrated sensing and communication [25]. In [23], the authors investigated an OTFS-based NOMA configuration in which each group of co-channel mobile and stationary users is modulated by OTFS. The work in [24] addressed the effect of oscillator phase noise on the performance of mmWave OTFS systems, where oscillator phase noise and Doppler shifts are typically high. The authors of [25] proposed a novel integrated sensing and communication-assisted OTFS transmission scheme in vehicle-to-infrastructure scenarios, which reduces the hardware cost as well as the demand on spectral resources.
Index modulation (IM), which enjoys high spectral and energy efficiency, is a promising modulation technique for next generation wireless networks [26, 27]. In IM schemes, information bits are transmitted not only by \(M\)-ary signal constellations but also by the indices of transmission entities. Many kinds of transmission entities, such as antennas [28], OFDM subcarriers [29, 30] and frequency slots [31], can be used for carrying index bits without extra energy consumption.
Recognizing the superiority of IM, index modulation based orthogonal time frequency space (IM-OTFS) [32] has been recently proposed to improve the bit error rate (BER) performance for high-mobility communication scenarios. Specifically, the index bits are transmitted by the indices of the activated OTFS delay-Doppler resources, where the active resource bins are independently randomly selected. To further improve the system performance, OTFS with dual-mode index
modulation (OTFS-DM-IM) was proposed in [33], which provides a desired trade-off between transmission reliability and spectral efficiency (SE). To effectively decode the index bits and constellation bits, several detectors have also been proposed in the literature. In [32], a minimum mean squared error with maximum likelihood (MMSE-ML) detector was proposed, where the MMSE criterion was employed for the detection of constellation bits and index bits, and the ML principle is utilized to detect the indices information. In [33], a modified log likelihood ratio (LLR) detector based on the minimum Hamming distance was investigated to improve the BER performance. However, the performance analysis for the designed schemes and detectors in [32] and [33] only considers ideal bi-orthogonal OTFS pulses and requires mobile channels exhibiting on-the-grid delays and Doppler shifts, which are unrealistic assumptions in practical OTFS system deployment.
On the other hand, the channel delay and Doppler shifts will cause severe inter-symbol interference (ISI) in high mobility OTFS communications. The existing IM-OTFS systems [32, 33] only activate independent delay-Doppler resources and cannot determine the active and inactive resources accurately at the receiver, leading to an inevitable performance loss. Therefore, it is necessary to develop more efficient and reliable IM schemes for OTFS transmissions by considering the effects caused by the channel delays and Doppler spreads. To date, there has been no relevant work taking these factors into account.
In this paper, we propose effective block-wise IM schemes and develop efficient receiver algorithms for OTFS systems to alleviate the delay-Doppler channel effects. We also dispense with the impractical assumption that the channel delays and Doppler shifts are on the OTFS sampling grid, and analyze the performance of our proposed schemes. Our contributions in this paper are summarized as follows:
* We propose two effective block-wise IM schemes for OTFS systems, denoted as delay-IM with OTFS (DeIM-OTFS) and Doppler-IM with OTFS (DoIM-OTFS), where a block of delay/Doppler resource bins are activated simultaneously. The proposed schemes can operate with practical rectangular pulses and work well for the practical scenarios where the channel delay and Doppler shifts do not necessarily land on the OTFS delay-Doppler sampling grid.
* We derive asymptotically tight BER upper bounds for the DeIM-OTFS and DoIM-OTFS schemes with the optimal ML detectors. The performance improvement of our proposed block-wise IM schemes for OTFS is also verified in contrast to the existing IM-OTFS schemes.
* We develop a multi-layer joint symbol and activation pattern detection (MLJSAPD) algorithm and a customized message passing detection (CMPD) algorithm for the proposed DeIM-OTFS and DoIM-OTFS schemes. The MLJSAPD introduces a new layer in the factor graph to further track the activated blocks of the transmitted symbols. The CMPD algorithm can effectively identify the active resource units by considering the active probability of each resource unit during the iterations.
* Simulation results demonstrate that the proposed MLJSAPD and CMPD algorithms can achieve desired performance with relatively low complexity for both DeIM-OTFS and DoIM-OTFS systems, and also robustness against imperfect channel state information (CSI).
The rest of this paper is organized as follows. In Section II, we first introduce our proposed block-wise IM schemes and also describe the corresponding system model. In Section III, we analyze the theoretical BER upper bounds of the proposed DeIM-OTFS and DoIM-OTFS schemes with the ML detector. The proposed low-complexity MLJSAPD and CMPD detectors are described in Section IV. Simulation results are presented in Section V. Finally, Section VI concludes the paper.
\(Notation:(\cdot)^{\mathrm{T}}\), \((\cdot)^{\mathrm{*}}\), \((\cdot)^{\mathrm{H}}\), and \(\|\cdot\|\) denote the transpose, conjugate, Hermitian operations, and Euclidean norm of a matrix, respectively. \([\cdot]\) denotes the integer floor operator. \([\cdot]_{m}\) denotes the mod-\(m\) operation. \(\mathbb{C}\) and \(\mathbb{Z}\) denote the set of complex numbers and positive integers, respectively. \(S\) is the constellation set. \(\mathrm{C}(n,k)\) denotes the binomial coefficient that chooses \(k\) out of \(n\). \(\mathbb{E}(\cdot)\), \(\text{det}(\cdot)\), \(\text{diag}(\cdot)\), and \(Q(.)\) denote the expectation, determinant, diagonal matrix, and Gaussian \(Q\)-function, respectively.
## II System Model
In this section, we briefly introduce our proposed block-wise IM schemes for OTFS and also present the corresponding system model, which are shown in Fig. 1 and Fig. 2, respectively.
A 2D lattice in the time-frequency plane is sampled at interval \(T\) (seconds) and \(\Delta f=1/T\) (Hz), respectively, i.e., \(\Lambda=\left\{\left(m\Delta f,nT\right),m=0,\ldots,M-1;n=0,\ldots,N-1\right\}\) for \(M\in\mathbb{Z},N\in\mathbb{Z}\). Here, \(M\) and \(N\) represent the total available numbers of subcarriers and time slots, respectively. \(\Delta f\) and \(T\) are chosen larger than the maximum Doppler frequency shift \(\nu_{max}\) and maximal channel delay spread \(\tau_{max}\), respectively. Thus, the corresponding delay-Doppler plane is described as an information grid, i.e., \(\Gamma=\left\{\left(\frac{\ell}{M\Delta f},\frac{k}{NT}\right),\ell=0,\ldots,M -1;k=0,\ldots,N-1\right\}\), where the sampling time \(1/M\Delta f\) and sampling frequency \(1/NT\) are referred to as the delay resolution and the Doppler resolution of the delay-Doppler grid, respectively.
### _Proposed Block-wise IM Schemes for OTFS_
Unlike the conventional random IM schemes applied in OTFS systems [32, 33], our proposed DeIM-OTFS and DoIM-OTFS schemes activate a block of delay/Doppler resource bins simultaneously, which can help to further improve the receiver performance and combat the effect of high mobility time-varying channels.
Let us consider a total number of \(\mathcal{B}\) information bits for transmission in each OTFS frame. The OTFS frame is split into \(J\) subframes, each of which is composed of an \(\widehat{M}\times\widehat{N}\) signal matrix. \(\widehat{M}\) and \(\widehat{N}\) denote the numbers of resource units in the delay dimension and Doppler dimension for each subframe, respectively. Let \(\widehat{\ell}=0,\ldots,\widehat{M}-1\) and \(\widehat{k}=0,\ldots,\widehat{N}-1\) represent indexes of delay and Doppler resource bins for each subframe, respectively. The total number of subframes is given by \(J=\overline{M}\,\overline{N}\), where \(\overline{M}=M/\widehat{M}\) and \(\overline{N}=N/\widehat{N}\), respectively. We denote the \(\beta\)-th subframe as \(G[\beta]\), where \(\beta=\overline{\ell}+\overline{M}\,\overline{k}+1\) with \(\overline{\ell}=0,\ldots,\overline{M}-1\) and \(\overline{k}=0,\ldots,\overline{N}-1\). As shown in Fig. 1, each OTFS frame consists of \(\{G[1],G[2],\ldots,G[\beta],\ldots,G[J]\}\) subframes. For each subframe, our proposed block-wise index modulator processes \(p=\mathcal{B}/J\) bits in the delay-Doppler domain. These \(p\) information bits are then divided into two parts: the first \(p_{1}\) bits are transferred to the index selector to decide the active resource units; the remaining \(p_{2}\) bits are mapped to the constellation symbols and placed on the active resource units. The details of the proposed DeIM-OTFS and DoIM-OTFS schemes are respectively described as follows:
1. DeIM-OTFS: For the DeIM-OTFS scheme, each subframe is divided into \(\widehat{M}\) blocks along the delay dimension with \(\widehat{N}\) Doppler resource units in each block, as shown in Fig. 1(a). We activate the resource units based on blocks according to the index bits, i.e., a block of delay resource bins are activated simultaneously. We assume the number of active blocks in each transmitted subframe is \(\widehat{k}\), such that there are \(\text{C}(\widehat{M},\widehat{k})\) possible index combinations of active indices and \(\widehat{k}\widehat{N}\) active resource units in each subframe for given \(\widehat{M}\), \(\widehat{N}\) and \(\widehat{k}\). Therefore, the total numbers of index bits and constellation bits for each OTFS frame are given by \(m_{1}=p_{1}J=\lfloor\log_{2}(\text{C}(\widehat{M},\widehat{k}))\rfloor J\) and \(m_{2}=p_{2}J=\widehat{k}\log_{2}(M_{c})\widehat{N}J\), respectively, where \(M_{c}\) represents the modulation order. The SE of the DeIM-OTFS scheme can be calculated as \(S_{E}=(\log_{2}(\text{C}(\widehat{M},\widehat{k}))+\widehat{k}\log_{2}(M_{c})\widehat{N})/(\widehat{M}\widehat{N})\). For example, in each subframe, the resource units of the first and second blocks are active if the indices of \(\{1,2\}\) are selected, while the remaining inactive resource units are set to zero.
2. DoIM-OTFS: For the DoIM-OTFS scheme, each subframe is divided into \(\widehat{N}\) blocks along the Doppler dimension with \(\widehat{M}\) delay resource units in each block, as shown in Fig. 1(b). We activate a block of Doppler resource bins simultaneously according to the index bits. For given \(\widehat{M}\), \(\widehat{N}\) and \(\widehat{k}\), there are totally \(\text{C}(\widehat{N},\widehat{k})\) possible index combinations of active indices and \(\widehat{k}\widehat{M}\) active resource units in each subframe. The total numbers of index bits and constellation bits in each OTFS frame are given by \(m_{1}=\lfloor\log_{2}(\text{C}(\widehat{N},\widehat{k}))\rfloor J\) and \(m_{2}=\widehat{k}(\log_{2}M_{c})\widehat{M}J\), respectively. The SE of the DoIM-OTFS scheme can be calculated similar to the DeIM-OTFS scheme, given by \(S_{E}=(\log_{2}(\text{C}(\widehat{N},\widehat{k}))+\widehat{k}\log_{2}(M_{c} )\widehat{M})/(\widehat{M}\widehat{N})\).
We assume that the signal constellation symbols are normalized to have unit average power. A look-up table example is presented in Table I with parameters \(\widehat{M}=4\), \(\widehat{N}=4\) and \(\widehat{k}=2\). Since \(\text{C}(4,2)=6\), we select four index combinations out of six by abandoning the other two cases.
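As a concrete illustration, the short Python sketch below enumerates such a look-up table for \(\widehat{M}=\widehat{N}=4\), \(\widehat{k}=2\) with QPSK and evaluates the SE formula given above; which four of the six \(\text{C}(4,2)\) combinations are kept is our own illustrative choice, not necessarily that of Table I.

```python
import math
from itertools import combinations

M_hat, N_hat, k_hat, M_c = 4, 4, 2, 4            # subframe size, active blocks, QPSK

# Index bits per subframe: p1 = floor(log2(C(M_hat, k_hat)))
p1 = int(math.floor(math.log2(math.comb(M_hat, k_hat))))
# Constellation bits per subframe: p2 = k_hat * log2(M_c) * N_hat
p2 = int(k_hat * math.log2(M_c) * N_hat)

# Keep the first 2**p1 activation patterns; the remaining C(M_hat,k_hat) - 2**p1 are abandoned.
lookup = {format(i, f"0{p1}b"): blocks
          for i, blocks in enumerate(combinations(range(M_hat), k_hat)) if i < 2 ** p1}

SE = (math.log2(math.comb(M_hat, k_hat)) + k_hat * math.log2(M_c) * N_hat) / (M_hat * N_hat)
print(f"index bits p1={p1}, constellation bits p2={p2}, SE={SE:.4f} bps/Hz")
print("look-up table (index bits -> active delay blocks):", lookup)
```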
### _Transmitter Model_
The transmitter and receiver structures of our proposed DeIM-OTFS/DoIM-OTFS system are depicted in Fig. 2. At the transmitter, the modulated signal in the \(\ell\)-th delay and \(k\)-th Doppler grid for \(\ell=0,\ldots,M-1\) and \(k=0,\ldots,N-1\) is given by \(X[\ell,k]\in\{0,S\}\). According to the proposed DeIM-OTFS/DoIM-OTFS scheme, the delay-Doppler signal \(\mathbf{X}\in\mathbb{C}^{M\times N}\) can be generated. Then, the corresponding delay-Doppler symbols \(\mathbf{X}\) are converted into the time-frequency domain by using the 2D inverse symplectic finite Fourier transform (ISFFT),
\[\overline{\mathbf{X}}=\mathbf{F}_{M}\mathbf{X}\mathbf{F}_{N}^{\text{H}}, \tag{1}\]
where \(\mathbf{F}_{M}\) and \(\mathbf{F}_{N}\) denote the normalized discrete Fourier transform (DFT) matrices of size \(M\times M\) and size \(N\times N\), respectively. The time-frequency domain samples \(\{\overline{X}[m,n],m=0,\ldots,M-1;n=0,\ldots,N-1\}\) are transmitted over an OTFS frame with duration \(T_{f}=NT\) and occupies a bandwidth of \(B=M\Delta f\). After ISFFT, the time-frequency signal \(\overline{\mathbf{X}}\) is modulated through the Heisenberg transform by utilizing a transmit rectangular pulse \(g_{tx}(t)\). Thus, the resulted time domain sampled signal \(\mathbf{s}\in\mathbb{C}^{MN\times 1}\) can be written as
\[s[u]=\sum_{n=0}^{N-1}\sum_{m=0}^{M-1} \overline{X}[m,n]g_{tx}\left(uT_{s}-nT\right)e^{j2\pi m\Delta f \left(uT_{s}-nT\right)},\] \[u=0,\ldots,MN-1, \tag{2}\]
where \(T_{s}=1/M\Delta f\) denotes the symbol spaced sampling interval.
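A minimal numerical sketch of the transmitter chain in (1)-(2) is given below (Python/NumPy); the subcarrier spacing, frame size and QPSK payload are illustrative assumptions, and the rectangular pulse is taken as \(g_{tx}(t)=1\) on \([0,T)\).

```python
import numpy as np

def isfft(X):
    """Eq. (1): delay-Doppler symbols X (M x N) -> time-frequency samples."""
    M, N = X.shape
    F_M = np.fft.fft(np.eye(M)) / np.sqrt(M)   # normalized M-point DFT matrix
    F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)   # normalized N-point DFT matrix
    return F_M @ X @ F_N.conj().T

def heisenberg_rect(X_tf, delta_f):
    """Eq. (2) with a rectangular transmit pulse g_tx(t) = 1 for 0 <= t < T."""
    M, N = X_tf.shape
    T, Ts = 1.0 / delta_f, 1.0 / (M * delta_f)
    s = np.zeros(M * N, dtype=complex)
    for u in range(M * N):
        for n in range(N):
            t = u * Ts - n * T
            if 0 <= t < T:                      # rectangular pulse support
                for m in range(M):
                    s[u] += X_tf[m, n] * np.exp(2j * np.pi * m * delta_f * t)
    return s

# Toy frame: M = N = 4 with unit-power QPSK on every bin (no IM applied here).
rng = np.random.default_rng(0)
X = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, size=(4, 4))) * np.exp(1j * np.pi / 4)
s = heisenberg_rect(isfft(X), delta_f=15e3)
print(s.shape)   # (16,)
```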
Fig. 1: A snapshot of the delay-Doppler resource bins for the proposed DeIM-OTFS and DoIM-OTFS schemes.
### _Channel Model_
To eliminate the inter-frame interference, a cyclic prefix (CP) of length no shorter than the maximal channel delay spread is appended to the front of the time domain signal \(\mathbf{s}\). Then, \(\mathbf{s}\) enters the multipath fading channels after passing through a transmit filter; the channel impulse response \(h[u,p]\) is characterized as
\[h[u,p] =\sum_{i=1}^{L}h_{i}e^{j2\pi\nu_{i}(uT_{s}-pT_{s})}\mathrm{P}_{ \mathrm{rc}}\left(pT_{s}-\tau_{i}\right),\] \[u =0,\ldots,MN-1,\ p=0,\ldots,P-1, \tag{3}\]
where \(h_{i}\), \(\tau_{i}\) and \(\nu_{i}\) denote the channel gain, delay, and Doppler shift corresponding to the \(i\)-th path, respectively. Parameter \(L\) represents the number of multipaths. The number of channel taps \(P\) is determined by the maximal channel delay spread and the duration of the overall filter response. \(\mathrm{P}_{\mathrm{rc}}\left(pT_{s}-\tau_{i}\right)\) is the sampled overall filter response composed of bandlimiting matched filters equipped at the transmitter and receiver, which can control the bandwidth of the transmitted signal and achieve maximum signal-to-noise ratio (SNR) at the receiver. In our proposed DeIM-OTFS/DoIM-OTFS system, we choose a pair of root raised-cosine (RRC) filters in the transmitter and receiver, which are the most commonly implemented pulse shaping filters, to generate an equivalent raised-cosine (RC) rolloff pulse for \(\mathrm{P}_{\mathrm{rc}}(\tau)\). Unlike the existing works in [32, 33], which require the delay shifts to be on the grid, we relax this ideal assumption and consider that the channel delays do not necessarily land on the OTFS sampling grid. The Doppler frequency shift of the \(i\)-th path can be written as \(\nu_{i}=\left(k_{\nu_{i}}+\beta_{\nu_{i}}\right)/NT\), where integer \(k_{\nu_{i}}\) denotes the index of Doppler \(\nu_{i}\), and real \(\beta_{\nu_{i}}\in(-0.5,0.5]\) represents the fractional shift from the nearest Doppler tap \(k_{\nu_{i}}\).
### _Receiver Model_
At the receiver, the channel output signal first enters a receive filter. After removing the CP, the received signal can be written as
\[r[u]=\sum_{p=0}^{P-1}h[u,p]s\left[[u-p]_{MN}\right]+n[u],\ u=0,\ldots,MN-1, \tag{4}\]
where \(\mathbf{n}=[n[1],n[2],\ldots,n[MN-1]]\) represents the filtered noise.
Then, the received time domain signal \(\mathbf{r}\) is transferred back to the time-frequency domain signal by Wigner transform (i.e., the inverse of Heisenberg transform) using a rectangular pulse \(g_{rx}(t)\) at the receiver, which is given by
\[\overline{Y}[m,n]=\sum_{u=0}^{MN-1}g_{rx}^{*}\left(uT_{s}-nT\right) r[u]e^{-j2\pi m\Delta f(uT_{s}-nT)},\] \[m=0,\ldots,M-1;n=0,\ldots,N-1. \tag{5}\]
Finally, the signal matrix in the time-frequency domain is processed via the symplectic finite Fourier transform (SFFT) to produce the delay-Doppler domain signal, which can be represented as
\[\mathbf{Y}=\mathbf{F}_{M}^{\mathrm{H}}\overline{\mathbf{Y}}\mathbf{F}_{N}. \tag{6}\]
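Since the SFFT in (6) is simply the inverse of the ISFFT in (1), the following short sketch (Python/NumPy, illustrative sizes) verifies that the two transforms cancel when the channel is ideal.

```python
import numpy as np

M, N = 4, 4
F_M = np.fft.fft(np.eye(M)) / np.sqrt(M)
F_N = np.fft.fft(np.eye(N)) / np.sqrt(N)

rng = np.random.default_rng(1)
X = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

X_tf = F_M @ X @ F_N.conj().T          # ISFFT, eq. (1)
Y = F_M.conj().T @ X_tf @ F_N          # SFFT, eq. (6): recovers X for an ideal channel
print(np.allclose(Y, X))               # True
```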
Based on the above analysis, the DeIM-OTFS/DoIM-OTFS input-output relationship in the delay-Doppler domain can be written as [34]
\[Y[\ell,k]= \sum_{p=0}^{P-1}\sum_{i=1}^{L}\sum_{q=0}^{N-1}h_{i}\mathrm{P}_{ \mathrm{rc}}\left(pT_{s}-\tau_{i}\right)\gamma\left(k,\ell,p,q,k_{\nu_{i}}, \beta_{\nu_{i}}\right)\] \[X\left[[\ell-p]_{M},[k-k_{\nu_{i}}+q]_{N}\right]+Z[\ell,k], \tag{7}\]
Fig. 2: Transmitter and receiver structures of the proposed DeIM-OTFS/DoIM-OTFS system.
\[\mathbf{\Phi}_{\rho}(\mathbf{X})=\left[\begin{array}{c}\sum_{q=0}^{N-1}\sum_{p=0} ^{P-1}\mathrm{P}_{rc}(pT_{s}-\tau_{1})\gamma\left(k,\ell,p,q,k_{\nu_{1}},\beta _{\nu_{1}}\right)X\left[[\ell-p]_{M},[k-k_{\nu_{1}}+q]_{N}\right]\\ \sum_{q=0}^{N-1}\sum_{p=0}^{P-1}\mathrm{P}_{rc}(pT_{s}-\tau_{2})\gamma\left(k, \ell,p,q,k_{\nu_{2}},\beta_{\nu_{2}}\right)X\left[[\ell-p]_{M},[k-k_{\nu_{2}}+q ]_{N}\right]\\ \hskip 14.226378pt\vdots\\ \sum_{q=0}^{N-1}\sum_{p=0}^{P-1}\mathrm{P}_{rc}(pT_{s}-\tau_{L})\gamma\left(k, \ell,p,q,k_{\nu_{L}},\beta_{\nu_{L}}\right)X\left[[\ell-p]_{M},[k-k_{\nu_{L}}+q ]_{N}\right]\end{array}\right]. \tag{10}\]
where \(Z[\ell,k]\) denotes the delay-Doppler domain noise sample at the output of the SFFT, and
\[\gamma\left(k,\ell,p,q,k_{\nu_{i}},\beta_{\nu_{i}}\right)\] \[=\left\{\begin{array}{l}\frac{1}{N}\xi\left(\ell,p,k_{\nu_{i}}, \beta_{\nu_{i}}\right)\theta\left(q,\beta_{\nu_{i}}\right),p\leq\ell<M,\\ \frac{1}{N}\xi\left(\ell,p,k_{\nu_{i}},\beta_{\nu_{i}}\right)\theta\left(q, \beta_{\nu_{i}}\right)\phi\left(k,q,k_{\nu_{i}}\right),0\leq\ell<p,\end{array}\right. \tag{8a}\] \[\xi\left(\ell,p,k_{\nu_{i}},\beta_{\nu_{i}}\right)=e^{j2\pi\left( \frac{\ell-p}{M}\right)\left(\frac{k_{\nu_{i}}+\beta_{\nu_{i}}}{N}\right)},\] (8b) \[\theta\left(q,\beta_{\nu_{i}}\right)=\frac{e^{-j2\pi\left(-q- \beta_{\nu_{i}}\right)}-1}{e^{-j\frac{2\pi}{N}\left(-q-\beta_{\nu_{i}}\right) }-1},\] (8c) \[\phi\left(k,q,k_{\nu_{i}}\right)=e^{-j2\pi\frac{\left[k-k_{\nu_{i}}+q\right]_{N}}{N}}. \tag{8d}\]
We estimate the signal \(\mathbf{X}\) from the received delay-Doppler signal \(\mathbf{Y}\), and the estimated \(\mathbf{X}\) is then transformed back into bits through a series of inverse IM mappings. From (7), we can observe that the off-grid Doppler shifts will spread to the whole Doppler domain, while the delay spreads only cause ISI near the maximum delay taps. Therefore, the existing IM-OTFS works [32] and [33] are sensitive to inter-Doppler interference (IDI) and ISI because only individual resource units are activated each time, leading to a performance loss. However, our proposed block-wise IM schemes are potentially robust to these channel effects. We will justify this by analyzing the performance of our proposed DeIM-OTFS/DoIM-OTFS system in the next section.
## III Performance Analysis
In this section, we derive BER upper bounds for the proposed DeIM-OTFS/DoIM-OTFS system, where the ML detector is used to decode the index and constellation bits.
According to (7), the DeIM-OTFS/DoIM-OTFS input-output relationship in the delay-Doppler domain can be vectorized as
\[\mathbf{y}^{\mathrm{T}}=\mathbf{h}\mathbf{\Phi}(\mathbf{X})+\mathbf{z}^{ \mathrm{T}}, \tag{9}\]
where \(\mathbf{y}^{\mathrm{T}}\in\mathbb{C}^{1\times MN}\) denotes the received signal vector, \(\mathbf{h}=[h_{1},h_{2},\ldots,h_{L}]\) is a path coefficient vector and \(h_{i}\) is distributed as \(\mathcal{CN}\left(0,1/L\right)\). \(\mathbf{z}^{\mathrm{T}}\in\mathbb{C}^{1\times MN}\) denotes the vector representation of \(\left(Z[\ell,k]\right)\) with \(\ell=0,\ldots,M-1\) and \(k=0,\ldots,N-1\). \(\mathbf{\Phi}(\mathbf{X})\in\mathbb{C}^{L\times MN}\) is a signal matrix dependent on \(\mathbf{X}\) whose \(\rho\)-th column \(\left(\rho=\ell+kM,\rho=0,\ldots,MN-1\right)\), denoted as \(\mathbf{\Phi}_{\rho}(\mathbf{X})\), is given by (10), as shown at the top of the next page.
We assume that perfect CSI is known at the receiver. The conditional pairwise error probability (PEP) for the proposed DeIM-OTFS/DoIM-OTFS system is defined as the probability of transmitting the symbol matrix \(\mathbf{X}\) and deciding in favor of \(\widehat{\mathbf{X}}\), which can be given by
\[P(\mathbf{X}\rightarrow\widehat{\mathbf{X}}|\mathbf{h})=Q\left(\sqrt{\frac{ \|\mathbf{h}(\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{\mathbf{X}})\|^ {2}}{2N_{0}}}\right). \tag{11}\]
Denoting the SNR by \(\gamma=1/N_{0}\), the PEP averaged over the channel statistics is given by
\[P(\mathbf{X}\rightarrow\widehat{\mathbf{X}})=\mathbb{E}\left[Q\left(\sqrt{ \frac{\gamma\|\mathbf{h}(\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{ \mathbf{X}}))\|^{2}}{2}}\right)\right], \tag{12}\]
where,
\[\|\mathbf{h}(\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{ \mathbf{X}}))\|^{2}\\ =\mathbf{h}(\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{ \mathbf{X}}))(\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{\mathbf{X}}))^{ \mathrm{H}}\mathbf{h}^{\mathrm{H}}=\mathbf{h}\Gamma\mathbf{h}^{\mathrm{H}}. \tag{13}\]
Here, the matrix \(\Gamma\) is a Hermitian matrix that is diagonalizable by unitary transformation and it can be decomposed as \(\mathbf{\Gamma}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{H}\), where \(\mathbf{U}\) is unitary and \(\mathbf{\Lambda}=\mathrm{diag}\left\{\lambda_{1}^{2},\ldots,\lambda_{L}^{2}\right\}\) with \(\lambda_{i}\) being the \(i\)-th singular value of the difference matrix \(\mathbf{\Delta}=\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{\mathbf{X}})\).
By defining \(\tilde{\mathbf{h}}=\mathbf{h}\mathbf{U}\), we can rewrite (13) as
\[\|\mathbf{h}(\mathbf{\Phi}(\mathbf{X})-\mathbf{\Phi}(\widehat{\mathbf{X}}))\|^ {2}=\mathbf{h}\mathbf{\Gamma}\mathbf{h}^{\mathrm{H}}=\tilde{\mathbf{h}}\mathbf{ \Lambda}\tilde{\mathbf{h}}^{H}. \tag{14}\]
Therefore, (12) can be calculated as
\[P(\mathbf{X}\rightarrow\widehat{\mathbf{X}})=\mathbb{E}\left[Q\left(\sqrt{ \frac{\gamma\sum_{i=1}^{\alpha}\lambda_{i}^{2}\left|\tilde{h}_{i}\right|^{2}}{2}} \right)\right], \tag{15}\]
where \(\alpha\) denotes the rank of the difference matrix \(\mathbf{\Delta}\) and \(\tilde{h}_{i}\) is the \(i\)-th element of the vector \(\tilde{\mathbf{h}}\). We approximate the \(Q\)-function quite well by using [3]
\[Q\left(x\right)\!\cong\!\frac{1}{12}e^{-\frac{x^{2}}{2}}+\frac{1}{4}e^{-\frac{2x^ {2}}{3}}. \tag{16}\]
Then, the PEP can be approximated as
\[P(\mathbf{X}\rightarrow\widehat{\mathbf{X}})\approx\frac{1}{12}\prod_{i=1}^{ \alpha}\frac{1}{1+\frac{\gamma\lambda_{i}^{2}}{4L}}+\frac{1}{4}\prod_{i=1}^{ \alpha}\frac{1}{1+\frac{\gamma\lambda_{i}^{2}}{3L}}. \tag{17}\]
At high SNRs, (17) can be further simplified as
\[P(\mathbf{X}\rightarrow\widehat{\mathbf{X}})\approx\frac{1/12}{\gamma^{\alpha}\prod_{i=1}^{\alpha}\frac{\lambda_{i}^{2}}{4L}}+\frac{1/4}{\gamma^{\alpha}\prod_{i=1}^{\alpha}\frac{\lambda_{i}^{2}}{3L}}. \tag{18}\]
After evaluating the unconditional PEP from (18), the average BER of the proposed DeIM-OTFS/DoIM-OTFS scheme
can be upper bounded by
\[P_{b}\leq\frac{1}{B\varpi_{\mathbf{X}}}\sum_{\mathbf{X}}\sum_{\widehat{\mathbf{X} }}P(\mathbf{X}\rightarrow\widehat{\mathbf{X}})e(\mathbf{X},\widehat{\mathbf{X} }), \tag{19}\]
where \(\varpi_{\mathbf{X}}\) denotes the number of possible realizations of \(\mathbf{X}\), and \(e(\mathbf{X},\widehat{\mathbf{X}})\) is the number of error bits for the corresponding pairwise error event.
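For reference, the sketch below (Python/NumPy) evaluates the PEP approximations (17) and (18) for a given set of singular values of the difference matrix; the singular values, SNR grid and \(L\) used here are illustrative placeholders rather than values taken from the paper.

```python
import numpy as np

def pep_approx(lams, snr_db, L):
    """Eq. (17): PEP approximation from the singular values of Delta = Phi(X) - Phi(X_hat)."""
    gamma = 10 ** (snr_db / 10)
    t1 = np.prod(1.0 / (1.0 + gamma * lams**2 / (4 * L)))
    t2 = np.prod(1.0 / (1.0 + gamma * lams**2 / (3 * L)))
    return t1 / 12 + t2 / 4

def pep_high_snr(lams, snr_db, L):
    """Eq. (18): high-SNR simplification; the diversity order equals the rank alpha."""
    gamma = 10 ** (snr_db / 10)
    alpha = len(lams)
    t1 = (1 / 12) / (gamma**alpha * np.prod(lams**2 / (4 * L)))
    t2 = (1 / 4) / (gamma**alpha * np.prod(lams**2 / (3 * L)))
    return t1 + t2

lams = np.array([1.2, 0.8, 0.5, 0.3])   # illustrative singular values, L = 4 paths
for snr in (10, 20, 30):
    print(snr, pep_approx(lams, snr, L=4), pep_high_snr(lams, snr, L=4))
```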
In Fig. 3, we compare the BER performance of the proposed DeIM-OTFS and DoIM-OTFS schemes with that of the conventional random IM-OTFS scheme. The parameters of the three considered systems are: (i) DeIM-OTFS system with \(M=N=\widehat{M}=\widehat{N}=4\), the number of active blocks is \(\widehat{k}=1\), QPSK; (ii) DoIM-OTFS system with \(M=N=\widehat{M}=\widehat{N}=4\), the number of active blocks is \(\widehat{k}=1\), QPSK; and (iii) random IM system with \(M=N=4\), the number of active resource units is set to \(2\), QPSK. All the above considered systems have the same SE of 0.625 bps/Hz for a fair comparison. The channel model is given by (3), and the number of propagation paths is considered to be four (i.e., \(L=4\)). The velocity of the mobile user is set to 300 Kmph and the carrier frequency is 4 GHz. As can be seen from Fig. 3, the simulated BER and theoretical upper bound of the DeIM-OTFS and DoIM-OTFS schemes almost coincide in the high SNR regime, which verifies the accuracy of the theoretical results. Furthermore, our proposed DeIM-OTFS and DoIM-OTFS systems can achieve superior performance to the existing random IM-OTFS system. Moreover, the BER performance of the DeIM-OTFS scheme exhibits an approximately 2 dB gain over the DoIM-OTFS scheme under the same conditions. This is due to the fact that the off-grid channel Doppler spreads cause severe interference among the resource units in the Doppler domain, while the channel delay spreads only cause interference in the delay domain near the maximum delay taps. Such constrained interference makes the decision on the active and inactive resource blocks more accurate at the receiver for the DeIM-OTFS scheme than for the DoIM-OTFS scheme, leading to better performance. In other words, the DoIM-OTFS scheme suffers from more severe interference than the DeIM-OTFS scheme and thus performs worse due to the lower accuracy of receiver detection.
Fig. 4 gives the comparison results for the proposed DeIM-OTFS and DoIM-OTFS schemes with different SE values (0.625 bps/Hz, 1.125 bps/Hz and 1.625 bps/Hz, respectively). In this simulation, the values of \(M\), \(N\), \(\widehat{M}\) and \(\widehat{N}\) are set to 4, and QPSK modulation is adopted. As seen from Fig. 4, the DeIM-OTFS and DoIM-OTFS systems with index combination C(4,1) exhibit the best BER performance, which means that increasing the number of active blocks slightly degrades the BER performance of the DeIM-OTFS/DoIM-OTFS scheme. This can be understood since the detection of data symbols and active indices is more challenging for a higher SE with a severe interference effect. Moreover, we again notice that the DeIM-OTFS scheme always achieves superior BER performance to the DoIM-OTFS scheme for different activation strategies, which is consistent with the observations in Fig. 3.
Fig. 5 compares the BER performance of the DeIM-OTFS and DoIM-OTFS schemes with different numbers of channel multipaths under \(M=N=\widehat{M}=\widehat{N}=4\) and \(\widehat{k}=1\), where
Fig. 4: BER performance comparison between the proposed DeIM-OTFS and DoIM-OTFS schemes with different activation strategies.
Fig. 5: BER performance comparison between the proposed DeIM-OTFS and DoIM-OTFS schemes with different numbers of multipaths, where \(M=N=4\), \(\widehat{k}=1\) and QPSK is adopted.
Fig. 3: BER performance comparison between the proposed DeIM-OTFS/DoIM-OTFS schemes and the conventional random IM-OTFS scheme.
all the schemes have the same SE of 0.625 bps/Hz. As the number of multipaths increases from 2 to 4, we can observe a significant performance improvement in both the DeIM-OTFS and DoIM-OTFS schemes. Specifically, the proposed DeIM-OTFS and DoIM-OTFS schemes with \(L=4\) achieve about a 4 dB gain over those with \(L=3\), and more than a 5 dB gain compared to those with \(L=2\). This can be explained by the fact that with a larger number of independent resolvable multipaths, more diversity can be exploited for better performance.
It is well-known that the SEs of DeIM-OTFS and DoIM-OTFS systems increase with a larger size of each subframe and higher-order signal modulation. However, these would lead to an extremely large look-up table and increase the computational complexity of both the transmitter and receiver. Moreover, the computational complexities of the DeIM-OTFS and DoIM-OTFS schemes under ML detection are \(\mathcal{O}((2^{p_{1}}M_{c}^{\widehat{k}\widehat{N}})^{J})\) and \(\mathcal{O}((2^{p_{1}}M_{c}^{\widehat{k}\widehat{M}})^{J})\), respectively, which increase exponentially with the size of the look-up table. In order to solve this problem, we develop low-complexity MLJSAPD and CMPD algorithms for the proposed DeIM-OTFS and DoIM-OTFS systems in the next section.
## IV Receiver Design
In this section, we develop MLJSAPD and CMPD algorithms for practical large-dimensional signal detection for the DeIM-OTFS/DoIM-OTFS system. Here, we use the DeIM-OTFS system as an example, which can be generalized to the DoIM-OTFS system in a straightforward manner.
According to (7), the input-output relationship of the DeIM-OTFS/DoIM-OTFS system can be vectorized as
\[\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{z}, \tag{20}\]
where \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{MN\times 1}\), and \(\mathbf{z}\in\mathbb{C}^{MN\times 1}\) is the noise vector. \(\mathbf{H}\in\mathbb{C}^{MN\times MN}\) is a sparse matrix since the number of non-zero elements in each row and column of \(\mathbf{H}\) is \(\mathcal{Z}\) due to the modulo-\(N\) and modulo-\(M\) operations. The \((\ell+kM+1)\)-th element of \(\mathbf{x}\) is defined by \(x[\ell+kM+1]=X[\ell,k]\) with \(\ell=\widehat{M}\overline{\ell}+\widehat{\ell}\)\((0\leq\ell\leq M-1)\) and \(k=\widehat{N}\overline{k}+\widehat{k}\)\((0\leq k\leq N-1)\). Similarly, the \((\ell+kM+1)\)-th elements of \(\mathbf{y}\) and \(\mathbf{z}\) are \(y[\ell+kM+1]=Y[\ell,k]\) and \(z[\ell+kM+1]=Z[\ell,k]\), respectively, where \(\ell=\widehat{M}\overline{\ell}+\widehat{\ell}\) and \(k=\widehat{N}\overline{k}+\widehat{k}\).
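The mapping \(x[\ell+kM+1]=X[\ell,k]\) is simply a column-major (delay-first) flattening of the delay-Doppler grid; a two-line sketch (Python/NumPy, 0-based indexing) is shown below.

```python
import numpy as np

M, N = 4, 4
X = np.arange(M * N).reshape(M, N)     # X[l, k]

x = X.flatten(order="F")               # x[l + k*M] = X[l, k]  (0-based indexing)
l, k = 2, 3
print(x[l + k * M] == X[l, k])         # True
```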
The joint maximum a posterior (MAP) probability detection rule of the transmitted signal is given by
\[\widehat{\mathbf{x}}=\operatorname*{arg\,max}_{\mathbf{x}\in\{S \cup 0\}^{MN\times 1}}\Pr(\mathbf{x}|\mathbf{y},\mathbf{H}), \tag{21}\]
where "0" means the resource units is not activated; otherwise it is activated.
We observe that the exact computation of (21) has a complexity exponential in \(MN\), making the joint MAP detection intractable for practical values of \(N\) and \(M\). To reduce the receiver complexity, we propose two efficient detection algorithms in the following subsections.
### _MLJSAPD Algorithm_
In this subsection, the details of the MLJSAPD algorithm are described in the following and summarized in **Algorithm 1**.
In (21), \(\Pr(\mathbf{x}|\mathbf{y},\mathbf{H})\) can be written as:
\[\Pr\left(\mathbf{x},\mathbf{a}|\mathbf{y},\mathbf{H}\right) \propto\Pr\left(\mathbf{y}|\mathbf{x},\mathbf{a},\mathbf{H}\right)\Pr\left(\mathbf{x},\mathbf{a}\right) =\Pr\left(\mathbf{y}|\mathbf{x},\mathbf{a},\mathbf{H}\right)\Pr\left(\mathbf{x}|\mathbf{a}\right)\Pr\left(\mathbf{a}\right)\] \[=\prod_{d=1}^{MN}\Pr(y[d]|\mathbf{x},\mathbf{a},\mathbf{H})\prod_{f=1}^{\widehat{M}J}\prod_{c=\overline{k}(\widehat{N}-1)M+f}^{(\overline{k}+1)(\widehat{N}-1)M+f}\Pr\left(x[c]|a[f]\right)\prod_{f=1}^{\widehat{M}J}\Pr\left(a[f]\right). \tag{22}\]
from \(x[c]\) to \(a[f]\). Furthermore, activity indicator nodes and constraint nodes denote Layer 3, which generates approximate probabilities of the individual elements \(\mathbf{a}\) being active or inactive. For the proposed MLJSAPD algorithm, its detailed steps in iteration \(n_{iter}\) are described below.
**1) From observation node \(y[d]\) to variable nodes \(x[c],\)\(c\in\mathcal{I}(d)\)**: At each observation node, we calculate the extrinsic message for each connected variable node according to the sparse channel model and the prior information from the other connected variable nodes. The interference is approximately modeled as a Gaussian random variable \(\zeta_{d,c}^{n_{iter}}\), where \(\mu_{d,c}^{n_{iter}}\) and \((\sigma_{d,c}^{n_{iter}})^{2}\) denote its mean and variance, respectively. Thus, the received signal \(y[d]\) can be written as
\[y[d]=x[c]H[d,c]+\underbrace{\sum\limits_{e\in\mathcal{I}(d),e\neq c}x[e]H[d,e] +v[d]}_{\zeta_{d,c}^{n_{iter}}}, \tag{23}\]
with
\[\mu_{d,c}^{n_{iter}}=\sum\limits_{e\in\mathcal{I}(d),e\neq c}H[d,e]\sum\limits _{x\in\{S\cup 0\}}p_{e,d}^{n_{iter}-1}\left(x\right)x, \tag{24}\]
and
\[(\sigma_{d,c}^{n_{iter}})^{2} =\sum\limits_{e\in\mathcal{I}(d),e\neq c}\left(\sum\limits_{x\in \{S\cup 0\}}p_{e,d}^{n_{iter}-1}\left(x\right)|x|^{2}\left|H[d,e]\right|^{2}\right.\] \[\left.-\left|\sum\limits_{x\in\{S\cup 0\}}p_{e,d}^{n_{iter}-1} \left(x\right)xH[d,e]\right|^{2}\right)+\sigma^{2}, \tag{25}\]
where \(\sigma^{2}=\sigma_{N}^{2}\int_{\mu}\mathbf{p}_{\mathrm{rc}}^{2}(\mu)d\mu\) is the variance of the colored Gaussian noise. \(\mathbf{p}_{\mathrm{rc}}(\mu)\) denotes the RRC rolloff receive filter and \(\sigma_{N}^{2}\) is the variance of the AWGN at the receiver input [34]. The mean \(\mu_{d,c}^{n_{iter}}\) and variance \((\sigma_{d,c}^{n_{iter}})^{2}\) of the interference terms are used to calculate the approximate marginal probability of the transmitted symbols. Therefore, the probability estimate of \(x[c]\) passed from observation node \(y[d]\) to variable node \(x[c]\) is given by
\[v_{d,c}^{n_{iter}}(x)\propto\Pr(y[d]|x[c]=x,\mathbf{H})\] \[\propto\exp\left(-\frac{\left|y[d]-\mu_{d,c}^{n_{iter}}-H[d,c]x \right|^{2}}{(\sigma_{d,c}^{n_{iter}})^{2}}\right),\ \forall x\in\{S\cup 0\}. \tag{26}\]
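A sketch of this step is given below (Python/NumPy); the neighbourhood bookkeeping and argument names are our own simplifications of the algorithm description, but the computations follow (24)-(26).

```python
import numpy as np

def message_obs_to_var(d, c, y, H, I_d, p_prev, alphabet, sigma2):
    """Eqs. (24)-(26): message v_{d,c}(x) from observation node y[d] to variable node x[c].
    I_d      : indices e in I(d), the variable nodes connected to y[d]
    p_prev   : p_prev[e] is the pmf over {S U 0} sent from x[e] to y[d] in the last iteration
    alphabet : the augmented alphabet {S U 0} as a complex array
    """
    mu, var = 0.0 + 0.0j, sigma2
    for e in I_d:
        if e == c:
            continue
        mean_x = np.sum(p_prev[e] * alphabet)                  # E[x[e]]
        mean_x2 = np.sum(p_prev[e] * np.abs(alphabet) ** 2)    # E[|x[e]|^2]
        mu += H[d, e] * mean_x                                 # eq. (24)
        var += mean_x2 * np.abs(H[d, e]) ** 2 - np.abs(mean_x * H[d, e]) ** 2   # eq. (25)
    v = np.exp(-np.abs(y[d] - mu - H[d, c] * alphabet) ** 2 / var)              # eq. (26)
    return v / v.sum()                                         # normalized pmf estimate
```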
**2) From variable node \(x[c]\) to activity indicator node \(a[f]\)**: All resource units in each block are connected to an indicator node \(a[f]\). The probability of each indicator node \(a[f]\) is determined by the corresponding variable nodes. We assume the probability estimate of message from \(x[c]\) to \(a[f]\) is given by
\[q_{c}^{n_{iter}}\left(b\right)=\Delta\cdot\widehat{q}_{c}^{n_{iter}}\left(b \right)+\left(1-\Delta\right)\cdot q_{c}^{n_{iter-1}}\left(b\right), \tag{27}\]
where \(\Delta\in(0,1]\) is the \(message\)\(damping\)\(factor\) used to improve the system performance by controlling the convergence rate, and
\[\widehat{q}_{c}^{n_{iter}}\left(b\right) \stackrel{{\Delta}}{{=}}\Pr\left(a[f]=b|\mathbf{x}\right) \tag{28a}\] \[\propto\begin{cases}\sum\limits_{x\in S}\prod\limits_{d\in \mathcal{J}(c)}\Pr\left(y[d]|x[c]=x,\mathbf{H}\right),&\text{if }b=1,\\ \prod\limits_{d\in\mathcal{J}(c)}\Pr\left(y[d]|x[c]=0,\mathbf{H}\right),& \text{if }b=0,\end{cases}\] (28b) \[\propto\begin{cases}\sum\limits_{x\in S}\prod\limits_{d\in \mathcal{J}(c)}v_{d,c}^{n_{iter}}\left(x\right),&\text{if }b=1,\\ \prod\limits_{d\in\mathcal{J}(c)}v_{d,c}^{n_{iter}}\left(0\right),&\text{if }b=0.\end{cases} \tag{28c}\]
**3) From activity indicator node \(a[f]\) to constraint node \(G[\beta]\)**: According to the indices of the activated resource units, the probability estimate of message passed from indicator node \(a[f]\) to constraint node \(G[\beta]\), can be written as
\[w_{f}^{n_{iter}}\left(b\right) =\Pr\left(a[f]=b|\mathbf{x}^{f}\right) \tag{29a}\] \[\propto\begin{cases}\prod\limits_{c\in\mathcal{D}(f)}q_{c}^{n_{iter}}\left(1\right),&\text{if }b=1,\\ \prod\limits_{c\in\mathcal{D}(f)}q_{c}^{n_{iter}}\left(0\right),&\text{if }b=0,\end{cases} \tag{29b}\]
where \(\mathbf{x}^{f}=[x[\overline{k}(\widehat{N}-1)M+f],x[\overline{k}(\widehat{N}-1)M+f+M],\ldots,x[(\overline{k}+1)(\widehat{N}-1)M+f]]\).
**4) From constraint node \(G[\beta]\) to activity indicator nodes \(a[f]\)**, \(f\in\mathcal{K}(\beta)\)**: In each subframe, \(\widehat{M}\) indicator nodes \(a[f]\) are linked to a constraint node \(G[\beta]\), where \(\sum_{f=(\beta-1)\widehat{M}+1}^{\widehat{M}\beta}a[f]=\widehat{k}\). At each constraint node, the extrinsic information for each indicator node can be generated by prior messages collected from other indicator nodes. We can calculate the probability estimate of message passed from constraint node to indicator node, as
\[\psi_{f}^{n_{iter}}\left(b\right)=\Pr\left(a[f]=b|\mathbf{a}_{ \backslash f}^{\beta}\right) \tag{30a}\] \[=\begin{cases}\Pr\left(\sum\limits_{e=\widehat{M}(\beta-1)+1,e \neq f}^{\overline{M}\beta}a[e]=\widehat{k}-1|\mathbf{a}_{\backslash f}^{ \beta}\right),&\text{if }b=1,\\ \Pr\left(\sum\limits_{e=\widehat{M}(\beta-1)+1,e\neq f}^{\overline{M}\beta}a[ e]=\widehat{k}|\mathbf{a}_{\backslash f}^{\beta}\right),&\text{if }b=0,\end{cases}\] (30b) \[\approx\begin{cases}\Omega_{f}^{n_{iter}}(\widehat{k}-1),&\text{if }b=1,\\ \Omega_{f}^{n_{iter}}(\widehat{k}),&\text{if }b=0,\end{cases}\]
where \(\mathbf{a}_{\backslash f}^{\beta}\) denotes \(\mathbf{a}^{\beta}\) excluding \(a[f]\) for \(f\in\mathcal{K}(\beta)\) and \(\mathbf{a}^{\beta}=[a[\widehat{M}(\beta-1)+1],a[\widehat{M}(\beta-1)+2],\ldots,a[\widehat{M}\beta]]\). \(\mathbf{\Omega}_{f}^{n_{iter}}\) is calculated as \(\mathbf{\Omega}_{f}^{n_{iter}}=\otimes_{e=\widehat{M}(\beta-1)+1,e\neq f}^{\widehat{M}\beta}\mathbf{w}_{e}^{n_{iter}}\) with \(\mathbf{w}_{e}^{n_{iter}}=[w_{e}^{n_{iter}}(0)\ w_{e}^{n_{iter}}(1)]\), where \(\otimes\) denotes the convolution operator.
**5) From activity indicator node \(a[f]\) to variable nodes \(x[c]\)**, \(c\in\mathcal{D}(f)\): We note that the indicator node \(a[f]=1\) only when all variable nodes \(x[c]\) connected to \(a[f]\) are activated (i.e., \(x[c]=x\in S\)) and \(0\) otherwise. The probability estimate of message passed from \(a[f]\) to \(x[c]\) is given by
\[u_{c}^{n_{iter}}\left(b\right) \stackrel{{\Delta}}{{=}}\Pr\left(a[f]=b|\mathbf{x}_ {\backslash c}^{f}\right) \tag{31a}\] \[=\begin{cases}\Pr\left(\sum\limits_{e\in\mathcal{D}(f),e\neq c}x[ e]\neq 0|\mathbf{x}_{\backslash c}^{f}\right),&\text{if }b=1,\\ \Pr\left(\sum\limits_{e\in\mathcal{D}(f),e\neq c}x[e]=0|\mathbf{x}_{\backslash c }^{f}\right),&\text{if }b=0,\end{cases}\] (31b) \[\approx\begin{cases}\sum\limits_{e\in\mathcal{D}(f),e\neq c}\psi_{f}^{ n_{iter}}\left(1\right)q_{e}^{n_{iter}}\left(1\right),&\text{if }b=1,\\ \sum\limits_{e\in\mathcal{D}(f),e\neq c}\psi_{f}^{n_{iter}}\left(0\right)q_{e}^ {n_{iter}}\left(0\right),&\text{if }b=0,\end{cases} \tag{31c}\]
where \(\mathbf{x}_{\backslash c}^{f}\) denotes \(\mathbf{x}^{f}\) excluding \(x[c]\) for \(c\in\mathcal{D}(f)\). \(\psi_{f}^{n_{iter}}(1)\) denotes the activated probability of all variable nodes connected to \(a[f]\).
**6) From variable node \(x[c]\) to observation nodes \(y[d],d\in\mathcal{J}(c)\)**: The posterior probability of the elements \(\mathbf{x}\) passed from variable node \(x[e]\) to observation node \(y[d]\) is denoted by \(\mathbf{p}_{e,d}\). At each variable node, the extrinsic information for each connected observation node is generated from prior messages collected from other observation nodes and indicator nodes. Hence, the probability \(\widehat{\mathbf{p}}_{c,d}^{n_{iter}}\) can be given by
\[\widehat{p}_{c,d}^{n_{iter}}(x) \propto u_{c}^{n_{iter}}(x^{\bigodot})\prod_{e\in\mathcal{J}(c),e \neq d}\Pr\left(y[e]|x[c]=x,\mathbf{H}\right) \tag{32a}\] \[\propto u_{c}^{n_{iter}}(x^{\bigodot})\prod_{e\in\mathcal{J}(c),e \neq d}v_{e,c}^{n_{iter}}(x),\ \forall x\in\{S\cup 0\}, \tag{32b}\]
where \(x^{\bigodot}=1\) if \(x\in S\) or 0 otherwise. The message from variable node \(x[c]\) to observation node \(y[d]\) contains the probability mass function (pmf) with elements
\[p_{c,d}^{n_{iter}}\left(x\right)=\Delta\cdot\widehat{p}_{c,d}^{n_{iter}}(x)+(1- \Delta)\cdot p_{c,d}^{n_{iter-1}}\left(x\right). \tag{33}\]
**7) Convergence indicator** : We calculate the convergence indicator \(\eta^{n_{iter}}\) for some small \(\varrho\) as
\[\eta^{n_{iter}}=\frac{1}{MN}\sum_{c=1}^{MN}\mathbb{I}\left(\max_{x\in\{S\cup 0 \}}\,p_{c}^{n_{iter}}\left(x\right)\geq 1-\varrho\right), \tag{34}\]
where \(\mathbb{I}\) denotes indicator function. The posterior probability for each element of the transmit symbol is given by
\[p_{c}^{n_{iter}}\left(x\right)=\frac{1}{C}u_{c}^{n_{iter}}\left(x^{\bigodot} \right)\prod_{d\in\mathcal{J}(c)}v_{d,c}^{n_{iter}}\left(x\right),\ \forall x\in\{S\cup 0\}, \tag{35}\]
where \(C\) is a normalizing constant.
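A one-line sketch of the convergence indicator (34) (Python/NumPy, assuming the posteriors \(p_{c}^{n_{iter}}(x)\) are stacked row-wise in a matrix) is:

```python
import numpy as np

def convergence_indicator(p_post, rho=0.1):
    """Eq. (34): fraction of symbols whose largest posterior probability exceeds 1 - rho."""
    return float(np.mean(np.max(p_post, axis=1) >= 1.0 - rho))
```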
**8) Update criteria**: If \(\eta^{n_{iter}}>\eta^{n_{iter}-1}\), the probability of the transmitted symbols is updated only when the current iteration provides a better solution than the previous one,
\[\overline{\mathbf{p}}_{c}=\mathbf{p}_{c}^{n_{iter}},\ c=1,\ldots,MN. \tag{36}\]
In this algorithm, the different messages passed in this graph are as follows: \(\mathbf{v}_{d,c}\) passes from observation node \(y[d]\) to the connected variable node \(x[c]\); \(\mathbf{q}_{c}\) passes from variable node \(x[c]\) to the connected activity indicator node \(a[f]\); \(\mathbf{w}_{f}\) passes from activity indicator node \(a[f]\) to the connected constraint node \(G[\beta]\); \(\mathbf{\psi}_{f}\) passes from constraint node \(G[\beta]\) to the connected activity indicator node \(a[f]\); \(\mathbf{u}_{c}\) passes from activity indicator node \(a[f]\) to the connected variable node \(x[c]\); \(\mathbf{p}_{c,d}\) passes from variable node \(x[c]\) to the connected observation node \(y[d]\). All of the messages are exchanged between these four nodes until convergence.
**9) Stopping criteria**: The MLJSAPD algorithm stops when \(\eta^{n_{iter}}=1\) or the maximum number of iterations \(n_{iter}^{max}\) is reached.
After the algorithm has converged, we can find all of the activated blocks, which are determined by choosing the blocks with the \(\widehat{k}\) largest activation probabilities in each subframe, given by \([a_{1}^{\beta},a_{2}^{\beta},\ldots,a_{\widehat{k}}^{\beta}]\) with \(1\leq t\leq\widehat{k}\) and \(1\leq\beta\leq J\). Let \([d_{1}^{\beta},d_{2}^{\beta},\ldots,d_{\widehat{M}}^{\beta}]\) denote the blocks of the \(\beta\)-th subframe, i.e., \(a_{t}^{\beta}\in[d_{1}^{\beta},d_{2}^{\beta},\ldots,d_{\widehat{M}}^{\beta}]\). Then, we can obtain the corresponding activated resource units according to the activated blocks.
Finally, we make a decision of the transmitted symbols, as
given by
\[\widehat{x}\left[c\right]=\underset{x\in S}{\text{argmax}}\ \overline{p}_{c}\left(x\right),\ c\in \mathbb{A}, \tag{37}\]
where \(\mathbb{A}\) denotes the set of active resource units.
According to the active resource units, the corresponding symbols will be transferred into bits through a series of inverse mapping of IM.
### _CMPD Algorithm_
To further reduce the complexity, we propose the CMPD algorithm to simplify the structure of the above factor graph and only keep the observation node \(\mathbf{y}\) and variable node \(\mathbf{x}\). In our proposed CMPD algorithm, we identify the active blocks by comparing the LLRs of each block after the iterations. The details of the CMPD algorithm are given as follows and summarized in **Algorithm 2**.
The joint MAP probability of the transmitted signal is given by (21). Differently from the MLJSAPD algorithm, we calculate (21) by the following approximation:
\[\widehat{x}\left[c\right] =\underset{x\in\{S\cup 0\}}{\arg\max}\Pr(x[c]=x|\mathbf{y}, \mathbf{H})\] \[\propto\underset{x\in\{S\cup 0\}}{\arg\max}\prod_{d\in\mathcal{J} \left(c\right)}\Pr\left(y[d]|x[c]=x,\mathbf{H}\right). \tag{38}\]
Similar to the MLJSAPD algorithm, we employ the Gaussian approximation to the interference term, and the received signal can be obtained by applying the same expression in (23). The mean and variance of the interference in the \(n_{iter}\)-th iteration can still be given in (24) and (25), respectively. The probability estimate of \(x[c]\) passed from observation node \(y[d]\) to variable node \(x[c]\) is given by (26). From variable node \(x[c]\) to observation node \(y[d]\), the pmf vector \(\mathbf{p}_{c,d}\) is updated by the expression (33), with
\[\widehat{p}_{c,d}^{n_{iter}}(x) \propto\prod_{e\in\mathcal{J}\left(c\right),e\neq d}\Pr\left(y[e ]|x[c]=x,\mathbf{H}\right)\] \[=\prod_{e\in\mathcal{J}\left(c\right),e\neq d}\frac{v_{e,c}^{n_{ iter}}(x)}{\sum\limits_{x\in\{S\cup 0\}}v_{e,c}^{n_{iter}}(x)}. \tag{39}\]
Here, we calculate the convergence indicator \(\eta^{n_{iter}}\) by (34). The posterior probability for each element of the transmit symbol is given as
\[p_{c}^{n_{iter}}\left(x\right)=\prod_{e\in\mathcal{J}\left(c\right)}\frac{v_{ e,c}^{n_{iter}}(x)}{\sum\limits_{x\in\{S\cup 0\}}v_{e,c}^{n_{iter}}(x)},\ \forall x\in\{S\cup 0\}. \tag{40}\]
The update and stopping criteria of the CMPD algorithm are the same as those of the MLJSAPD algorithm. Once the stopping criterion is satisfied, we can obtain the LLR of each resource unit as
\[\widehat{L}[c]=\log\frac{\prod\limits_{x\in S}p_{c}^{n_{iter}}(x)}{p_{c}^{n_{ iter}}(x=0)},\ c=1,\ldots,MN. \tag{41}\]
Then, we average the LLRs of all resource units in each block. The active blocks can be determined by choosing the blocks of the corresponding \(\widehat{k}\) largest average LLRs.
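A sketch of this selection step is given below (Python/NumPy); the grouping of resource-unit indices into blocks is an assumption about the bookkeeping, and the LLR follows (41) as written.

```python
import numpy as np

def select_active_blocks(p_post, blocks, k_hat):
    """Eq. (41) + block averaging.
    p_post : (MN, |S|+1) posterior pmfs; column 0 corresponds to the inactive symbol x = 0
    blocks : list of index arrays, one per block of resource units in a subframe
    """
    # LLR of each resource unit, following (41): log( prod_{x in S} p(x) / p(0) )
    llr = np.log(np.prod(p_post[:, 1:], axis=1) / p_post[:, 0])
    scores = np.array([llr[idx].mean() for idx in blocks])
    return np.argsort(scores)[-k_hat:]          # indices of the k_hat most likely active blocks
```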
Finally, we make the decisions of the transmitted symbols \(\widehat{x}\left[c\right]\) for the active resource units according to (37). Then, the estimated symbols are transferred into bits by using a series of inverse mapping of IM.
### _Complexity Analysis_
The complexities of the proposed MLJSAPD and CMPD algorithms are analyzed in this subsection. We take the DeIM-OTFS scheme as an example; the complexity of the DoIM-OTFS scheme can be obtained in a straightforward manner. As shown in TABLE II, the complexity of the proposed MLJSAPD and CMPD algorithms for each iteration is calculated according to the real-field multiplications [23] and exponential functions, respectively, given at the top of the next page. Complex multiplication, inverse, and division are equivalent to three, four, and six real-field multiplications, respectively. The MLJSAPD algorithm complexity is mainly dominated by (24)-(26), and (28)-(32). The numbers of real-field multiplications required in steps (24), (25), and (28)-(32) are \(2MN\mathcal{Z}(M_{c}+1)\), \(MN\mathcal{Z}(4(M_{c}+1)+1)\), \(MN\mathcal{Z}(M_{c}+1)\), \(2\widehat{M}J\mathcal{Z}\), \(\widehat{M}^{2}J^{2}+\widehat{M}J\), \(2J\mathcal{Z}\) and \(MN\mathcal{Z}^{2}\), respectively. In addition, (26) is an exponential function with a complexity of \(MN\mathcal{Z}\). The CMPD algorithm complexity is dominated by (24)-(26), (39), and (40). The number of real-field multiplications of (24)-(26) is the same as for the MLJSAPD algorithm, and (39) and (40) are given by \(MN(\mathcal{Z}+M_{c}+4)\) and \(MN(\mathcal{Z}+M_{c}+8)\), respectively. From this analysis, we can observe that our proposed MLJSAPD and CMPD algorithms have tolerable complexity for symbol detection. Moreover, the simulation results in the next section verify the desired performance of the MLJSAPD and CMPD algorithms.
## V Simulation Results
In this section, we study the BER performance of the proposed DeIM-OTFS/DoIM-OTFS scheme with the MLJSAPD and CMPD detection algorithms. We assume that perfect channel knowledge is available at the receiver, and all relevant simulation parameters are given in Table III. We also test the receiver performance of the proposed schemes with imperfect CSI. The Doppler frequency shift of the \(i\)-th channel path is generated by \(\nu_{i}=\nu_{max}\text{cos}(\theta_{i})\), where \(\nu_{max}\) denotes the maximum Doppler frequency shift and \(-\pi\leq\theta_{i}\leq\pi\). Moreover, the RRC rolloff factor is set to 0.4 at the transmitter and receiver. Without loss of generality, we choose \(\Delta=0.4\) and \(\varrho=0.1\). Unless otherwise mentioned, the numbers of delay bins, Doppler bins and active blocks in each subframe are set to \(\widehat{M}=4\), \(\widehat{N}=4\) and \(\widehat{k}=1\), respectively. The number of multipaths is set to \(L=4\), and the user velocity is set to 300 Kmph.
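For reference, a sketch of drawing one such channel realization is given below (Python/NumPy); the path gains follow \(\mathcal{CN}(0,1/L)\) and the Doppler model \(\nu_{i}=\nu_{max}\cos(\theta_{i})\) from the text, while the speed-of-light constant and random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
L, fc, v = 4, 4e9, 300 / 3.6                       # 4 paths, 4 GHz carrier, 300 Kmph in m/s
nu_max = v * fc / 3e8                              # maximum Doppler shift (Hz)

h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(1 / (2 * L))  # CN(0, 1/L)
theta = rng.uniform(-np.pi, np.pi, L)
nu = nu_max * np.cos(theta)                        # nu_i = nu_max * cos(theta_i)
print(nu_max, nu)
```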
In Fig. 7, we illustrate the convergence analysis of the proposed MLJSAPD and CMPD algorithms for different SNRs with the DeIM-OTFS scheme. As shown in Fig. 7, the proposed MLJSAPD and CMPD algorithms exhibit a slightly faster convergence speed at low SNR than at high SNR. At an SNR of 5 dB, the MLJSAPD and CMPD algorithms converge after 8 iterations on average, whereas for a higher SNR of 10 dB they converge in 10 iterations. Based on the above analysis, we take the number of iterations to be 10 for the following simulation tests. A similar convergence result can be observed for the DoIM-OTFS scheme, and thus we omit the details here for brevity.
since more diversity can be exploited from a larger number of independent resolvable paths.
Fig. 10 shows the BER performance of the MLJSAPD and CMPD algorithms for different user velocities with SNR = 3 dB and 11 dB. It can be observed that the BER performance of the MLJSAPD and CMPD algorithms gradually improves as the velocity increases and saturates once the velocity exceeds 450 Kmph. The underlying reason is that, as the velocity increases, OTFS modulation can resolve more distinct paths in the Doppler domain, and thus better performance can be achieved. As a result, performance improvements can be obtained at high user velocities.
Finally, the BER performance of the proposed MLJSAPD and CMPD algorithms is examined under imperfect CSI in Fig. 11. Here, we characterize the CSI errors by adopting the following model [23]
\[h_{i}=\tilde{h}_{i}+\Delta h_{i},\|\Delta h_{i}\|\leq\epsilon_{h_{i}},\] \[\nu_{i}=\tilde{\nu}_{i}+\Delta\nu_{i},\|\Delta\nu_{i}\|\leq \epsilon_{\nu_{i}},\] \[\tau_{i}=\tilde{\tau}_{i}+\Delta\tau_{i},\|\Delta\tau_{i}\|\leq \epsilon_{\tau_{i}},\]
where \(\tilde{h}_{i}\), \(\tilde{\nu}_{i}\) and \(\tilde{\tau}_{i}\) denote the estimated values of \(h_{i}\), \(\nu_{i}\), and \(\tau_{i}\), respectively. \(\Delta h_{i}\), \(\Delta\nu_{i}\), and \(\Delta\tau_{i}\) are the corresponding channel estimation errors. We assume that the norms of \(\Delta h_{i}\), \(\Delta\nu_{i}\), and \(\Delta\tau_{i}\) do not exceed the given values \(\epsilon_{h_{i}}\), \(\epsilon_{\nu_{i}}\) and \(\epsilon_{\tau_{i}}\), respectively. Here, we set \(\epsilon_{h_{i}}=\epsilon\left\|\tilde{h}_{i}\right\|\), \(\epsilon_{\nu_{i}}=\epsilon\left\|\tilde{\nu}_{i}\right\|\) and \(\epsilon_{\tau_{i}}=\epsilon\left\|\tilde{\tau}_{i}\right\|\) for simplicity. From Fig. 11, we observe that both the proposed MLJSAPD and CMPD algorithms suffer only a mild performance loss for modest values of channel uncertainty \(\epsilon\). Even as the level of channel uncertainty increases, no rapid degradation in BER performance appears, which verifies the robustness of our proposed MLJSAPD and CMPD detection algorithms.
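A simple way to generate channel parameters that respect this bounded-error model is to draw a random perturbation for each nominal value and cap its magnitude at the allowed fraction \(\epsilon\). The nominal path gains, Doppler shifts, and delays below are placeholders used only to exercise the model.

```python
import numpy as np

rng = np.random.default_rng(2)

def perturb(nominal, eps_rel, rng):
    """Return nominal + Delta with |Delta| <= eps_rel * |nominal| for each entry."""
    nominal = np.asarray(nominal)
    bound = eps_rel * np.abs(nominal)
    if np.iscomplexobj(nominal):
        direction = np.exp(2j * np.pi * rng.random(nominal.shape))  # random phase
    else:
        direction = rng.choice([-1.0, 1.0], size=nominal.shape)     # random sign
    return nominal + rng.random(nominal.shape) * bound * direction

eps = 0.05                                                        # uncertainty level
h_hat = np.array([0.8 + 0.1j, 0.5 + 0.3j, 0.2 - 0.2j, 0.1j])      # estimated gains
nu_hat = np.array([900.0, -400.0, 150.0, -50.0])                  # estimated Dopplers (Hz)
tau_hat = np.array([0.0, 1.0e-6, 2.1e-6, 3.3e-6])                 # estimated delays (s)

h, nu, tau = (perturb(x, eps, rng) for x in (h_hat, nu_hat, tau_hat))
print(np.all(np.abs(h - h_hat) <= eps * np.abs(h_hat)))           # bound respected
```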
## VI Conclusion
In this paper, we have proposed two efficient block-wise IM schemes for practical high mobility OTFS communications, namely DeIM-OTFS and DoIM-OTFS. We have analyzed the average BER bounds for the proposed DeIM-OTFS and DoIM-OTFS schemes with the optimal ML detectors. Both theoretical analysis and simulation results have demonstrated that our proposed DeIM-OTFS and DoIM-OTFS schemes outperform the conventional random IM-OTFS scheme. We have also
Fig. 11: BER performance of MLJSAPD and CMPD algorithms with imperfect CSI.
Fig. 8: BER performance comparison between the proposed MLJSAPD and CMPD algorithms with DeIM-OTFS/DoIM-OTFS scheme and the traditional OTFS/IM-OTFS systems.
Fig. 10: BER performance of MLJSAPD and CMPD algorithms for different mobile velocities.
Fig. 9: BER performance comparison for different activated indices of the proposed DeIM-OTFS system under different number of channel multipaths with BPSK modulation.
noted that our proposed DeIM-OTFS scheme outperforms the DoIM-OTFS scheme, as the interference effect caused by the channel delays is much less than that caused by the channel Doppler spreads. Furthermore, we have developed low-complexity MLJSAPD and CMPD algorithms for symbol detection in the proposed DeIM-OTFS and DoIM-OTFS systems. Numerical results have verified that our proposed MLJSAPD and CMPD algorithms can achieve the desired performance and robustness to imperfect CSI. The proposed MLJSAPD algorithm can achieve superior performance to the CMPD algorithm with a slight sacrifice in complexity.
|
2310.06909
|
Fragility of the magnetic order in the prototypical altermagnet RuO$_2$
|
Altermagnetism is a topic that has lately been gaining attention and the
RuO$_2$ compound is among one of the most studied altermagnetic candidates.
However, the survey of available literature on RuO$_2$ properties suggests that
there is no consensus about the magnetism of this material. By performing
density functional theory calculations, we show that the electronic properties
of stoichiometric RuO$_2$ are described in terms of a smaller Hubbard $U$
within DFT+$U$ than the value required to have magnetism. We further argue that
Ru vacancies can actually aid the formation of a magnetic state in RuO$_2$.
This in turn suggests that a characterization of the amount of Ru vacancies in
experimental samples might help the resolution of the controversy between the
different experimental results.
|
Andriy Smolyanyuk, Igor I. Mazin, Laura Garcia-Gassull, Roser Valentí
|
2023-10-10T18:04:46Z
|
http://arxiv.org/abs/2310.06909v2
|
# RuO\({}_{2}\): a puzzle to be solved
###### Abstract
Altermagnetism is a topic that has lately been gaining attention and the RuO\({}_{2}\) compound is among one of the most studied altermagnetic candidates. However, the survey of available literature on RuO\({}_{2}\) properties suggests that there is no consensus about the magnetism of this material. By performing density functional theory calculations, we show that the electronic properties of stoichiometric RuO\({}_{2}\) are described in terms of a smaller Hubbard \(U\) within DFT+\(U\) than the value required to have magnetism. We further argue that Ru vacancies can actually aid the formation of a magnetic state in RuO\({}_{2}\). This in turn suggests that a characterization of the amount of Ru vacancies in experimental samples might help the resolution of the controversy between the different experimental results.
## I Introduction
In recent years the topic of altermagnetism has been gaining attention, with significant efforts directed towards finding new altermagnetic materials [1; 2]. Altermagnetism is defined as a magnetic phase with symmetry-driven compensated net magnetization, where the symmetry operation responsible for this magnetic phase is neither inversion nor translation. A material exhibiting these properties combines characteristics of both ferromagnetism and antiferromagnetism. Furthermore, in regards to the electronic band structure, the bands in this phase are non-spin-degenerate, leading to intriguing applications.
Among the various proposed materials as altermagnetic candidates, RuO\({}_{2}\) is attracting much attention. However, the magnetism in this system is still in itself a controversial topic. On the one hand, the absence of a discernible phase transition in the heat capacity [3; 4] and the resistivity data suggests that RuO\({}_{2}\) is a Pauli paramagnet [5; 6; 7; 8; 9]. On the other hand, the existence of an antiferromagnetic configuration has been reported by resonant X-ray scattering [10] and neutron diffraction [11]. However, the latter measurements reported a rather small local magnetization value (0.05 \(\mu_{B}\)). Additionally, there have been observations of a sizeable anomalous Hall effect, consistent with a considerably larger magnetization [12; 13].
Unfortunately, the available neutron diffraction data on RuO\({}_{2}\)[11] are not sufficient to confidently resolve the controversy on the magnetization, for the reasons described below. The main issue is that the quality of the magnetic component of the fit in these experiments depends on the quality of the structural refinement. In reference [11], the authors mention the possibility of a structural distortion in the rutile phase accompanied by antiferromagnetic order. However, Ref. [11] was unable to find a distorted structure that would fit both unpolarized and polarized neutron diffraction data, while the powder X-ray diffraction patterns are consistent with the undistorted rutile structure (see the crystal structure depicted in Fig. 1). To address this problem, Ref. [11] employed density functional theory (DFT) calculations in an attempt to find such a structure. A distorted \(2\times 2\times 2\) rutile supercell was optimized in both the non-magnetic and antiferromagnetic states, and the rutile structure was obtained as the ground state. The available computational data on the lattice dynamics in RuO\({}_{2}\)[14; 15] confirm that the rutile structure is dynamically stable.
The absence of a structural phase transition is indirectly confirmed by other measurements. Two independent electron transport measurements, one conducted up to 300 K [8] and one up to 1000 K [9], show no changes in resistivity that could be caused by a structural phase transition. Both data sets are well described by a model that has three contributions to the resistivity: the electron-phonon interaction with acoustic (Bloch-Grunesien) and optical modes, along with a term arising from electron-electron scattering. Moreover, there is no indication of a structural phase transition in the heat capacity measurements, which was measured up to \(\sim\)340 K [3] and \(\sim\)1050 K [4]. This same conclusion is supported by the available measurements of thermal expansion [16].
Based on the refinement using the rutile structure, the extracted magnetic moment per Ru atom is 0.23 \(\mu_{B}\) for unpolarized neutron diffraction and 0.05 \(\mu_{B}\) for polarized neutron diffraction measurements [11]. Furthermore, there is no evidence of a phase transition to an antiferromagnetic phase, neither in the susceptibility data of Ref. [11] nor in earlier measurements [5; 6; 7]. Additionally, nuclear magnetic resonance measurements strongly suggest the absence of long-range magnetic order. This conclusion is supported by the absence of any contribution from Ru d electrons in both the Knight shift and relaxation rate, as well as the absence of any hyperfine splitting [17]. The authors of this paper point out that, overall, the resonant magnetic properties closely resemble those of nonmagnetic Ru metal.
The controversy among the different experiments suggests that the existence of antiferromagnetic (and hence altermagnetic) order in RuO\({}_{2}\) is rather fragile, likely sample-dependent, and possibly present in only a fraction of the sample volume. In order to gain a better microscopic understanding of the magnetism (or lack thereof) in this material, we have systematically investigated the magnetic states of RuO\({}_{2}\) employing density functional theory (DFT), both with and without a Hubbard \(U\) correction applied to the Ru \(d\)-orbitals. Our tentative conclusion is that the perfectly ordered, stoichiometric RuO\({}_{2}\) is likely nonmagnetic, consistent with numerous experiments above. On top of that, a modest hole doping, for instance, by creating Ru vacancies (a common defect in this class of materials, cf. Ref. [18] that found 5% vacancies in their RuO\({}_{2}\) samples) promotes the RuO\({}_{2}\) to a magnetic state of exactly the same symmetry as suggested in Ref. [11] and utilized in Refs. [19; 20].
The amount of Ru vacancies is liable to vary from sample to sample, and even from one batch to another, depending on the growth procedure, and may even be nonuniform over a sample. This could explain the discrepancy between different experiments and leads us to conclude that a characterization of the Ru vacancies in the samples may be key to understanding the magnetic character of RuO\({}_{2}\).
## II Discussion
The first question to address is whether stoichiometric RuO\({}_{2}\) is magnetic or not. To account for the possible effects of electronic correlations in this system, we perform DFT+\(U\) computations. In Fig. 2, we plot the dependence of the local magnetic moment at the Ru site for an anti-parallel spin orientation (a parallel orientation, as well as various magnetic arrangements with \(q\neq 0\), are invariably higher in energy) over a range of \(U\) values. As seen from the overlap of the magnetic-moment curves for small values of \(U\) in the plot, RuO\({}_{2}\) is non-magnetic up to a critical value, \(U_{eff}=U-J\sim\)1.06 eV. Beyond this value, there is a discontinuous jump (see the explanation below) of the magnetic moment to \(\sim\)0.5 \(\mu_{B}\). This jump is an order of magnitude larger than the 0.05 \(\mu_{B}\) obtained from polarized neutron scattering measurements [11], and more than twice the 0.23 \(\mu_{B}\) value fitted to unpolarized data (claimed to be less reliable and contaminated by unknown structural factors).
Moreover, \(U_{eff}>1\) eV is rather large for this good-metallic, strongly-hybridized, 4\(d\) system. For comparison, first principles calculations of \(U_{eff}\) for the ruthenium-based spin-orbit Mott insulators \(\alpha\)-RuCl\({}_{3}\), RuBr\({}_{3}\), and RuI\({}_{3}\) gave estimates of 2 to 1 eV [21]. Considering the metallic screening occurring in RuO\({}_{2}\), it is expected that its \(U_{eff}\) will be noticeably smaller than the values given above. This leads us to conclude that for stoichiometric RuO\({}_{2}\) a smaller \(U_{eff}\) is likely more realistic to describe its properties than the required one
Figure 2: Total energy (left) and local magnetization at the Ru site (right) as a function of \(U_{eff}=U-J\) are explored in two sets of calculations. In one set (“increasing \(U_{eff}\)”), denoted with \(\times\) (energy results) and \(\square\) symbols (magnetization results) the calculations were done starting from \(U=0\) eV and progressively increasing \(U\) in each subsequent calculation. In the other set (“decreasing \(U_{eff}\)”), denoted with \(+\) and \(*\) symbols, the direction of the calculation was reversed. The calculations are without SOC contributions.
Figure 1: Crystal structure of RuO\({}_{2}\): Ru atoms are shown in red and blue (different colors denote different spin orientations), O atoms are shown in teal.
to have magnetism, and therefore stoichiometric RuO\({}_{2}\) is most probably non-magnetic.
Moving on to analyzing the projected density of states (DOS) (see Fig. 3), we observe that the main contribution to the DOS around the Fermi level comes from the \(xz/yz\) Ru \(d\)-orbitals. The value of the DOS at the Fermi level is relatively low and flat in its vicinity. This causes the Stoner criterion for ferromagnetism to be very hard to fulfill.
Zone-center antiferromagnetism, as in RuO\({}_{2}\), obeys a modified Stoner criterion in which, instead of the uniform susceptibility \(\chi({\bf q}=0)=N(0)\), the susceptibility at some finite reciprocal lattice vector, \(\chi({\bf G}\neq 0)\), appears; however, it is quite obvious that highly dispersive bands at the Fermi level and a low DOS are rather unfavorable in this case as well.
Another interesting aspect in the DOS is the narrow peak below the Fermi level coming from the \(xy\) Ru orbitals. If the Fermi level were shifted closer to this peak, it could potentially trigger a magnetic transition from the current non-magnetic state to a state with a significant magnetic moment.
The shift of the Fermi level can be accomplished through hole doping, changing the electron occupation. The only other alternative for generating a magnetic order _without doping_ is to increase \(U\), and thus the effective Stoner parameter \(I_{eff}=I+(U-J)/5\) (see Ref. [22]), until the Stoner criterion (\(I_{eff}N(0)>1\)) is satisfied. To check this hypothesis we performed a series of DFT+\(U\) calculations varying both the value of \(U_{eff}\) and the number of electrons. Figure 4 shows the value of the local magnetic moment at the Ru site as a function of \(U_{eff}\) and the number of electrons per unit cell (two formula units). The isoline with \(m=0.05\)\(\mu_{B}\), corresponding to the measured value from Ref. [11], is highlighted. One can see that there is a rather stable ground state for \(\sim 0.1\) hole/Ru doping, within a reasonable range of \(U_{eff}\lesssim 1\) eV. For the same \(U_{eff}\), a larger doping of 0.4 hole/Ru, corresponding to 10% of Ru vacancies, generated a local magnetic moment of \(m=0.2\)\(\mu_{B}\). It is worth pointing out that the discontinuous jump in the calculated magnetic moment for the undoped compound as a function of \(U_{eff}\) is immediately understood from the DOS in Fig. 3. That is because, in order to get a stable magnetic solution, the exchange splitting (proportional to \(U_{eff}\)) must reach the threshold (around 1 eV, from Fig. 3) corresponding to the separation between the \(xy\) band and the Fermi level.
Note that the data in Fig. 4 merely show a trend of the system to attain a magnetic moment with hole doping, but the preferred magnetic orientation may depend on the doping as well. In particular, when spin-orbit coupling (SOC) is accounted for, our DFT+\(U\) calculations with \(U_{eff}\)=1.4 eV show that the magnetic anisotropy energy \(E_{100}-E_{001}\) changes linearly with doping and there is a transition from the easy axis along the \(c\) direction towards an easy plane at around 0.2 hole/Ru. However, the full sampling of magnetic ground states as a function of \(U_{eff}\) and hole doping is out of the scope of this work.
The next issue is the magnetic ground state when the system is in the regime where magnetization is permitted. To address this question, we initially calculated, using VASP [23; 24; 25; 26], the energy difference between the ferromagnetic (FM) and altermagnetic (AM) stoichiometric RuO\({}_{2}\) configurations. The calculations were done without considering SOC effects while setting \(U_{eff}\) to 1.3 and 1.4 eV, and we analyzed the magnetization of both configurations. Interestingly, the AM configuration converged to \(M_{Ru}=0.66\) and 0.78 \(\mu_{B}\) for each Ru atom in the cell, respectively. In contrast, the FM ones essentially collapsed, yielding a total magnetization of \(M_{tot}=0.015\) (0.038) \(\mu_{B}\) per Ru atom. Correspondingly, the AM energy was lower than the FM one by 3.3 (9.3) meV/Ru. Altogether, these results indicate that the altermagnetic configuration has the lowest energy. Besides that, the spin-spirals with \(\vec{q}=(0,0,q)\) and
Figure 3: Projected non-magnetic density of states onto Ru \(d\)-orbitals: blue, orange, cyan, red, and teal are used to depict \(z^{2}\), \(x^{2}-y^{2}\), \(xy\), \(xz\) and \(yz\) orbitals respectively. The coordinate system is aligned with Ru-O bonds.
Figure 4: Dependence of the local atomic magnetization \(m\) on hole doping (an undoped case corresponds to 56 electrons per cell, i.e., per two formula units) and the effective Hubbard parameter \(U_{eff}=U-J\). Isolines for \(m=0.05\), 0.10, and 0.15 \(\mu_{B}\) are depicted by white lines (with no attempts to smooth the plotted lines).
\(\vec{q}=(q,0,0)\) were checked leading to \(q=0\) as the lowest energy state in both cases.
We also checked the calculated magnetic anisotropy and compared it with the experiment. Including spin-orbit coupling, we found that the \(c\) axis is the easy axis, in agreement with experiment [11], as seen in Fig. 5.
Thus, the magnetic ground state is characterized by an antiparallel alignment along the \(c\) axis of the magnetic moments of the two Ru atoms. This order can be described by the magnetic space group P4\({}_{2}^{\prime}\)/mnm\({}^{\prime}\) (BNS 136.499).
## III Conclusions
Our DFT calculations show that, for a realistic value of \(U_{eff}\) (using Ru based insulators as reference), the stoichiometric RuO\({}_{2}\) compound is non-magnetic. However, hole doping due to Ru vacancies can induce a phase transition to the antiferromagnetic phase even for small values of \(U_{eff}\). This observation may be a key to reconcile different, strongly mutually contradicting experiments. If our conjecture is correct, every experimental work on RuO\({}_{2}\) must begin with a careful characterization of the O and Ru content. Moreover, a systematic experimental investigation of magnetic properties as a function of the O and Ru content is absolutely necessary. One verifiable corollary is that with controllable Ru vacancies one should be able to observe the antiferromagnetic transition in thermodynamics, transport and magnetometry.
## IV Computational details
Computations were done using density functional theory (DFT) in the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof [27; 28] functional as implemented in the VASP [23; 24; 25; 26] package, employing the projector augmented wave method (PAW) [29; 30]; the Ru_sv and O (or O_h, for structural optimization) pseudopotentials were used. The energy cutoff was set to 400 eV (for test purposes, selected calculations were performed with a 900 eV cutoff) and an 8x8x12 (12x12x18 for testing) Monkhorst-Pack k-point grid [31; 32] was used. For selected calculations, a cross-check with Wien2k was performed.
## V Acknowledgments
We thank Libor Smejkal and Huibo Cao for the discussions. A.S. was supported by the Austrian Science Fund (FWF) through the project P33571 "BandITT". L.G-G. and R.V. were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding through TRR 288 - 422213477 (project B05). I.M. was supported by the Army Research Office under Cooperative Agreement Number W911NF- 22-2-0173. He also acknowledges Heraeus Foundation for supporting his visits to University of Frankfurt. Some of the images in the paper were created using VESTA software [33].
|
2304.13525
|
Thermal Vision for Soil Assessment in a Multipurpose Environmental
Chamber under Martian Conditions towards Robot Navigation
|
Soil assessment is important for mobile robot planning and navigation on
natural and planetary environments. Terramechanic characteristics can be
inferred from the thermal behaviour of soils under the influence of sunlight
using remote sensors such as Long-Wave Infrared cameras. However, this
behaviour is greatly affected by the low atmospheric pressures of planets such
as Mars, so practical models are needed to relate robot remote sensing data on
Earth to target planetary exploration conditions. This article proposes a
general framework based on multipurpose environmental chambers to generate
representative diurnal cycle dataset pairs that can be useful to relate the
thermal behaviour of a soil on Earth to the corresponding behaviour under
planetary pressure conditions using remote sensing. Furthermore, we present an
application of the proposed framework to generate datasets using the
UMA-Laserlab chamber, which can replicate the atmospheric CO2 composition
of Mars. In particular, we analyze the thermal behaviour of four soil samples
of different granularity by comparing replicated Martian surface conditions and
their Earth's diurnal cycle equivalent. Results indicate a correlation between
granularity and thermal inertia that is consistent with available Mars surface
measurements recorded by rovers. The resulting dataset pairs, consisting of
representative diurnal cycle thermal images with heater, air, and subsurface
temperatures, have been made available for the scientific community.
|
Raul Castilla-Arquillo, Anthony Mandow, Carlos J. Perez-del-Pulgar, Cesar Alvarez-Llamas, Jose M. Vadillo, Javier Laserna
|
2023-04-26T13:01:38Z
|
http://arxiv.org/abs/2304.13525v2
|
Thermal Vision for Soil Assessment in a Multipurpose Environmental Chamber under Martian Conditions towards Robot Navigation
###### Abstract
Soil assessment is important for mobile robot planning and navigation on natural and planetary environments. Terramechanic characteristics can be inferred from the thermal behaviour of soils under the influence of sunlight using remote sensors such as Long-Wave Infrared cameras. However, this behaviour is greatly affected by the low atmospheric pressures of planets such as Mars, so practical models are needed to relate robot remote sensing data on Earth to target planetary exploration conditions. This article proposes a general framework based on multipurpose environmental chambers to generate representative diurnal cycle dataset pairs that can be useful to relate the thermal behaviour of a soil on Earth to the corresponding behaviour under planetary pressure conditions using remote sensing. Furthermore, we present an application of the proposed framework to generate datasets using the UMA-Laserlab chamber, which can replicate the atmospheric CO\({}_{2}\) composition of Mars. In particular, we analyze the thermal behaviour of four soil samples of different granularity by comparing replicated Martian surface conditions and their Earth's diurnal cycle equivalent. Results indicate a correlation between granularity and thermal inertia that is consistent with available Mars surface measurements recorded by rovers. The resulting dataset pairs, consisting of representative diurnal cycle thermal images with heater, air, and subsurface temperatures, have been made available for the scientific community.
keywords: Soil assessment; Thermal inertia; Thermal vision; Multipurpose Environmental Chamber.
Footnote †: journal: Journal of Terramechanics
## 1 Introduction
Remote assessment of soil terramechanic characteristics can be crucial for the safety and efficiency of a broad range of tasks related to planetary mobile robot navigation, such as odometry, environment mapping, or energy consumption (Wong, 2022). Soil assessment is useful to prevent slipping, skidding, and getting entrapped in granular soils, which led to delays and significant mobility difficulties in the Curiosity and Spirit rover missions (Arvidson et al., 2017; Gonzalez and Iagnemma, 2018).
In general, onboard mobile robot sensors such as RGB stereo cameras or 3D laser scanners (Guastella and Muscato, 2020) can be used to infer soil characteristics such as roughness and slope (Nampoothiri et al., 2021). However, these measurements are limited to the surface layer, so relevant subsurface properties for traversability, such as soil cohesion or internal friction, cannot be assessed. Alternatively, infrared data has been useful for terrain classification on Mars (Putzig and Mellon, 2007). In this sense, onboard remote sensors such as thermopiles and thermal cameras can provide relevant data to infer subsurface properties from thermal behaviour (Chhaniyara et al., 2012).
Thermopiles are being used in the Curiosity and Perseverance rovers (Gomez-Elvira et al., 2012; Perez-Izquierdo et al., 2018) to perform on-site measurements of Martian surface thermal behaviour. Furthermore, in the future Martian Moons eXploration (MMX) mission, a rover will be equipped with thermopiles to infer Phobos' composition from its thermal inertia (Michel et al., 2022). Nevertheless, thermal cameras offer significantly higher resolution, which can be advantageous for assessment and segmentation of heterogeneous soils. For instance, thermal images can be processed to infer soil traversability from measured thermal diffusivity (Cunningham et al., 2015) or thermal inertia (Cunningham et al., 2015; Gonzalez et al., 2017).
Thermal imagery is suitable for training neural networks to
Figure 1: UMA-Laserlab Mars Environment Chamber used in the experiments.
classify soils based on their thermal behaviour (Iwashita et al., 2020). In fact, there is a growing interest in image-based machine learning for navigation and terrain classification for planetary rovers (Rothrock et al., 2016; Mandrake et al., 2022). In particular, thermal inertia measurements have been used to train slippage models on rovers (Cunningham et al., 2019) and to improve their autonomy on machine learning systems (Ono et al., 2020). Nonetheless, machine learning approaches are limited by an insufficient amount of representative data, given the difficulty and expense of planetary imaging (Nagle-Menaughton et al., 2022; Atha et al., 2022). Besides, the thermal behaviour obtained in experiments on Earth is often different from the behaviour on planets such as Mars, which limits the applicability of the machine learning models (Cunningham et al., 2019). Therefore, experimental frameworks are needed to obtain experimental datasets on Earth that are representative of planetary conditions.
The ability to replicate conditions representative of a real scientific mission on other planets is important in experiments for thermal inertia estimations, which are very dependent on pressure (Putzig, 2006). In this sense, Multipurpose Environmental Chambers (MECs) can operate under representative conditions of temperature and pressure found in other planets such as Mars (Vakkada Ramachandran et al., 2020; Wu et al., 2021).
This article addresses on-robot remote sensing for planning and navigation, by proposing a general MEC-based framework to generate representative diurnal cycle dataset pairs. They can be useful to relate the thermal behaviour of a soil on Earth to the corresponding behaviour under planetary pressure conditions using thermal cameras. We analyzed the thermal behaviour of four soil samples of different granularity by comparing replicated Martian surface conditions and their Earth's diurnal cycle equivalent, focusing this analysis on thermal inertia for different types of soil. Results indicate a correlation between granularity and thermal inertia that is consistent with available Mars surface measurements recorded by rovers. The provided framework and its corresponding results are supported by a dataset that was generated using the UMA-Laserlab MEC (see Fig. 1), which can replicate the atmospheric CO\({}_{2}\) composition of Mars. This dataset consists of representative diurnal cycle thermal images with heater, air, and subsurface temperatures for both Earth and Mars conditions, allowing a thermal inertia comparison for different soils. It has been made available for the scientific community.
This article is organized as follows. Section 2 reviews thermal inertia as well as methods to estimate it. Section 3 presents the proposed MEC-based framework. Section 4 describes the experimental setup. Section 5 introduces the generated dataset and discusses experimental results. Finally, Section 6 offers conclusions and provides an outlook on future work.
## 2 Thermal inertia
This section reviews thermal inertia concepts, the use of the thermal diffusion equation to model the Martian surface thermal behaviour, and two methods to estimate thermal inertia based on surface temperature gradients.
### Definition and pressure dependence
Thermal inertia, \(I\), is defined as follows:
\[I=\sqrt{k\rho c}, \tag{1}\]
where \(k\) is the bulk thermal conductivity, \(\rho\) is the bulk density and \(c\) is the soil specific heat capacity. Thermal inertia is the property of a material that quantifies the resistance of a soil to changes in its temperature. A higher thermal inertia value means a slower heating of the soil. Thermal conductivity is the parameter which mainly influences thermal inertia. It is affected by three different heat transfer mechanisms (Putzig, 2006):
\[k=k_{r}+k_{c}+k_{g}, \tag{2}\]
where \(k_{r}\) is the radiative transfer across pore spaces; \(k_{c}\) is the conduction between grain contact areas; and \(k_{g}\) is the conduction of the gas which fills the pores between grains. Pressure greatly determines which term acquires the most relevance. Gas conduction (\(k_{g}\)) dominates at pressures between 0.1 mbar and 1000 mbar, where there is a near-linear relationship between particle size and thermal conductivity for granular soils (Presley and Christensen, 1997; Masamune and Smith, 1963). In this case, loose granular soils have lower thermal inertia than compacted rocky soils (Jakosky, 1986a). However, the relationship is not so strong at pressures higher than 1000 mbar. Thus, it is easier to estimate soil characteristics based on thermal inertia at Martian pressure than at Earth's pressure.
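As a minimal numerical illustration of Eq. (1), the sketch below compares the thermal inertia of a loose granular soil and of a compact rock using generic order-of-magnitude property values; these numbers are assumptions for illustration only, not measurements from this work.

```python
import numpy as np

def thermal_inertia(k, rho, c):
    """Thermal inertia I = sqrt(k * rho * c), in J m^-2 K^-1 s^-1/2 (tiu)."""
    return np.sqrt(k * rho * c)

# Generic order-of-magnitude values (assumed): loose dry sand vs. compact rock.
I_sand = thermal_inertia(k=0.2, rho=1400.0, c=800.0)   # ~470 tiu
I_rock = thermal_inertia(k=2.0, rho=2900.0, c=800.0)   # ~2150 tiu
print(f"I_sand ~ {I_sand:.0f} tiu, I_rock ~ {I_rock:.0f} tiu")
```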
### Martian surface behaviour
Martian surface thermal behaviour can be expressed as a boundary condition on the thermal diffusion equation derived from its surface energy budget:
\[G=-I\sqrt{\frac{\pi}{P}}\left.\frac{\partial T}{\partial Z^{\prime}}\right|_{Z^{\prime}=0}=(1-A)R_{sw}-\epsilon\sigma_{B}T_{s}^{4}+\epsilon R_{bv}-F_{CO_{2}}, \tag{3}\]
where \(G\) is the net heat flux expressed in \(W/m^{2}\), \(A\) is the albedo, \(\sigma_{B}\) is the Stefan-Boltzmann constant, \(R_{sw}\) is the down-welling shortwave (SW) radiation absorbed from the Sun, \(R_{bv}\) is the down-welling longwave (LW) radiation emitted by the atmosphere and the Sun, \(\epsilon\) is the thermal emissivity, \(F_{CO_{2}}\) is the seasonal CO\({}_{2}\) condensation, P is the period of a diurnal cycle, \(T_{s}\) is the surface temperature, and the term \(\left.\frac{\partial T}{\partial Z}\right|_{Z\sim 0}\) is the temperature gradient evaluated at the surface of the terrain, being \(Z^{\prime}\) the distance into the terrain normalized to the thermal skin depth. The sign convention is to use a positive sign when modeling the heating of the terrain and a negative sign when modeling its cooling.
The \(F_{CO_{2}}\) term of Eq. 3 is negligible for Martian surfaces located from equatorial to mid-latitudes that present no frost. Moreover, the down-welling LW radiation is not considered, as previous on-ground measurements have shown it to be an order of magnitude smaller than the rest of terms (Martinez et al., 2014). Thus, the simplified equation for the soil thermal behaviour is:
\[G=-I\sqrt{\frac{\pi}{P}}\left.\frac{\partial T}{\partial Z^{\prime}}\right|_{Z^{\prime}=0}=(1-A)R_{sw}-\epsilon\sigma_{B}T_{s}^{4}, \tag{4}\]
where the soil thermal behaviour depends on the SW incident Sun's radiation, the thermal inertia and the surface radiative emission.
### Thermal inertia estimation
Thermal inertia represents a complex combination of physical properties that are not directly measurable in practice, so simplified estimations based on surface temperature observations are required (Wang et al., 2010). In this work, we use the Apparent Thermal Inertia (ATI) (Price, 1977) and the method based on daily amplitude of surface soil heat flux and temperature by Wang et al. (2010).
ATI (Price, 1977) is a simple method to estimate the thermal inertia of an outdoors surface subjected to the Sun's heating. This estimation takes into account the diurnal temperature amplitude by measuring the minimum night and maximum day surface temperatures, \(T_{min}\) and \(T_{max}\), respectively. The formula to obtain the ATI of a surface is:
\[ATI=\frac{1-A}{\Delta T_{s}}, \tag{5}\]
where \(\Delta T_{s}=T_{max}-T_{min}\). The result can be multiplied by a coefficient of 4186 to express ATI in thermal inertia units, \(tiu\equiv\mathrm{J\,m^{-2}\,K^{-1}\,s^{-1/2}}=\frac{\mathrm{W\,s^{1/2}}}{\mathrm{m^{2}\,K}}\). Throughout this work, ATI will always be expressed in \(tiu\). Even though ATI is widely used in the literature, this estimation does not consider the surface energy budget, among other limitations (Price, 1985).
The alternative estimation proposed by Wang et al. (2010) considered a sinusoidal approximation of the Earth's net heat flux and surface temperatures for a diurnal period \(P\). The same assumption can be applied to Mars' heat fluxes and temperatures (Martinez et al., 2014). Under this assumption, the thermal inertia of a given soil can be estimated as:
\[I_{sin}=\frac{\Delta G_{s}}{\Delta T_{s}\sqrt{2\pi/P}}, \tag{6}\]
where the net heat flux range is \(\Delta G_{s}=G_{max}-G_{min}\), with \(G_{max}\) and \(G_{min}\) being the maximum and minimum values of the net heat flux, respectively.
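Both estimators reduce to one-line expressions. The sketch below implements Eqs. (5) and (6); the amplitudes and the actuation period used in the usage example are placeholder values of the same order as those measured in Section 5, with the period assumed to be given in seconds.

```python
import numpy as np

def apparent_thermal_inertia(t_max, t_min, albedo=0.0):
    """ATI of Eq. (5), scaled by 4186 to express the result in tiu."""
    return 4186.0 * (1.0 - albedo) / (t_max - t_min)

def sinusoidal_thermal_inertia(delta_g, delta_t, period_s):
    """I_sin of Eq. (6): thermal inertia from heat-flux and temperature amplitudes."""
    return delta_g / (delta_t * np.sqrt(2.0 * np.pi / period_s))

# Usage with placeholder values (a ~5 h actuation period, amplitudes in K and W/m^2):
ati = apparent_thermal_inertia(t_max=45.0, t_min=3.0)                    # ~100 tiu
i_sin = sinusoidal_thermal_inertia(delta_g=416.0, delta_t=42.0,
                                   period_s=297 * 60)                    # ~530 tiu
print(f"ATI ~ {ati:.0f} tiu, I_sin ~ {i_sin:.0f} tiu")
```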
## 3 MEC-based framework for thermal remote sensing dataset generation
In this section, we propose a general framework for generating representative diurnal cycle thermal behavior datasets of soils under representative pressures. The framework consists of a physical MEC-based configuration (see Fig. 2) and an experimental methodology (see Fig. 3).
The proposed physical configuration (see Fig. 2) allows remote temperature measurements to be performed under the extreme conditions produced by the MEC. This configuration consists of an inner plate where the sample bins are placed, an external thermal camera connected to a Mini PC for data collection, and an IR viewport. The viewport must allow the infrared range from 8 \(\upmu\)m to 14 \(\upmu\)m to pass through with minimal losses. Furthermore, it must withstand the pressure differential and temperatures reached by the MEC.
The experimental methodology (see Fig. 3) is divided into three sequential tasks: the preparation of the MEC setup; the adjustment of the inner pressure according to the environment to be replicated; and the physical simulation where the actuation profile is defined. In the preparation task, the sample bins are placed on the plate while care is taken to thermally insulate the plate surface, as it can produce IR reflections that can distort the measurements. The thermal camera is placed on the viewport and its housing is connected to ground to avoid the electrostatic charges produced by the MEC pumps. Next, the thermal camera is calibrated to provide precise measurements of the sample bins surfaces and, finally, the MEC gets sealed.
Figure 3: Flow chart of the proposed methodology.
Figure 2: Schematic diagram of the proposed experimental MEC setup.
In the pressure adjustment task, different procedures have to be performed depending on the pressure and temperature range of the experiments. The simulation can be started directly if the experiments are planned to be at Earth's pressure and at temperatures above 0 \({}^{\circ}\)C. Otherwise, the air is pumped out until vacuum is reached and then humidity-free air of the desired composition (i.e., 95 % of CO\({}_{2}\) for Mars) is pumped in. This process prevents the freezing of the air moisture from affecting the MEC internal systems. During the physical simulation part, temperatures are defined for the MEC heaters to obtain sinusoidal soil-sample temperatures similar to those of a real diurnal cycle.
In the physical simulation, the surface energy budget of each sample bin inside the MEC can be expressed as a function of the radiative flux produced by the MEC heaters according to the following equation:
\[G=-I\sqrt{\frac{\pi}{P}}\left.\frac{\partial T}{\partial Z^{\prime}}\right|_{Z^{\prime}=0}=\epsilon\sigma_{B}T_{heater}^{4}-\epsilon\sigma_{B}T_{s}^{4}, \tag{7}\]
where \(T_{s}\) is the surface mean temperature of each soil and \(T_{heater}\) is the MEC heaters' temperature. Air natural convection is considered to be negligible as the MEC is an enclosed space with no wind. Inside the MEC, the radiation term \(\epsilon\sigma_{B}T_{heater}^{4}\) simulates the active Sun heating represented by the term \((1-A)R_{sw}\) of Eq. 4.
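The radiative balance of Eq. (7) can be evaluated directly from heater and surface temperature time series. The sketch below assumes unit emissivity, temperatures in kelvin, and synthetic sinusoidal profiles standing in for the recorded MEC data.

```python
import numpy as np

SIGMA_B = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_heat_flux(t_heater_k, t_surface_k, emissivity=1.0):
    """Net radiative flux G of Eq. (7) between the MEC heaters and a soil surface."""
    return emissivity * SIGMA_B * (t_heater_k**4 - t_surface_k**4)

# Synthetic stand-ins for one actuation cycle (assumed sinusoidal profiles).
t = np.linspace(0.0, 300 * 60, 500)                                     # ~5 h, seconds
t_heater = 273.15 + 25.0 + 40.0 * np.sin(2 * np.pi * t / t[-1])
t_surface = 273.15 + 25.0 + 30.0 * np.sin(2 * np.pi * t / t[-1] - 0.3)  # lags heater
g = net_heat_flux(t_heater, t_surface)
print(f"Delta G over the cycle ~ {g.max() - g.min():.0f} W/m^2")
```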
## 4 Experimental setup
The framework proposed in Section 3 has been applied to produce representative diurnal cycle datasets for analyzing the thermal behaviour of four soil samples for corresponding Martian and Earth surface conditions. This section presents the integration of hardware components to evaluate the proposed framework as well as the selection of soil samples.
### Equipment
The UMA-Laserlab MEC (see Fig. 1) is a stainless-steel cylinder of 12 m of length and 1.6 m of diameter and viewports on the top and sides (Alvarez-Llamas et al., 2021). It is equipped with an inner spot-gridded thermal jacket or heater which contains a cooling fluid that can reach a temperature in the range of \(-72\,^{\circ}\)C to \(127\,^{\circ}\)C, at a rate of 1 \({}^{\circ}\)C/min. The air inside can be pumped out until a pressure of \(10^{-4}\)mbar is reached and can be replaced by CO\({}_{2}\) to simulate the composition of the atmosphere on Mars. It is equipped with vacuum-compliant thermocouple gauges in the center of its core to measure the air temperature. Additionally, the MEC has a stainless steel plate on rails that allow a payload of up to 70 kg.
The thermal vision camera is a PI-640i by Optris based on uncooled microbolometer technology. It is a 320 g Long-Wave Infrared (LWIR) camera that works in the spectral range of \(8\mu m\) to \(14\mu m\), has a resolution of 640x480 pixels and a germanium optic with a field of view of \(60^{\circ}\) x \(45^{\circ}\). It can measure temperatures from \(-20\,^{\circ}\)C to \(900\,^{\circ}\)C with a thermal sensitivity of \(0.04\,^{\circ}\)C. We selected this camera due to its high resolution and light weight, making it suitable for mobile and aerial robots. However, this uncooled camera does not provide temperature measurements below \(-20\,^{\circ}\)C, which limited the absolute minimum temperature to which the soil samples could be subjected.
The thermal camera was connected to a Mini PC Intel NUC with an Intel Core i5 processor of 1.8GHz and 16GB of RAM running the software Optris PIX Connect. We adjusted the focal length of its optic by using a warm body (i.e., a hand) placed on the plate as a reference. The thermal camera geometric and radiometric calibrations were performed and provided by the manufacturer. The sample bins were placed on the plate perpendicular to the thermal camera at a distance of around 1.3 m to have an undistorted view of their surfaces. The plate surface was covered with insulating cardboards and a thick black fabric to avoid IR reflections of the steel.
We designed and developed a viewport adapter (see Fig. 4) to remotely make measurements from outside the MEC using the thermal camera. It is composed of an IR window that keeps the inside of the MEC sealed while letting LWIR radiation in the range from 8 \(\upmu\)m to 14 \(\upmu\)m pass through. We chose an anti-reflection coated Germanium circular optic, model GEW16AR.20 by MKS Instruments, due to its high mechanical resistance and its ability to withstand abrupt thermal changes. We selected a diameter of 74.9 mm and a thickness of 5.0 mm in order to comply with the minimum thickness required to avoid reaching the germanium's fracture strength caused by the pressure differential between the environment and the Martian pressure inside the MEC (Yoder Jr, 2005). Furthermore, an aluminium
Figure 4: a) 3D model of the viewport adapter (dimensions in millimeters), b) Customized viewport adapter.
Figure 5: Samples bins of soils of different granularity introduced into the MEC.
toroid frame was crafted to place the unclamped Germanium window into the MEC's upper ISO160K compliant viewports.
### Soil samples
Four sample bins with soils of different characteristics were selected for the experiments (see Fig. 5). Three of the bins contained granular soils and one contained an example of bedrock. Table 1 shows them sorted from highest to lowest mean granularity. We plotted granularity charts of the granular soils (see Fig. 6) by passing them through sieves with grids of different sizes. In terms of homogeneity, Soil C is the most homogeneous, as more than 90 % of its grains have a diameter of 0.7 mm to 1 mm. It is followed by Soil B, whose grains are mostly concentrated at sizes of less than 2 mm. Finally, Soil A is classified as the most heterogeneous of the three soils, as it consists of a mixture of several grain sizes.
## 5 Experiments
This section presents the experiments carried out to generate the dataset pairs under Earth's and Martian conditions using the framework proposed in Section 3. The dataset thermal images were processed to analyze the soils thermal behaviour and to estimate their thermal inertia.
Two pairs of experiments (Pair-1 and Pair-2) were performed in the MEC on the soils defined in Table 1 in order to provide redundant measurements. Table 2 summarizes the main characteristics of each experiment. It was only possible to obtain the subsurface temperature of one of the soils per experiment due to MEC connectivity limitations. The subsurface thermocouple gauge was located at a depth of 3 cm in soils A and C for Pair-1 and Pair-2, respectively. The experiment pairs consist of Earth representative (1000 mbar) (#1 and #3) and Mars representative (8 mbar) (#2 and #4) pressures. Besides, for the Mars-like experiments, air with Mars' Carbon Dioxide (CO\({}_{2}\)) atmospheric composition of 95 % was introduced into the MEC. On-site near-equatorial environmental measurements performed by the Mars Science Laboratory (MSL) showed mean daily air temperatures of around \(-\)50 \({}^{\circ}\)C with approximate amplitudes of 60 \({}^{\circ}\)C (Martinez et al., 2017). Thus, sinusoidal temperatures of similar amplitude were simulated in diurnal cycles of experi
\begin{table}
\begin{tabular}{l l l l} Sample & Granularity & Density & Bin size \\ \hline Bedrock & 40.0 - 50.0 & 2.94 & 53 x 23 x 3.5 \\ Soil A & 3.0 - 5.0 & 1.43 & 22 x 22 x 7 \\ Soil B & 1.3 - 2.0 & 1.40 & 55 x 23 x 3.5 \\ Soil C & 0.7 - 1.0 & 1.71 & 22 x 22 x 7 \\ \hline \end{tabular}
\end{table}
Table 1: Sample bins characteristics. Mean granularity ranges are expressed in mm, density is in g/ml and bin sizes are in cm.
\begin{table}
\begin{tabular}{l l l l l l} & Experiment & Pressure & Subsurface & \(P_{e}\) \\ \hline \multirow{2}{*}{Pair-1} & \#1 Earth-like & 1000 & Soil A & 296 \\ & \#2 Mars-like & 8 & Soil A & 297 \\ \hline \multirow{2}{*}{Pair-2} & \#3 Earth-like & 1000 & Soil C & 320 \\ & \#4 Mars-like & 8 & Soil C & 360 \\ \hline \end{tabular}
\end{table}
Table 2: Description of the MEC experiments and the soil in which the subsurface thermocouple gauge was located. Pressure is expressed in mbar and the experimental actuation period, \(P_{e}\), is showed in minutes.
Figure 6: Granularity chart for: a) Soil A, b) Soil B, c) Soil C.
\begin{table}
\begin{tabular}{l l c c c c} & & \multicolumn{2}{c}{Mean Temp.} & \multicolumn{1}{c}{Dev.} & \\ \cline{3-6} & & \(T_{\text{init}}\) & \(\Delta T_{\text{s}}\) & \(T_{\text{trans}}\) & \(\Delta G_{\text{s}}\) \\ \hline \multirow{4}{*}{\#1 Earth} & Bedrock & 24.8 & 53.3 & 1.2 & 280 \\ & Soil A & 24.9 & 51.1 & 1.8 & 280 \\ & Soil B & 25.1 & 51.6 & 0.9 & 274 \\ & Soil C & 24.8 & 51.4 & 1.1 & 275 \\ \hline \multirow{4}{*}{\#2 Mars} & Bedrock & 25.0 & 42.0 & 1.7 & 416 \\ & Soil A & 22.7 & 45.7 & 2.5 & 357 \\ & Soil B & 22.4 & 45.5 & 0.4 & 356 \\ & Soil C & 25.0 & 46.3 & 1.0 & 325 \\ \hline \multirow{4}{*}{\#3 Earth} & Bedrock & 24.3 & 48.0 & 1.3 & 268 \\ & Soil A & 24.4 & 45.6 & 1.8 & 265 \\ & Soil B & 24.5 & 45.8 & 0.8 & 264 \\ & Soil C & 24.5 & 46.2 & 1.1 & 257 \\ \hline \multirow{4}{*}{\#4 Mars} & Bedrock & 25.5 & 38.3 & 1.6 & 399 \\ & Soil A & 24.2 & 41.3 & 2.3 & 337 \\ & Soil B & 24.2 & 41.2 & 1.3 & 337 \\ & Soil C & 24.8 & 43.0 & 0.8 & 310 \\ \hline \end{tabular}
\end{table}
Table 3: Surface temperatures and heat fluxes of the sampled soils at Earth's and Martian pressures. Temperatures are expressed in \({}^{\circ}\)C and heat fluxes in W/m\({}^{2}\).
mental actuation period \(P_{e}\) by manual input of constant heating and cooling setpoints for MEC actuation.
### Soils thermal behaviour
Soil surface temperatures were measured by means of the Optris PI-640i thermal camera. The thermal remote sensing was done as close as possible to an actual on-robot implementation, so no prior knowledge of the soils was assumed. Thus, the emissivity (\(\epsilon\)) was considered to be unitary and the albedo (\(A\)) to be zero for all the soils, according to Kirchhoff's law: \(1=A+\epsilon\) (Vollmer, 2021). Polygonal areas delimiting each soil were defined in the acquired thermal images, where the pixels showing the thermocouple gauge were removed so as not to affect the temperature measurements.
Figures 7-10 show the diurnal cycle temperatures for all soil types from experiments #1-#4, respectively. Besides, Figs. 11 and 12 present surface and subsurface temperature readings for soil A (experiments #1 and #2) and soil C (experiments #3 and #4), respectively. All figures show the heater, setpoint and air temperatures. The transient is assumed to end when an inflection point is reached in the upwards heater temperature response. Moreover, Table 3 presents soil surface mean temperatures for the pixels in the corresponding polygonal area together with standard deviations for the four experiments. In the table, \(\Delta T_{s}=T_{max}-T_{init}\), being \(T_{max}\) the maximum mean temperature and \(T_{init}\) the mean temperature when actuation starts. \(T_{trans}\) specifies the standard deviation temperature of each soil when the actuation transient ends. Net heat fluxes were computed by applying the surface energy budget equation of each sample bin inside the MEC (see Eq. 7) using the soils mean temperatures and the MEC heater temperatures obtained during the experiments. \(\Delta G_{s}=G_{max}-G_{init}\), being \(G_{max}\) the maximum net heat flux and \(G_{init}\) the net heat flux when actuation starts.
First, we compare the surface mean temperatures of pair-1 (Figs. 7a and 8a). The graphs show that only the bedrock is distinguishable at terrestrial pressure as all the granular soils present similar temperatures. On the other hand, at Martian pressure, the soils can be classified into three groups based on their temperature during the MEC heating; from highest to lowest: (1) Soil C; (2) Soil A and B; and (3) bedrock. Besides, both graphs show a slight temporal delay of the bedrock temperature over the rest of soils. The same analysis can be applied to the graphs of pair-2 (Figs. 9a and 10a).
Next, we compare the soils' standard deviation curves of pair-1 (Figs. 7b and 8b). All the soils can be classified using the standard deviation of the surface temperatures at both Earth's and Mars' pressures from the start of the actuation until the transient ends. This behaviour can be due to their granularity: heterogeneous soils (e.g., Soil A) show higher standard deviation temperatures than more homogeneous soils (e.g., Soil C). This behaviour becomes more evident in the Martian case. A similar analysis can be applied to the graphs of pair-2 (Figs. 9b and 10b).
Finally, we compare the mean surface value with the subsurface temperatures of Soil A in pair-1 (Figs. 11a and 11b). In this case, the maximum differences between the surface and subsurface temperatures are \(11.6\,^{\circ}\)C and \(14.7\,^{\circ}\)C at Earth's and Mars' pressure, respectively, which constitutes an increase of \(26.72\%\). As for Soil C in pair-2 (Figs. 12a and 12b), for Earth the maximum difference is \(14.4\,^{\circ}\)C, whereas for the Mars-like case it is \(24.4\,^{\circ}\)C, which is a \(69.44\%\) increase. Based on these data, we can conclude that the thermal inertia of both soils increases when the pressure decreases, as it gets more difficult for the heat to be transmitted vertically.
### Thermal inertia estimation
We computed estimations of each soil's thermal inertia based on the \(\Delta T_{s}\), \(\Delta G_{s}\) and \(P_{e}\) values obtained during the experiments. For comparison purposes, we used both the ATI (Eq. 5) and the sinusoidal estimation, \(I_{sin}\) (Eq. 6). The estimated values are shown in Table 4.
Regarding the sinusoidal estimations, \(I_{sin}\), it is observed that thermal inertia increases when pressure decreases. Thus, soils are easier to classify at Martian pressure than at Earth's pressure. Soils with larger particle sizes, e.g., Bedrock, have higher thermal inertia; on the other hand, soils with smaller particles, e.g., Soil C, show lower thermal inertia. The estimated values under Martian conditions of the bedrock are consistent with the on-site thermal inertia obtained by Curiosity for bedrock-dominated surfaces (\(\sim 350-550\)_tiu_) (Vasavada et al., 2017). As for Soil C, its estimated thermal inertia is similar to surfaces of around \(1\,\mathrm{mm}\) mean particle size also derived from Curiosity's data (\(\sim 265-375\)_tiu_) (Hamilton et al., 2014). On the other hand, soils A and B present similar thermal inertia despite having different particle sizes. In this case, Jakosky (1986b) argued that soils with particle sizes from \(1\,\mathrm{mm}\) to a few centimeters have a constant thermal inertia of \(\sim 420\)_tiu_. In conclusion, at Earth's pressure, the mean relative difference of the highest inertia soil compared with the lowest inertia soil is \(4.20\%\); while at Martian pressure the difference is \(42.84\%\).
As for the ATI estimations, even though the relative differences between soils are consistent, they do not display a significant increase of their absolute values when the pressure decreases. This is mainly due to the fact that ATI does not consider the soils heat fluxes inside the MEC. Thus, thermal inertia estimations using the ATI equation are not adequate enough for this kind of experiments.
### Dataset
During the experiments, we collected a total of \(9225\) radiometric images. Each image recorded by the thermal camera
\begin{table}
\begin{tabular}{c c c c c c} & & Bedrock & Soil A & Soil B & Soil C \\ \hline \multirow{4}{*}{I\({}_{sin}\)} & \#1 Earth & 309 & 322 & 312 & 314 \\ & \#2 Mars & 522 & 411 & 411 & 370 \\ & \#3 Earth & 311 & 323 & 320 & 310 \\ & \#4 Mars & 548 & 431 & 431 & 379 \\ \hline \multirow{4}{*}{ATI} & \#1 Earth & 79 & 82 & 81 & 81 \\ & \#2 Mars & 100 & 92 & 92 & 90 \\ & \#3 Earth & 87 & 92 & 91 & 91 \\ & \#4 Mars & 109 & 101 & 102 & 97 \\ \hline \end{tabular}
\end{table}
Table 4: Estimated values of thermal inertia for each soil.
Figure 8: Diurnal cycle temperatures for the Experiment #2 at Martian pressure (\(p=8\,\mathrm{mbar}\)).
Figure 10: Diurnal cycle temperatures for the Experiment #4 at Martian pressure (\(p=8\,\mathrm{mbar}\)).
Figure 7: Diurnal cycle temperatures for the Experiment #1 at Earth’s pressure (\(p=1000\,\mathrm{mbar}\)).
Figure 9: Diurnal cycle temperatures for the Experiment #3 at Earth’s pressure (\(p=1000\,\mathrm{mbar}\)).
was saved as a plain text 640 x 480 matrix with each cell containing the temperature in degrees Celsius. Snapshots of the thermal images were processed to facilitate direct viewing. An example of one of these snapshots is shown in Fig. 13. Finally, spreadsheets were generated with the heater, air, and subsurface temperatures recorded by the thermocouples. To the authors' knowledge, no similar dataset exists in the literature. A public dataset with the recorded data can be found at Zenodo1.
Footnote 1: [http://doi.org/10.5281/zenodo.7750148](http://doi.org/10.5281/zenodo.7750148)
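Reading one of these frames and extracting per-soil statistics only requires standard tooling. In the sketch below, the file name and the rectangular region of interest are placeholders, since the actual polygonal masks are defined per experiment and per soil.

```python
import numpy as np

# Placeholder path to one radiometric frame: a plain-text 640 x 480 matrix of
# temperatures in degrees Celsius, as described above.
frame = np.loadtxt("frame_000000.txt")
assert frame.shape in {(480, 640), (640, 480)}

# Placeholder rectangular region of interest standing in for a soil's polygonal mask.
roi = frame[200:320, 150:300]
print(f"ROI mean = {roi.mean():.2f} C, std = {roi.std():.2f} C, "
      f"min = {roi.min():.2f} C, max = {roi.max():.2f} C")
```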
## 6 Conclusions and future work
This article proposes a general framework to generate remotely measured diurnal cycle dataset pairs on soils subjected to planetary exploration conditions using MECs. A dataset was generated on different soils from the experiments carried out in the UMA-Laserlab MEC under Earth's and Mars' conditions. To the authors' knowledge, no similar dataset exists in the literature. The obtained data was processed to estimate the thermal inertia values of the soils. These values were compared with real on-site estimations performed by rovers of Mars, showing that our framework is capable of physically simulating the soil thermal behaviour under Mars' conditions.
Based on the analysis of the experiments carried out in this paper, we conclude that thermal vision cameras can be useful to remotely assess soils under Martian pressures. This is also true under Earth's conditions: although relative thermal inertia values are less dependent on soil characteristics, measurements of surface mean temperatures and standard deviations can potentially provide information about soil characteristics. Thus, soil classification algorithms based on thermal vision that work on Earth will perform much better on Mars. Additionally, the generation of diurnal cycle dataset pairs enables the research of new terrain classification techniques using vision cameras.
Future work will be focused on developing on-robot terrain classifiers based on machine learning algorithms trained with thermal images. Furthermore, we will develop multimodal sensors combining thermal vision together with color and depth information to enhance the autonomous assessment of unstructured environments. Studies can also focus on implementing this kind of algorithm on drones, as NASA is planning to use helicopters on future Mars missions.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Figure 11: Diurnal cycle surface and subsurface temperatures of Soil A.
Figure 12: Diurnal cycle surface and subsurface temperatures of Soil C.
## Acknowledgements
This work was supported by the Andalusian Regional Government under the project entitled “Intelligent Multimodal Sensor for Identification of Terramechanic Characteristics in Off-Road Vehicles (IMSITER)” under grant agreement P18-RT-991.
|
2308.13828
|
Gaia view of primitive inner-belt asteroid families: Searching for the
origins of asteroids Bennu and Ryugu
|
Near-Earth asteroids Ryugu and Bennu were visited, characterised, and
sampled by the Hayabusa2 and OSIRIS-REx missions: remote sensing data and
sample return analysis showed that both asteroids have primitive, hydrated and
organic-rich compositions. The dark families of the inner main belt (IMB) that
belong to the spectroscopic C-complex have been claimed to be the sources of
both Ryugu and Bennu. Hence, there has been large effort to characterise them.
Here we used the Gaia Data Release 3 (DR3) asteroid reflectance spectra to
investigate the 11 known IMB C-complex families (Chaldaea, Chimaera, Clarissa,
Erigone, Eulalia, Klio, Polana, Primordial, Sulamitis, Svea, Tamara). For each
family, we extracted the family members that have known geometric visible
albedo values and Gaia DR3 data and we created an average reflectance spectrum
per family between 370 and 950 nm. The average DR3 reflectance spectra of each
family were compared with the previous literature data and to Bennu's and
Ryugu's spectra. We found that DR3 reflectance spectra of the IMB C-complex
families are in general consistent with previous findings with the only
exception of the Svea family. We also showed that the Polana and the Eulalia
families can be distinguished in the wavelength region 370 - 500 nm. Among all
the IMB C-complex families, we determined that the average reflectance spectra
of the Eulalia and Polana families are the most similar to those of Bennu and
Ryugu, respectively. In particular, Eulalia family's average spectrum is a good
match to Bennu's in the wavelength range 450 - 800 nm, while beyond 800 nm the
spectrum of Bennu is bluer than that of Eulalia. Moreover, the spectrum of the
Polana family has the smallest discrepancy against the spectrum of Ryugu,
although this match is formally unsatisfactory (reduced chi^2 ~ 1.9).
|
Marco Delbo, Chrysa Avdellidou, Kevin J. Walsh
|
2023-08-26T09:38:43Z
|
http://arxiv.org/abs/2308.13828v1
|
# Gaia view of primitive inner-belt asteroid families
###### Abstract
Context:
Aims:Near-Earth asteroids Ryugu and Bennu were visited, characterised, and sampled by the Hayabusa2 and OSIRIS-REx missions, where remote sensing data and sample return analysis showed that both asteroids have primitive, hydrated and organic-rich compositions. The dark families of the inner main belt that belong to the spectroscopic C-complex have been claimed to be the sources of both Ryugu and Bennu, hence there have been large efforts to spectroscopically characterise them by ground-based observations.
Methods:Here we used the Gaia Data Release 3 (Gaia DR3) asteroid reflectance spectra in order to characterise the 11 known inner main belt C-complex families (Chaldaea, Chimaera, Clarissa, Erigone, Eulalia, Klio, Polana, Primordial, Sulamitis, Svea, Tamara), using space-borne visible-light spectroscopic observations. For each family we extracted the family members that have known geometric visible albedo values and Gaia DR3 data, and we created an average reflectance spectrum per family, between 370 and 950 nm. These averages were then compared with the ground-based visible spectroscopic surveys of the same families, and to Bennu's and Ryugu's space and ground-based spectra in the same wavelength range.
Results:Gaia DR3 reflectance spectra of the dark asteroid families of the inner main belt are in general consistent with previous findings. The only exception is the case of the Svea family: previous surveys classified its members as B-types, whereas the average reflectance spectrum from Gaia DR3 is similar to a C-type. We also showed that the Polana and the Eulalia families can be distinguished in the wavelength region 370 - 500 nm. Among all the primitive inner main belt families, we found that the average reflectance spectra of the Eulalia and Polana families are the most similar to those of Bennu and Ryugu, respectively. In particular, Eulalia family's average spectrum is a good match to Bennu's in the wavelength range 450 - 800 nm, while beyond 800 nm the spectrum of Bennu is bluer than that of Eulalia. Moreover, the spectrum of the Polana family has the smallest discrepancy (smallest \(\chi^{2}\)) against the spectrum of Ryugu, although this match is formally unsatisfactory (reduced \(\chi^{2}\sim 1.9\)).
Conclusions:
## 1 Introduction
The main belt consists of over one million known asteroids. These have a plethora of compositions and values of albedo, ranging from a few to several tens of percent. The two largest compositional classes are those broadly identified by the so-called spectroscopic S- and C-complexes (DeMeo et al., 2015). These spectral complexes are also separated in albedo, with the C- and S-complex asteroids having, in general, geometric visible albedo (\(p_{V}\)) values smaller than or larger than 0.12, respectively (Delbo et al., 2017; Ferrone et al., 2023).
Recent theories of asteroid formation invoke an _in situ_ (or quasi-_in situ_) origin of the S-complex bodies (Walsh et al., 2011; Raymond & Izidoro, 2017). On the other hand, C-complex asteroids are supposed to be exogenous to the main belt. It is generally thought that these bodies accreted from the dust of the protoplanetary disk in the giant planet region (Warren, 2011; Kruijer et al., 2017; Nakamura et al., 2023), were later transported, and finally implanted into the main belt from larger heliocentric distances. A handful of mechanisms have been proposed for this process, including the rapid growth of the giant planets scattering nearby planetesimals (Raymond & Izidoro, 2017), the migration of the giant planets amidst the solar nebula (Walsh et al., 2011; Pirani et al., 2019) and likely some contributions from the giant planet instability (Levison et al., 2009; Vokrouhlicky et al., 2016).
After the original asteroids (also known as planetesimals; Delbo et al., 2017) were accreted (for the S-complex) or implanted (for the C-complex) in the main belt, collisions with other asteroids throughout the history of the solar system have broken several of them, which in turn created families of asteroid fragments (Nesvorny et al., 2015). Hence, the large majority of asteroids are collisional fragments of the original ones (Delbo et al., 2017; Dermott et al., 2018; Ferrone et al., 2023).
This large population of small fragments can reach the near-Earth space via well-established dynamical routes and eventually arrive to Earth as meteorites (Greenwood et al., 2020, and references therein). The carbonaceous chondrite meteorites (CCs) are typically associated with the C-complex asteroids. The study of CCs has shown that the C-complex asteroids carry a primitive composition, enriched in water and organics, and were possibly responsible for delivering the ingredients of life to primeval Earth via impacts (Chyba & Sagan, 1992). On the other hand, despite the fact that the C-complex asteroids are a majority in the main belt, the CCs are a small fraction in our meteoritic inventory (Norton & Chitwood, 2008). This is reasonable
if we consider the filtering by Earth's atmosphere. Because C-complex asteroid materials are mechanically quite weak, and some of them have large porosity (Grott et al., 2019; Ballouz et al., 2020; Cambioni et al., 2021), it is very difficult for them to survive the atmospheric passage at the typical velocities at which meteoroids hit our planet.
The above have also been confirmed by Hayabusa2 (JAXA) and OSIRIS-REx (NASA) sample return missions, which have visited, characterised, and sampled the near-Earth C-complex asteroids (162173) Ryugu (Kitazato et al., 2019) and (101955) Bennu (Lauretta et al., 2019), respectively. Recent sample analysis has shown that Ryugu is an aqueous altered asteroid, with a CI (Ivuna-like) (Yokoyama et al., 2023), organic-rich composition (Ito et al., 2022; Naraoka et al., 2023; Yabuta et al., 2023). Moreover, Ryugu samples provide an excellent example of primitive solar system material that is "clean" from terrestrial contamination. Although Bennu's sample has not arrived on Earth yet, OSIRIS-REx spectroscopic data from the visible to thermal infrared wavelengths have shown that the surface is consistent with an aqueously altered CM chondrite (Hamilton et al., 2019; Kaplan et al., 2020; Simon et al., 2020).
It is established that, based on the orbits of Ryugu and Bennu, both asteroids escaped the inner main belt (IMB) via the \(\nu_{6}\) secular resonance (Campins et al., 2010, 2013; Bottke et al., 2015) and reached the near-Earth space about 5 (Okazaki et al., 2023) and 1.75 Myr ago (Ballouz et al., 2020), respectively. Therefore, their source regions (i.e., C-complex asteroid families) should be located in the inner main belt and could possibly have different compositions.
In the inner main belt 11 dark C-complex asteroid families of different sizes and ages are known to exist (Walsh et al., 2013; Nesvorny et al., 2015; Delbo et al., 2017), namely: Polana, Eulalia, Erigone, Chaldaea, Chimaera, Clarissa, Klio, Sulamitis, Svea, Tamara, and the so-called Primordial one (the age of a family is given by the epoch of the collision that created it; Nesvorny et al., 2015). Since 2016, the majority of these families have been the subject of dedicated ground-based visible and near-infrared spectroscopic observations to characterise their composition, report on the level of their homogeneity, and compare them to each other. This effort has been mostly driven by the need to constrain the source regions of Ryugu and Bennu, to give astronomical context for the returned samples (e.g., Lauretta et al., 2015).
Specifically, it has been reported that any slope difference between the Polana and Eulalia family members in the visible light range (VIS) is within 1\(\sigma\) uncertainty (de Leon et al., 2016), while they are neither distinguishable in the near-infrared range (NIR) (Pinilla-Alonso et al., 2016), nor in the near-ultraviolet (NUV) (Tatsumi et al., 2022). Visible spectroscopic observations of Erigone, Clarissa, Sulamitis, Klio, Chaldaea, and Svea (Morate et al., 2018, 2019) showed that the slopes of Clarissa and Svea family members agree with those of the Polana family members and, similarly, the slopes of Sulamitis family members agree with the Erigone ones. Despite their difference in VIS slopes, NIR observations showed that Sulamitis and Klio members are close to those of Polana (Arredondo et al., 2020, 2021b). In addition, Klio and Chaldaea show a complementarity in VIS slopes that together could match Erigone's spectral slope distribution, while in the NIR they appear to have extremely similar slopes. It is proposed that these findings could indicate a common origin of these families (Arredondo et al., 2021a), an idea that can be supported by the fact that both families overlap in the space of proper orbital semi-major axis vs. inverse diameter (\(a\) vs. 1/\(D\)). Another interesting result is that the dark IMB families have been divided into two broad groups, the so-called "blue" families with blue to moderate slopes and no sign of hydration, and the "red" families, which include objects with the 700 nm absorption band present (Morate et al., 2019).
In this work, our goal is to study the IMB dark primitive families by exploiting hundreds of VIS spectra from the Gaia Data Release 3 (DR3) catalogue (Gaia Collaboration et al., 2023), in order to understand their differences and similarities. In this way we will obtain a view of their original asteroids (planetesimals). In section 2 we present the datasets that include the family membership and the Gaia DR3 spectra of these populations; in section 3 we present our analysis and results, while in section 4 we discuss our findings, including a comparison with the ground-based spectroscopic studies.
## 2 Data
### Gaia DR3 spectroscopic data
We made use of the reflectance spectra that were derived from asteroid spectroscopic observations obtained by Gaia between 5 August 2014 and 28 May 2017 and were published in June 2022 as part of the DR3 (Gaia Collaboration et al., 2023). This dataset consists of mean reflectance spectra in the VIS wavelength range of 60,518 Solar System objects (SSOs). The reflectance spectra were acquired by two low-resolution slit-less spectrophotometers on board Gaia, the blue and red spectrophotometers (BP and RP), which are respectively optimised for the blue and red part of the spectrum. Specifically, the BP spans the wavelength range from 330 to 680 nm and the RP covers the range from 640 to 1050 nm. The spectral resolution of each spectrophotometer is a function of wavelength, and varies from 4 to 32 nm pixel\({}^{-1}\) for the BP and 7 to 15 nm pixel\({}^{-1}\) for the RP (Gaia Collaboration et al., 2023; Carrasco et al., 2021; Jordi et al., 2010). When an asteroid transited on the focal plane of Gaia at a given epoch, each spectrophotometer measured photons at every wavelength to create 'epoch spectra'. For each asteroid, each epoch spectrum was divided by the mean spectrum of a series of trusted solar analogue stars (see Table 1 of Gaia Collaboration et al., 2023) in order to create "epoch reflectances". Given that the wavelength range of both instruments overlaps in the 650-680 nm interval, the two epoch reflectances were merged to create a full epoch reflectance. To each asteroid was finally associated a unique mean reflectance spectrum obtained by averaging several epoch reflectances, spanning the visible wavelength range from 374 to 1034 nm in 16 discrete wavelength bands (Gaia Collaboration et al., 2023). A 'reflectance_spectrum_flag' (hereafter RSF) number was also associated with each band, assessing the estimated quality of the band. In some cases, the merging of the epoch spectra taken by each spectrophotometer was not perfect, which can lead to the creation of artefact bands (Gaia Collaboration et al., 2023). Thus, caution must be taken when analysing the mean reflectance spectra in the overlapping wavelength interval. In a similar way, the bluest and reddest data bands of Gaia spectra could also be affected by systematics due to the low efficiency of the spectrophotometers in these bands. They were not always flagged but they need to be taken with caution as well (Gaia Collaboration et al., 2023; Galinier et al., 2023).
### C-complex inner main belt families
We retrieved the asteroid membership for seven out of 11 IMB C-complex families from Nesvorny et al. (2015) (see Table 1). The latter family identification within the main belt asteroid population is, by construction, conservative, meaning that good separation between the families is ensured, therefore limiting the number of family interlopers. This is very good for our purposes of studying similarities and differences between families in the same region of the main belt. For the Polana and Eulalia families we used the membership as it was defined in Walsh et al. (2013). For the Tamara family, which is located in the high-inclination Phocaea region, we used the membership of Novakovic et al. (2017). Finally, for the so-called Primordial family we used the membership defined by Delbo et al. (2017). We retrieved the \(p_{V}\)-values, where applicable, from the Minor Planet Physical Properties Catalogue (MP3C)1. Specifically, for each asteroid the \(p_{V}\)-values reported by the MP3C are averaged values of the published geometric visible albedo determinations (a weighting factor equal to the inverse of the square of the published formal uncertainty of each albedo determination is used by the MP3C for the averaging).
Footnote 1: [https://mp3c.oca.eu](https://mp3c.oca.eu)
## 3 Analysis & Results
### Removal of family interlopers
It is important to remember that the identification of family members based on clustering in orbital elements (Nesvorny et al., 2015) or correlations between the orbital proper semi-major axes and the \(1/D\)(V-shape criterion Bolin et al., 2017; Delbo et al., 2017) is a rough expression of the true membership (Nesvorny et al., 2015; Ferrone et al., 2023): in fact, asteroids unrelated to a family that happen to have values of proper elements and \(1/D\)-values within the orbital and/or \(1/D\) range of that family are grouped together with the true members. These objects identified as "false positive" are typically called family interlopers (Nesvorny et al., 2015). Interlopers can be distinguished amongst family members by their anomalous spectral properties or albedo values compared to the majority of the family members.
To filter out potential family interlopers and ensure the cleanest sample for our spectral analysis of each family, we applied a series of criteria on the values of the \(p_{V}\), spectral slope (\(s\)), and reflectance difference \(R_{z}-R_{i}\) of family members. First of all, we required the family members to have \(p_{V}\)-values reported in the literature, and be \(p_{V}<0.12\). In addition, we required the Gaia DR3 spectra to have \(-5<s<6\) %/100 nm, and \(-0.2<R_{z}-R_{i}\)\(<0.185\). The selected \(p_{V}\) threshold has been shown to separate the dark from the bright asteroids respectively belonging to the C- and S-complexes (Delbo et al., 2017). In particular, 88% of the C-complex asteroids are contained within the asteroid population with \(p_{V}\)\(<\)0.12 (Delbo et al., 2017). Moreover, the ranges of spectral parameters mentioned above define the boundaries of the C- and B-classes and therefore of the spectroscopic C-complex (DeMeo & Carry, 2013). The application of these criteria results in reducing the number of members of each family included in our analysis; the latter number is reported, for each family, in Table 1.
The spectral parameters were calculated following the method of Gaia Collaboration et al. (2023), in two steps. First, the spectral slope was determined as the angular coefficient of the best-fit straight line to the reflectance data between 450 and 600 nm. Then, the reflectance difference \(R_{z}-R_{i}\) was obtained by fitting a natural smoothing spline, \(S(\lambda)\), (Python3 module CSAPS; smoothing coefficient of \(5\times 10^{-7}\)) and then by assuming that \(R_{z}-R_{I}=S(\lambda_{z})-S(\lambda_{i})\), where \(\lambda_{z}=893.2\) nm and \(\lambda_{i}\)\(=\)748.0 nm.
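For concreteness, the two-step computation of these spectral parameters and the interloper cuts described above can be sketched as follows. This is a minimal illustration, assuming the Gaia DR3 wavelengths (in nm) and reflectances of a single family member are available as NumPy arrays `wave` and `refl` (normalised close to 1 near 550 nm) and `pV` is its literature albedo; the function names are ours, not the pipeline actually used.

```python
import numpy as np
from csaps import csaps  # natural smoothing spline (Python3 CSAPS module)

def spectral_parameters(wave, refl, smooth=5e-7):
    """Spectral slope s (%/100 nm) between 450 and 600 nm and the
    reflectance difference Rz - Ri evaluated from a smoothing spline."""
    sel = (wave >= 450.0) & (wave <= 600.0)
    a, _ = np.polyfit(wave[sel], refl[sel], 1)    # best-fit slope in reflectance per nm
    slope = a * 100.0 * 100.0                     # %/100 nm, assuming refl ~ 1
    spline = csaps(wave, refl, smooth=smooth)     # natural smoothing spline S(lambda)
    rz_ri = float(spline(893.2) - spline(748.0))  # S(lambda_z) - S(lambda_i)
    return slope, rz_ri

def passes_interloper_cuts(pV, slope, rz_ri):
    """Cuts of Sect. 3.1: dark (pV < 0.12) and within the C-complex box."""
    return (pV < 0.12) and (-5.0 < slope < 6.0) and (-0.2 < rz_ri < 0.185)
```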
### Spectra profiles of IMB dark families
Once the family members were filtered for potential interlopers, we calculated the average reflectance spectrum of each family. First, for each reflectance spectrum, we removed those data points with a 'reflectance_spectrum_flag' (RSF) value \(>0\), which indicates that those values are suspected to be of poorer quality (Gaia Collaboration et al., 2023). In addition, we did not take into account the spectral data at 990 and 1034 nm because the reflectance at these wavelengths is typically affected by a systematic increase (Gaia Collaboration et al., 2023). Next, we calculated the weighted average at each of the 16 wavelength bands with which the Gaia DR3 reflectance spectra were expressed (Gaia Collaboration et al., 2023), using \(1/\sigma^{2}\) as weights, where \(\sigma\) is the reflectance uncertainty reported in Gaia DR3; we also calculated the median absolute deviation at each wavelength. After having rejected those data whose distance from the mean is larger than 2.5\(\times\) the median absolute deviation, we recalculated the weighted mean and its uncertainty. The uncertainties of the average reflectance spectra were calculated using a bootstrap technique: namely, we iteratively randomly selected 75% of the filtered family members and recalculated an average reflectance spectrum and, after having reached 1000 iterations, we calculated the standard deviation of the mean spectra at each wavelength. Finally, we applied the correction in the blue region of the spectrum, following the procedure of Tinaut-Ruano et al. (2023), where we multiplied by 1.07, 1.05, 1.02, and 1.01 the reflectance values at the wavelengths of 374, 418, 462, and 506 nm, respectively. The resulting average reflectance spectra and their uncertainties for each family are shown in Fig. 1.
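The band-by-band averaging, outlier rejection, bootstrap, and blue-wavelength correction could be sketched as below. This assumes the interloper-filtered spectra of one family are stacked as arrays `R` and `sigma` of shape (N_members, 16), with NaNs already placed at bands flagged with RSF > 0 and at 990 and 1034 nm; the variable names are ours, for illustration only.

```python
import numpy as np

BLUE_CORR = {374: 1.07, 418: 1.05, 462: 1.02, 506: 1.01}  # Tinaut-Ruano et al. (2023)

def family_mean_spectrum(R, sigma, wave):
    """Weighted mean per band (weights 1/sigma^2) with 2.5 x MAD rejection
    and the blue-region correction applied at 374-506 nm."""
    w = np.where(np.isnan(R), np.nan, 1.0 / sigma**2)
    mean = np.nansum(w * R, axis=0) / np.nansum(w, axis=0)
    mad = np.nanmedian(np.abs(R - mean), axis=0)
    w = np.where(np.abs(R - mean) <= 2.5 * mad, w, np.nan)   # reject outliers
    mean = np.nansum(w * R, axis=0) / np.nansum(w, axis=0)
    corr = np.array([BLUE_CORR.get(int(l), 1.0) for l in wave])
    return mean * corr

def bootstrap_uncertainty(R, sigma, wave, n_iter=1000, frac=0.75, seed=0):
    """Standard deviation of the mean spectrum over random 75% subsamples."""
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(n_iter):
        idx = rng.choice(len(R), size=int(frac * len(R)), replace=False)
        means.append(family_mean_spectrum(R[idx], sigma[idx], wave))
    return np.nanstd(means, axis=0)
```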
## 4 Discussion
There appear to be two classes of dark families in the inner main belt. Specifically, Polana, Eulalia and Clarissa have blue-to-neutral reflectance spectra, while the other families show slightly redder reflectance spectra. This pattern has already been noted previously (Morate et al., 2019). Within this second group, the Gaia DR3 data of the Chaldaea, Primordial, Tamara, Erigone and perhaps Klio families show evidence of the 700 nm hydration band.
In addition, the Gaia DR3 spectra of Polana and Eulalia families can be distinguished in the near-ultraviolet region below 550 nm, where Eulalia family appears bluer than Polana (Fig. 3). This spectral behaviour is also evident specifically on asteroids (142) Polana and (495) Eulalia, the largest members of their respective families (Fig. 3). A similar behaviour can be found in the ground-based reflectance spectra of asteroids (142) Polana and (495) Eulalia (de Leon et al., 2016). In these data, (495) Eulalia turns bluer than (142) Polana at wavelengths shorter than \(\sim\)450 nm.
### Comparison with ground-based observations
In order to compare our results with previous studies, we downloaded the PRIMASS-L Spectra Bundle V1.0 from the NASA PDS archive (Pinilla-Alonso et al., 2021), which contains VIS reflectance spectra of several members of the C-complex IMB asteroid families. In this dataset, asteroids' spectra are already organised by family, with the exception of the Tamara and the Primordial families that are not listed, while the Eulalia and the Polana families are merged together. Hence, we separated Eulalia and Polana families' members on the basis of their position in the (\(a,1/D\)) space, namely, objects with \(1/D>-1.7(a-2.49)\)
and \(p_{V}<0.12\) were assigned to the Eulalia family, while objects with \(1/D<-1.7(a-2.49)\) and \(p_{V}<0.12\) were assigned to the Polana family. Finally, for all families, we removed interlopers as reported in their respective publications, as well as the spectra that are classified as D and T types. Figure 2 shows that the agreement between Gaia DR3 and PRIMASS-L reflectance spectra is in general quite good, with the exception of the Svea family, where the majority of PRIMASS-L spectra have a blue slope, while the Gaia DR3 one is redder. One of the possible reasons for this discrepancy is that there are only two objects of the Svea family with DR3 reflectance spectra that pass our filters, namely the asteroids (329) and (102626). The former, which is the asteroid Svea itself, also has a red-sloped reflectance spectrum in PRIMASS-L, while for the latter there are no literature spectra prior to Gaia DR3.
We also used a classical \(\chi^{2}\) technique (e.g., Avdellidou et al. 2022) to measure the goodness of fit between the Gaia DR3 and the PRIMASS-L reflectance spectra for the asteroid families of Fig. 2. First, for each family, \(f\), and for each PRIMASS-L reflectance spectrum, we determined a fitting natural spline (Python csaps) in order to evaluate PRIMASS-L reflectances at the same wavelengths \(\lambda\) as the Gaia DR3 ones. Next, for each family, we calculated a PRIMASS-L mean spectrum \(\overline{R}_{f}\) and its standard deviation \(\overline{\sigma}_{f}\). Finally, we calculated a \(\chi^{2}_{f}=\sum_{\lambda}\left[\left(\overline{R}_{f}(\lambda)-R_{f}(\lambda)\right)^{2}/\overline{\sigma}^{2}_{f}\right]\), where \(R_{f}(\lambda)\) is the Gaia DR3 average reflectance spectrum of the family \(f\). We also calculated the reduced \(\chi^{2}\) by the classical expression \(\tilde{\chi}^{2}_{f}=\chi^{2}_{f}/\nu\), where \(\nu\) is the number of degrees of freedom (Press et al. 1992), which in our case is the number of Gaia DR3 bands used in the expression above minus 1. Results are given in Table 2. Assuming the \(\chi^{2}\) statistic, one can estimate the maximum \(\tilde{\chi}^{2}=1+\sqrt{2\nu}/\nu\) (Press et al. 1992; Hanus et al. 2018) within which it is expected to have 68% of the cases, i.e., the PRIMASS-L spectra, assuming that spectral variability is random. Since \(1+\sqrt{2\nu}/\nu\) is equal to 1.5 and 1.4 for the cases with \(\nu\) equal to 8 and 12, we can deduce that the Gaia DR3 reflectance spectra fit well those from PRIMASS-L for all the families that the surveys have in common.
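A sketch of this goodness-of-fit computation is given below, assuming the PRIMASS-L spectra of one family have already been spline-resampled onto the Gaia DR3 wavelength bands and stacked row-wise; it illustrates the statistic and the 68% acceptance bound, not the authors' actual pipeline.

```python
import numpy as np

def reduced_chi2_vs_primass(gaia_mean, primass_resampled):
    """Reduced chi^2 between a family's Gaia DR3 average spectrum and the
    mean of its PRIMASS-L spectra (both sampled on the DR3 bands)."""
    R_bar = np.nanmean(primass_resampled, axis=0)   # PRIMASS-L mean spectrum
    s_bar = np.nanstd(primass_resampled, axis=0)    # and its standard deviation
    ok = np.isfinite(gaia_mean) & np.isfinite(R_bar) & (s_bar > 0)
    chi2 = np.sum((R_bar[ok] - gaia_mean[ok])**2 / s_bar[ok]**2)
    nu = ok.sum() - 1                               # degrees of freedom
    return chi2 / nu, nu

def acceptance_bound(nu):
    """68% bound 1 + sqrt(2*nu)/nu of Press et al. (1992)."""
    return 1.0 + np.sqrt(2.0 * nu) / nu
```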
Concerning the distinguishability between the Polana and Eulalia families, previous works (de Leon et al. 2016; Pinilla-Alonso et al. 2016) concluded that members of the two families could not be distinguished on the basis of visible and/or infrared spectroscopy. Tatsumi et al. (2022) also spectroscopically compared the Eulalia and the Polana families (and the Themis family) focusing on the blue (or as they call it the NUV) region (\(350-550\) nm) of the spectrum. Namely, these authors obtained ground-based, low-resolution visible spectra as blue as 350 nm of a few asteroids belonging to these families. They also used literature spectroscopic data from the eight colour asteroid survey (ECAS, Zellner et al. 1985). Tatsumi et al. (2022) calculated the NUV and VIS spectral slopes by a straight-line least-squares fit to the spectral data of each asteroid between 360 and 550 nm and longward of 550 nm, respectively. By comparing the NUV and VIS slopes of the asteroids from the different families, these authors found no significant differences between the Polana and Eulalia families.
However, using the same data from Tatsumi et al. (2022), we reached a different conclusion: We used the two-dimensional Kolmogorov-Smirnov test (K-S test) to verify the null hypothesis that the NUV and VIS slopes from the TNG observations reported in Table 4 of Tatsumi et al. (2022) from Eulalia and Polana could come from the same distribution. We found that this null hypothesis could be rejected at 97.2% confidence level
\begin{table}
\begin{tabular}{l|c c c c c c c c c|l} \hline \hline Family & \(N_{members}\) & \(N_{DR3}\) & \(N_{filtered}\) & \(p_{V}\) & \(\sigma_{p_{V}}\) & \(s\) & \(\sigma_{s}\) & \(R_{z}-R_{i}\) & \(\sigma_{R_{z}-R_{i}}\) & Ref. \\ \hline \hline Chaldaea & 132 & 40 & 32 & 0.067 & 0.019 & +1.30 & 2.53 & 0.066 & 0.035 & Nesvorný et al. (2015) \\ Chimaera & 108 & 18 & 11 & 0.054 & 0.014 & +2.65 & 1.39 & 0.099 & 0.041 & Nesvorný et al. (2015) \\ Clarissa & 179 & 2 & 2 & 0.069 & 0.002 & -1.00 & - & 0.036 & - & Nesvorný et al. (2015) \\ Erigone & 1776 & 142 & 82 & 0.056 & 0.017 & +1.75 & 2.14 & 0.053 & 0.035 & Nesvorný et al. (2015) \\ Eulalia & 1624 & 248 & 205 & 0.057 & 0.012 & +1.02 & 2.03 & 0.038 & 0.042 & Walsh et al. (2013) \\ Klio & 330 & 88 & 72 & 0.067 & 0.016 & +2.23 & 2.17 & 0.061 & 0.039 & Nesvorný et al. (2015) \\ Polana & 2037 & 577 & 243 & 0.058 & 0.016 & +1.01 & 2.31 & 0.035 & 0.045 & Walsh et al. (2013) \\ Primordial & 118 & 89 & 64 & 0.061 & 0.022 & +1.93 & 2.29 & 0.043 & 0.048 & Delbo et al. (2017) \\ Sulamitis & 303 & 32 & 23 & 0.056 & 0.010 & +3.14 & 1.99 & 0.071 & 0.038 & Nesvorný et al. (2015) \\ Svea & 48 & 4 & 2 & 0.060 & 0.019 & +1.42 & - & 0.006 & - & Nesvorný et al. (2015) \\ Tamara & 226 & 56 & 56 & 0.060 & 0.014 & +1.60 & 2.11 & 0.060 & 0.042 & Novaković et al. (2017) \\ \hline \end{tabular} 1
\end{table}
Table 1: Input dataset.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Family & \(\chi^{2}_{f}\) & \(\nu\) & \(\tilde{\chi}^{2}_{f}\) \\ \hline \hline Chaldaea & 1.69 & 8 & 0.21 \\ Chimaera & 7.17 & 8 & 0.90 \\ Clarissa & 7.60 & 8 & 0.95 \\ Erigone & 3.12 & 8 & 0.39 \\ Eulalia & 5.40 & 12 & 0.45 \\ Klio & 0.70 & 8 & 0.09 \\ Polana & 7.99 & 12 & 0.67 \\ Primordial & - & - & - \\ Sulamitis & 1.72 & 8 & 0.22 \\ Svea & 3.23 & 8 & 0.41 \\ Tamara & - & - & - \\ \hline \end{tabular} 1
\end{table}
Table 2: \(\chi^{2}\)-values between Gaia DR3 and PRIMASS-L reflectance spectra for the low-albedo asteroid families of the inner main belt.
with a K-S test distance of 0.71. Also, we performed the K-S test with a Monte Carlo method between the nominal slopes of the Polana family asteroids and random draws from the same data and found that, in \(10^{6}\) Monte Carlo iterations, the maximum K-S test distance obtained with random errors is 0.625; i.e., we never obtained 0.71. This means that there is a probability smaller than \(10^{-6}\) of obtaining the Eulalia NUV and VIS spectral slopes from random errors on the Polana NUV and VIS slopes. If we add the NUV and VIS spectral slopes of the ECAS survey from Table 4 of Tatsumi et al. (2022), the difference between the asteroids from the Polana and the Eulalia families decreases, but the two families still remain distinguishable. In particular, the K-S test still indicates that the two samples from the Eulalia and from the Polana families are not obtained from the same distribution, with the null hypothesis rejected at the 95% confidence level. We concluded that the data from the work of Tatsumi et al. (2022) do show that the Polana and Eulalia families can be distinguished, contrary to what was stated by the authors. Independent evidence of the distinction between the colours of the Polana and Eulalia family members is also claimed in a recent work based on the Sloan Digital Sky Survey spectrophotometry of asteroids (McClure & Emery 2022).
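The two-sample comparison above could be reproduced along the following lines. The quadrant-counting form of the two-dimensional K-S statistic is one common implementation choice (not necessarily the one used here), and the arrays of (NUV, VIS) slopes and their uncertainties are placeholders for the values of Table 4 of Tatsumi et al. (2022).

```python
import numpy as np

def ks2d_distance(a, b):
    """2D K-S distance between samples a, b of shape (n, 2): the maximum
    difference of the fractions of points in the four quadrants centred
    on every data point (Peacock-style statistic)."""
    d = 0.0
    for x0, y0 in np.vstack([a, b]):
        for sx, sy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            fa = np.mean((sx * (a[:, 0] - x0) > 0) & (sy * (a[:, 1] - y0) > 0))
            fb = np.mean((sx * (b[:, 0] - x0) > 0) & (sy * (b[:, 1] - y0) > 0))
            d = max(d, abs(fa - fb))
    return d

def max_distance_from_errors(polana, slope_err, n_iter=10_000, seed=0):
    """Largest K-S distance between the nominal Polana (NUV, VIS) slopes and
    copies of them perturbed within the reported uncertainties; the paper
    quotes 10^6 iterations, a smaller default is used here for speed."""
    rng = np.random.default_rng(seed)
    d_max = 0.0
    for _ in range(n_iter):
        perturbed = polana + rng.normal(0.0, slope_err, size=polana.shape)
        d_max = max(d_max, ks2d_distance(polana, perturbed))
    return d_max
```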
### The source region of the NEAs Bennu and Ryugu
Previous dynamical and spectroscopic considerations led to the conclusion that Bennu originated from the low-albedo component of what was once called the Nysa-Polana clan (Campins et al. 2010). It was later shown that this low-albedo component is actually composed of two dynamically distinguishable families named Polana (or New Polana) and Eulalia (Walsh et al. 2013). Further investigations assessed that Bennu has about 70% and 30% probabilities of originating from the Polana and Eulalia families, respectively (Bottke et al. 2015).
Figure 3 shows the comparison between the reflectance spectra of the Eulalia and Polana families and the reflectance spectra of Bennu obtained by the OSIRIS-REx Visible and InfraRed Spectrometer (OVIRS) and MapCam instruments on board OSIRIS-REx (DellaGiustina et al. 2020). In the ultraviolet region, it appears that Bennu is marginally more similar to the Eulalia family members than to those of the Polana family: namely, Gaia DR3 results slightly favour an origin of Bennu from the Eulalia family because the trend of Bennu's reflectance in the wavelength region bluer than 450 nm is more consistent with the reflectance of the Eulalia than of the Polana family (Fig. 3). Likewise, Bennu's reflectance trend in the wavelength region bluer than 450 nm is more similar to that of asteroid (495) Eulalia than to that of asteroid (142) Polana.
On the other hand, the average reflectance spectra of the Eulalia and Polana families are redder than the reflectance spectrum of Bennu longward of \(\sim\)800 nm. It cannot be established with the present Gaia DR3 data whether this is a real effect or an artefact. Indeed, it has been observed (Gaia Collaboration et al. 2023; Galinier et al. 2023) that Gaia DR3 asteroid spectra tend to have reflectance values higher than the spectra of the same objects collected from ground-based telescopes at wavelengths roughly beyond \(\sim\)800-900 nm.
The subplot of Fig. 3 also shows that the reflectance spectra of the parent bodies of the respective families are consistent with the general trend of their families.
We also performed a \(\chi^{2}\) analysis between the reflectance spectra of Bennu, obtained by the OSIRIS-REx mission (Fig. 3), and the average reflectance of the families from the Gaia DR3. Namely, we constructed a mean reflectance spectrum \(R_{B}(\lambda)\) and its standard deviation from smoothing spline representations (Python csaps) of the reflectance spectra from OVIRS and MapCam (DellaGiustina et al. 2020). Next, we calculated the reduced \(\chi^{2}\), \(\tilde{\chi}_{B,f}^{2}=(1/\nu)\sum_{\lambda}\left[\left(R_{B}(\lambda)-R_{f}(\lambda)\right)^{2}/\overline{\sigma}_{f}^{2}\right]\), where \(R_{f}(\lambda)\) is the Gaia DR3 average reflectance spectrum of the family \(f\), as before, and \(\nu\) is the degrees of freedom, i.e., the number of wavelength datapoints - 1. The lower the value of \(\tilde{\chi}_{B,f}^{2}\), the better the spectroscopic match is. Firstly, we considered all data points in the wavelength range between 450 and 950 nm, as shown in Fig. 3 (OVIRS and MapCam spectra do not cover wavelengths bluer than 450 nm). We found \(\tilde{\chi}_{B,f}^{2}\)-values of 3.5 and 6.0 for the Eulalia and the Polana families, respectively; all other families produce \(\tilde{\chi}_{B,f}^{2}\)-values \(>76\). This result further indicates that Eulalia is the family most spectroscopically similar to Bennu, and Polana the second most similar. Given that the spectroscopic match between Eulalia (and Polana) and Bennu is visually better in the bluer region of the spectrum (Fig. 3), we also calculated \(\tilde{\chi}_{B,f}^{2}\) in different wavelength ranges obtained from the aforementioned one by reducing the upper bound from 950 to 800 nm. We found that the values of \(\tilde{\chi}_{B,f}^{2}\) decrease monotonically as the upper bound decreases, obtaining values below 1 for the Eulalia family when the upper wavelength bound is reduced to 800 nm.
Figure 1: Average Gaia DR3 spectra of the 11 inner main belt dark families that belong to the spectroscopic C-complex. Spectra are shifted on the reflectance axis for better visibility. The error bars are included, but are smaller than the marker size in most cases.
The corresponding values are reported in Table 3. We also found that Polana is the family always providing the second-best match. It is common to accept reduced \(\chi^{2}\) values \(<(1+\sqrt{2\nu}/\nu)\) (Press et al., 1992): only the Eulalia family satisfies this constraint.
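The same statistic, recomputed while the upper wavelength bound shrinks from 950 to 800 nm, can be sketched as below; `wave`, `bennu_mean`, `family_mean`, and `family_sigma` stand for arrays sampled on the Gaia DR3 bands and are placeholders rather than the data files used here.

```python
import numpy as np

def chi2_vs_bennu(wave, bennu_mean, family_mean, family_sigma,
                  upper_bounds=(950, 900, 850, 800)):
    """Reduced chi^2 between Bennu's mean spectrum and a family average,
    restricted to 450 nm <= lambda <= upper bound."""
    result = {}
    for ub in upper_bounds:
        ok = (wave >= 450) & (wave <= ub) \
             & np.isfinite(bennu_mean) & np.isfinite(family_mean)
        nu = ok.sum() - 1
        chi2 = np.sum((bennu_mean[ok] - family_mean[ok])**2 / family_sigma[ok]**2)
        result[ub] = chi2 / nu
    return result
```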
Figure 4 shows the comparison between the reflectance spectra of the Eulalia and Polana families and the reflectance spectra of Ryugu obtained by the Optical Navigation Camera (ONC) instrument on board Hayabusa2 (Sugita et al., 2019), along with ground-based spectra (Moskovitz et al., 2013; Perna et al., 2017). In the blue and ultraviolet regions, Ryugu shows reflectance spectra that are, in principle, compatible with the average spectra of both the Polana and Eulalia families. On the other hand, at wavelengths longer than 600 nm, the average Gaia DR3 reflectance spectra of the Polana and Eulalia families have somewhat lower reflectance values compared to the Ryugu spectra measured by Hayabusa2. However, they still appear marginally within the error bars of Ryugu's ground-based spectra.
As for Bennu, we performed a \(\chi^{2}\) analysis between the mean reflectance spectra of Ryugu, \(R_{R}(\lambda)\), calculated from averaging smoothing spline fits to the spectra obtained by the Hayabusa2 mission and ground based telescopes (Fig. 4), and the average reflectance spectra of each family, \(f\), from the Gaia DR3, \(R_{f}(\lambda)\): namely, \(\tilde{\chi}_{R,f}^{2}=(1/v)\sum_{\lambda}\left[\left(R_{R}(\lambda)-R_{f}( \lambda)\right)^{2}/\overline{\sigma_{f}^{2}}\right]\). Considering all the data points in the wavelength range between 370 and
\begin{table}
\begin{tabular}{l|r r} \hline Family & \(\tilde{\chi}_{B,f}^{2}\) & \(\nu\) \\ \hline \hline Eulalia & 0.99 & 6 \\ Polana & 3.34 & 6 \\ Clarissa & 10.00 & 6 \\ Tamara & 25.33 & 6 \\ Erigone & 68.13 & 6 \\ Chaldaea & 88.03 & 6 \\ Klio & 105.47 & 6 \\ Sulamitis & 124.81 & 6 \\ Chimaera & 752.72 & 6 \\ Primordial & 791.36 & 6 \\ Svea & 1540.66 & 6 \\ \hline \end{tabular}
\end{table}
Table 3: Reduced \(\chi^{2}\)-values between Gaia DR3 average reflectance spectra of the low-albedo asteroid families of the inner main belt and the average reflectance spectrum of Bennu. The \(\chi^{2}\)-values are calculated here on the wavelength range between 450 and 800 nm. The table is sorted by the \(\chi^{2}\)-value, from the smallest to the largest.
Figure 2: Comparison of Gaia DR3 average spectra (red line) for each IMB primitive family to the ground-based visible spectra of the literature (gray lines) from the PRIMASS-L survey (see text). Gaia DR3 reflectance spectra are plotted with and without the correction at the bluest wavelengths of Tinaut-Ruano et al. (2023).
950 nm, we obtained the \(\tilde{\chi}_{R,f}^{2}\)-values reported in Table 4. The family with the spectrum producing the minimum \(\chi^{2}\) value is Polana, followed by the Clarissa and the Eulalia families. However, this reduced \(\chi^{2}\) value is never close to 1 (the lowest value is \(\sim\)1.9) for the different spectral ranges here considered, indicating that this spectral match is never formally satisfactory. Contrary to the case of Bennu, calculating the \(\tilde{\chi}_{R,f}^{2}\) in shorter wavelength intervals, such as between 450 and 800 nm, does not lower the minimum \(\tilde{\chi}_{R,f}^{2}\)-value.
## 5 Conclusions
Using the unprecedented Gaia DR3 sample of visible asteroid spectra, we created the average reflectance spectra of the 11 dark primitive asteroid families of the IMB that belong to the spectroscopic C-complex.
In this work, we reported on their similarities, but also their differences, and we compared nine of them with earlier ground-based spectroscopic results. Gaia DR3 provides invaluable reflectance information between 370 and 500 nm, a region that is poorly studied so far from the ground, but can indicate differences among the C-complex population.
Consistent with previous results, we found that there appear to be two spectroscopic groups within these families: Eulalia, Polana and Clarissa are spectroscopically bluer than all the others. Within the redder group of families, Erigone, Tamara, Chaldaea and the Primordial appear to have a 700 nm absorption band, indicative of hydration.
We found that the Eulalia and Polana adjacent families can be spectroscopically distinguished in the wavelength region between 370 to 500 nm, where Eulalia is bluer than Polana.
The comparison of the average family spectra with those of the NEAs Bennu and Ryugu showed that these asteroids are spectroscopically more similar to Eulalia and Polana, respectively, than to other families.
In particular, we found that the Gaia DR3 average spectrum of the Eulalia family is a good spectroscopic match for Bennu's spectrum in the wavelength range between 450 and 800 nm. On the other hand, Polana is the family whose average DR3 spectrum has the smallest reduced \(\chi^{2}\) when compared to the average spectrum of Ryugu.
###### Acknowledgements.
We acknowledge support from the ANR ORIGINS (ANR-18-CE31-0014). This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work is based on data provided by the Minor Planet Physical Properties Catalogue (MP3C) of the Observatoire de la Côte d'Azur.
|
2308.05092
|
A degree of image identification at sub-human scales could be possible
with more advanced clusters
|
The purpose of the research is to determine if currently available
self-supervised learning techniques can accomplish human level comprehension of
visual images using the same degree and amount of sensory input that people
acquire from. Initial research on this topic solely considered data volume
scaling. Here, we scale both the volume of data and the quality of the image.
This scaling experiment is a self-supervised learning method that may be done
without any outside financing. We find that scaling up data volume and picture
resolution at the same time enables human-level item detection performance at
sub-human sizes. We run a scaling experiment with vision transformers trained on
up to 200000 images up to 256 ppi.
|
Prateek Y J
|
2023-08-09T17:40:12Z
|
http://arxiv.org/abs/2308.05092v1
|
# A degree of image identification at sub-human scales could be possible with more advanced clusters
###### Abstract
The purpose of the research is to determine if currently available self-supervised learning techniques can accomplish human-level comprehension of visual images using the same degree and amount of sensory input that people acquire. Initial research on this topic solely considered data volume scaling. Here, we scale both the volume of data and the quality of the image. This scaling experiment is a self-supervised learning method that may be done without any outside financing. We find that scaling up data volume and picture resolution at the same time enables human-level item detection performance at sub-human sizes. We run a scaling experiment with vision transformers trained on up to 200000 images up to 256 ppi.
Machine Learning Image Recognition
## 1 Introduction
While initial investigations in this domain primarily focused on the scaling of data volume, the present research takes a bold leap by not only scaling the data volume but also enhancing the image quality. This ambitious scaling experiment adopts a self-supervised learning approach that is both resource-efficient and feasible even without external financial support. Remarkably, our findings unveil the possibility of achieving human-level image identification performance at sub-human scales through the simultaneous scaling of data volume and image resolution. In order to do this, we conduct a scaling experiment using vision transformers and train them on a large data set with up to 200000 pictures, each of which has a resolution of 256 ppi.
## 2 Evaluation Factors and Issues
We address this question in relation to a particular skill, namely the capacity to recognize visual images in the real world. It is difficult to directly assess the data efficiency of deep learning models trained with self-supervised learning methods relative to people, for a variety of reasons.
### Inconsistency in dimensions
These models frequently operate on significantly smaller input signals than the human cerebral cortex receives, as seen, for instance, when comparing the size of common images used in computer vision with the number of receptors in the parafovea of an average person.
### Inconsistency in Model Scale
When comparing the number of parameters in a model to the number of neurons in the functioning brain, models are often significantly smaller than the nervous system, or indeed than the visual regions within the nervous system.
## 3 Control Conditions
To address these discrepancies, we undertake control conditions to find out whether present self-supervised learning algorithms can reach sub-human image-recognition scales in terms of model scale and image dimensions. In this section we consider scaling both components simultaneously. In the present trials, we additionally train our models on a roughly two-fold bigger collection of human-like images.
### Experimental data
A total of roughly two hundred thousand images of human-like footage from four different data-sets make up the whole collection of training data. The footage has two distinct characteristics: in contrast to common picture data-sets in machine learning, which usually consist of significantly fewer photographs, most of the content is (i) organic, independent head-cam video taken from the perspective of grown-up or toddler camera users as they go about their daily lives, and (ii) temporally extended, continuous video with typical run-times of tens of seconds to minutes. The following data-sets make up the merged training set:
The individual picture data-sets that make up our integrated training data set are broken out in Figure 1 along with their sizes in thousands. Image-Net, which makes up over 50% of our total training data, is by far the largest contributor. About 31.5% of the training data come from the CelebA dataset, while 5% come from the CIFAR-10 dataset and 13.5% come from the ADE20K dataset. As a consequence, we anticipate that people who choose to repeat the experiments in this study using solely publicly available data sources would obtain outcomes that are strikingly comparable to those presented here.
We train models on the whole dataset and on repeatedly randomized portions of it in order to investigate the scaling of image identification performance with the quantity of human-like visual input utilized during self-supervised pretraining. We explicitly train models on subsets of the dataset that are 100 percent, 50 percent, 25 percent and 5 percent of its size. These selections span a 20-fold variation in data size, ranging from around two hundred thousand down to ten thousand labelled images in every instance (from the biggest to the smallest). Because the selection is stochastic, we repeat the subset selection a total of four times for every data set size.
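A minimal sketch of this stochastic subset construction is given below; the function name and the idea of keeping one list of file paths per draw are our own illustration, not the exact tooling used for the experiments.

```python
import random

def make_scaling_subsets(image_paths, fractions=(1.0, 0.5, 0.25, 0.05),
                         repeats=4, seed=0):
    """Random subsets of the ~200k-image pool for the data-scaling runs;
    each fraction is drawn `repeats` times because the selection is stochastic."""
    rng = random.Random(seed)
    subsets = {}
    for frac in fractions:
        n = int(frac * len(image_paths))
        subsets[frac] = [rng.sample(image_paths, n) for _ in range(repeats)]
    return subsets
```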
#### 3.1.1 Image-Net
The Image-Net Large Scale Visual Recognition Challenge (ILSVRC) 2012-2017 image categorization and localization data-set is the most frequently used subset of Image-Net[16]. There are 1,281,167 training photos, 50,000 validation images, and 100,000 test images in this dataset, which covers 1000 item classes. On Kaggle, this subset is accessible here.
Figure 1: Image Data-set proportions used for control-Conditions
#### 3.1.2 Cifar-10
Five training batches and one evaluation batch, each containing 10,000 snapshots, make up the data-set. The remaining photos are distributed across the training batches in random order; however, certain training batches can have a disproportionate number of images from a particular class. The training batches are made up of a total of 5000 images from every class. Access the public database here
#### 3.1.3 CelebA
More than 200K celebrity photos with 40 feature annotations each make up the large-scale face attributes collection known as Celeb Faces Attributes collection(Zhang et al., 2019). This collection of photos includes a wide range of poses and cluttered backgrounds. With 10,177 distinct identities, 202,599 face photos, 5 landmark locations, and 40 binary attribute annotations per image. Access public database here
#### 3.1.4 Ade20k
ADE20K is made up of more than 27K images from the SUN and Places databases. The photos have been meticulously annotated with over 3K different object categories. Numerous images also show individual parts and pieces of objects. Register and access the database here
### Models and Checkpoints
In our tests, we only utilize vision transformer (ViT) models (Dosovitskiy et al., 2021). The models that we optimize include four common sizes: ViT-Hybrid/S/L/B; these models range in size from the smallest to the largest by a factor of twenty-eight. We take into account three distinct spatial resolutions for the pictures: 476 pixels, 448 pixels and 226 pixels.
Our preferred self-supervised learning method is masked auto-encoders. For the objectives of this work, masked auto-encoders offer a number of benefits over conventional self-supervised pictorial representation learning methods. First, they require relatively little data augmentation, in contrast to the majority of existing visual self-supervised methods. This is helpful for our objectives because the significant data augmentations of other self-supervised learning algorithms reduce the human-likeness of the training data. Second, we select a masking proportion of eighty percent, since masked auto-encoders perform well with such large masking proportions. Figures 2 and 3 show the estimated validation accuracy on Image-Net as a function of the quantity of pictures utilized for self-supervised pre-training. Performance greater than ninety percent denotes performance at the level of a human. Four distinct models are represented by various colors in the legend. The fits to the first equation are shown by the solid lines, and the expected precision under four hypothetical situations is shown by the lines of best fit; for additional details on these instances, see Column one. Findings for the more lenient fine-tuning control are shown in Figure 3, while those for the stricter control without fine-tuning are shown in Figure 2.
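As an illustration of the key ingredient, the random 80% patch masking that masked auto-encoder pre-training relies on can be sketched in PyTorch as follows; the tensor shapes and the function name are assumptions for illustration, and the ViT encoder and decoder themselves are omitted.

```python
import torch

def random_masking(patch_tokens, mask_ratio=0.8):
    """Randomly hide a fraction of ViT patch tokens, as in masked auto-encoder
    pre-training; only the kept (visible) tokens are passed to the encoder."""
    B, N, D = patch_tokens.shape
    n_keep = int(N * (1.0 - mask_ratio))
    noise = torch.rand(B, N)                           # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)          # random permutation per sample
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patch_tokens, 1,
                           ids_keep.unsqueeze(-1).expand(B, n_keep, D))
    mask = torch.ones(B, N)                            # 1 = masked, 0 = visible
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask, ids_shuffle

# Example: 196 patches (14 x 14) of dimension 768, batch of 8 images.
tokens = torch.randn(8, 196, 768)
visible, mask, _ = random_masking(tokens, mask_ratio=0.8)  # 39 visible tokens kept
```

Processing only the small visible subset of tokens in the encoder is what makes such a high masking ratio computationally attractive.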
Figure 2 illustrates the findings of our Image-Net scalability research. We use an intuitive function that is polynomial in the logarithms of its arguments to represent the impact of data size, model size, and picture quality on image recognition precision:
\[\mathrm{Precision}=(\alpha_{i}\log i+\beta_{i})\,(\alpha_{ppi}\log ppi+\beta_{ppi}) \tag{1}\]
where i is the scaled data amount which is measured in thousands here and ppi is the scaled image resolution per test.
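Under this reading of Eq. (1), linear in the logarithms of i and ppi, the four coefficients can be fitted to the measured accuracies with a standard least-squares routine; the sketch below uses SciPy, and the numeric arrays are arbitrary placeholders, not measured results.

```python
import numpy as np
from scipy.optimize import curve_fit

def precision_model(X, a_i, b_i, a_p, b_p):
    """Eq. (1): product of terms linear in log(i) and log(ppi)."""
    i, ppi = X
    return (a_i * np.log(i) + b_i) * (a_p * np.log(ppi) + b_p)

# Placeholder measurements (i in thousands of images, resolution in ppi,
# validation precision); replace with the values from the scaling runs.
i_obs   = np.array([10.0, 50.0, 100.0, 200.0, 100.0, 200.0])
ppi_obs = np.array([226.0, 226.0, 226.0, 226.0, 448.0, 448.0])
acc_obs = np.array([0.55, 0.68, 0.74, 0.80, 0.79, 0.86])

params, _ = curve_fit(precision_model, (i_obs, ppi_obs), acc_obs,
                      p0=[0.1, 0.5, 0.1, 0.5])
# Extrapolate to a hypothetical larger data amount and resolution.
extrapolated = precision_model((np.array([800.0]), np.array([512.0])), *params)
```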
Table 1 presents three possible outcomes based on the following situations: (i) a practical reference circumstance corresponding to our largest and best-performing model thus far, the VisionTransformer-H model, which was trained using all of the approximately 200000 images from the public data.
(ii) a possible circumstance where we quadruple each of i and ppi with respect to the benchmark situation; and (iii) a possible scenario where we quadruple each of i and ppi with regard to the benchmark case. The image recognition rate for every one of these situations is shown in Table 1 as the combination of real and predicted accuracy.
In the estimated scenario and under the more lenient fine-tuning condition, the anticipated precision now surpasses the ninety percent accuracy we have defined as the lower limit on human-level precision on validation. This result is very encouraging because it only calls for a 522B-parameter Vision Transformer model trained with around 200000 images of public data. The increases in data amount and image resolution needed under this scenario are also relatively small.
Figure 3: Consequences of the validation test accuracy seen in the test with two percent fine-tuning
Figure 2: Consequences of the validation test accuracy seen in the test with no fine-tuning
Figure 4: Consequences of the validation with human benchmark with no fine-tuning
Figure 5: Consequences of the validation with human benchmark with two percent fine-tuning
## 4 Conclusion
Using highly general self-supervised machine learning techniques and deep learning architectures without significant inductive biases, our results show that human-level precision as well as resilience in visual image recognition can be achieved through anthropomorphic visual perception at sub-human scales of data amount and image quality.
## Acknowledgments
This work was supported in part by Microsoft through their Azure Credits program
|
2301.06090
|
Extracting the Quantum Geometric Tensor of an Optical Raman Lattice by
Bloch State Tomography
|
In Hilbert space, the geometry of the quantum state is identified by the
quantum geometric tensor (QGT), whose imaginary part is the Berry curvature and
real part is the quantum metric tensor. Here, we propose and experimentally
implement a complete Bloch state tomography to directly measure eigenfunction
of an optical Raman lattice for ultracold atoms. Through the measured
eigenfunction, the distribution of the complete QGT in the Brillouin zone is
reconstructed, with which the topological invariants are extracted by the Berry
curvature and the distances of quantum states in momentum space are measured by
the quantum metric tensor. Further, we experimentally test a predicted
inequality between the Berry curvature and quantum metric tensor, which reveals
a deep connection between topology and geometry.
|
Chang-Rui Yi, Jinlong Yu, Huan Yuan, Rui-Heng Jiao, Yu-Meng Yang, Xiao Jiang, Jin-Yi Zhang, Shuai Chen, Jian-Wei Pan
|
2023-01-15T13:05:01Z
|
http://arxiv.org/abs/2301.06090v2
|
# Extracting the Quantum Geometric Tensor of an Optical Raman Lattice by Bloch State Tomography
###### Abstract
In Hilbert space, the geometry of the quantum state is identified by the quantum geometric tensor (QGT), whose imaginary part is the Berry curvature and real part is the quantum metric tensor. Here, we propose and experimentally implement a complete Bloch state tomography to directly measure eigenfunction of an optical Raman lattice for ultracold atoms. Through the measured eigenfunction, the distribution of the complete QGT in the Brillouin zone is reconstructed, with which the topological invariants are extracted by the Berry curvature and the distances of quantum states in momentum space are measured by the quantum metric tensor. Further, we experimentally test a predicted inequality between the Berry curvature and quantum metric tensor, which reveals a deep connection between topology and geometry.
+
Footnote †: Corresponding author
man lattice with ultracold \({}^{87}\)Rb atoms [24; 26; 29], as shown in Fig.1(a). The Raman lattices are constructed by two laser beams \(\mathbf{E}_{x,y}\) with wavelength \(\lambda=787\)nm that propagate in opposite directions as well as couple two magnetic levels \(\left|F=1,m_{\rm F}=-1\right\rangle\equiv\left|\uparrow\right\rangle\) and \(\left|F=1,m_{\rm F}=0\right\rangle\equiv\left|\downarrow\right\rangle\). The Hamiltonian is (\(\hbar=1\))
\[H=\frac{\mathbf{k}^{2}}{2m}+V_{\rm latt}+\frac{\delta}{2}\sigma_{z}+\Omega_{1} \sigma_{x}+\Omega_{2}\sigma_{y}, \tag{1}\]
where \(\mathbf{k}=(k_{x},k_{y}),m,\delta\) and \(\sigma_{x,y,z}\) are the momentum, mass of the atom, two-photon detuning and Pauli matrices, respectively. The Raman potential \(\Omega_{1,2}\) (lattice potential \(V_{\rm latt}\)) induces spin-flipped (spin-conserved) hopping with the strength \(\Omega_{0}\) (\(V_{0}\)) and transfers momentum of \(\mathbf{k}_{0}=(\pm k_{0},\pm k_{0})\) (\(2\mathbf{k}_{0}\)), which is observed from the atomic distribution of ground state in the momentum space [Fig.1(b)] [25; 26]. Then, the eigenstate \(\left|\Psi(\mathbf{q})\right\rangle\) of Hamiltonian Eq.(1) can be denoted as the superposition of \(\left|\mathbf{q},\uparrow\right\rangle\) and \(\left|\mathbf{q}+\mathbf{k}_{0},\downarrow\right\rangle\), i.e., \(\left|\Psi(\mathbf{q})\right\rangle=c_{\uparrow}(\mathbf{q})\left|\mathbf{q},\uparrow \right\rangle+c_{\downarrow}(\mathbf{q})\left|\mathbf{q}+\mathbf{k}_{0},\downarrow\right\rangle\)[24]. Here, \(\mathbf{q}=(q_{x},q_{y})\) and \(c_{\uparrow,\downarrow}\) are the quasi-momentum and the normalized complex coefficient, respectively. The eigenstate \(\left|\Psi(\mathbf{q})\right\rangle\) can be represented on the Bloch sphere, whose poles are \(\left|\mathbf{q},\uparrow\right\rangle\) and \(\left|\mathbf{q}+\mathbf{k}_{0},\downarrow\right\rangle\) [Fig.1(c)].
The main idea of the tomography is to measure the expectation values of three Pauli matrices \(\langle\sigma_{x,y,z}(\mathbf{q})\rangle\) for Raman lattices by rotating the measurement basis (See [24] for details). We directly obtain the expectation value \(\langle\sigma_{z}(\mathbf{q})\rangle\) by spin-resolved time-of-flight (ToF) imaging, which has been applied in [25; 26]. After rotating the measurement basis \(\left|\mathbf{q},\uparrow\right\rangle\) and \(\left|\mathbf{q}+\mathbf{k}_{0},\downarrow\right\rangle\) to the \(x\)-axis ( \(y\)-axis) on the Bloch sphere, the expectation value \(\langle\sigma_{x}(\mathbf{q})\rangle\) (\(\langle\sigma_{y}(\mathbf{q})\rangle\)) is observed by spin-resolved ToF imaging. Such rotation is achieved by a coherent Raman pulse. The pulse acting on \(\left|\Psi(\mathbf{q})\right\rangle\) requires transferring the momentum \(\mathbf{k}_{0}=(-k_{0},-k_{0})\) between \(\left|\right.\mathbf{q},\uparrow\rangle\) and \(\left|\right.\mathbf{q}+\mathbf{k}_{0},\downarrow\rangle\) as well as maintaining a fixed relative phase \(\Delta\varphi\) between the Raman pulse and the Raman lattices, which ensures accurate determination of \(\langle\sigma_{x,y}(\mathbf{q})\rangle\) [Fig.1(c)].
Concretely, the Raman pulse is made up of incident laser beams of \(\mathbf{E}_{x,y}\) with retroreflective beams switched off by acousto-optic modulators (AOMs) [Fig.1(a)]. Thus, the Hamiltonian of the Raman pulse is [24]
\[H_{\rm R}=\begin{pmatrix}\mathbf{k}^{2}/2m+\delta_{\rm R}/2&\Omega_{\rm R}\\ \Omega_{\rm R}^{*}&\mathbf{k}^{2}/2m-\delta_{\rm R}/2\end{pmatrix}. \tag{2}\]
In the experiment, a sufficiently short Raman pulse with two-photon detuning \(\delta_{\rm R}=0\) is applied such that the kinetic energy can be ignored and the Raman coupling \(\Omega_{\rm R}\) dominates. The coupling \(\Omega_{\rm R}=\Omega_{\rm R0}e^{-i[k_{0}(x+y)-\Delta\varphi]}\) with strength \(\Omega_{\rm R0}\) imparts a momentum transfer of \(\mathbf{k}_{0}=(-k_{0},-k_{0})\) between \(\left|\mathbf{q},\uparrow\right\rangle\) and \(\left|\mathbf{q}+\mathbf{k}_{0},\downarrow\right\rangle\). The relative phase \(\Delta\varphi\) is controlled by the initial phase difference of the input lasers \(\mathbf{E}_{x,y}\) that generate the Raman pulse and the Raman lattices [24].
Figure 1: Bloch state tomography in 2D optical Raman lattices. (a) Sketch map of the Raman lattices. (b) The distribution of Bose-Einstein condensates (BECs) in momentum space. The BEC is prepared in \(\left|\uparrow\right\rangle\) and stays at the momentum point \(\mathbf{k}=(0,0)\). Four BECs distributed at momenta \(\mathbf{k}=(\pm k_{0},\pm k_{0})\) are in \(\left|\downarrow\right\rangle\) due to the Raman potential. Here, \(k_{0}\) is the recoil momentum. The other BECs are in \(\left|\uparrow\right\rangle\) and are diffracted by the lattice potential without spin flip. (c) Tomography principle. The atoms are initially in a superposition state \(\left|\Psi\right\rangle\) of \(\left|\mathbf{q},\uparrow\right\rangle\) and \(\left|\mathbf{q}+\mathbf{k}_{0},\downarrow\right\rangle\) (yellow arrows). Applying a \(\pi/2\) Raman pulse, \(\left|\Psi\right\rangle\) evolves to a new superposition state \(\left|\tilde{\Psi}\right\rangle\) (magenta arrows) along the sky-blue trajectories. \(\Delta\varphi\) is exactly the angle between the gray axis and the \(x\)-axis. The gray axis rotates in the equatorial plane by tuning \(\Delta\varphi\). After ToF, \(\langle\sigma_{x}\rangle\) (\(\langle\sigma_{y}\rangle\)) is obtained when \(\Delta\varphi=\pi/2\) (0). \(\langle\sigma_{z}\rangle\) is measured directly by ToF, without applying the Raman pulse. (d) The experimental sequence. The atoms are loaded into the Raman lattices in the preparation stage. The Raman pulse is realized by suddenly turning off the reflected beams of \(\mathbf{E}_{x,y}\), setting the Raman pulse strength to \(\Omega_{\rm R0}\), the relative phase \(\Delta\varphi\) to a certain value, and \(\delta_{\rm R}=0\). After ToF, the spin polarization is measured. (e) The spin polarization \(P_{\rm m}\) at the quasi-momenta \(\Gamma\) and M as a function of the relative phase \(\Delta\varphi\). The circles with error bars (blue curves) are from experimental data (numerical calculations). The experimental data are fitted by sinusoidal functions (red curves). The insets mark the \(\Gamma\) and M points in the FBZ. Parameters: the lattice depth \(V_{0}=4.0E_{\rm r}\), Raman potential strength \(\Omega_{0}=1.0E_{\rm r}\), \(\delta=-0.2E_{\rm r}\), \(t_{\rm R}=10\mu s\) and \(\delta_{\rm R}=0\). The recoil energy \(E_{\rm r}\approx 2\pi\times 3.7\)kHz.
When the Raman pulse is applied to \(|\Psi(\mathbf{q})\rangle\) with the duration time \(t_{\rm R}\), the relative phase \(\Delta\varphi\) is imprinted onto the time-dependent state \(|\widetilde{\Psi}(\mathbf{q},t_{\rm R})\rangle\). The state \(|\widetilde{\Psi}\rangle\) precesses around an axis on the Bloch sphere with frequency \(\Omega_{\rm R0}\). For instance, \(|\widetilde{\Psi}\rangle\) precesses around \(y\) (\(x\))-axis in the equatorial plane when \(\Delta\varphi=\pi/2\) (0) [Fig.1(c)]. After ToF imaging, the expectation values of Pauli matrices \(\langle\sigma_{x,y,z}\rangle\) are obtained by the spin polarization [24]
\[\begin{split}\langle\widetilde{\Psi}|\sigma_{z}|\widetilde{\Psi} \rangle&=\langle\sigma_{z}\rangle\cos(2\Omega_{\rm R0}t_{\rm R}) \\ &+(\langle\sigma_{y}\rangle\cos\Delta\varphi+\langle\sigma_{x} \rangle\sin\Delta\varphi)\sin(2\Omega_{\rm R0}t_{\rm R}).\end{split} \tag{3}\]
When \(t_{\rm R}=0\), \(\langle\sigma_{z}\rangle\) is obtained; when \(t_{\rm R}=\pi/(4\Omega_{R0})\), \(\langle\sigma_{x}\rangle\) (\(\langle\sigma_{y}\rangle\)) is extracted by a \(\pi/2\) Raman pulse with \(\Delta\varphi=\pi/2\) (0) [Fig.1(c)].
We now demonstrate this technique experimentally [24]. The experimental protocol is depicted in Fig.1(d). First, a BEC of \({}^{87}\)Rb atoms is adiabatically loaded into the Raman lattices. The BECs condense at \(\Gamma\) or M in the first Brillouin zone (FBZ), depending on whether the BEC is prepared in \(|\uparrow\rangle\) or \(|\downarrow\rangle\). Meanwhile, the atoms occupy the lowest band of the Raman lattices. Second, the \(\pi/2\) Raman pulse is switched on within 200ns to couple the atoms (see [24] for the sensitivities of the pulse), which is accomplished by simultaneously executing the following manipulations on the beams \(\mathbf{E}_{x,y}\): (i) turning off the retroreflective beams of \(\mathbf{E}_{x,y}\); (ii) setting the strength of the Raman pulse to \(\Omega_{\rm R0}\approx 3.4E_{\rm r}\) by tuning the intensity of \(\mathbf{E}_{x,y}\); (iii) setting the relative phase \(\Delta\varphi\) to a certain value by changing the initial phase of \(\mathbf{E}_{x,y}\); (iv) setting \(\delta_{\rm R}=0\) by adjusting the frequency of \(\mathbf{E}_{x,y}\). Finally, after holding the Raman pulse for a duration \(t_{\rm R}=10\mu s\), we measure the atomic numbers \(n_{\uparrow,\downarrow}\) using spin-resolved ToF imaging to obtain the spin polarization \(P_{\rm m}=(n_{\uparrow}-n_{\downarrow})/(n_{\uparrow}+n_{\downarrow})\).
The measured spin polarizations \(P_{\rm m}\) versus the relative phase \(\Delta\varphi\) are shown in Fig.1(e). The spin polarizations at \(\Gamma\) and M are fitted by sinusoidal functions, which demonstrates the superposition of \(\langle\sigma_{x}\rangle\) and \(\langle\sigma_{y}\rangle\) [cf., Eq. (3)]. Thus, \(\langle\sigma_{x}\rangle\) (\(\langle\sigma_{y}\rangle\)) is extracted from the fits at \(\Delta\varphi=\pi/2\) (0) for \(\Gamma\) and M, marked by diamonds. For comparison, numerical simulations of the spin polarizations with the same parameters are also plotted in Fig.1(e) and are consistent with the experimental data. These results demonstrate that Bloch state tomography in the Raman lattices is achieved.
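As a rough illustration of how Eq. (3) is used in practice, the short Python sketch below fits a simulated \(\pi/2\)-pulse polarization curve \(P_{\rm m}(\Delta\varphi)\) with the form \(\langle\sigma_{y}\rangle\cos\Delta\varphi+\langle\sigma_{x}\rangle\sin\Delta\varphi\) and reads off \(\langle\sigma_{x}\rangle\) and \(\langle\sigma_{y}\rangle\); the phase grid, "true" values, and noise level are made-up placeholders rather than the measured polarizations of Fig.1(e).

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_half_polarization(dphi, sx, sy):
    # Eq. (3) at 2*Omega_R0*t_R = pi/2: P_m = <sigma_y> cos(dphi) + <sigma_x> sin(dphi)
    return sy * np.cos(dphi) + sx * np.sin(dphi)

# Illustrative "measurement" at one quasi-momentum: scanned relative phases,
# assumed true values, and a little noise standing in for shot-to-shot fluctuations.
dphi = np.linspace(0.0, 2.0 * np.pi, 13)
true_sx, true_sy = 0.6, -0.3
pm = pi_half_polarization(dphi, true_sx, true_sy)
pm = pm + 0.03 * np.random.default_rng(0).normal(size=dphi.size)

(sx_fit, sy_fit), _ = curve_fit(pi_half_polarization, dphi, pm)
print(f"<sigma_x> = {sx_fit:+.3f}, <sigma_y> = {sy_fit:+.3f}")
# Equivalently, <sigma_x> (<sigma_y>) is the fitted curve read off at dphi = pi/2 (0).
```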
Next, we explore geometry and topology with this novel Bloch state tomography, directly mapping out the quantum geometric tensor from the eigenfunction by measuring \(\langle\sigma_{x,y,z}(\mathbf{q})\rangle\) of Bose gases in the FBZ. In that case, an almost identical procedure to the aforementioned BEC measurement is employed, except that the atomic cloud is cooled to a temperature of around 100nK [24]. The entire lowest band is then occupied by a sufficient number of atoms; simultaneously, the higher bands exhibit nonzero occupation, which reduces the contrast of the spin polarizations, as in Refs. [25; 26; 30]. We therefore subtract the high-band contribution, based on the Bose distribution determined by numerical calculations, to obtain \(\langle\sigma_{x,y,z}(\mathbf{q})\rangle\) for the lowest band (see [24] for details).
The typical normalized \(\langle\sigma_{x,y,z}(\mathbf{q})\rangle\) are drawn in
Figure 3: Experimental reconstruction of the eigenfunction in the FBZ: The amplitude of the eigenfunction \(u_{\uparrow}\) (first column) and \(u_{\downarrow}\) (second column), together with the relative phase between \(u_{\uparrow}\) and \(u_{\downarrow}\) (third column). Red squares mark the positions of the phase vortices. Results from experimental data (upper row) are compared with numerical calculations (lower row). Parameters: \((V_{0},\ \Omega_{0},\ \delta)=(4.0,1.0,-0.2)E_{\rm r}\).
Figure 2: The expectation value of three Pauli matrices \(\langle\sigma_{i=x,y,z}\rangle\) in the FBZ. (a) Normalized \(\langle\sigma_{x,y,z}(\mathbf{q})\rangle\) with \(\delta=-0.2E_{\rm r}\) for the lowest band. Results from experimental data (upper row) are compared with numerical calculations (lower row). (b) The distribution of \(\langle\mathbf{\sigma}(\mathbf{q})\rangle\) in the FBZ takes a skyrmion configuration. Parameters: \((V_{0},\Omega_{0},\delta)=(4.0,1.0,-0.2)E_{\rm r}\).
Fig.2(a); each can be divided into two regions with \(\langle\sigma_{x,y,z}\rangle<0\) and \(\langle\sigma_{x,y,z}\rangle>0\). For \(\langle\sigma_{x}(\mathbf{q})\rangle\) (\(\langle\sigma_{y}(\mathbf{q})\rangle\)), the two regions lie in the upper and lower (left and right) halves of the FBZ, respectively. For \(\langle\sigma_{z}(\mathbf{q})\rangle\), one region is centered on \(\Gamma\) and the other on M for \(\delta=-0.2E_{\rm r}\). The two regions are demarcated by a ring structure with \(\langle\sigma_{z}\rangle=0\) around M, which is a feature of the band topology in our system [25; 26]. The numerical calculations coincide with the experimental measurements. In addition, the distribution of the vectors \(\langle\mathbf{\sigma}(\mathbf{q})\rangle=\left(\langle\sigma_{x}(\mathbf{q})\rangle\,,\langle\sigma_{y}(\mathbf{q})\rangle\,,\langle\sigma_{z}(\mathbf{q})\rangle\right)\) forms a skyrmion [31] structure in momentum space, and the twisted skyrmion texture is shown in Fig.2(b). Such a skyrmion configuration offers direct evidence for complete tomography.
Moreover, the eigenfunction can also be extracted from the normalized \(\langle\sigma_{x,y,z}\rangle\). To this end, we consider the Bloch Hamiltonian under the two-band tight-binding approximation of Eq.(1) [25; 32]. After diagonalizing the Bloch Hamiltonian, the eigenfunction of the lowest band can be written as \(|u(\mathbf{q})\rangle=\left(\sin(\theta_{\mathbf{q}}/2)e^{-i\phi_{\mathbf{q}}},-\cos(\theta_{\mathbf{q}}/2)\right)^{\rm T}\), where the parameters \(\theta_{\mathbf{q}}=\arccos(-\left\langle\sigma_{z}\right\rangle)\in[0,\pi]\) and \(\phi_{\mathbf{q}}=\arg(\langle\sigma_{x}\rangle+i\,\langle\sigma_{y}\rangle)\in[0,2\pi)\). The amplitudes of the eigenfunction components \(u_{\uparrow,\downarrow}\) and their relative phase for \(\delta=-0.2E_{\rm r}\) in the FBZ are shown in Fig.3. The amplitude of \(u_{\uparrow}\) is \(\sim\)1 at the \(\Gamma\) point and close to zero at the M point, whereas the amplitude of \(u_{\downarrow}\) behaves oppositely. The relative phase is markedly different from the amplitudes and can be roughly divided into four regions: the blue-gray region (\(0\leq\varphi<0.5\pi\)) and the gray region (\(0.5\pi\leq\varphi<\pi\)) on the left of the phase map, and the light-green region (\(\pi\leq\varphi<1.5\pi\)) and the green region (\(1.5\pi\leq\varphi<2\pi\)) on the right of the phase map. Phase vortices emerge at the junctions of these four regions, which implies that the vectors \(\langle\mathbf{\sigma}\rangle\) point to the north or south poles of the Bloch sphere [33]. Our experimental measurements largely agree with the numerical calculations (second row of Fig.3). Note that the phase vortices deviate from the high-symmetry points of the FBZ {\(\Gamma\),M,\(X_{1,2}\)}, which mainly stems from the effects of the higher bands [24].
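The mapping from the normalized Pauli expectation values to the eigenfunction can be written in a few lines of code. The sketch below assumes the \(\langle\sigma_{x,y,z}(\mathbf{q})\rangle\) arrays are already normalized; the small 2x2 grids are placeholders, not the measured data of Fig.2.

```python
import numpy as np

def eigenfunction(sx, sy, sz):
    """Lowest-band eigenfunction components from normalized <sigma_{x,y,z}(q)>."""
    theta = np.arccos(np.clip(-sz, -1.0, 1.0))          # theta_q in [0, pi]
    phi = np.angle(sx + 1j * sy) % (2.0 * np.pi)        # phi_q in [0, 2*pi)
    u_up = np.sin(theta / 2.0) * np.exp(-1j * phi)
    u_down = -np.cos(theta / 2.0)
    return u_up, u_down

# Illustrative 2x2 patch of the FBZ (placeholder values, not measured data);
# sz is chosen here so that the Bloch vector is normalized.
sx = np.array([[0.1, -0.2], [0.3, 0.0]])
sy = np.array([[0.2, 0.1], [-0.1, 0.4]])
sz = -np.sqrt(np.clip(1.0 - sx**2 - sy**2, 0.0, None))
u_up, u_down = eigenfunction(sx, sy, sz)
print(np.abs(u_up), np.abs(u_down))                      # amplitudes, cf. Fig. 3
print(np.angle(u_up * np.conj(u_down)) % (2 * np.pi))    # relative phase between u_up and u_down
```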
Thanks to the complete observation of the eigenfunction \(|u(\mathbf{q})\rangle\), we can identify the geometric structures of the quantum states by the gauge-invariant quantum geometric tensor \(\chi_{\alpha\beta}\)[34]
\[\chi_{\alpha\beta}=\langle\partial_{q_{\alpha}}u|(1-|u\rangle \langle u|)|\partial_{q_{\beta}}u\rangle=g_{\alpha\beta}-i\mathcal{F}_{\alpha \beta}/2. \tag{4}\]
Here, the quantum metric tensor \(g_{\alpha\beta}=\text{Re}(\chi_{\alpha\beta})\), the real part of the QGT, measures a distance between quantum states in quasi-momentum space [7]. The Berry curvature \(\mathcal{F}_{\alpha\beta}=-2\text{Im}(\chi_{\alpha\beta})\), the imaginary part of the QGT, acts as an effective 'electromagnetic' tensor in quasi-momentum space [5]. Hence, the momentum-resolved quantum metric tensor \(g_{\alpha\beta}\) and Berry curvature \(\mathcal{F}_{xy}\) are extracted from the measured eigenfunction in Fig.3 using Eq.(4) and displayed in Fig.4(a). The Berry curvature \(\mathcal{F}_{xy}\) is mainly localized near the ring structure; its sign is negative and its direction is perpendicular to the \(q_{x}\)-\(q_{y}\) plane. Thereby, the Chern number \(\mathcal{C}=\int_{\rm BZ}\mathcal{F}_{xy}d\mathbf{q}/2\pi=-1.00\pm 0.02\) at \(\delta=-0.2E_{\rm r}\), signifying that the band is topologically non-trivial. Tuning the detuning from \(\delta=-1.0E_{\rm r}\) to \(\delta=1.0E_{\rm r}\), the topological phase diagram in Fig.4(b) is obtained from the Chern number, consistent with our previous measurements via quench dynamics [32; 35; 36].
In addition, the quantum metric tensor in Fig.4(a) also takes non-vanishing values on the ring structure around M, indicating that the distance between quantum states on the ring is larger than in other areas. Moreover, the integral of the quantum metric tensor over the full Brillouin zone gives a quantum volume [37], \(\text{vol}_{g}=\int_{\rm BZ}\sqrt{g_{xx}g_{yy}-g_{xy}^{2}}\,d\mathbf{q}/\pi\), which obeys the inequality \(\text{vol}_{g}\geq|\mathcal{C}|\) for Chern insulators [27; 28]. Figure 4(b) shows the quantum volume and the Chern number as
Figure 4: The quantum geometric tensor in the FBZ. (a) The Berry curvature \(\mathcal{F}_{xy}(\mathbf{q})\), together with the three components of the quantum metric tensor \(g_{xx}(\mathbf{q})\), \(g_{yy}(\mathbf{q})\), \(g_{xy}(\mathbf{q})\). Results from experimental data (upper row) are compared with numerical calculations (lower row). Parameters: \((V_{0},\ \Omega_{0},\ \delta)=(4.0,1.0,-0.2)E_{\rm r}\). (b) The Chern number \(\mathcal{C}\) and the quantum volume \(\text{vol}_{g}\) versus the detuning \(\delta\). The circles and squares with error bars are from experimental results. The dashed pink and green curves are from numerical calculations. Parameters: \((V_{0},\ \Omega_{0})=(4.0,1.0)E_{\rm r}\).
functions of the detuning, validating this inequality experimentally. The inequality is related to superfluidity or superconductivity [38]. We may therefore use the inequality to roughly evaluate the topological properties in the 2D Raman lattices: if \(\text{vol}_{g}<1\), \(\mathcal{C}=0\); if \(\text{vol}_{g}\geq 1\) and \(\delta\neq 0\), \(|\mathcal{C}|=1\). Note that the quantum volume takes non-integer values, which can be straightforwardly interpreted from the positive semi-definiteness of the quantum metric tensor, as elaborated in [39].
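To make the construction of Eq.(4) and the derived quantities concrete, the sketch below evaluates the quantum geometric tensor numerically on a discrete quasi-momentum grid using the equivalent, gauge-invariant projector form \(\chi_{\alpha\beta}=\mathrm{Tr}[\partial_{\alpha}P(1-P)\partial_{\beta}P]\) with \(P=|u\rangle\langle u|\), and then integrates the Berry curvature into a Chern number and the metric into \(\text{vol}_{g}\). For self-containedness it uses a generic two-band Bloch vector \(d(\mathbf{q})=(\sin q_{x},\sin q_{y},m+\cos q_{x}+\cos q_{y})\) as a stand-in for the Raman-lattice band; feeding in the reconstructed \(|u(\mathbf{q})\rangle\) of Fig.3 instead would, in principle, reproduce the quantities of Fig.4.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def band_projector(qx, qy, m=-1.0):
    """Projector |u><u| onto the lowest band of an illustrative two-band model."""
    h = np.sin(qx) * sx + np.sin(qy) * sy + (m + np.cos(qx) + np.cos(qy)) * sz
    _, vecs = np.linalg.eigh(h)              # eigenvalues in ascending order
    u = vecs[:, 0]
    return np.outer(u, u.conj())             # gauge-independent quantity

n = 120
qs = np.linspace(-np.pi, np.pi, n, endpoint=False)
dq = qs[1] - qs[0]
P = np.array([[band_projector(qx, qy) for qy in qs] for qx in qs])   # (n, n, 2, 2)

# Central finite differences of the projector (smooth across the BZ for a gapped band)
dPx = (np.roll(P, -1, axis=0) - np.roll(P, 1, axis=0)) / (2 * dq)
dPy = (np.roll(P, -1, axis=1) - np.roll(P, 1, axis=1)) / (2 * dq)
one = np.eye(2)

def chi(dPa, dPb):
    # Gauge-invariant QGT: chi_ab = Tr[ dPa (1 - P) dPb ], equivalent to Eq. (4)
    return np.einsum('ijab,ijbc,ijca->ij', dPa, one - P, dPb)

chi_xx, chi_xy, chi_yy = chi(dPx, dPx), chi(dPx, dPy), chi(dPy, dPy)
g_xx, g_yy, g_xy = chi_xx.real, chi_yy.real, chi_xy.real
F_xy = -2.0 * chi_xy.imag                                             # Berry curvature

chern = F_xy.sum() * dq**2 / (2 * np.pi)
vol_g = np.sqrt(np.clip(g_xx * g_yy - g_xy**2, 0, None)).sum() * dq**2 / np.pi
print(f"C = {chern:+.3f}, vol_g = {vol_g:.3f}, vol_g >= |C|: {vol_g + 1e-6 >= abs(chern)}")
```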
Our measurements of the quantum geometric tensor via Bloch state tomography in 2D optical Raman lattices establish a paradigm for investigating the geometry and topology of quantum states. Since this tomography technique only requires accurate control over the momentum transfer and the phase of the Raman pulse, it can be readily generalized to interacting systems [40], multi-band systems [41], and 3D topological systems [30]. It also opens the possibility of spatiotemporally coherent manipulation towards detecting new dynamical topological states of matter. For instance, one could apply the tomography to quench dynamics with the goal of probing the complete structure of the Hopf fibration [42; 43; 44; 45]. The quantum metric tensor also provides important information that the Berry curvature cannot capture when the latter vanishes or diverges, including geometric or topological properties [46], non-Hermitian systems [47; 48], geometric orbital susceptibility [49], as well as Bose and Fermi superfluids [38; 50]. Such a situation can be reached by adjusting the parameters of the Raman lattices so that gapless bands exist, and one can then detect phenomena in topological semimetals governed by the quantum metric tensor, such as the dynamics of a wave packet [15; 47].
We acknowledge insightful discussions with Ji-Zhou Wu. This work was supported by the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302001 and 2021ZD0302100), the National Natural Science Foundation of China (Grant No. 12025406 and 12104445), Anhui Initiative in Quantum Information Technologies (Grant No. AHY120000), Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB28000000). J.-Y.Z. acknowledges support from the startup grant of University of Science and Technology of China (Grant No. KY2340000152) and the Shanghai Pujiang Program (Grant No. 21PJ1413600). C.-R.Y. acknowledges support from the China Postdoctoral Science Foundation (Grant No. 2021M703112).
|
2305.14163
|
Leveraging Open Information Extraction for More Robust Domain Transfer
of Event Trigger Detection
|
Event detection is a crucial information extraction task in many domains,
such as Wikipedia or news. The task typically relies on trigger detection (TD)
-- identifying token spans in the text that evoke specific events. While the
notion of triggers should ideally be universal across domains, domain transfer
for TD from high- to low-resource domains results in significant performance
drops. We address the problem of negative transfer in TD by coupling triggers
between domains using subject-object relations obtained from a rule-based open
information extraction (OIE) system. We demonstrate that OIE relations injected
through multi-task training can act as mediators between triggers in different
domains, enhancing zero- and few-shot TD domain transfer and reducing
performance drops, in particular when transferring from a high-resource source
domain (Wikipedia) to a low(er)-resource target domain (news). Additionally, we
combine this improved transfer with masked language modeling on the target
domain, observing further TD transfer gains. Finally, we demonstrate that the
gains are robust to the choice of the OIE system.
|
David Dukić, Kiril Gashteovski, Goran Glavaš, Jan Šnajder
|
2023-05-23T15:27:35Z
|
http://arxiv.org/abs/2305.14163v2
|
# Leveraging Open Information Extraction for More Robust Domain Transfer of Event Trigger Detection
###### Abstract
Event detection is a crucial information extraction task in many domains, such as Wikipedia or news. The task typically relies on trigger detection (TD) - identifying token spans in the text that evoke specific events. While the notion of triggers should ideally be universal across domains, domain transfer for TD from high-to low-resource domains results in significant performance drops. We address the problem of negative transfer for TD by coupling triggers between domains using subject-object relations obtained from a rule-based open information extraction (OIE) system. We demonstrate that relations injected through multi-task training can act as mediators between triggers in different domains, enhancing zero- and few-shot TD domain transfer and reducing negative transfer, in particular when transferring from a high-resource source Wikipedia domain to a low-resource target news domain. Additionally, we combine the extracted relations with masked language modeling on the target domain and obtain further TD performance gains. Finally, we demonstrate that the results are robust to the choice of the OIE system.
## 1 Introduction
Event detection (ED) is considered an important part of the information extraction pipeline in natural language processing (NLP). ED systems are typically closed-domain and work by filling predefined event-specific slots evoked by an event _trigger_ - a span of words in the input sequence that most clearly evokes a particular event type. A typical closed-domain ED workflow consists of trigger detection (TD) and trigger classification (TC) tasks Xiang and Wang (2019), aiming first to locate the triggers in the text and then assign each trigger an event type. Once triggers in the text have been identified, the next step may be to detect the arguments of the corresponding events, such as participants, location, and time. Based on triggers, detected events can be used for many downstream tasks, including knowledge graph construction Zhang et al. (2021), information retrieval Glavas and Snajder (2013), text summarization Zhang et al. (2023), and aspect-based sentiment analysis Tang et al. (2022).
The concept of an event appears intuitive at first glance, and one might expect the notion of triggers to be universal across domains. However, research in NLP has struggled to provide a clear-cut operational definition of an event, giving rise to diverse annotation schemes, e.g., Doddington et al. (2004); Pustejovsky et al. (2005); Shaw et al. (2009); Cybulska and Vossen (2014); Song et al. (2015). The differences between annotation schemes, alongside the usual distribution shifts between domains, make TD domain transfer a real challenge. Empirical evidence has demonstrated significant performance drops in zero- and few-shot scenarios when attempting TD transfer from the high-resource source to the low-resource target domain - a phenomenon known as _negative transfer_ (Ngo Trung et al., 2021; Meftah et al., 2021; Wang et al., 2019). In the absence of an effective domain transfer method for TD, each new domain will require the manual
Figure 1: Result of triple extraction with MinIE and trigger detection model output on a Wikipedia sentence.
annotation of trigger spans and event types, which is a cumbersome and resource-intensive task.
One way to facilitate the TD domain transfer may be by introducing a proxy task that will align the triggers in the two domains. One such task is subject-relation-object (SRO) extraction. Recent work by Deng et al. (2022) showed that trigger and argument detection could be aligned with the SRO triple extraction in Chinese. They annotated triggers and arguments in news titles with the assumption that extracted subjects and objects can be mapped to arguments and relations to triggers. Both ED and SRO extraction tasks aim at inducing the predicate-argument structure, although triples are more general and need not be event-centric. Consequently, there will be a significant overlap between SRO triples on one side and triggers and arguments on the other, which could possibly be leveraged to facilitate TD domain transfer.
Open Information Extraction (OIE) systems (Banko et al., 2007) can automatically extract SRO triples in a domain-independent manner (Sun et al., 2018). Although recent OIE systems are trained in a supervised manner (Kolluru et al., 2020; Kotnis et al., 2022), traditional OIE systems typically do not require training or domain-specific pre-processing of the input text (Lauscher et al., 2019). With the release of more performant rule-based OIE systems such as Stanford OIE (Angeli et al., 2015) or MinIE (Gashteovski et al., 2017), the cost of triple extraction from arbitrary data became negligible. Rule-based OIE systems still outperform neural OIE systems (Gashteovski et al., 2022), and neural systems are also slower (Kotnis et al., 2022). The result of triple extraction with the MinIE system is shown in Figure 1 and illustrates the overlap between the rule-based extraction of the relation _broke into_ and the trigger _broke_ discovered by a trigger detection model pre-trained on manually annotated data from the Wikipedia domain.
This paper addresses the challenge of negative transfer for TD by leveraging OIE to align the event triggers between two domains. While annotating more target domain data to improve the TD domain transfer is costly, extracting relations on arbitrary domain data with rule-based OIE systems can be done cheaply and at scale. With this in mind, we investigate remedies to overcome low TD domain transfer performance by binding triggers from domains through automatically extracted subject-object relations. More precisely, we combine the domain trigger annotations and the OIE relation extractions in zero- and few-shot setups with multi-task model designs and various training regimes. Although the relations do not always map ideally to triggers, we observe that relations extracted with MinIE bring stability when transferring TD from the high-resource source domain to the low-resource target domain. We demonstrate that OIE relations combined with large language models and transfer training regimes adopted from work on language transfer (Schmidt et al., 2022) can improve target domain TD performance by reducing the distribution shift between domains. Furthermore, following the recent work in pre-trained language model domain adaptation (Gururangan et al., 2020), we show that adding an auxiliary objective of masked language modeling (MLM) on the target domain alongside coupling source with target domain triggers via relations can bring additional gains. Finally, we show that observed gains from relations are robust to the choice of the OIE system. To the best of our knowledge, this is the first work that improves TD domain transfer with OIE relations in a multi-task model design with transfer training regimes.
## 2 Leveraging OIE for Trigger Detection
We tackle TD as a sequence labeling task where each token is classified as either part of a trigger span or outside of it. We use a standard approach where each model is trained to tag tokens in the sequence with TD labels using the IOB2 (inside, outside, begin) tagging scheme (Ratnaparkhi, 1998). Analogously, we model relation detection (RD) as a sequence labeling task with the appropriate IOB2 tags.
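To make the labeling scheme concrete, here is a small, entirely made-up example; the sentence, the tag names (B-TRG/I-TRG, B-REL/I-REL), and the label mapping are illustrative choices, not taken from the datasets.

```python
# Hypothetical sentence with IOB2 tags for trigger detection (TD) and
# relation detection (RD); tag names are illustrative.
tokens  = ["Protesters", "broke", "into",  "the", "city", "hall", "."]
td_tags = ["O",          "B-TRG", "O",     "O",   "O",    "O",    "O"]   # gold triggers
rd_tags = ["O",          "B-REL", "I-REL", "O",   "O",    "O",    "O"]   # silver OIE relation

tag2id = {"O": 0, "B-TRG": 1, "I-TRG": 2}   # per-task label-to-id map (RD analogous)
assert len(tokens) == len(td_tags) == len(rd_tags)
```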
We tackled the TD domain transfer with three separate model designs building on the RoBERTa-base model for token classification: _vanilla_, _implicit multi-task_, and _explicit multi-task_. In multi-task setups (Zhang et al., 2022), we used gold trigger labels from both the source and target (in few-shot scenarios) domains. Additionally, we utilized silver post-processed relation extractions obtained via the rule-based MinIE OIE system to couple triggers across domains through relations. For the _vanilla_ model, only the gold trigger labels were employed. Section 3 gives the details on the source and target datasets, data pre-processing, and the relation extraction method.
Vanilla. The _vanilla_ model is simply a RoBERTa-base transformer with a token classification layer on top, designed to solve TD as a token classification task.
Implicit Multi-task. The _implicit multi-task_ model uses relations in both the training and inference stages. The model works by randomly initializing a relation label embedding matrix with three rows (one for each IOB2 tag) prior to training. It concatenates the representation of each token from the input sentence with the relation embedding corresponding to the OIE relation extraction. This concatenated input is then passed to the token classification layer, which performs TD. This should force the model to learn the trigger-relation connection implicitly and sway it towards higher trigger labeling recall when applied to the target domain. The reasoning behind the expected recall increase is that relations are a more general concept than triggers, which, on average, span fewer tokens than relations do. The relation embeddings and the other model parameters are updated based on the TD loss at training time. However, the relation embeddings are fixed at inference time and serve as a lookup table. When we transfer to a different domain, we can leverage the same OIE extractor to obtain relation extractions on the target data. We then obtain TD predictions on the target data based on the target domain relation extractions, the fine-tuned _implicit_ model, and the trained relation embedding matrix.
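A minimal sketch of this design, assuming a Hugging Face RoBERTa backbone, could look as follows; the class name, the default embedding size of 300, and the three-label IOB2 spaces are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel

class ImplicitTDModel(nn.Module):
    def __init__(self, num_td_labels=3, num_rel_labels=3, rel_emb_dim=300):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.rel_embeddings = nn.Embedding(num_rel_labels, rel_emb_dim)
        hidden = self.encoder.config.hidden_size
        self.td_head = nn.Linear(hidden + rel_emb_dim, num_td_labels)

    def forward(self, input_ids, attention_mask, rel_label_ids):
        # rel_label_ids: per-token IOB2 ids of the (silver) OIE relation extraction
        token_repr = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        rel_repr = self.rel_embeddings(rel_label_ids)          # lookup, trained via the TD loss
        logits = self.td_head(torch.cat([token_repr, rel_repr], dim=-1))
        return logits                                           # (batch, seq_len, num_td_labels)
```

At inference time, the embedding matrix is frozen and is simply looked up with the IOB2 ids of the target-domain OIE extractions.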
Explicit Multi-task. This model exploits silver relation labels only during training. The _explicit_ model uses two token classification heads: one for the TD task and the other for the RD task. The loss is calculated by averaging the TD and RD losses on a mini-batch basis. Each sequence is essentially fed twice - once with gold trigger labels and once with silver relation labels. At inference time, only the TD head is used. We also experimented with a single token classification head for both tasks, where the training is conducted similarly to the _explicit_ model.1
Footnote 1: In our preliminary experiments, this led to consistently lower results than all other models, so we omitted them.
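A corresponding sketch of the _explicit_ variant, under the same assumptions as the implicit sketch above; for brevity it computes the TD and RD losses in a single forward pass, whereas the description above feeds each sequence twice. The -100 ignore index marks positions excluded from the loss (e.g., non-first subwords or padding).

```python
import torch
import torch.nn as nn
from transformers import RobertaModel

class ExplicitTDModel(nn.Module):
    def __init__(self, num_td_labels=3, num_rd_labels=3):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        self.td_head = nn.Linear(hidden, num_td_labels)    # trigger detection head
        self.rd_head = nn.Linear(hidden, num_rd_labels)    # relation detection head
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, input_ids, attention_mask, td_labels=None, rd_labels=None):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        td_logits, rd_logits = self.td_head(h), self.rd_head(h)
        loss = None
        if td_labels is not None and rd_labels is not None:
            td_loss = self.loss_fn(td_logits.flatten(0, 1), td_labels.flatten())
            rd_loss = self.loss_fn(rd_logits.flatten(0, 1), rd_labels.flatten())
            loss = 0.5 * (td_loss + rd_loss)                # average of TD and RD losses
        return loss, td_logits                              # only the TD head is used at inference
```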
## 3 Experimental Setup
Our experiments examine the transfer from a high-resource source domain to a low-resource target domain, which is the common transfer direction. In essence, we chose to transfer from Wikipedia to the news domain based on available TD datasets. For facilitating few-shot TD domain transfer, we employed _joint_ and _sequential_ transfer training regimes in combination with multi-task models introduced in Section 2.
### Datasets and Preprocessing
As the high-resource source domain, we use the MAVEN dataset from the Wikipedia domain, which is annotated with triggers at the sentence level. The low-resource target domain datasets were selected from the news domain and also have sentence-level annotations or could easily be separated into sentences with trigger annotations. We opted for the ACE 2005, EDNYT, and EVEXTRA datasets. The dataset statistics are summarized in Table 1.
Maven. The MAssive eVENt detection dataset (MAVEN) (Wang et al., 2020) from the English Wikipedia domain is the largest freely available dataset suitable for TD in terms of coverage of event types and triggers. The size and coverage of event types make MAVEN the ideal source data for TD domain transfer. The dataset comes with predefined train, validation, and test splits. However, no gold labels were published for the MAVEN test set. Therefore, we used the official validation set as a test set2 and randomly sampled \(20\%\) of sentences from the training data as a new validation set before conducting any experiments. The sentences come pre-tokenized.
Footnote 2: We use the test set only to measure the performance of the source model.
ACE 2005. The 2005 Automatic Content Extraction (ACE) (Doddington et al., 2004) is a widely used ED dataset compiled from various news sources in multiple languages. We used only the English train, validation, and test splits. We preprocess ACE 2005 with a standard tool to obtain tokens, sentences, and splits.3 Although ACE is sizable in terms of the number of sentences,
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
\multicolumn{1}{c}{**Dataset**} & \multicolumn{3}{c}{**Train**} & \multicolumn{3}{c}{**Valid**} & \multicolumn{3}{c}{**Test**} \\ \cline{2-10}
 & \#Sent & \#Tr & \#Re & \#Sent & \#Tr & \#Re & \#Sent & \#Tr & \#Re \\ \hline
MAVEN & 29844 & 2063 & 15590 & 6437 & 6018 & 3940 & 8012 & 2499 & 4805 \\
ACE 2005 & 14672 & 3256 & 1704 & 873 & 548 & 446 & 711 & 295 & 412 \\
EDNYT & 1824 & 1500 & 1164 & 95 & 74 & 65 & 198 & 355 & 115 \\
EVEXTRA & 8534 & 7056 & 5864 & 1130 & 902 & 700 & 3462 & 2077 & 1590 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Number of sentences per split for each of the used datasets (#Sent), next to the number of sentences that contain triggers (#Tr) and relations after post-processing of MinIE triple extractions (#Re).
many ACE sentences do not contain any triggers, as pointed out by Wang et al. (2020) (cf. Table 1).
EDNYT. The event detection dataset, which we refer to as the EDNYT (Event Detection New York Times) dataset (Maisonnave et al., 2020), was created by sampling sentences for annotation from the New York Times archive. The authors sampled sentences from three episodes of real-world financial crises. Hence, this dataset is focused on a narrow topic inside the news domain. The dataset was not tokenized but was split in advance into training and test sets, where the test set comprises \(10\%\) of all data. We obtained a validation set by randomly sampling \(5\%\) of the train data. The sentences were tokenized with spaCy.4 A negligible percentage of sentences had trigger spans that could not be aligned with the spaCy tokenization, and these were discarded.
Footnote 4: [https://spacy.io](https://spacy.io)
EVEXTRA. A large newspaper corpus annotated with triggers was created by labeling English articles collected with EMM NewsBrief, a service for collecting news stories.5 The EVEXTRA dataset (Glavas and Snajder, 2015) was tokenized and contained splits of articles into sentences. However, the sentences were not split into train, validation, and test sets. Thus, we randomly assigned sentences to train, validation, and test splits in a 70/10/20 ratio, ensuring that sentences from the same article end up in the same split. A few sentences were dropped because it was not possible to align the trigger annotations with the tokens.
Footnote 5: [https://emm.newsbrief.eu/NewsBrief](https://emm.newsbrief.eu/NewsBrief)
Extracting Relations. We used the MinIE system to extract triples from sentences for our main experiments. MinIE was chosen based on its verified usefulness for many downstream tasks tested with the BenchIE benchmark and evaluation framework for OIE systems (Gashteovski et al., 2022; Friedrich et al., 2022). Since this rule-based OIE system extracts all possible triples from the input text sequence and has demonstrated minor extraction flaws, we employed a set of rules and heuristics to post-process the extracted triples and clean the extracted relations. First, we removed implicit triple extractions6 and discarded all non-consecutive subject, relation, or object extractions. We also removed extractions that were not complete triples or where relations contained more than five tokens, as well as all extractions with an order other than subject-relation-object. Next, we removed subject and object extraction information from the sentences. If, after that, there were still multiple relation extractions for the same sentence, we tried to merge the relations. The merging process was designed to keep all the relations if no tokens were shared between them. In the case of shared tokens, we kept only the relation extraction with the highest number of tokens making up the relation. If all the relation extractions were filtered out in the process, that sentence was considered a sentence without relations and was used for training as an example with all O labels. Identical sentences were dropped at the end. We applied this process to each split for the source and target datasets.7 All these heuristics were designed to artificially improve the alignment of automatically extracted relations with labeled triggers. The final number of sentences with relation extractions (i.e., not all O tags) per split is in Table 1.
Footnote 6: OIE systems often incorporate binding tokens like _is_ in cases when there are no existing relations within the sequence.
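The filtering and merging heuristics can be summarized in a few lines. The sketch below is a rough rendering under the assumption that each OIE relation has already been mapped to a consecutive token index span within the sentence; the span representation and function names are ours, while the length threshold and the merging rule follow the description above.

```python
def clean_relations(relation_spans, max_len=5):
    """relation_spans: list of (start, end) token spans (end exclusive) of OIE relations."""
    # Keep only non-empty, consecutive spans of at most `max_len` tokens.
    spans = [(s, e) for s, e in relation_spans if 0 < e - s <= max_len]
    # Merge: keep all non-overlapping relations; on overlap, keep the longest one.
    spans.sort(key=lambda se: se[1] - se[0], reverse=True)      # longest first
    kept = []
    for s, e in spans:
        if all(e <= ks or s >= ke for ks, ke in kept):          # no token shared
            kept.append((s, e))
    return sorted(kept)

def to_iob2(num_tokens, spans):
    """Turn the kept relation spans into IOB2 RD tags (all 'O' if nothing survives)."""
    tags = ["O"] * num_tokens
    for s, e in spans:
        tags[s] = "B-REL"
        for i in range(s + 1, e):
            tags[i] = "I-REL"
    return tags

# Example: two overlapping relation spans and one disjoint span over a 7-token sentence.
print(to_iob2(7, clean_relations([(1, 3), (2, 4), (5, 6)])))
```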
### Training Regimes
On top of adding relations through the multi-task model designs, the key to reducing negative transfer was the use of different training regimes for the few-shot setups. The first was _in-domain training_, which boils down to fine-tuning the _vanilla_, _implicit_, or _explicit_ model on the available few-shot target domain examples. The remaining two training regimes were inspired by recent findings in language transfer, namely _joint training_ and _joint transfer_ with mixed batches, and _sequential transfer_ (Meftah et al., 2021; Schmidt et al., 2022).
Joint Training and Transfer. The _joint_ regime relies on mixed batches. The loss is calculated in mini-batches where each batch consists predominantly of source gold trigger examples and a much lower fixed share of few-shot target gold trigger examples. Fine-tuning is performed for a fixed number of epochs. In all _joint_ experiments, we created mixed mini-batches with five randomly sampled few-shot target examples and 27 source examples. If more than five few-shot examples were available, five were consistently randomly sampled from the few-shot pool to include in each mini-batch and ensure a consistent batch size of 32 sentences throughout the experiments. The _joint_ loss was calculated as an average of the source TD loss and the loss on the five randomly sampled target examples. The idea here is that the few few-shot examples should contribute to the parameter updates with a weight equal to that of the abundant source examples and ultimately prevent the model from overfitting on the source data. This was precisely the case for language transfer in the research by Schmidt et al. (2022). In our experiments, _joint training_ means training from RoBERTa-base with the _vanilla_, _implicit_, or _explicit_ model design, while _joint transfer_ means training from RoBERTa-base fine-tuned for TD on the source training data with the same model designs as _joint training_. Both regimes perform training with mixed batches. Effectively, _joint transfer_ uses the source training data twice, while both _joint training_ and _joint transfer_ use the target training data once during fine-tuning. The _joint transfer_ regime applied to the multi-task models utilizes source relations in the first step and source and target relations in the second training step.
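A schematic of the mixed-batch construction, assuming indexable lists of pre-processed source and few-shot target examples; the function name and generator structure are illustrative.

```python
import random

def mixed_batches(source_examples, target_fewshot, n_source=27, n_target=5, seed=0):
    """Yield (source, target) chunks forming mixed mini-batches of 27 + 5 = 32 examples."""
    rng = random.Random(seed)
    rng.shuffle(source_examples)
    for i in range(0, len(source_examples) - n_source + 1, n_source):
        src = source_examples[i:i + n_source]
        tgt = rng.sample(target_fewshot, min(n_target, len(target_fewshot)))
        # The joint loss averages the TD loss on `src` with the TD loss on `tgt`.
        yield src, tgt
```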
Sequential Transfer. Similarly to _joint transfer_, training in the _sequential transfer_ manner entails starting from RoBERTa-base fine-tuned for TD on the source domain data with the _vanilla_, _implicit_, or _explicit_ model designs and further fine-tuning on all available few-shot examples for a fixed number of epochs.
### Training Details and Hyperparameters
In our experiments, we explored the effectiveness of zero- and few-shot training approaches. Each training regime and model came with its implementation specifics and nuances, which we list here. All experiments shared a pre-trained RoBERTa-base model for token classification with the implementation from Hugging Face8 (Liu et al., 2019). The model inputs were not lowercased. The models were trained with the cross-entropy loss on an _Ampere A100 GPU_. Since RoBERTa-base works on input split into subwords, the TD loss was adjusted to take into account only the first token of each tokenized word from the input sequence. We employed the Adam optimizer (Kingma and Ba, 2014) with a starting learning rate of \(0.00001\). Our preliminary experiments found that incorporating a learning rate scheduler was beneficial. Specifically, we utilized a multiplicative learning rate scheduler with a multiplying factor of \(0.99\), which multiplies the learning rate in each epoch, lowering it throughout training.
Footnote 8: [https://huggingface.co](https://huggingface.co)
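The optimization setup just described can be expressed compactly as below; the `torch.nn.Linear` module is only a stand-in for one of the TD models.

```python
import torch

model = torch.nn.Linear(4, 3)   # stand-in for any of the TD models described above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lambda epoch: 0.99)

for epoch in range(10):                      # fixed number of epochs
    # ... mini-batch updates (batch size 32) would go here ...
    scheduler.step()                         # learning rate shrinks by a factor of 0.99 per epoch
    print(epoch, optimizer.param_groups[0]["lr"])
```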
All models were trained for a fixed number of \(10\) epochs. When training on the source domain, we used the source validation set to select the best model based on the micro F1 TD performance. Specifically, we chose the model from the epoch that yielded the highest validation TD performance.9 When training on the source domain, _implicit_ model was additionally optimized with a simple grid search over the relation label embedding matrix hidden size and learning rate for it. We tried hidden sizes of \(10,50,100\), and \(300\) and learning rates of \(0.0001,0.00005\), and \(0.00001\). When doing few-shot fine-tuning for _joint transfer_ and _sequential transfer_, we fixed the hidden size to the one that yielded the highest source validation set F1 score. For the _joint training_ and _in-domain training_ experiments, we arbitrarily fixed the embedding size of the _implicit_ model to \(300\) and \(10\) across all the experiments, respectively. Throughout our experiments, we maintained a consistent batch size of \(32\). Padding was applied to match the length of the longest example in each mini-batch. Also, we employed gradient clipping of model parameters to a maximum value of \(1.0\) after each mini-batch update.
Footnote 9: We also experimented with selecting the model based on the MLM perplexity of the target validation set, but that approach resulted in lower transfer results. It was essentially a trade-off for the model between learning TD adequately or adjusting to the target domain at the expense of TD performance.
As stated in the few-shot _joint/sequential transfer_ regimes, we initiated fine-tuning from the best model selected on the source validation set. Further, we fine-tuned it for a fixed number of \(10\) epochs, following a similar procedure as the _joint/in-domain training_ regimes. In the second stage of the _joint transfer_ for the _explicit model_, we averaged the source TD and RD loss with the target few-shot TD and RD loss on a mini-batch basis. Similarly, we averaged the source TD and target few-shot TD loss for the _joint transfer_ of the _implicit model_.
Regarding the details of the MLM training, we used a token-level masking probability of \(15\%\), and the masking procedure was inherited from Devlin et al. (2019). For _in-domain training_, we update the model's parameters in an alternating fashion inside each epoch: first based on the MLM loss on the target training data, and then on the target TD loss. The second update is the same as in _in-domain training_ without MLM. The _sequential transfer_ works the same as
without MLM; only the loaded pre-trained model differs. To incorporate MLM before applying _sequential transfer_, we first fine-tune each model in the same alternating fashion described above, but with updates based on the MLM loss on target training data and the TD loss on source training data.
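Schematically, the alternating update inside each epoch looks like the following sketch; `mlm_loss` and `td_loss` are hypothetical callables standing in for the actual masked-language-modeling and trigger-detection loss computations, and the loaders are placeholders for the real data pipeline.

```python
def train_alternating(model, optimizer, mlm_batches, td_batches, mlm_loss, td_loss, epochs=10):
    """Alternate MLM and TD parameter updates within each epoch (schematic)."""
    for _ in range(epochs):
        for batch in mlm_batches:            # 15% token masking, as in Devlin et al. (2019)
            optimizer.zero_grad()
            mlm_loss(model, batch).backward()
            optimizer.step()
        for batch in td_batches:             # token-level trigger detection
            optimizer.zero_grad()
            td_loss(model, batch).backward()
            optimizer.step()
```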
## 4 Results and Discussion
The main results of our experiments use MinIE as a relation extractor in multi-task model designs. We report all results in this subsection for \(0,5,10,50,100,250\), and \(500\) shots. To evaluate the performance of our models on the token classification TD task, we utilize the IOB2 tagging scheme and provide micro F1 scores. All results are reported under strict evaluation mode, ensuring a rigorous assessment of each model's performance. We average all results over three different seeds. In addition, for our few-shot experiments, we perform an additional averaging process by randomly sampling five different subsets from the target data training set. Moreover, we take precautions to ensure that the samples in each draw are consistent across experiments and that the few-shot samples pulled from the target training set exclusively consist of examples containing triggers.
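A sketch of how the strict span-level micro F1 over IOB2 sequences can be computed, e.g., with the seqeval package; the tag names and toy sequences below are illustrative.

```python
from seqeval.metrics import f1_score
from seqeval.scheme import IOB2

gold = [["O", "B-TRG", "O", "O"], ["B-TRG", "I-TRG", "O", "O"]]
pred = [["O", "B-TRG", "O", "O"], ["B-TRG", "O",     "O", "O"]]

# Strict mode requires exact span and tag matches under the IOB2 scheme.
print(f1_score(gold, pred, mode="strict", scheme=IOB2, average="micro"))
```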
### Main Results
The main results of our work are in Table 2. When doing the zero-shot TD transfer by training on MAVEN as a source and predicting on news datasets as targets, we witness the effect of negative transfer. The drops are immense compared to the performance of the models trained on all ACE 2005, EDNYT, or EVEXTRA training data. Even in this worst-case zero-shot setup, the multi-task _implicit_ and _explicit_ models bring gains compared to _vanilla_ ones. When the number of shots increases, interesting effects emerge.
Table 2: Main results. Micro F1 scores for zero- and few-shot TD domain transfer from MAVEN to ACE 2005 (in-domain 0.706), EDNYT (0.702), and EVEXTRA (0.893), comparing the Vanilla, Implicit, and Explicit models across training regimes and numbers of shots.
On average, relations help achieve higher target-domain performance for a low to moderate number of shots. Nevertheless, when we have \(500\) or even \(250\) target examples, the effects of relations become negligible, except for the EVEXTRA dataset, where the gains from relations are consistent regardless of the number of shots or training regime. When considering all training regimes, the _implicit_ model outperforms the _explicit_ model. Contrary to the findings from language transfer (Schmidt et al., 2022), the _joint_ regimes were consistently worse than _in-domain training_10 and _sequential transfer_. These findings are notable since _joint_ is worse performance-wise and requires considerably more resources and time to train. With \(500\) shots, _in-domain training_ and _sequential transfer_ come close to the in-domain performance for each news dataset. For a low number of shots (\(5\) and \(10\)), doing _in-domain training_ is pointless; it is always better to transfer sequentially in that case. However, increasing the number of shots in combination with _in-domain training_ can lead to better performance than doing the _sequential transfer_.
Footnote 10: Except for \(5\) and \(10\) shots.
### Adding Auxiliary MLM Objective
Building on recent findings in pre-trained language model domain adaptation, we investigate whether MLM can further boost TD transfer from Wikipedia to the news domain. Since the _joint_ regimes were consistently worse in the main results, we examine the MLM effect only for _in-domain training_ and _sequential transfer_. We achieve this by adding token-level MLM as an auxiliary training objective through an extra MLM head in all model variants. The head is active and its parameters are updated during training, but it is not used during inference. The results of our MLM experiments are in Figure 2. _Sequential transfer_ proved to be more efficient than _in-domain training_. On average, MLM combined with relations in the _implicit_ model under the _sequential transfer_ regime outperforms the best results without MLM. Only for EVEXTRA is there no difference between using relations or not once MLM is incorporated into the _sequential transfer_.
### The Effect of the OIE System
Finally, to examine the effect of using only one specific rule-based OIE system, we replace MinIE
Figure 2: TD domain transfer micro F1 scores when transferring from MAVEN as a source to ACE 2005, EDNYT, and EVEXTRA as targets. Brackets next to the target dataset report in-domain performance when using all available target training data. The upper plots show results for training starting from the vanilla or multi-task transformer (_in-domain training_). Lower plots show results for training starting from the vanilla or multi-task transformer fine-tuned for TD on MAVEN source training data (_sequential transfer_). _Dash-dotted_ models used masked language modeling (MLM) on target domain training data as an auxiliary training objective. X-axis shots are shown on an ordinal scale. The _Implicit_ and _Explicit_ setups include MinIE relation labels in training, while _Vanilla_ does not. All scores were obtained by averaging over three seeds, and all few-shot experiments were additionally averaged across five different few-shot samples.
with Stanford OIE as a relation extractor. We post-process the relations in the same manner as for MinIE. The experiments are conducted without MLM, for the _in-domain training_ and _sequential transfer_ training regimes. We gather the results in Table 3. The results show that the gains from relations in the _implicit_ and _explicit_ multi-task training designs are not due to the quality of the MinIE extractions and persist for the Stanford OIE system. The difference between using MinIE and Stanford OIE is insignificant: one can achieve similar, if not almost identical, results with each extractor and a shared post-processing procedure.
## 5 Conclusion
We showed that OIE relations could be utilized to improve the TD domain transfer in zero- and few-shot scenarios. The best improvements were achieved with _implicit multi-task_ model and _sequential transfer_ training regime. We also demonstrated that more noticeable gains could be achieved when combining OIE relations with MLM as an auxiliary task. This is especially evident for the models pre-trained with TD task on the source domain and with MLM training objective on the target domain in the _implicit multi-task_ model design. Replacing MinIE with Stanford OIE revealed that gains on the target domain for the TD task are not dependent on a specific rule-based OIE system. To further exploit the properties of OIE systems, future work should consider experimenting with more datasets and diverse domains like cybersecurity or biomedical, evaluating multiple OIE systems, exploring different transfer directions, and applying the coupling concept to other NLP tasks such as event argument detection, part-of-speech tagging, or named entity recognition. Leveraging OIE extractors on various data sources has the potential to enhance overall performance on various in and out-of-domain NLP tasks.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{**ACE 2005 (0.706)**} & \multicolumn{4}{c}{**EDNYT (0.702)**} & \multicolumn{4}{c}{**EVEXTRA (0.893)**} \\ \cline{2-13}
**Training Regime** & \multicolumn{3}{c}{**MiniIE**} & \multicolumn{3}{c}{**Stanford OIE**} & \multicolumn{3}{c}{**MiniIE**} & \multicolumn{3}{c}{**Stanford OIE**} & \multicolumn{3}{c}{**MiniIE**} & \multicolumn{3}{c}{**Stanford OIE**} \\ \cline{3-13} & & **Implicit** & **Explicit** & **Implicit** & **Explicit** & **Implicit** & **Explicit** & **Implicit** & **Explicit** & **Implicit** & **Explicit** & **Explicit** \\ \hline \multirow{3}{*}{\begin{tabular}{c} **Training Regime** \\ \end{tabular} } & 0-Shot & 0.237 & 0.240 & 0.237 & **0.242** & 0.399 & **0.408** & 0.401 & 0.406 & 0.650 & 0.653 & 0.650 & 0.657 \\ \cline{2-13} & 5-Shot & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ & 10-Shot & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ & 50-Shot & 0.466 & 0.417 & **0.467** & 0.446 & 0.601 & 0.597 & 0.601 & **6.055** & 0.774 & 0.757 & **0.775** & 0.765 \\ & 100-Shot & **0.529** & 0.511 & **0.529** & 0.515 & 0.632 & 0.611 & **6.633** & 0.615 & 0.812 & 0.801 & **0.814** & 0.805 \\ & 250-Shot & **0.569** & 0.550 & 0.550 & 0.557 & **0.654** & 0.642 & 0.652 & 0.638 & **0.847** & 0.835 & 0.846 & 0.840 \\ & 500-Shot & **0.600** & 0.584 & 0.598 & 0.585 & 0.658 & **0.666** & 0.657 & 0.662 & **0.862** & 0.854 & 0.861 & 0.852 \\ \hline \multirow{3}{*}{
\begin{tabular}{c} **Training Regime** \\ \end{tabular} } & 5-Shot & 0.294 & 0.276 & **0.296** & 0.263 & **0.283** & **0.466** & 0.448 & 0.468 & 0.464 & **0.661** & 0.653 & **0.661** & 0.658 \\ & 10-Shot & 0.374 & 0.330 & **0.375** & 0.350 & **0.521** & 0.490 & 0.520 & 0.512 & **0.693** & 0.680 & **0.693** & 0.688 \\ & 50-Shot & **0.506** & 0.463 & **0.506** & 0.476 & **0.592** & 0.568 & 0.591 & 0.570 & **0.764** & 0.741 & 0.763 & 0.747 \\ & 100-Shot & **0.548** & 0.501 & **0.548** & 0.525 & **0.616** & 0.584 & 0.615 & 0.587 & 0.795 & 0.773 & **0.796** & 0.775 \\ & 250-Shot & **0.577** & 0.556 & **0.577** & 0.568 & 0.644 & 0.607 & **0.647** & **0.602** & **0.835** & 0.813 & 0.834 & 0.818 \\ & 500-Shot & **0.609** & 0.586 & 0.602 & 0.584 & 0.652 & 0.640 & **0.653** & 0.627 & **0.857** & 0.836 & 0.836 & 0.845 \\ \hline \hline \end{tabular}
\end{table}
Table 3: TD domain transfer micro F1 scores when transferring from MAVEN as a source to ACE2005, EDNYT, and EVEXTRA as targets w.r.t. different OIE systems. Brackets next to the target dataset report in-domain performance when using all available target training data. _In-domain training_ denotes training from vanilla or multi-task transformer, and _sequential training_ refers to training from vanilla or multi-task transformer fine-tuned for TD on MAVEN source training data. The best results by dataset and model for each training regime and OIE extractor are in **bold**. The results are presented for MiniIE and Stanford OIE systems for _Implicit_ and _Explicit_ model designs, which leverage the OIE relations. All scores were obtained by averaging over three seeds, and all few-shot experiments were additionally averaged across five different few-shot samples.
|
2307.05398
|
Long-range interactions in a quantum gas mediated by diffracted light
|
A BEC interacting with an optical field via a feedback mirror can be a
realisation of the quantum Hamiltonian Mean Field (HMF) model, a paradigmatic
model of long-range interactions in quantum systems. We demonstrate that the
self-structuring instability displayed by an initially uniform BEC can evolve
as predicted by the quantum HMF model, displaying quasiperiodic "chevron"
dynamics for strong driving. For weakly driven self-structuring, the BEC and
optical field behave as a two-state quantum system, regularly oscillating
between a spatially uniform state and a spatially periodic state. It also
predicts the width of stable optomechanical droplets and the dependence of
droplet width on optical pump intensity. The results presented suggest that
optical diffraction-mediated interactions between atoms in a BEC may be a route
to experimental realisation of quantum HMF dynamics and a useful analogue for
studying quantum systems involving long-range interactions.
|
Gordon Robb, Josh Walker, Gian-Luca Oppo, Thorsten Ackemann
|
2023-07-11T15:59:12Z
|
http://arxiv.org/abs/2307.05398v1
|
# Long-range interactions in a quantum gas mediated by diffracted light
###### Abstract
A BEC interacting with an optical field via a feedback mirror can be a realisation of the quantum Hamiltonian Mean Field (HMF) model, a paradigmatic model of long-range interactions in quantum systems. We demonstrate that the self-structuring instability displayed by an initially uniform BEC can evolve as predicted by the quantum HMF model, displaying quasiperiodic "chevron" dynamics for strong driving. For weakly driven self-structuring, the BEC and optical field behave as a two-state quantum system, regularly oscillating between a spatially uniform state and a spatially periodic state. It also predicts the width of stable optomechanical droplets and the dependence of droplet width on optical pump intensity. The results presented suggest that optical diffraction-mediated interactions between atoms in a BEC may be a route to experimental realisation of quantum HMF dynamics and a useful analogue for studying quantum systems involving long-range interactions.
Systems involving long-range interactions, such as those occurring in gravitational physics or plasma physics, display several unusual behaviours e.g. extremely slow relaxation and existence of quasi-steady states [1]. Recently, there has been significant interest in quantum systems involving long range interactions e.g. ion chains, Rydberg gases and cold atomic gases enclosed in optical cavities [2].
The Hamiltonian Mean Field (HMF) model [1] was introduced as a generic classical model of long-range interacting systems e.g. self-gravitating systems [3]. It involves \(N\) particles on a ring which experience a pairwise cosine interaction. It also arises as a model of a system of X-Y rotors coupled with infinite range. Extension of the HMF model to describe quantum systems was first carried out by Chavanis [4; 5] and the dynamics of this quantum HMF model was investigated more recently by Plestid & O'Dell [6; 7] who demonstrated that the model exhibited violent relaxation of an initially homogeneous state to a structured state and possessed bright soliton solutions.
Cold atomic gases enclosed in cavities exhibit phenomena demonstrating universal behaviours common to many different physical systems e.g. the behaviour of a cold, thermal gas in a cavity undergoing viscous momentum damping induced by optical molasses beams is related to the Kuramoto model [8; 9] which describes synchronisation of globally coupled phase oscillators. It has been shown [10; 11] that in the absence of momentum damping, a thermal gas in a cavity can exhibit dynamics similar to that of the classical HMF model. In the case of a quantum degenerate gas, e.g. a Bose-Einstein Condensate (BEC), its dynamical behaviour in a cavity has been mapped onto the Dicke-model describing coupled spins and superradiance [12], but to date no experimental realisation of the quantum HMF model has been described or proposed.
Figure 1: Schematic diagram of the single mirror feedback (SMF) configuration showing a BEC interacting with a forward propagating optical field (\(F\)) and a retroreflected/backward propagating optical field (\(B\)).
Here we investigate a system consisting of a BEC interacting with an optical field via single mirror feedback (SMF) as shown schematically in Fig. 1. In this BEC-SMF system, coupling between atoms arises due to diffraction, involves many transverse modes and optical forces directed perpendicular to the propagation direction of the optical fields. This is significantly different from cavity systems (such as e.g. [9; 12]), where the dominant coupling between atoms arises from interference between a pump field and cavity modes. We show that under certain conditions, the equations describing the dynamics of the BEC and the optical fields can be mapped onto the quantum HMF model [4; 5; 6]. Using this connection, we then investigate dynamical instabilities of initially homogeneous distributions of BEC density and optical intensity and also the existence of spatially localised states reminiscent of quantum droplets observed in dipolar BECs [13; 14]. The model we use to describe the BEC-SMF system was originally studied in [15] as an extension of that used to study self-structuring of a classical, thermal gas, observed experimentally in [16], with the thermal gas replaced with a BEC. We consider a BEC with negligible atomic collisions and describe the evolution of the BEC wavefunction, \(\Psi(x,t)\), with the Schrödinger equation:
\[i\hbar\frac{\partial\Psi(x,t)}{\partial t}=-\frac{\hbar^{2}}{2m}\frac{\partial^{2 }\Psi(x,t)}{\partial x^{2}}+\frac{\hbar\delta}{2}s(x,t)\Psi(x,t) \tag{1}\]
where \(m\) is the atomic mass, \(\delta=\omega-\omega_{a}\) is detuning, \(s(x,t)=|F|^{2}+|B(x,t)|^{2}\) is the atomic saturation parameter due to the forward and backward optical fields where \(|F|^{2}=\frac{I_{F}}{I_{sat}\Delta^{2}}\), \(|B|^{2}=\frac{I_{B}}{I_{sat}\Delta^{2}}\) and \(I_{F}\), \(I_{B}\) are the intensities of the forward (F) and backward (B) fields respectively. \(I_{sat}\) is the saturation intensity on resonance, \(\Delta=\frac{2\delta}{\Gamma}\) and \(\Gamma\) is the decay rate of the atomic transition. It has been assumed that \(|\Delta|\gg 1\) and that consequently \(s\ll 1\) so that the atoms remain in their ground state. In addition, longitudinal grating effects due to interference between the counterpropagating optical fields are neglected.
In order to describe the optical field in the gas we assume that the gas is sufficiently thin that diffraction can be neglected, so that the forward field transmitted through the cloud is
\[F_{tr}=\sqrt{p_{0}}\exp{(-i\chi_{0}n(x,t))} \tag{2}\]
where \(p_{0}=|F(z=0)|^{2}\) is the scaled pump intensity, \(\chi_{0}=\frac{b_{0}}{2\Delta}\) is the susceptibility of the BEC, \(b_{0}\) is the optical thickness of the BEC at resonance and \(n(x,t)=|\Psi(x,t)|^{2}\) is the local BEC density, which for a BEC of uniform density is \(n(x,t)=1\).
The backward field, \(B\), at the BEC completes the feedback loop. As the field propagates a distance \(2d\) from the BEC to the mirror and back, optical diffraction plays a critical role by converting phase modulations to amplitude modulations and consequently optical dipole forces. The relation between the Fourier components of the forward and backward fields at the BEC is
\[B(q)=\sqrt{R}F_{tr}(q)e^{i\frac{q^{2}d}{k_{0}}} \tag{3}\]
where \(R\) is the mirror reflectivity, \(q\) is the transverse wavenumber and \(k_{0}=\frac{2\pi}{\lambda_{0}}\). It was shown in [15] that this system exhibits a self-structuring instability where the optical fields and BEC density develop modulations with a spatial period of \(\Lambda_{c}=\frac{2\pi}{q_{c}}\), where the critical wavenumber, \(q_{c}\), is
\[q_{c}=\sqrt{\frac{\pi}{2}\frac{k_{0}}{d}}. \tag{4}\]
The reason for this instability is that BEC density modulations (which produce refractive index modulations) with spatial frequency \(q_{c}\), produce phase modulations in \(F_{tr}\) which are in turn converted into intensity modulations of \(B\) (see Eq. (3)). These intensity modulations produce dipole forces which reinforce the density modulations, resulting in positive feedback and instability of the initial, homogeneous state. A condition of this instability is that the pump intensity exceeds a threshold value, \(p_{th}\)[15], which for \(q=q_{c}\) can be written as
\[p_{th}=\frac{2\omega_{r}}{b_{0}R\Gamma}, \tag{5}\]
where \(\omega_{r}=\frac{\hbar q_{c}^{2}}{2m}\).
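As a rough numerical illustration of Eqs. (1)-(3), the feedback loop can be integrated with a standard split-step scheme: the thin-medium phase imprint of Eq. (2), the diffractive propagation of Eq. (3) in Fourier space, and the resulting dipole potential in Eq. (1). The sketch below uses scaled units (lengths in units of \(\Lambda_{c}\), \(\omega_{r}=1\)); all parameter values are placeholders chosen to sit above the threshold of Eq. (5), not those used for the figures in this work.

```python
# Minimal split-step sketch of the BEC-SMF feedback loop, Eq. (1)-(3), in scaled units.
# All parameter values are illustrative placeholders; the pump must exceed the
# threshold of Eq. (5) for the self-structuring instability to develop.
import numpy as np

nx, L = 512, 8.0                                  # grid points, box length in units of Lambda_c
x = np.linspace(0.0, L, nx, endpoint=False)
q = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)
q_c = 2 * np.pi                                   # critical wavenumber for Lambda_c = 1
d_over_k0 = (np.pi / 2) / q_c**2                  # Eq. (4) rearranged: q_c^2 d / k_0 = pi/2

omega_r, R = 1.0, 1.0
delta, chi0, p0 = -100.0, -0.01, 2.0              # gives epsilon = 2*delta*R*p0*chi0 = 4*omega_r
dt, nsteps = 2e-4, 40000

rng = np.random.default_rng(1)
psi = (1.0 + 1e-3 * (rng.random(nx) - 0.5)).astype(complex)   # weakly perturbed uniform BEC

for _ in range(nsteps):
    n = np.abs(psi)**2
    F_tr = np.sqrt(p0) * np.exp(-1j * chi0 * n)                                   # Eq. (2)
    B = np.fft.ifft(np.sqrt(R) * np.fft.fft(F_tr) * np.exp(1j * q**2 * d_over_k0))  # Eq. (3)
    s = p0 + np.abs(B)**2                                                         # |F|^2 + |B|^2
    psi *= np.exp(-1j * 0.5 * delta * s * dt / 2)                                 # half potential step
    psi = np.fft.ifft(np.exp(-1j * omega_r * (q / q_c)**2 * dt) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-1j * 0.5 * delta * s * dt / 2)                                 # half potential step

M = np.abs(np.mean(np.abs(psi)**2 * np.exp(1j * q_c * x)))   # order parameter at q_c, cf. Eq. (12)
print(f"order parameter after evolution: M = {M:.3f}")
```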
The optomechanical self-structuring exhibited by the BEC-SMF model of Eq. (1)-(3) derived in [15] can be reduced to that of the quantum HMF model, originally proposed in [4; 5] and revisited in [6; 7]. We express the optical intensity, \(s(x,t)\), in terms of \(n\) (density) using Eq. (2). Assuming \(\chi_{0}n\ll 1\) as in [17], then \(F_{tr}\approx\sqrt{p_{0}}(1+i\chi_{0}n(x,t)).\) It is assumed that the BEC density and (backward) optical field consist of a spatially uniform component and a spatial modulation with spatial frequency, \(q_{c}\), so
\[B(q_{c})=\sqrt{R}F_{tr}(q_{c})e^{i\frac{q_{c}^{2}d}{k_{0}}}=i\sqrt{R}F_{tr}(q_ {c}) \tag{6}\]
i.e. phase modulation of \(F_{tr}\) becomes amplitude modulation of \(B\). Expressing
\[F_{tr}(x,t)=F_{tr}^{(0)}+F_{tr}^{(q_{c})}e^{iq_{c}x}+F_{tr}^{(- q_{c})}e^{-iq_{c}x}\] \[n(x,t)=1+n^{(q_{c})}e^{iq_{c}x}+n^{(q_{c})}{}^{*}e^{-iq_{c}x}\]
then substitution of the above into Eq. (2) shows that
\[\left.\begin{array}{c}F_{tr}^{(0)}=\sqrt{p_{0}}(1+i\chi_{0})\approx\sqrt{p_{0 }}\\ F_{tr}^{(q_{c})}=i\sqrt{p_{0}}\chi_{0}n^{(q_{c})}\\ F_{tr}^{(-q_{c})}=i\sqrt{p_{0}}\chi_{0}n^{(q_{c})}{}^{*}\end{array}\right\}. \tag{7}\]
Using a similar expansion of \(B(x,t)\) and then Eq. (6),(7) produces
\[B=\sqrt{Rp_{0}}-\sqrt{Rp_{0}}\chi_{0}n^{(q_{c})}e^{iq_{c}x}-\sqrt{Rp_{0}}\chi_ {0}n^{(q_{c})}{}^{*}e^{-iq_{c}x}.\]
Writing \(n^{(q_{c})}=|n^{(q_{c})}|e^{-i\phi}\), then
\[B=\sqrt{Rp_{0}}-2\sqrt{Rp_{0}}\chi_{0}|n^{(q_{c})}|\cos(q_{c}x-\phi), \tag{8}\]
which allows the optical field intensities in Eq. (1) to be written in terms of the BEC density:
\[s(x,t)\approx p_{0}+Rp_{0}-4Rp_{0}\chi_{0}|n^{(q_{c})}|\cos(q_{c}x-\phi). \tag{9}\]
Note that if the assumption \(\chi_{0}n\ll 1\) was relaxed, additional terms with spatial frequency \(2q_{c}\) would also be present. As \(n^{(q_{c})}\) is described by \(n^{(q_{c})}=\frac{1}{L}\int_{0}^{L}|\Psi(x,t)|^{2}e^{-iq_{c}x}\ dx\), where \(L\) is the BEC length then
\[|n^{(q_{c})}|\cos(q_{c}x-\phi)=\frac{1}{2\pi}\int_{0}^{2\pi}|\Psi(\theta^{\prime},t)|^{2}\cos(\theta-\theta^{\prime})\ d\theta^{\prime}\]
where \(\theta=q_{c}x\) and it has been assumed that \(\Psi\) is spatially periodic with period \(\Lambda_{c}\). Consequently, Eq. (9) can be written as
\[s(x,t)=p_{0}+Rp_{0}-4Rp_{0}\chi_{0}\Phi(\theta,t) \tag{10}\]
where the non-local potential, \(\Phi(\theta,t)\), is \(\Phi(\theta,t)=\frac{1}{2\pi}\int_{0}^{2\pi}|\Psi(\theta^{\prime},t)|^{2}\cos(\theta-\theta^{\prime})\ d\theta^{\prime}.\) The constant term in Eq. (10) results in a constant potential energy contribution to Eq. (1), which can be eliminated by transforming \(\Psi\) via \(\Psi=\Psi^{\prime}\exp(-i\frac{(1+R)p_{0}\delta}{2}t)\), so that the Schrödinger equation from Eq. (1) becomes a Gross-Pitaevskii equation (GPE) analogue
\[i\frac{\partial\Psi^{\prime}}{\partial t}=-\omega_{r}\frac{\partial^{2}\Psi^{ \prime}}{\partial\theta^{2}}-\epsilon\Phi(\theta,t)\Psi^{\prime} \tag{11}\]
where \(\epsilon=2\delta Rp_{0}\chi_{0}=\frac{Rp_{0}b_{0}\Gamma}{2}=\frac{p_{0}}{p_{ th}}\omega_{r}.\) Eq.(11) has the same effective GPE-like form as that of the quantum HMF model [4; 6]. Note that \(\epsilon>0\) always, which corresponds to the case of the ferromagnetic quantum HMF model. The order parameter or magnetization, \(M\), is essentially the Fourier component of the BEC density with spatial frequency \(q_{c}\)[5; 6],
\[M=\left|\frac{1}{2\pi}\int_{0}^{2\pi}|\Psi|^{2}e^{i\theta}\ d\theta\right|. \tag{12}\]
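For concreteness, the non-local potential \(\Phi(\theta,t)\) and the magnetization \(M\) can both be obtained from the single Fourier component of the density at \(q_{c}\), and Eq. (11) can be advanced with the same split-step scheme as above. The following is a minimal sketch of the quantum HMF dynamics on a periodic \(\theta\) grid; the grid size, time step, and the ratio \(\epsilon/\omega_{r}\) are illustrative choices, not the values behind the figures of this work.

```python
# Sketch: non-local HMF potential Phi(theta), magnetization M of Eq. (12),
# and split-step evolution of the GPE-like Eq. (11). Parameters are illustrative.
import numpy as np

ntheta = 256
theta = np.linspace(0.0, 2 * np.pi, ntheta, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(ntheta, d=2 * np.pi / ntheta)   # integer wavenumbers on the ring

omega_r, epsilon, dt = 1.0, 2.0, 1e-3          # epsilon > omega_r: above threshold, cf. Eq. (5)

def hmf_potential(psi):
    """Phi(theta) = (1/2pi) int |psi(theta')|^2 cos(theta - theta') dtheta', plus M."""
    c = np.mean(np.abs(psi)**2 * np.exp(-1j * theta))    # q_c Fourier component of the density
    return np.real(c * np.exp(1j * theta)), np.abs(c)    # potential and magnetization M

psi = (1.0 + 1e-3 * np.cos(theta)).astype(complex)       # weakly perturbed uniform state

for _ in range(10000):
    Phi, M = hmf_potential(psi)
    psi *= np.exp(1j * epsilon * Phi * dt / 2)                                   # half potential step
    psi = np.fft.ifft(np.exp(-1j * omega_r * k**2 * dt) * np.fft.fft(psi))       # kinetic step
    Phi, M = hmf_potential(psi)
    psi *= np.exp(1j * epsilon * Phi * dt / 2)                                   # half potential step

print(f"order parameter after evolution: M = {hmf_potential(psi)[1]:.3f}")
```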
In order to demonstrate that Eq. (1)-(3) can exhibit dynamical behaviour associated with the quantum HMF model, we consider two example cases: strong driving, far above threshold i.e. \(p_{0}\gg p_{th}\), and weak driving, just above threshold i.e. \(p_{0}\) only marginally exceeds \(p_{th}\). These cases of strong and weak driving can be interpreted physically as the regime where the structuring instability completely dominates the delocalising quantum pressure in the BEC, and the regime where the effects of quantum pressure are significant, respectively. In both cases we restrict the values of \(b_{0}\), \(\Delta\) etc. such that \(\chi_{0}n\ll 1\), for consistency with the assumption made when deriving Eq. (11) from Eq. (1)-(3). Fig. 2 shows an example of self-structuring displayed by the BEC-SMF model, Eq. (1)-(3), in the case where the system is driven strongly, far above the instability threshold i.e. \(p_{0}\gg p_{th}\). The system spontaneously develops a modulated optical intensity and modulated density with a spatial period of \(\Lambda_{c}\). The spatio-temporal distributions of the BEC density and optical intensity develop intricate "chevron" structures similar to those observed in [6] produced by a "quantum Jeans instability" [5].
Fig. 3 shows an example of self-structuring when the system is driven weakly, marginally above the instability threshold. Again, both the BEC and optical field spontaneously develop a modulation with a spatial period of \(\Lambda_{c}\), but the evolution of the system is qualitatively different from the strongly driven case shown in Fig. 2. In the weakly-driven case, the BEC density distribution consists of what were termed "monoclusters" in [6] and the chevrons are absent. The temporal behaviour is also different in the two cases. For weak driving, after development of the optical and BEC structures they disperse and reform regularly whereas in the strongly driven case the temporal behaviour is more complex, with a quasiperiodic sequence of dispersal and revival.
This mapping between the BEC-SMF model of Eq. (1)-(3) and the quantum HMF model when \(\chi_{0}n\ll 1\) allows us to gain some insight into the behaviour of the BEC-SMF system. It explains the similarity in the evolution of the BEC density shown in Fig. 2 with that displayed by the quantum HMF model in [6] i.e. the chevron structures. In the weakly driven regime, it allows additional insight if we assume a wavefunction of the form
\[\Psi(\theta,t)=c_{0}(t)+c_{1}(t)\cos(\theta) \tag{13}\]
i.e. representing two states, one of which, \(|0\rangle\), is spatially uniform, while the other, \(|1\rangle\), is spatially periodic with spatial period \(\Lambda_{c}\). Using this two-state ansatz, the effective GPE equation of the quantum HMF model in Eq. (11) can be rewritten as an equation for the order parameter or "magnetization", \(M\) (see supplementary material):
\[\left(\frac{dM}{dt}\right)^{2}+\frac{\epsilon^{2}}{2}M^{4}-\omega_{r}^{2} \left(\frac{\epsilon}{\omega_{r}}-1\right)M^{2}=0 \tag{14}\]
which has the solution
\[M(t)=\sqrt{2}\frac{\omega_{r}}{\epsilon}\sqrt{\frac{\epsilon}{\omega_{r}}-1} \;\text{sech}\left[\omega_{r}\sqrt{\frac{\epsilon}{\omega_{r}}-1}(t-t_{0})\right] \tag{15}\]
where \(t_{0}=\frac{\cosh^{-1}\left(\frac{\sqrt{2}\frac{\omega_{r}}{\epsilon}\sqrt{\frac{\epsilon}{\omega_{r}}-1}}{M_{0}}\right)}{\omega_{r}\sqrt{\frac{\epsilon}{\omega_{r}}-1}}\) and \(M_{0}=M(t=0)\).
Fig. 4 (inset) shows the evolution of \(M\) as calculated from Eq. (15) and from the BEC-SMF model (Eq. (1)-(3)), when the system is driven weakly. The analytical expression for \(M\) in Eq. (15) and the numerical calculation agree well for the first period of the evolution, which in the numerical simulation then repeats periodically as in fig. 3. The behaviour of the system in the weakly driven regime is therefore similar to that of a two-state quantum system where the BEC density (and consequently the optical intensity) oscillates spontaneously in time between a spatially uniform state and a spatially structured state. Eq. (15) predicts that the maximum value of the order parameter, \(M\) scales with distance from threshold \(\propto(p_{0}-p_{th})^{1/2}\), similar to the mean-field Ising model. Fig. 4 shows that this scaling behaviour is produced by the BEC self-structuring model (Eq. (1)-(3)).
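Evaluating Eq. (15) requires nothing beyond its closed form; the short sketch below does so for a weakly driven case, with the seed value \(M_{0}\) and the ratio \(\epsilon/\omega_{r}\) chosen purely for illustration rather than taken from the simulations reported here.

```python
# Sketch: evaluating the two-state solution Eq. (15) for the order parameter M(t).
# The seed value M0 and the ratio epsilon/omega_r are placeholder choices.
import numpy as np

omega_r, epsilon, M0 = 1.0, 1.05, 1e-3                                  # weak driving
amp = np.sqrt(2) * (omega_r / epsilon) * np.sqrt(epsilon / omega_r - 1.0)   # peak value of M
rate = omega_r * np.sqrt(epsilon / omega_r - 1.0)
t0 = np.arccosh(amp / M0) / rate

t = np.linspace(0.0, 2 * t0, 400)
M = amp / np.cosh(rate * (t - t0))                                      # Eq. (15)
print(f"M_max = {amp:.4f}, reached at t0 = {t0:.2f}")
```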
In addition to the formation of global structures, i.e. spatially periodic patterns, it has been shown that spatially localised structures can also arise in the BEC-SMF system [17]. These structures were termed "droplets" in [17] due to the similarity with quantum droplets in dipolar BECs [13; 14]. An example of a stable droplet in the BEC-SMF system is shown in Fig. 5 as calculated from Eq. (1)-(3). It can be seen that a BEC of width smaller than \(\Lambda_{c}\) maintains its shape due to its interaction with the optical field which it generates. The existence of soliton solutions for the quantum HMF model was discovered in [7], which showed that they are similar to strongly localised gap solitons which can exist for BECs in optical lattices [18], with the difference that in the quantum HMF model the lattice is not externally imposed, but self-generated by the BEC. Here we show that the mapping of the BEC-SMF system as described by Eq. (1)-(3) to the quantum HMF model as described by Eq. (11) allows determination of the width of the droplet and its dependence on the parameters of the system e.g. pump intensity, \(p_{0}\).
Assuming that the profile of the BEC density is Gaussian with width \(\sigma_{x}\) i.e. \(\Psi(x)\propto\exp(-\frac{x^{2}}{2\sigma_{x}^{2}})\), then the value of \(\sigma_{x}\) which minimises the energy functional, \(E(\sigma_{x})\), defined as
\[E(\sigma_{x})=\frac{1}{2\pi}\int_{0}^{2\pi}\Psi^{*}\left[-\omega_{r}\frac{\partial^{2}}{\partial\theta^{2}}-\epsilon\Phi(\theta,t)\right]\Psi\;d\theta \tag{16}\]
can be shown to be (see supplementary material)
\[\frac{\sigma_{x}}{\Lambda_{c}}=\frac{1}{2\pi}\left(\frac{\omega_{r}}{\epsilon }\right)^{1/4}\propto\left(p_{0}b_{0}R\right)^{-1/4}. \tag{17}\]
This is consistent with a more rigorous derivation of soliton solutions for the quantum HMF model [7], with density profiles described by parabolic cylinder functions of characteristic width \(\propto\epsilon^{-1/4}\). The predicted dependence of droplet width, \(\sigma_{x}\), on pump intensity, \(p_{0}\), is confirmed in Fig. 5, where the stable droplet width is calculated from Eq. (1)-(3) for different pump intensities and is plotted against \(p_{0}\). The power-law scaling \(\sigma_{x}\propto p_{0}^{-1/4}\) predicted by energy minimisation of the quantum HMF model agrees well with the results of the simulations so long as \(\Delta\) is sufficiently large that condition \(\chi_{0}n\ll 1\) is well satisfied. This scaling behaviour shows that the profile and characteristic width of these optomechanical droplets are more closely related to those of localised gap solitons [7; 18] in a self-generated lattice than to other types of localised structures e.g. non-linear Schrodinger equation solitons or quantum droplets observed in dipolar BECs [13; 14].
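The variational estimate behind Eq. (17) can be checked numerically by evaluating Eq. (16) on a \(\theta\) grid for a Gaussian ansatz and minimising over its width. The sketch below does this for a few values of \(\epsilon\) (which is proportional to \(p_{0}\)); the grid resolution, the scan range, and the normalisation \(\frac{1}{2\pi}\int|\Psi|^{2}d\theta=1\) are assumptions of this sketch, and the numerical minimum only approaches the \((\omega_{r}/\epsilon)^{1/4}\) scaling as \(\epsilon/\omega_{r}\) grows.

```python
# Sketch: numerically minimising the energy functional Eq. (16) over a Gaussian
# ansatz and comparing with the droplet-width scaling of Eq. (17).
import numpy as np

ntheta = 4096
theta = np.linspace(-np.pi, np.pi, ntheta, endpoint=False)
dtheta = theta[1] - theta[0]
omega_r = 1.0

def energy(sigma_theta, eps):
    psi = np.exp(-theta**2 / (2 * sigma_theta**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dtheta / (2 * np.pi))      # (1/2pi) int |psi|^2 = 1
    dpsi = np.gradient(psi, dtheta)
    kinetic = omega_r * np.sum(np.abs(dpsi)**2) * dtheta / (2 * np.pi)
    c = np.sum(np.abs(psi)**2 * np.exp(-1j * theta)) * dtheta / (2 * np.pi)
    Phi = np.real(c * np.exp(1j * theta))                               # self-generated lattice
    potential = -eps * np.sum(Phi * np.abs(psi)**2) * dtheta / (2 * np.pi)
    return kinetic + potential

for eps in (10.0, 40.0, 160.0):                      # eps is proportional to p0, cf. Eq. (11)
    sigmas = np.linspace(0.02, 1.5, 400)
    best = sigmas[np.argmin([energy(s, eps) for s in sigmas])]
    predicted = (omega_r / eps)**0.25                # Eq. (17), up to the 1/(2*pi) factor in sigma_x
    print(f"eps={eps:6.1f}: numerical sigma_theta={best:.3f}, (omega_r/eps)^(1/4)={predicted:.3f}")
```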
Figure 4: Maximum value of \(M\) as a function of \(p_{0}\), calculated from Eq. (1)-(3). All other parameters used are as for Fig. 3. Inset shows evolution of \(M\) calculated from Eq. (15) (dashed line), and from a numerical solution of Eq. (1)-(3)) (full line), for one period of oscillation when the system is driven weakly (\(p_{0}=1.05p_{th}=2.098\times 10^{-10}\)). All other parameters used are as for Fig. 3.
In conclusion, we have shown that a BEC interacting with an optical field via a feedback mirror can be a realisation of the quantum HMF model. We demonstrated that the self-structuring of an initially uniform BEC displays features observed previously in the quantum HMF model: for strong driving, chevrons appear in the BEC density; for weak driving, the BEC behaves as a two-state quantum system, with the order parameter or magnetisation evolving as a series of sech pulses. The mapping to the quantum HMF model also allowed prediction of the dependence of BEC droplet width on pump intensity, which agreed well with simulations of the BEC-SMF model. These results suggest that optical diffraction-mediated interaction between atoms in a BEC may be a promising candidate for experimental realisation of quantum HMF dynamics and consequently be a versatile testing ground for models of quantum systems involving long-range interactions.
We acknowledge useful discussions with G. Morigi.
|
2305.13383
|
Sensitivities to feebly interacting particles: public and unified
calculations
|
The idea that new physics could take the form of feebly interacting particles
(FIPs) - particles with a mass below the electroweak scale, but which may have
evaded detection due to their tiny couplings or very long lifetime - has gained
a lot of traction in the last decade, and numerous experiments have been
proposed to search for such particles. It is important, and now very timely, to
consistently compare the potential of these experiments for exploring the
parameter space of various well-motivated FIPs. The present paper addresses
this pressing issue by presenting an open-source tool to estimate the
sensitivity of many experiments - located at Fermilab or the CERN's SPS, LHC,
and FCC-hh - to various models of FIPs in a unified way: the Mathematica-based
code SensCalc.
|
Maksym Ovchynnikov, Jean-Loup Tastet, Oleksii Mikulenko, Kyrylo Bondarenko
|
2023-05-22T18:00:35Z
|
http://arxiv.org/abs/2305.13383v3
|
# Sensitivities to feebly interacting particles: public and unified calculations
###### Abstract
The idea that new physics could take the form of feebly interacting particles (FIPs) -- particles with a mass below the electroweak scale, but which may have evaded detection due to their tiny couplings or very long lifetime -- has gained a lot of traction in the last decade, and numerous experiments have been proposed to search for such particles. It is important, and now very timely, to consistently compare the potential of these experiments for exploring the parameter space of various well-motivated FIPs. The present paper addresses this pressing issue by presenting an open-source tool to estimate the sensitivity of many experiments -- located at Fermilab or at the CERN's SPS, LHC, and FCC-hh -- to various models of FIPs in a unified way: the Mathematica-based code SensCalc.
## 1 Introduction
The well-known shortcomings of the Standard Model suggest to us the existence of new physics "Beyond the Standard Model" (BSM), that is generally expected to involve new particles. There is currently no clear theoretical guidance, nor experimental hints, about the mass of the hypothetical new particles, which could range from sub-eV all the way up to the Planck scale. Particles with a mass below the electroweak scale are of particular interest experimentally, since they may be numerously produced at accelerators. Past experiments have already excluded the largest values of the couplings for such particles; hence they are called _feebly interacting particles_, or _FIPs_ for short. FIPs may be searched for both at the main detectors of colliders (ATLAS, CMS and LHCb at the LHC, or their equivalents at future colliders such as the FCC-hh) which are located very close to the collision point, or at so-called _lifetime-frontier_ experiments, which re-use existing facilities or infrastructure and place a displaced decay volume near an interaction point or target. Lifetime-frontier
experiments may be broadly split into two classes [1]: collider-based, which consist of a decay volume placed near the interaction points of ATLAS, CMS, and LHCb, and extracted-beam experiments, which use an extracted beam line hitting a target.
During the last few years, many lifetime-frontier experiments have been proposed. Among extracted-beam experiments, we can list SHiP [2; 3], SHADOWS [4], and HIKE [5] at the SPS, and DUNE [6; 7] and DarkQuest [8] at Fermilab. The proposed LHC-based experiments include MATHUSLA [9] and FACET [10], associated with CMS; FASER [11], SND@LHC [12] (together with their upgrades, AdvSND and FASER2) and ANUBIS [13], close to the ATLAS interaction point; Codex-b [14] near LHCb; and AL3X [15] at ALICE. Furthermore, lifetime-frontier experiments will likely remain part of the physics program of future colliders, such as the FCC-hh [16].
In order to evaluate the potential of those experiments to search for generic FIPs, the PBC initiative has proposed [1] a few benchmark models. They include dark photons, millicharged particles, dark scalars, heavy neutral leptons, and axion-like particles coupled to various SM particles.
While some of the experiments from the above list are already running, many are still at the status of proposals. Their design is not finalized yet, and is still undergoing optimization. Their sensitivity can be optimized by focusing on two key aspects: increasing the rate of events that contain FIPs, and reducing the Standard Model (SM) backgrounds. Studying the background requires knowing the detailed specifications of the experimental setup, background-reducing systems, and surrounding infrastructure. As a result, full simulations are required, which accurately trace each event starting from the initial proton collision and ending with the interactions of the background particles with the detector material. Most of the experimental proposals claim to achieve zero background level. In contrast, the evaluation of the FIP event rate is comparatively less affected by these complexities. This is the case, in particular, when the FIPs are produced at the collision point. They would then propagate through the infrastructure without being affected (due to their tiny interaction strength), and decay or scatter inside the decay volume with some tiny probability. If the reaction products reach the detector and satisfy some simple kinematic cuts, they could typically be detected with \(\approx 1\) efficiency. Therefore, the sensitivity1 of a given experiment to FIPs is determined mainly by 1) the distribution of FIPs at the
facility housing the experiment and 2) the geometry of the experiment itself.
Despite the relative simplicity of estimating the sensitivity to FIPs, there exists a caveat that can make their comparison challenging: the sensitivity estimates performed by the various experimental collaborations are not publicly accessible. This is crucial for three reasons. First, there is often no unique description of the production and decay of a given FIP in the literature. This is related to either theoretical uncertainties in the description of the FIP phenomenology, or different conventions in the definition of the model. As a result, different collaborations can end up using different FIP descriptions; sometimes, even the definition of the FIP coupling is different (see Appendix A). Secondly, due to the rapid pace of change as the experiment's design is being optimized, there may exist a mismatch between, on the one hand, the experimental setup and/or the assumptions used and, on the other hand, the reported sensitivity, even within the same document (see Fig. 4 and the corresponding discussion). Indeed, in order to update the sensitivity while the setup is undergoing optimization, collaborations would need to re-launch full-scale simulations, which require a lot of time, computational resources and person-power. Finally, these calculations are black-box: they do not provide a qualitative understanding of the sensitivity. This problem becomes especially important when comparing the sensitivities of various experiments to understand which one is better suited to probe a given region of the FIP parameter space.
To address these issues, a public tool that can calculate the sensitivity of various experiments to FIPs -- in a unified and transparent way -- is required. Several publicly available packages can already perform such sensitivity calculations [17; 18]. However, they are limited to a specific type of facility: either beam dump experiments or colliders. This paper presents the Mathematica[19] code SensCalc[20], which can evaluate the sensitivity of the various experiments proposed at LBNE, SPS, LHC, and FCC-hh to various FIPs.2 The code is based on a semi-analytic approach developed in Ref. [21] and further improved and cross-checked in Refs. [16; 22; 23] (see also [24; 25]), where the number of events is approximated by the integral of several quantities: the FIP angle-energy distribution, its decay probability, the geometric acceptance of FIPs, and the acceptance of its decay products. Most of these quantities can be accurately computed analytically, which is especially attractive as it improves the transparency of the computations.
Footnote 2: Available at [https://doi.org/10.5281/zenodo.7957785](https://doi.org/10.5281/zenodo.7957785).
The present paper is organized as follows. In Sec. 2, we discuss the semi-analytic method that we use for calculating the sensitivity, together with its validation and limitations. In Sec. 3, we provide a brief description of SensCalc, specifying the list of currently implemented experiments and models of FIPs. We also compare it with other publicly available packages for computing the sensitivity, as well as with SensMC[26], a simplified Monte-Carlo simulation that we have specifically developed
to validate SensCalc. Finally, we conclude in Sec. 4.
## 2 Semi-analytic approach to calculate sensitivities
### Method
This work concentrates on FIPs produced at the collision point (or close to it). In this case, the production is unaffected by the surrounding infrastructure. We calculate the number of events involving a decaying FIP using the following expression:
\[N_{\rm ev}\,=\,\sum_{i}N_{\rm prod}^{(i)}\int dEd\theta dz\,\ f^{(i)}(\theta,E) \cdot\epsilon_{\rm az}(\theta,z)\cdot\frac{dP_{\rm dec}}{dz}\cdot\epsilon_{ \rm dec}(m,\theta,E,z)\cdot\epsilon_{\rm rec} \tag{1}\]
The quantities entering Eq. (1) are the following (see also Fig. 1):
Figure 1: Illustration of the impact of different contributions to the number of events (1), taking as an example a beam dump experiment with a detector located downstream of the decay volume. Consider a FIP decaying at coordinates \((\theta,z)\), where \(\theta\) is the polar angle relative to the beamline, \(z\) is the longitudinal displacement from the target, and the azimuthal angle \(\phi\) has been omitted from the diagram. The differential probability for a FIP with energy \(E\) to decay there is \(f(\theta,E)dP_{\rm dec}/dz\). The azimuthal coordinate \(\phi\) of the decaying FIP (whose trajectory is shown by the red arrow) must be within the decay volume, which restricts the available decay positions to the blue dashed line. These limitations are included in the azimuthal acceptance \(\epsilon_{\rm az}\). Next, at least two of the FIP decay products (the green arrows) have to point to the detector; this is accounted for in \(\epsilon_{\rm dec}\). Depending on the setup and the FIP, this requirement may significantly limit the decay volume’s “useful” angular coverage. In particular, for 2-body decays into stable particles, the decay products can only point to the detector if the decayed FIP also points to the detector. Only the narrow angular domain that the detector covers contributes to the number of events.
* \(N_{\rm prod}^{(i)}\) is the total number of FIPs produced by the process \(i\), e.g., decays of mesons, direct production by proton-target collisions, etc. (see Fig. 2).
* \(z\), \(\theta\) and \(E\) are, respectively, the position along the beam axis, the polar angle, and the energy of the FIP.
* \(f^{(i)}(\theta,E)\) is the differential distribution of FIPs in polar angle and energy, for FIPs produced through the process \(i\).
* \(\epsilon_{\rm az}(\theta,z)\) is the azimuthal acceptance: \[\epsilon_{\rm az}=\frac{\Delta\phi_{\rm decay\ volume}(\theta,z)}{2\pi}\] (2) where \(\Delta\phi\) is the fraction of azimuthal coverage for which FIPs decaying at \((z,\theta)\) are inside the decay volume.
* \(\frac{dP_{\rm dec}}{dz}\) is the differential decay probability: \[\frac{dP_{\rm dec}}{dz}=\frac{\exp[-r(z,\theta)/l_{\rm dec}]}{l_{\rm dec}} \frac{dr(z,\theta)}{dz},\] (3) with \(r=z/\cos(\theta)\) being the modulus of the displacement of the FIP decay position from its production point, and \(l_{\rm dec}=c\tau\sqrt{\gamma^{2}-1}\) is the FIP decay length in the lab frame.
* \(\epsilon_{\rm dec}(m,\theta,E,z)\) is the decay products acceptance, i.e. among those FIPs that are within the azimuthal acceptance, the fraction of FIPs that have at least two decay products that point to the detector and that may be reconstructed. Schematically, \[\epsilon_{\rm dec}={\rm Br}_{\rm vis}(m)\times\epsilon_{\rm dec}^{\rm(geom)} \times\epsilon_{\rm dec}^{\rm(other\ cuts)}\] (4) Here, \({\rm Br}_{\rm vis}\) denotes the branching ratio of the FIP decays into final states that are detectable; depending on the presence of a calorimeter (EM and/or hadronic), \({\rm Br}_{\rm vis}\) may encompass only those states featuring at least two charged particles, or it may also include some neutral states such as photons and \(K_{L}^{0}\). \(\epsilon_{\rm dec}^{\rm(geom)}\) denotes the fraction of decay products that point to the end of the detector, and \(\epsilon_{\rm dec}^{\rm(other\ cuts)}\) is the fraction of these decay products that additionally satisfy the remaining cuts (e.g., the energy cut, etc.).
* \(\epsilon_{\rm rec}\) is the reconstruction efficiency, i.e. among the FIP decays that are within \(\epsilon_{\rm az}\) and \(\epsilon_{\rm dec}\), the fraction of them that the detector can successfully reconstruct.
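SensCalc itself implements this integral in Mathematica; as a language-agnostic illustration, the quantities defined above can be assembled into Eq. (1) as a straightforward sum over a tabulated \((E,\theta,z)\) grid. The array names and shapes below are illustrative only, and the ingredient arrays are assumed to have been precomputed elsewhere.

```python
# Sketch: Eq. (1) as a discretised sum over a tabulated (E, theta, z) grid for one
# production channel. Array names and shapes are illustrative, not SensCalc's internals.
import numpy as np

def n_events(N_prod, f_thetaE, eps_az, dP_dec_dz, eps_dec, eps_rec, dE, dtheta, dz):
    """
    N_prod     : total number of FIPs from one production channel
    f_thetaE   : FIP distribution f(theta, E), shape (n_theta, n_E)
    eps_az     : azimuthal acceptance, shape (n_theta, n_z)
    dP_dec_dz  : differential decay probability, shape (n_theta, n_E, n_z)
    eps_dec    : decay-product acceptance, shape (n_theta, n_E, n_z)
    eps_rec    : scalar reconstruction efficiency
    """
    integrand = (f_thetaE[:, :, None] * eps_az[:, None, :]
                 * dP_dec_dz * eps_dec * eps_rec)
    return N_prod * integrand.sum() * dE * dtheta * dz

def dP_dec_dz_grid(theta, E, z, mass, ctau):
    """Eq. (3) on the grid: exp(-r/l_dec)/l_dec * dr/dz, with r = z/cos(theta)."""
    gamma = E[None, :, None] / mass
    l_dec = ctau * np.sqrt(np.maximum(gamma**2 - 1.0, 1e-30))
    r = z[None, None, :] / np.cos(theta)[:, None, None]
    drdz = 1.0 / np.cos(theta)[:, None, None]
    return np.exp(-r / l_dec) / l_dec * drdz
```

Summing the output of `n_events` over production channels \(i\) reproduces the structure of Eq. (1).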
Most quantities entering Eq. (1) can be accurately estimated analytically and cross-checked separately, which makes the approach (1) very transparent. Namely, the azimuthal acceptance is completely determined by the geometry of the decay
volume, which is typically very simple. Once \(\epsilon_{\rm az}\) is computed, a simple way to cross-check it is to verify that the integral
\[{\cal V}=2\pi\int d\theta drr^{2}(z,\theta)\sin(\theta)\epsilon_{\rm az}=2\pi \int d\theta dz\frac{z^{2}}{\cos^{3}(\theta)}\sin(\theta)\epsilon_{\rm az} \tag{5}\]
matches the total volume of the decay volume.
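As a concrete illustration of this cross-check, the sketch below evaluates \(\epsilon_{\rm az}\) for an invented off-axis box-shaped decay volume by scanning the azimuthal angle, and then verifies that the weighted integral of Eq. (5) approximately reproduces the box volume. The geometry numbers and grid resolutions are arbitrary choices for this sketch.

```python
# Sketch: azimuthal acceptance for an idealised off-axis box decay volume,
# plus the volume cross-check of Eq. (5). Geometry numbers are invented.
import numpy as np

# Box decay volume: x in [x0, x0+wx], y in [-wy/2, wy/2], z in [z0, z0+wz] (metres).
x0, wx, wy, z0, wz = 1.0, 2.5, 2.5, 14.0, 20.0

def eps_az(theta, z, n_phi=400):
    """Fraction of azimuthal angles phi for which a decay at (theta, z) lies inside the box."""
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    rho = z * np.tan(theta)                        # transverse distance from the beam axis
    xx, yy = rho * np.cos(phi), rho * np.sin(phi)
    inside = (xx >= x0) & (xx <= x0 + wx) & (np.abs(yy) <= wy / 2)
    return inside.mean()

# Cross-check Eq. (5): the weighted integral should reproduce the box volume wx*wy*wz.
thetas = np.linspace(1e-4, 0.30, 200)
zs = np.linspace(z0, z0 + wz, 200)
dth, dz = thetas[1] - thetas[0], zs[1] - zs[0]
vol = sum(2 * np.pi * z**2 / np.cos(th)**3 * np.sin(th) * eps_az(th, z) * dth * dz
          for th in thetas for z in zs)
print(f"reconstructed volume = {vol:.1f} m^3, exact = {wx * wy * wz:.1f} m^3")
```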
Depending on the production channel, evaluating the FIP distribution function \(f^{(i)}(\theta,E)\) may require some external input. This is the case, for instance, for FIPs produced directly in inelastic proton collisions -- where one needs to simulate \(f^{(i)}(\theta,E)\) using e.g. PYTHIA 8 to account for showering and hadronization -- or for FIPs produced in the interactions of secondary particles, either in their decays or scattering with the material (see Fig. 2 for examples) -- where one needs to know the distribution of secondaries \(f_{\rm secondary}(\theta,E)\). Nevertheless, in the latter case, once \(f_{\rm secondary}(\theta,E)\) has been computed, the distribution of FIPs can then be derived analytically without the need for external tools.
\(\epsilon_{\rm dec}\) may in principle be estimated qualitatively by comparing the opening angle \(\Delta\theta_{\rm dec}\) between the decay products with the angle \(\Delta\theta_{\rm det}\) covered by the detector as seen from the production point. In the simplest case of a 2-body decay into two massless particles, the opening angle is \(\Delta\theta_{\rm dec}\simeq 2\arcsin(\gamma^{-1})\), where \(\gamma\) is the boost factor of the FIP. If \(\Delta\theta_{\rm dec}\gtrsim\Delta\theta_{\rm det}\), then \(\epsilon_{\rm dec}\approx 0\), otherwise \(\epsilon_{\rm dec}\approx 1\). Because the detector angle is smallest at the beginning of the decay volume while the opening angle decreases as \(E_{\rm FIP}^{-1}\), \(\epsilon_{\rm dec}\) effectively imposes a cut from below on the FIP energy and the displacement of its decay position from the beginning of the decay volume. If the decay volume is simultaneously the detector (as in the case of, e.g., neutrino detectors), \(\epsilon_{\rm dec}^{\rm(geom)}\equiv 1\).
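The qualitative estimate described above amounts to a simple comparison of two angles; a minimal sketch, with purely illustrative masses, energies, and detector angle, is the following.

```python
# Sketch: opening-angle heuristic for eps_dec in a 2-body decay to (nearly) massless
# products. The FIP mass, energies, and detector angle are illustrative numbers.
import numpy as np

def eps_dec_heuristic(E_fip, m_fip, detector_half_angle):
    gamma = E_fip / m_fip
    opening = 2 * np.arcsin(min(1.0, 1.0 / gamma))   # characteristic opening angle of the products
    return 1.0 if opening < detector_half_angle else 0.0

# Example: a 1 GeV FIP viewed by a detector subtending ~10 mrad from the production point.
for E in (5.0, 20.0, 100.0, 500.0):                  # GeV
    print(E, eps_dec_heuristic(E, m_fip=1.0, detector_half_angle=0.01))
```

The effective cut from below on the FIP energy discussed above is visible directly: only the most energetic FIPs in this toy example pass the heuristic.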
In order to more accurately estimate \(\epsilon_{\rm dec}\) -- by accounting for the experiment geometry, the presence of a dipole magnet, different FIP decay topologies (such as multi-body decays or decays into unstable particles), and various other selections imposed on the decay products -- a _separate_ simulation can be performed (see details about the simulation in Sec. 3).
Figure 2: Examples of production processes for various FIPs: (a) proton bremsstrahlung (for the dark photon \(V\)), (b) coherent scattering off nuclei (for the ALP \(a\) coupling to photons), (c) decays of \(B\) mesons into a FIP and another meson \(h\) (for HNLs \(N\)).
Finally, the computation of \(\epsilon_{\rm rec}\) would require running the full simulation, including the detector response. As such, it goes beyond the scope of the present semi-analytic approach. However, we believe that it is possible to perform an adequate pre-selection with the help of \(\epsilon_{\rm dec}^{\rm(other~{}cuts)}\) -- for instance, by requiring a minimum energy or \(p_{\rm T}\) above which the particles are detected with high efficiency (see, e.g., [3]) -- such that, conditioned on this pre-selection, \(\epsilon_{\rm rec}\sim{\cal O}(1)\).
Last but not least, this semi-analytic method allows for a simple analysis of the number of events in the limit of long lifetimes \(c\tau\langle\sqrt{\gamma^{2}-1}\rangle\gg l_{\rm experiment}\), where \(l_{\rm experiment}\) is the length scale of the experiment. The number of events then reduces to a simple expression:
\[N_{\rm ev}\approx\sum_{i}N_{\rm prod}^{(i)}\cdot\frac{(z_{\rm max}-z_{\rm min} )}{c\tau}\times\epsilon, \tag{6}\]
where \(\epsilon\) is the total acceptance:
\[\epsilon=\frac{1}{z_{\rm max}-z_{\rm min}}\int d\theta dEdz~{}\epsilon_{\rm az }\cdot\frac{\epsilon_{\rm dec}}{\cos(\theta)\sqrt{\gamma^{2}-1}} \tag{7}\]
It may be decomposed as
\[\epsilon=\langle\epsilon_{\rm FIP}\rangle\times\langle\epsilon_{\rm decay} \rangle\times\langle(\gamma^{2}-1)^{-1/2}\rangle, \tag{8}\]
where \(\langle\epsilon_{\rm FIP}\rangle\) is the mean probability for the FIP to intersect the decay volume, \(\langle\epsilon_{\rm decay}\rangle\) is the mean probability for the decay products to meet the decay products acceptance criteria, and \(\langle(\gamma^{2}-1)^{-1/2}\rangle\) is the mean inverse \(p/m\) among the FIPs meeting the azimuthal and decay acceptance criteria. This representation is particularly useful when discussing the impact of the geometry on the event rate and when comparing the potential of various experimental setups [27].
The semi-analytic approach presented here is also well suited for estimating the sensitivity to FIP scatterings, which is the main signature in models of light dark matter. In this case, the differential decay probability should be replaced with the scattering probability
\[\frac{dP_{\rm scatt}}{d\theta dEdz}=n_{\rm detector}\frac{d^{2}\sigma_{\rm scatt }}{d\theta dE}, \tag{9}\]
where \(n_{\rm detector}\) is the number density of target particles inside the detector, and \(d^{2}\sigma_{\rm scatt}/d\theta dE\) is the differential cross-section for the scattering of FIPs off the target particles.
### Validation and limitations
The semi-analytic approach presented above has been used to estimate the sensitivities of various experiments at the SPS [22], LHC [16; 23], and FCC-hh [16]. The experimental setups considered cover various options: on-axis and off-axis placements, different decay volume shapes, and different detector orientations relative to
the beamline. These estimates, carried out using our semi-analytical method, have been found to be in good agreement with the estimates available in the literature, including simulations-based ones. In particular, Fig. 3 shows the comparison of the sensitivity of the SHiP experiment to heavy neutral leptons (HNLs) and dark photons obtained using Eq. (1) with the sensitivity obtained by the SHiP collaboration using the FairShip simulation. In the case of dark photons, the slight differences in the sensitivity can be explained by the different elastic proton form factors used to describe the production probability. In the case of HNLs, the discrepancy at the upper bound follows from the monochromatic approximation of the HNL energy spectrum used when computing the sensitivity shown in the SHiP paper [28].
If the assumptions are well-controlled, the semi-analytic approach can agree very well with simulations. Fig. 4 compares the sensitivity of SHADOWS and MATHUSLA to dark scalars as computed via Eq. (1) and calculated independently by SensMC, a simple weight-based Monte-Carlo that we have implemented as described in Appendix C (see Table 1 for the detailed description of the setup and of the scalar phenomenology used to compute the sensitivity). In these calculations, we did not impose any cuts on the decay products apart from the geometric requirement \(\epsilon_{\text{decay}}^{\text{geom}}\), therefore the sensitivities shown are optimistic. The agreement between the two approaches is very good for all masses.
As a further demonstration of the importance of having open-access sensitivity calculations with clear and controllable assumptions and inputs, we have also included in Fig. 4 the sensitivities reported in the respective collaboration papers: the SHADOWS LoI [4], and the MATHUSLA EoI [32]. These sensitivities differ greatly from those we obtained for two main reasons. First, both collaborations use a different description of the scalar production, based on the inclusive estimate where the decay of a \(B\) meson into a scalar is described as the decay of its constituent \(b\) quark. Second, the assumptions about the experimental setups that have been used
Figure 3: Comparison of the 90% CL sensitivity of the SHiP experiment to heavy neutral leptons (**left panel**) and dark photons (**right panel**), obtained using Eq. (1) within the framework of SensCalc and derived using the FairShip simulations [28; 29]. The old ECN4 configuration of SHiP has been considered here.
to compute the sensitivity differ from what is actually described in the documents. In the case of SHADOWS, Ref. [4] used not the setup described within that same work (and summarized in Table 1), but a more optimistic setup located closer to the target and the beamline.3 In the case of MATHUSLA, the acceptance of the decay products was assumed to be 1 in [32]. These differences can significantly affect the reported sensitivity.
Footnote 3: From private communications with the representatives of the SHADOWS and MATHUSLA collaborations.
Finally, the predictions of our method agree with other publicly available packages -- FORESEE and ALPINIST, as will be discussed in more detail in Sec. 3.2.
The simplicity of our semi-analytic method incurs a number of limitations. First, SensCalc cannot provide the full event record associated with each FIP decay or interaction, i.e. the set of all initial, intermediate and final-state particles, including their full kinematics. Instead, it averages over all events that pass the selection. Therefore, it does not allow studying the reconstruction of the FIP parameters, such as its mass, for which the detailed event information is essential. Second, this approach assumes that the surrounding infrastructure does not influence the production of the FIPs. While this is often true in the case of FIPs produced at the collision point or close to it, the situation is different for non-prompt production;
| Experiment | SHADOWS | MATHUSLA@CMS |
| --- | --- | --- |
| \((x,y,z)_{\text{min}}\), m | \((-1,0,14)\) | \((0,60,68)\) |
| Fid. dim., m\({}^{3}\) | \(2.5\times 2.5\times 20\) | \(100\times 25\times 100\) |
| Det. dim., m\({}^{3}\) | \(2.5\times 2.5\times 12\) | \(100\times 5\times 100\) |
| Detector plane | \(xy\) | \(xz\) |
| Requirement for decay products | Point to the end of detector; oppositely charged, or neutral; no other cuts | Point to the end of detector; oppositely charged; no other cuts |
| \(B\) distribution | [30] | [17] |
| Scalar production | Exclusive production, [31] | Exclusive production, [31] |
| Scalar decays | Following [31] | Following [31] |
Table 1: Description of the experimental setups and of the scalar phenomenology used to obtain the sensitivity shown in Fig. 4. The rows indicate respectively: the closest distance from the collision point to the decay volume (the \(z\) axis being along the beamline), the decay volume dimensions, the detector dimensions, the orientation of detector layers, the decay products acceptance criteria, the distribution of \(B\) mesons used to calculate the flux of scalars, the scalar production branching ratios, and the description of the scalar lifetime and decays. The description of the experiments has been taken from Refs. [4] (SHADOWS) and [32] (MATHUSLA@CMS). For the description of the scalar production, we followed the PBC recommendations [1].
examples of which include FIPs originating from the decays of long-lived \(K^{\pm/0}\) mesons or from neutrino up-scatterings (the neutrino dipole portal [33; 34]), as well as the conversion of photons into axion-like particles (ALPs) in the magnetic field at the LHC [35].
## 3 SensCalc
### Description
The code SensCalc consists of a few Mathematica notebooks that compute the number of events for various FIPs (see Table 3 for the list of the currently available models). Four notebooks have to be run sequentially: Acceptances.nb, FIP distribution.nb, FIP sensitivity.nb, and Plots.nb, see Fig. 5.
**In the first notebook, Acceptances.nb,** the user specifies the experimental setup -- the geometry and dimensions of the decay volume and detector, as well as some details about the detector such as the presence of an ECAL and dipole magnet, see Fig. 6. The list of the experiments currently implemented in SensCalc is provided in Table 2. The user can easily implement new experiments, or modify one
Figure 4: Comparison of the predictions of SensCalc (solid blue) with the SensMC Monte-Carlo code used for validation (dashed black; described in App. C), for the sensitivity of the experiments located off-axis. SHADOWS (**left**, with the setup described in Ref. [4]) and MATHUSLA [32] (**right**) are considered for the comparison. The description of the experiments has been taken from the collaboration papers. To simplify the comparison and because SensMC cannot simulate it, the effect of the dipole magnet has not been included. The numbers of events produced by the two approaches (both with and without including \(\epsilon_{\text{dec}}\)) mostly agree within 20%. Discrepancies are caused mostly by different treatments of the scalar decay and small differences in the numbers describing the production and decay phenomenology of the scalar. The solid red lines show the sensitivities reported in the collaboration documents [4; 32]. The discrepancy between these calculations and our estimates is discussed in the main text.
of the already implemented setups, which may be useful when optimizing an experiment. Some past experiments are also included: CHARM [40] and BEBC [41] at the SPS. In this notebook, the user must also provide all the relevant quantities such as the number of protons on target (or the integrated luminosity for LHC- and FCC-hh-based experiments), the target material, and the production cross-sections for secondary particles (mesons and heavy bosons). For the implemented experiments, these parameters are already listed in the notebook.
Once the setup is fixed, the notebook evaluates the angular coverage of the experiment and \(\epsilon_{\rm dec}\) for various FIPs. Concretely, it first defines the grid of the FIP masses \(m\), FIP energies \(E\), and its decay coordinates within the decay volume: the polar angle \(\theta\) and the longitudinal displacement from the target along the beam axis \(z\). Using these coordinates, the notebook then evaluates \(\epsilon_{\rm az}(\theta,z)\) and the list of azimuthal angles \(\phi\) for which the FIP is inside the decay volume.
Having the grid \((E,\theta,z,\phi)\), the notebook then simulates the FIP decays using its dominant decay channels and calculates the decay acceptance \(\epsilon_{\rm dec}(m,\theta,E,z)\) by
| Facility | List of experiments |
| --- | --- |
| SPS | SHiP [3], NA62\({}_{\rm dump}\) [5], HIKE\({}_{\rm dump}\) [5], SHADOWS [4] |
| Fermilab (dump) | DUNE, DUNE-PRISM [36], DarkQuest [8] |
| LHC | FASER/FASER2/FASER\(\nu\)/FASER\(\nu\)2 [37; 38; 39], SND@LHC/advSND [12; 39], FACET [10], MATHUSLA [32], Codex-b [14], ANUBIS [13] |
| FCC-hh | Analogs of the LHC-based experiments [16] |
Table 2: List of the experiments whose geometry is currently implemented in SensCalc, along with, for each experiment, a reference containing a description of the setup used.
Figure 5: Sketch of the modular structure of SensCalc. The notebook Acceptances.nb produces the list of acceptances \(\epsilon_{\rm az}\) and \(\epsilon_{\rm dec}\) entering Eq. (1) for the selected experiment. The notebook FIP distribution.nb computes the distribution of FIPs \(f(m,\theta,E)\) at the facility housing the experiment. The notebook FIP sensitivity.nb uses as input the outputs of the two previous notebooks to calculate the tabulated number of events, and then calculates the sensitivity in the mass-coupling plane as a function of the remaining parameters such as the minimal number of events and any additional model-specific parameters. Finally, Plots.nb produces the sensitivity plots from the output of the previous notebook.
averaging over these decays and \(\phi\). The averaging over \(\phi\) is already possible at this stage since the other quantities that determine the number of events (1) do not depend on the azimuthal angle. Namely, the differential decay probability \(dP_{\rm dec}/dz\) only depends on \(z\) and \(\theta\), while the FIP distribution function is typically isotropic in \(\phi\).
The decay channels implemented for each FIP are listed in Table 4. For 3-body decays, the distribution of the decay products is generated taking into account both the phase space and the matrix element of the process. If the FIP decay products are short-lived, the routine decays them until only metastable particles are left. By default, those are \(\gamma,e,\mu,K^{0}_{L},\pi^{\pm},K^{\pm}\). The decays of particles with many modes (such as \(D,B,\tau\)) are approximated by some representative decays; for example, for \(\tau\) this is a 3-body decay into one charged particle and two neutrinos, while for \(D,B\) these are multi-hadronic decays.
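Conceptually, the generation of each decay amounts to sampling the phase space in the FIP rest frame and boosting the products along the FIP flight direction. A minimal, self-contained sketch of the 2-body case is shown below; the masses and momenta are illustrative and the routine is not part of SensCalc itself.

```python
# Sketch: isotropic 2-body decay of a FIP in its rest frame, boosted to the lab frame
# along the FIP flight direction (+z). Masses and momenta are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def two_body_decay(m_fip, m1, m2, p_fip):
    """Return lab-frame 4-momenta (E, px, py, pz) of the two decay products."""
    # Rest-frame momentum of the products (from the Kallen function).
    p_star = np.sqrt((m_fip**2 - (m1 + m2)**2) * (m_fip**2 - (m1 - m2)**2)) / (2 * m_fip)
    cos_t, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    p1 = p_star * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    E1, E2 = np.hypot(p_star, m1), np.hypot(p_star, m2)       # sqrt(p*^2 + m^2)
    E_fip = np.hypot(p_fip, m_fip)
    gamma, beta = E_fip / m_fip, p_fip / E_fip
    def boost(E, p):                                           # boost along +z
        return np.array([gamma * (E + beta * p[2]), p[0], p[1], gamma * (p[2] + beta * E)])
    return boost(E1, p1), boost(E2, -p1)

prod1, prod2 = two_body_decay(m_fip=1.0, m1=0.105, m2=0.105, p_fip=50.0)  # e.g. 1 GeV FIP -> mu mu
cos_open = np.dot(prod1[1:], prod2[1:]) / (np.linalg.norm(prod1[1:]) * np.linalg.norm(prod2[1:]))
print("opening angle [rad]:", np.arccos(cos_open))
```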
Let us now discuss the computation of \(\epsilon_{\rm dec}\) in more details. The main acceptance criterion is the requirement that the trajectories of at least two decay products with zero total electric charge are within the acceptance of the detector until its final plane. Decays into pure neutral final states (i.e., photons or \(K^{0}_{L}\)) are also included if a calorimeter is present. If the detector includes a magnetic spectrometer, the components of the charged particles' coordinates and momenta are shifted by a kick right after the magnet in order to approximate the effect of the magnetic field. In addition to this geometric requirement, \(\epsilon_{\rm dec}\) may also include various kinematic cuts. The implemented cuts include the energy cut, the transverse momentum cut, the transverse impact parameter cut, and the spatial separation cut for neutral particles
Figure 6: Visualizations of the geometries of the SHiP (**left**) and MATHUSLA (**right**) experiments, as implemented in SensCalc (in the notebook Acceptance.nb). The blue domain corresponds to the decay volume, while the red domain shows the detector. The descriptions of the two geometries have been taken from the SHiP LoI [3] and Ref. [32].
in the calorimeter. By complete analogy, the user may impose further kinematic cuts. Although the cuts are applied at the Monte-Carlo truth level, i.e. they are implemented without considering reconstruction effects such as the finite resolution of 4-momenta measurements, they can already give us some understanding of the effects that a realistic event reconstruction would have on the signal yield. Such reconstruction effects could in principle be approximated by e.g. applying some smearing to the kinematics variables of the decay products, according to the detector resolution. Note that the acceptance criterion includes partially reconstructible states, i.e. final states for which the FIP invariant mass cannot be reconstructed from the detected decay products.
The output of the first notebook is a table with the following columns:
\[\{m,\theta,E,z,\epsilon_{\text{az}},\epsilon_{\text{dec}}\} \tag{3.1}\]
**The second notebook, FIP distribution.nb,** computes the angle-energy distribution of the FIPs produced by various facilities and mechanisms. The list of implemented production channels, along with the relevant references used to describe the production, can be found in Table 3. Many production mechanisms require knowing the distributions of the parent particles at the given facility, such as mesons, heavy SM bosons, and photons -- including those produced in secondary interactions. We provide them as tabulated distributions in polar angle and energy, which we generate following the literature or just using available distributions from
\begin{table}
\begin{tabular}{|c|c|c|} \hline Model & Ref. & Production channels \\ \hline BC1 & [29, 42] & Decays of \(\pi,\eta,\eta^{\prime}\), mixing with \(\rho^{0}\) \\ & & Proton bremsstrahlung, Drell-Yan process \\ \hline BC4, BC5 & [31, 43] & 2-/3-body decays of \(B\), decay \(h\to SS\) \\ \hline BC6-8 & [44] & 2-/3-body decays of \(B,D,W\) \\ \hline BC9 & [18, 45] & Coherent production: Primakov process, \(pZ\) scattering \\ & & Decays of \(\pi^{0},\eta\) \\ \hline BC10 & [1] & \(B\) decay \\ \hline BC11 & [18, 46, 47] & Decays of \(B\), mixing with \(\pi^{0}/\eta/\eta^{\prime}\) \\ & & Deep-inelastic production \\ \hline \end{tabular}
\end{table}
Table 3: FIP production channels in the various models implemented in SensCalc with: the benchmark names according to PBC [1], the reference used to describe the production channels, and the list of the production channels implemented in SensCalc. The models are dark photons (BC1), dark scalars with Higgs mixing (BC4) and also with the quartic coupling (BC5), Heavy neutral leptons with arbitrary mixing patterns (including the limiting cases of the pure mixing with \(\nu_{e}\), \(\nu_{\mu}\), or \(\nu_{\tau}\) (BC6–BC8)), and ALPs coupling to photons (BC9), fermions (BC10) and gluons (BC11).
existing studies (see also Appendix B for a description of how we have generated the distributions of parent particles). Users may easily replace the included distributions with their own differential flux. With the distribution of parent particles at hand, we then derive the distribution of FIPs. If the FIPs are produced in decays, we compute their phase space in the rest frame of the parent particle and then boost it to the lab frame. In the case of 3-body decays, the phase space takes into account the matrix element of the process. For FIPs produced via elastic scattering, we adopt the differential cross-section of the process from existing studies, and then convolve it with the distribution of the parent particles. Should the need arise, new production channels may be added by the user, following the above examples.
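For FIPs produced in decays, the rest-frame-plus-boost step described above can be sketched numerically as follows. This is a minimal Python illustration of the 2-body case only; the isotropic decay approximation, the numerical inputs, and the function names are assumptions for this example and do not reproduce the semi-analytic Mathematica implementation.

```python
import numpy as np

def kallen(a, b, c):
    """Källén triangle function lambda(a, b, c)."""
    return a**2 + b**2 + c**2 - 2 * (a * b + b * c + c * a)

def fip_lab_momentum(parent_p4, m_parent, m_fip, m_other, rng):
    """Sample one FIP 4-momentum from an isotropic 2-body decay parent -> FIP + X.

    parent_p4: (E, px, py, pz) of the parent in the lab frame (GeV).
    Returns the FIP 4-momentum (E, px, py, pz) in the lab frame.
    """
    # FIP momentum and energy in the parent rest frame
    p_star = np.sqrt(kallen(m_parent**2, m_fip**2, m_other**2)) / (2 * m_parent)
    e_star = np.sqrt(p_star**2 + m_fip**2)
    # isotropic direction in the rest frame
    cos_t, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
    sin_t = np.sqrt(1 - cos_t**2)
    p_rest = p_star * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    # boost to the lab frame with the parent velocity
    E_par, p_par = parent_p4[0], np.array(parent_p4[1:])
    gamma = E_par / m_parent
    beta = p_par / E_par
    beta2, bp = beta @ beta, beta @ p_rest
    p_lab = p_rest + ((gamma - 1) * bp / beta2 + gamma * e_star) * beta if beta2 > 0 else p_rest
    E_lab = gamma * (e_star + bp)
    return np.array([E_lab, *p_lab])

# Toy usage: histogramming many such samples in (theta, E) gives the
# tabulated angle-energy distribution of the FIP for this channel.
rng = np.random.default_rng(0)
E_par, m_par = 30.0, 1.87                      # e.g. a D meson with E = 30 GeV
p4 = fip_lab_momentum((E_par, 0.0, 0.0, np.sqrt(E_par**2 - m_par**2)),
                      m_parent=m_par, m_fip=0.5, m_other=0.14, rng=rng)
theta = np.arctan2(np.hypot(p4[1], p4[2]), p4[3])
```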
Such a derivation of the FIP distribution is not possible, however, in the case of FIPs that are produced inelastically in proton-proton collisions (such as via the Drell-Yan process for dark photons or deep-inelastic production of ALPs through the gluon coupling), which require an external simulation. In this case, we use MadGraph5_aMC@NLO (v3.4.2) [48] with a model implemented in FeynRules[49] and exported to the UFO format [50]. To account for showering and hadronization, the events simulated in MadGraph are further processed by PYTHIA 8[51]; see also Appendix B.1 for details. The UFO files and the tabulated FIP distributions are provided alongside SensCalc.
The output of the second notebook is a tabulated distribution of the form
\[\{m,\theta,E,f^{(i)}\}, \tag{3.2}\]
where the last column is the value of the FIP distribution function for the given \((m,\theta,E)\) and the production mechanism \(i\). Some examples of computed distribution functions are shown in Fig. 7.
Let us highlight an important point. Since the FIP distributions are determined mainly by the kinematics of the collisions, they can be considered identical for the different experiments housed at the same facility, assuming that the colliding particles are the same.4 For collider experiments, we typically deal with proton-proton collisions, and this notebook only needs to be run once to obtain the distributions. In the case of beam dump experiments, some differences may arise as a result of different target/beam dump compositions. When the FIP is produced via the decays of secondaries, this only affects the overall scaling of the production cross section of the secondaries, which depends on the atomic number \(A\): \(\sigma_{\text{prod,second}}\propto A^{0.29}\)[52]. Therefore, as in the collider case, the notebook only needs to be run once. If, however, the FIP is produced in scattering processes, then different targets may affect not only the normalization but also the shape of the distribution. To take this into account, we generate the fluxes for a few common types of targets.
Footnote 4: This is not the case for non-prompt production of FIPs, which goes beyond the scope of the present discussion.
**The notebooks <FIP> sensitivity.nb** (with <FIP> replaced by the actual FIP) evaluate the sensitivity of the chosen experiment to the corresponding FIP. This is done via computing a tabulated number of events. There is a dedicated notebook for each FIP. First, the notebook imports the acceptance data computed by
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Model & Ref. & Decay channels & Decay channels (\(\epsilon_{\rm dec}\)) \\ \hline \multirow{3}{*}{BC1} & [29; 42] & \(ee,\mu\mu,\tau\tau\) & \(ee,\mu\mu\) \\ & [29; 42] & \(\pi\pi,3\pi,4\pi,KK,m\lesssim 2\) GeV & \(\pi\pi,4\pi,m_{V}<2\) GeV \\ & & \(q\bar{q},m\gtrsim 2\) GeV & \(q\bar{q},m_{V}>2\) GeV \\ \hline \multirow{2}{*}{BC4, BC5} & [31; 43] & \(ee,\mu\mu,\tau\tau\), & \(ee,\mu\mu,\tau\tau\) \\ & & \(\pi\pi,KK,4\pi,DD,BB\) & \(\pi\pi,KK,DD,\tau\tau,BB\) \\ \hline \multirow{3}{*}{BC6-8} & [44] & \(3\nu,ll\nu\) & \(ll\nu\) \\ & & \({\rm meson}+l/\nu,m\lesssim 1\) GeV & \(lq\bar{q}^{\prime}/\nu q\bar{q},m\geq 1.5\) GeV \\ & & \(\nu q\bar{q},lq\bar{q}^{\prime},m\gtrsim 1\) GeV & \({\rm meson}+l/\nu,m<1.5\) GeV \\ \hline BC9 & [18] & \(\gamma\gamma\) & \(\gamma\gamma\) \\ \hline BC10 & [1] & \(ee,\mu\mu,\tau\tau\) & \(ee,\mu\mu,\tau\tau\) \\ \hline \multirow{3}{*}{BC11} & [18; 46] & \(\gamma\gamma\) & \multirow{3}{*}{\(\gamma\gamma,GG\)} \\ & & \(GG,m>1.5\) GeV & \\ \hline \end{tabular}
\end{table}
Table 4: Decay channels of the FIPs implemented in SensCalc. From left to right: the model name according to PBC [1] (see also the caption of Table 3), the reference used to describe the decays, the decay channels used to calculate the lifetime \(\tau_{\rm FIP}\) and the branching ratio of visible decays, and the decay channels used to calculate the decay acceptance \(\epsilon_{\rm dec}\).
Figure 7: Examples of angle-energy distributions \(f^{(i)}(\theta,E)\) for ALPs coupled to photons (**left**) and dark scalars with a non-zero quartic coupling (**right**), produced by the notebook FIP distribution.nb.
Acceptances.nb, the distributions produced by FIP distribution.nb, as well as the relevant quantities defining the FIP phenomenology, such as the production branching ratios, lifetimes, and branching ratios of the decays into visible states at the given experiment. It then maps them to a logarithmic scale and interpolates them to obtain the functions entering Eq. (1).
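The log-scale interpolation step can be illustrated with a short Python sketch. The grids, the placeholder acceptance values, and the use of SciPy's RegularGridInterpolator are assumptions for illustration only (the notebook performs the analogous operation in Mathematica), and for brevity only a 3-dimensional table is interpolated.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder for a tabulated acceptance eps(m, theta, E) on a rectangular grid.
m_grid = np.logspace(-1, 1, 20)        # GeV
theta_grid = np.logspace(-4, -1, 30)   # rad
E_grid = np.logspace(0, 3, 40)         # GeV
eps_table = np.random.default_rng(1).uniform(size=(20, 30, 40))

# Interpolate in the logarithms of the kinematic variables.
interp = RegularGridInterpolator(
    (np.log10(m_grid), np.log10(theta_grid), np.log10(E_grid)),
    eps_table, bounds_error=False, fill_value=0.0)

def eps(m, theta, E):
    return float(interp([[np.log10(m), np.log10(theta), np.log10(E)]])[0])

print(eps(1.0, 1e-2, 50.0))
```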
Depending on the FIP, there may exist uncertainties in the description of its production and decay, which may significantly affect the event rate. This is the case, e.g., for dark scalars, where one may describe their production inclusively or exclusively; and for dark photons, for which the description of the proton bremsstrahlung channel depends on the maximal allowed \(p_{T}\) and on the minimal energy allowed to be transferred to the dark photon. The user has the freedom to tune these parameters.
In addition, there may exist model-specific parameters that must be selected before performing the computation. For instance, in the case of HNLs, this is their nature (Dirac or Majorana) and their mixing pattern \(U_{e}^{2}:U_{\mu}^{2}:U_{\tau}^{2}\).
During the computation, this notebook produces intermediate results that may be useful for the sensitivity analysis. This includes the differential number of events with respect to \(\theta,E\), or \(z\), as well as the number of events as a function of the mass and coupling, see Fig. 8. Last but not least, the notebook also shows the behavior of the overall acceptances \(\epsilon\), cf. Eq. (7).
Once the tabulated number of events has been produced, the notebook computes the sensitivities. To this end, the user needs to select the critical number of events as well as some model-specific parameters. For example, for dark scalars, one needs to specify the value of the branching ratio \(\text{Br}(h\to SS)\), which is non-zero in the presence of the quartic coupling \(\mathcal{L}\propto hSS\) (see Appendix A for details). Because the critical number of events can be freely specified, the user can compute either "exclusion" sensitivity limits -- corresponding to 2.3 expected events at 90% CL -- or
Figure 8: Examples of the output produced by the notebook FIP sensitivity.nb. **Left panel**: differential number of events with respect to the FIP’s energy for various production channels. **Right panel**: density plot of the total number of events as a function of the FIP mass and coupling. As an example, dark photons at FACET are considered. No cuts on the decay products other than the geometric acceptance have been applied.
"discovery" sensitivity limits by (externally) providing the critical \(N_{\rm ev}\) corresponding to the desired significance level and background expectation.
Finally, the notebook Plots.nb plots the sensitivities obtained in the previous notebook. It scans over available sensitivity files, imports those needed by the user, and finally produces the figures (see e.g. Fig. 9).
The user interaction with the various notebooks, such as choosing the experiment, selecting the cuts and the particular FIP model, is organized via dialog windows. This makes running the notebooks straightforward for FIPs and experiments that are already implemented.
To successfully run the notebooks, the user needs to install two dependencies: FeynCalc[53], which is a Mathematica package for the symbolic evaluation of Feynman diagrams, and a C compiler that is recognized by Mathematica.
The performance of the code has been tested on various machines and operating systems. For instance, on a Windows laptop with 16 GB of RAM, 8 CPU cores, and Mathematica 12.1, the typical time required to compute the sensitivity from scratch is \(\mathcal{O}(1)\) hour -- depending on the FIP type and on the mass-coupling grid density. This time is reduced if the FIP distribution has already been pre-generated.
SensCalc still offers significant potential for further improvement. Of particular interest would be the possibility to compute the sensitivity to additional FIP models, including those for which the main signature is scatterings with the detector material. Another well-motivated extension would be to support ALPs with an arbitrary coupling pattern.
Figure 9: Example of a sensitivity plot produced by the notebook Plots.nb, for the model of dark scalars.
A particularly interesting improvement would be to implement an approximate description of the hadronic decays of heavy FIPs (\(m_{\rm FIP}\gg 1\;\mbox{\rm GeV}\)). Currently, SensCalc describes them perturbatively - via decays into quarks and gluons. Their subsequent showering and hadronization in-flight would require external tools such as PYTHIA 8, and hence goes beyond the scope of the present semi-analytic approach.5 However, it may be possible to implement this feature approximately, for instance by simulating the phase space of FIP decays for several masses in PYTHIA 8, then selecting typical sets of decay products, and finally using their pre-computed phase space when evaluating the decay products acceptance.
Footnote 5: Nevertheless, the accuracy of the current method remains surprisingly good even in this regime. This can be understood as follows: because the momentum flow of the hadronized decay products is determined by the kinematics of the incoming jets, the geometric part of \(\epsilon_{\rm dec}\) (Eq. (4)) still describes the decay adequately. This qualitative argument is in agreement with Fig. 3: the dominant decays of the HNLs and dark photons with mass \(m_{\rm FIP}\gtrsim 1\;\mbox{\rm GeV}\) include a quark-antiquark pair, and nevertheless, the semi-analytic approach agrees well with simulations.
Finally, the implementations of the various experiments should be updated according to their latest specifications, which may differ from those listed in currently available documents. This may be done by contacting the representatives of the collaborations.
We are planning to add the above features in a future code update.
### Comparison with similar software packages
At the moment of releasing SensCalc, there are two publicly available codes for computing the sensitivity of lifetime-frontier experiments to decaying FIPs: FORESEE[17] and ALPINIST[18].
FORESEE is a Python-based code developed to evaluate the sensitivities of the far-forward experiments at the LHC and FCC-hh. The currently implemented models of FIPs include dark scalars, dark photons, ALPs coupling to \(W\) bosons, millicharged particles, and up-philic scalars. The package includes the tabulated distributions of various SM particles, including photons, mesons, and heavy bosons. Apart from the tabulated number of events as a function of the FIP mass and coupling, it can additionally produce detailed event records in the HepMC format, which may then be passed to e.g. a detector simulation software. By default, FORESEE does not calculate the acceptance of the decay products; instead it only requires the FIP to decay inside the decay volume, although the user may impose various cuts.
ALPINIST computes the sensitivity of extracted-beam experiments -- including those at the SPS, Fermilab, and some past experiments -- to ALPs couplings to various SM particles. Its modules use Mathematica, ROOT, and Python. The prominent feature of the code is that it can handle generic ALPs with simultaneous couplings to \(W\) bosons, gluons, and the \(U_{Y}(1)\) field. Unlike FORESEE, the computation also incorporates the reconstruction of the decay products inside a detector, at the price
of a longer computation time to obtain the tabulated number of events. Only fully reconstructible states are considered. The output of ALPINIST consists of data files with the mass-coupling dependence of the number of events for various production and decay modes.
The predictions of SensCalc agree well with the results of ALPINIST (see Fig. 10) and FORESEE (the comparison between the semi-analytic approach and FORESEE is discussed in Ref. [23]).
Unlike these two software packages, SensCalc is not restricted to a particular facility. In addition, among the implemented FIP models, it considers for the first time HNLs with arbitrary mixing patterns. The main limitation of SensCalc compared to FORESEE is that it cannot generate detailed event records; compared to ALPINIST, it does not (currently) consider generic ALPs and does not perform a detailed event reconstruction.
## 4 Conclusion
Feebly interacting particles (FIPs) are present in a broad class of new-physics scenarios that attempt to resolve the known problems of the Standard Model. Their search at various facilities and experiments collectively forms the lifetime frontier of particle physics. During the last decade, many lifetime-frontier experiments have been proposed that differ in the housing facility, geometric location, and detector technology. With a few exceptions, most of these experiments are not approved yet, and their design is not finalized. Their sensitivities to FIPs are computed by the collaborations themselves, using internal tools which are not publicly accessible. This makes it difficult to control the inputs to the computations, such as the model of the production and decay. It is therefore crucial to have a publicly available tool for computing the sensitivity of those experiments to various FIPs in a uniform, fast and well-controlled way.
Figure 10: Comparison of the sensitivity of SHiP to ALPs coupling to photons as computed by SensCalc (blue line) and ALPINIST[18] (red line). The definition of the coupling, as well as the fractions of mesons leading to the production of ALPs via photo-conversion, are taken from Ref. [18]. The use of different setups for SHiP in the sensitivity calculations may explain the discrepancy at the upper bound: we have used the currently considered ECN3 configuration from Ref. [3], for which the decay volume is located 12 m closer to the target, while Ref. [18] considers the old configuration from Ref. [54].
The present paper addresses this issue by presenting SensCalc -- a Mathematica-based code for evaluating the sensitivity of various experiments to decaying, long-lived FIPs, based on a semi-analytic approach developed in a number of previous studies (see Sec. 2.1) and cross-checked against various state-of-the-art packages (see Sec. 2.2).
SensCalc already supports a broad range of models and experiments (see Sec. 3.1), with more to be added in future versions. Models currently implemented include dark photons, dark scalars, heavy neutral leptons with various mixing patterns, and axion-like particles coupled to different SM particles. Numerous experiments have been implemented, located at any of the following facilities: the SPS, LBNE, LHC, and FCC-hh. The user retains full control over every aspect of the sensitivity calculation: from the geometry of the experiment and the distribution of the FIP's parent particles to the branching ratios of the FIP production/decay modes and the requirements on the decay products. Besides contributing to the transparency and trustworthiness of the results, this also allows users to easily modify the underlying assumptions as needed, or to add their own models and experiments to SensCalc.
By publicly providing a transparent, semi-analytic method to consistently compute the expected signal at various lifetime-frontier experiments, SensCalc can help address the discrepancies that currently exist in the literature between the descriptions of FIPs and acceptances employed by different collaborations. This is a timely and necessary contribution to the field of FIP searches, as many experiments are currently undergoing active development and optimization, while funding bodies and hosting facilities must decide which projects to prioritize. SensCalc can help with the former by providing fast (re-)calculation of the expected signal as the experiment's design evolves, and with the latter by ensuring a fair and consistent comparison of the expected signals between the proposed experiments, with well-controlled assumptions thanks to a uniform and well-validated implementation of the official PBC benchmarks. This could be particularly relevant in the context of the ECN3 hall upgrade at the CERN SPS, in which a number of experiments are currently being considered for inclusion, namely HIKE, SHiP, and SHADOWS.
###### Acknowledgements.
We thank Alexey Boyarsky and Oleg Ruchayskiy, who supervised the authors on the present topic in the past, and helped develop the foundations of the approach described in this paper. We thank Felix Kahlhoefer and Jan Jerhot for helpful discussions on the phenomenology of ALPs and the ALPINIST code, and Felix Kling for
discussions on FORESEE. We also thank Thomas Schwetz, Nashwan Sabti, Vsevolod Syvolap, Felix Kahlhoefer and Inar Timiryasov for reading the manuscript at different stages of its writing. Finally, we thank the users of the mathematica.stackexchange.com website, who greatly helped us optimize some elements of the code. MO received support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 860881-HIDDeN. OM is supported by the NWO Physics Vrij Programme "The Hidden Universe of Weakly Interacting Particles" with project number 680.92.18.03 (NWO Vrije Programming), which is (partly) financed by the Dutch Research Council (NWO). KB is partly funded by the INFN PD51 INDARK grant. JLT acknowledges partial financial support by the Spanish Research Agency (Agencia Estatal de Investigacion) through the grant IFT Centro de Excelencia Severo Ochoa No CEX2020-001007-S, by the grant PID2019-108892RB-I00 funded by MCIN/AEI/ 10.13039/501100011033, by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN, and by the grant Juan de la Cierva FJC2021-047666-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union "NextGenerationEU"/PRTR.
## Conflict of Interest Statement
The authors of the present manuscript are also members of the SHiP collaboration, which represents one of the experimental proposals currently competing for funding and access to facilities, notably as part of the ongoing Physics Beyond Colliders study and in the context of the upcoming upgrade of the ECN3 hall at CERN. The present manuscript solely reflects the authors' views, and not those of the SHiP collaboration.
## Appendix A Uncertainties in the description of FIPs
### Discrepancies in the literature
The description of the FIP production and decay, and sometimes even the definition of the FIP couplings, may vary among the sensitivity estimates performed by the different collaborations. One example is the dark scalar \(S\). Following the PBC report [1], the SHiP collaboration uses the exclusive description of the production of \(S\), while other collaborations adopt instead the inclusive description (see the discussion in Ref. [55]). In the domain \(m_{S}\gtrsim 2-3\) GeV, where the inclusive approach breaks down, the difference in the number of produced scalars between these two descriptions may be a factor of 20 or more. Another problem arises from the theoretical uncertainty on the hadronic decay width, which may be as large as a factor of 100 [43, 55] (see also a recent discussion in Ref. [56]). While SHiP and SHADOWS assume the decay width computed in Ref. [43], the FASER collaboration [57] uses the decay width from Ref. [58]. Depending on the calculation used, the sensitivity may therefore differ significantly.
Another example is with ALPs \(a\) coupling to gluons. The PBC report defines an interaction of the form \(\mathcal{L}\propto ag_{a}G^{\mu\nu,a}\tilde{G}^{a}_{\mu\nu}\), where \(G^{\mu\nu}\) is the gluon field strength and \(g_{a}\) is a fixed dimensionful coupling. Theoretical works often [46; 47] adopt a different definition, \(\mathcal{L}\propto ag_{s}^{2}g_{a}G^{\mu\nu,a}\tilde{G}^{a}_{\mu\nu}\), where \(g_{s}=g_{s}(m_{a})\) is the QCD coupling. The latter definition is used by ALPINIST[18] for computing the sensitivity of beam dump experiments to ALPs (and their results are used by the SHiP, HIKE, and SHADOWS collaborations in Ref. [59]). Furthermore, while some collaborations [14] include the production of ALPs through gluon fusion, others do not (this is the case in particular of ALPINIST[18]).
Another problem arises with ALPs that couple to fermions. The PBC [1] recommends including only the decays into leptons in the total width -- even though it may be dominated by hadronic decays in the mass range \(m_{a}\gtrsim 2m_{\pi}\) -- while some collaborations also include hadronic channels [14].
Such mismatches between the assumptions used to compute different sensitivities are particularly problematic when said sensitivities are shown in the same plot -- such as e.g. in the FIPs 2022 proceedings [59] -- without emphasizing that the underlying assumptions differ.
### Definition of the FIP couplings used in SensCalc
The effective Lagrangians of the models implemented in SensCalc are:
* **BC1** (dark photons): \[\mathcal{L}_{\rm int}=-\epsilon eV_{\mu}J^{\mu}_{\rm EM}\] (A.1) where \(V_{\mu}\) is the dark photon field, \(J^{\mu}_{\rm EM}\) is the EM current, and \(e=\sqrt{4\pi\alpha_{\rm EM}}\) is the EM coupling.
* **BC4** and **BC5** (dark scalars): \[\mathcal{L}_{\rm eff}\supset m_{h}^{2}\theta hS+\frac{\alpha}{2}hS^{2},\] (A.2) where \(\theta\) is the mixing angle and \(\alpha\) is the quartic coupling. By default, the sensitivity is evaluated assuming a constant branching ratio \({\rm Br}(h\to SS)\propto\alpha^{2}\).
* **BC6, BC7, BC8** (HNLs): \[\mathcal{L}_{\rm int}=\sum_{\alpha=e,\mu,\tau}U_{\alpha}\bar{N}\left(\frac{g}{\sqrt{2}}\gamma^{\mu}P_{L}l_{\alpha}W_{\mu}+\frac{g}{2\cos(\theta_{W})}\gamma^{\mu}P_{L}\nu_{\alpha}Z_{\mu}\right)+{\rm h.c.},\] (A.3) where \(N\) is the HNL, \(U_{\alpha}\) the mixing angle, \(g\) the weak coupling, and \(l_{\alpha},\nu_{\alpha},W,Z\) the SM fields. The HNL may be either a Dirac or a Majorana particle.
* **BC9** (ALPs coupling to photons): \[\mathcal{L}_{\rm int}=\frac{g_{a}}{4}aF_{\mu\nu}\tilde{F}^{\mu\nu},\] (A.4)
where \(a\) is the ALP field, \(g_{a}\) is a dimensionful coupling, and \(F_{\mu\nu},\tilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}\) are the EM field strength and its dual.
* **BC10** (ALPs coupling to fermions): \[\mathcal{L}_{\rm int}=\frac{g_{Y}}{2v_{H}}(\partial_{\mu}a)\sum_{f}\bar{f}\gamma^{\mu}\gamma_{5}f,\] (A.5) where \(g_{Y}\) is a dimensionless coupling, \(v_{H}\approx 246\) GeV is the Higgs VEV, and \(f\) are SM fermions.
* **BC11** (ALPs coupling to gluons): \[\mathcal{L}_{\rm int}=g_{a}g_{s}^{2}aG_{\mu\nu}^{a}\tilde{G}^{\mu\nu,a},\] (A.6) where \(g_{s}\) is the strong coupling constant, \(a\) is the ALP field, \(g_{a}\) is a dimensionful constant, \(G_{\mu\nu}^{a}\) is the gluon field strength, and \(\tilde{G}_{\mu\nu}^{a}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}G^{\alpha\beta,a}\) is its dual field strength. Everywhere except for the production of ALPs from DIS, we follow the definition of \(g_{s}\) from Ref. [18]. In the DIS case, we employ the running of \(g_{s}\) associated with the default PDF set in MadGraph.
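As a standalone numerical illustration of how these couplings enter the sensitivity estimate, the snippet below evaluates the tree-level width \(\Gamma(a\to\gamma\gamma)=g_{a}^{2}m_{a}^{3}/(64\pi)\) for the BC9 benchmark and the corresponding proper and boosted decay lengths; the chosen parameter values are arbitrary and the code is not part of SensCalc.

```python
import math

HBAR_C = 1.973e-16  # hbar * c in GeV * m

def ctau_alp_photon(m_a, g_a):
    """Proper decay length c*tau (m) of an ALP coupled to photons.

    m_a in GeV, g_a in GeV^-1; only a -> gamma gamma is included,
    with Gamma = g_a^2 m_a^3 / (64 pi).
    """
    width = g_a**2 * m_a**3 / (64.0 * math.pi)   # GeV
    return HBAR_C / width

# Arbitrary illustrative point: m_a = 0.3 GeV, g_a = 1e-4 GeV^-1, E_a = 100 GeV.
m_a, g_a, E_a = 0.3, 1e-4, 100.0
ctau = ctau_alp_photon(m_a, g_a)
decay_length = ctau * math.sqrt(E_a**2 - m_a**2) / m_a   # c*tau * beta * gamma
print(f"c*tau = {ctau:.2e} m, boosted decay length = {decay_length:.2e} m")
```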
## Appendix B Inputs used for generating the FIP distributions
### DIS processes
DIS production channels are relevant for several FIPs considered in SensCalc: dark photons and ALPs with the gluon coupling. We simulate the production of these particles in MadGraph5_aMC@NLO, interfaced with PYTHIA 8 for showering and hadronization. The hard processes that we simulate are the lowest-order and next-to-lowest-order processes for quark and gluon fusion:
\[q+\bar{q}\to V,\quad q+\bar{q}\to V+j,\quad G+G\to a,\quad G+G\to a+j \tag{B.1}\]
As for the scale of the process, we choose the invariant mass of the quark-antiquark pair (dynamical_scale_choice = 4). Although SensCalc already includes the tabulated angle-energy distributions of the FIPs produced by DIS, it also includes the UFO files for the models of dark photons and ALPs, allowing the user to re-generate the distributions under different assumptions if needed.
The DIS production suffers from significant theoretical uncertainties. First, the choice of scale becomes important for light FIPs with masses \(m_{\rm FIP}\simeq 1-2\) GeV, where the uncertainties in the production cross-section may become \(\mathcal{O}(1)\). Second, the minimal parton energy fraction required to produce a FIP is \(x_{\rm min}=m_{\rm FIP}^{2}/s_{\rm pp}\). For experiments like the LHC/FCC-hh and GeV-scale FIPs, \(x_{\rm min}\) can be as tiny as \(10^{-8}\); this domain is only poorly explored experimentally and is therefore subject to theoretical uncertainties (see Ref. [61]). This becomes especially problematic in the case of the FCC-hh. Because of this, we do not consider the DIS production channel for the FCC-hh-based experiments.
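For orientation, \(x_{\rm min}\) follows directly from the formula above; the following trivial check (not part of SensCalc, with collider energies chosen only as examples) reproduces the quoted order of magnitude:

```python
def x_min(m_fip_gev, sqrt_s_tev):
    """Minimal parton momentum fraction x_min = m_FIP^2 / s_pp."""
    s_pp = (sqrt_s_tev * 1e3) ** 2   # s in GeV^2
    return m_fip_gev**2 / s_pp

print(f"LHC,    sqrt(s) = 14 TeV,  m = 1 GeV: x_min = {x_min(1.0, 14):.1e}")
print(f"FCC-hh, sqrt(s) = 100 TeV, m = 1 GeV: x_min = {x_min(1.0, 100):.1e}")
```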
### Production by secondary particles
Another important FIP production mechanism is through secondary particles -- either in their decays or scatterings. We handle this case by either generating the distributions of secondary particles using approaches from the literature, or directly using pre-calculated distributions. The list of references is provided in Table 5.
## Appendix C SensMC: a simplified Monte-Carlo used for validation
As an additional cross-check of SensCalc, we have implemented SensMC[26], a small, customizable weight-based Monte-Carlo simulation, as an alternative way of numerically integrating Eq. (1) for FIPs produced in meson decays. It makes extensive use of importance sampling in order to handle the (typically tiny) branching ratios of mesons to FIPs and the (possibly very displaced) decay vertex of the FIP. SensMC is written in the Julia programming language [62] in order to combine performance and readability, and it is released alongside SensCalc in the same repository [20], as well as on GitHub.6
Footnote 6: The GitHub repository can be found at [https://github.com/JLTastet/SensMC](https://github.com/JLTastet/SensMC).
SensMC numerically estimates Eq. (1) using Monte-Carlo integration with importance sampling, by randomly generating a large number of weighted samples whose expectation values are \(N_{\rm ev}\), and finally averaging them. The value of each random sample is computed as follows:
1. A meson species is randomly sampled based on the proportion of produced mesons of this species, with the event weight initially set to the total number of mesons produced across all species. The meson momentum is then randomly sampled from a precomputed spectrum (either a list for the spectra from FairShip[28] or a grid for those from FORESEE[17]). To account for potential variations in the atomic weight of the target, which would affect the overall normalization of the spectra, the event is optionally reweighted using the formula \(w_{A}=w_{\rm Mo}(A/96)^{0.29}\)[52], with \(A\) denoting the atomic weight of the target and assuming that the spectra were initially computed for a molybdenum target (as is the case for the FairShip spectra).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Particle & LBNE & SPS & LHC & FCC-hh \\ \hline \(\pi^{0}/\eta/\eta^{\prime}/\gamma\) & [45] & [45] & [60] & [60] \\ \hline \(B,D\) & [18] & [30] & [17] & [17] \\ \hline \(W,h,Z\) & – & – & [17] & [17] \\ \hline \end{tabular}
\end{table}
Table 5: List of the references used to generate, or directly take, the distributions of secondary particles that may produce FIPs.
2. The FIP production channel is randomly selected with a probability proportional to its branching ratio, and the event is reweighted by the total branching ratio to FIPs of the particular meson species. Upon the decay of the parent meson, the momenta of its decay products, including the FIP, are uniformly sampled in phase space. The present simulation currently does not take into account the matrix elements because it cannot compute them all; however, the logic needed to handle them is already present, allowing the user to implement their own matrix elements if needed.
3. The FIP's decay vertex is then selected randomly along its trajectory by either a) sampling the proper lifetime from an exponential distribution and calculating the corresponding distance in the lab frame or b) employing importance sampling, which restricts the position of the decay vertex to a shell covering the full decay volume, and then reweights the event by the ratio of the true decay distribution to the importance distribution. The FIP decay mode is selected similarly to its production mode, with a sampling probability proportional (and in most cases equal) to its branching ratio; and the event is reweighted by the total branching ratio of the implemented channels. The momenta of the FIP decay products are uniformly sampled in phase space in the current version (but matrix elements could in principle be taken into account, just like for the FIP production).
4. Following a similar procedure, any unstable Standard Model particles are recursively decayed until only metastable particles (that live long enough to be detected) remain, assuming the branching ratios listed in the particletools Python package. The acceptance condition is then evaluated on the set of final metastable particles produced in the FIP decay. The event weight is recorded, along with whether the event is accepted or not.
Because each event is initially weighted by the total number of mesons, all event weights must finally be divided by the number of generated events. The sum of weights then provides a numerical estimate of the total number of physical events (with the FIP decay within the "shell" in case importance sampling is used), while the sum of event weights multiplied by their corresponding (binary) acceptances gives the total number of accepted events, and is independent of the specific importance distribution as long as it fully covers the decay volume.
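A stripped-down transcription of this weighting logic may help clarify the bookkeeping. The sketch below is written in Python for illustration (SensMC itself is written in Julia); the reduction to a single production and decay channel, the toy callables, and all names are simplifications, not the actual SensMC interface.

```python
import numpy as np

def estimate_n_events(n_samples, n_mesons, br_meson_to_fip, br_visible,
                      sample_fip, decay_probability, in_acceptance, rng):
    """Importance-sampled estimate of the number of accepted FIP decays.

    sample_fip(rng) returns a FIP momentum; decay_probability(p) is the
    probability (or importance weight) for the FIP to decay inside the decay
    volume; in_acceptance(p, rng) decides whether the decay products pass the
    acceptance criterion. Only a single production and decay channel is kept.
    """
    total = 0.0
    for _ in range(n_samples):
        weight = n_mesons * br_meson_to_fip * br_visible
        p_fip = sample_fip(rng)
        weight *= decay_probability(p_fip)
        if in_acceptance(p_fip, rng):
            total += weight
    return total / n_samples   # divide by the number of generated events

# Toy callables standing in for the physics described in steps 1-4 above.
rng = np.random.default_rng(42)
n_ev = estimate_n_events(
    n_samples=100_000, n_mesons=1e17, br_meson_to_fip=1e-10, br_visible=0.8,
    sample_fip=lambda r: r.exponential(50.0),      # toy FIP momentum (GeV)
    decay_probability=lambda p: 1e-3,              # toy decay probability
    in_acceptance=lambda p, r: r.uniform() < 0.3,  # toy acceptance decision
    rng=rng)
print(n_ev)
```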
The sensitivity curve is computed iteratively, starting from a coarse grid in \((\log(m),\log(\theta))\) that covers the region where the experiment may be sensitive. The expected number of accepted events is computed at each grid point. The multi-dimensional bisection method (MDBM) [63] is then used to iteratively refine the grid in the vicinity of the iso-contour corresponding to (for example) 2.3 accepted events (for an exclusion sensitivity at the 90% confidence level), effectively
bisecting it without the need to evaluate a dense grid, which would be computationally costly. The final curve is then obtained from bilinear interpolation of the sparse grid values.
|
2310.17353
|
Cultural Adaptation of Recipes
|
Building upon the considerable advances in Large Language Models (LLMs), we
are now equipped to address more sophisticated tasks demanding a nuanced
understanding of cross-cultural contexts. A key example is recipe adaptation,
which goes beyond simple translation to include a grasp of ingredients,
culinary techniques, and dietary preferences specific to a given culture. We
introduce a new task involving the translation and cultural adaptation of
recipes between Chinese and English-speaking cuisines. To support this
investigation, we present CulturalRecipes, a unique dataset comprised of
automatically paired recipes written in Mandarin Chinese and English. This
dataset is further enriched with a human-written and curated test set. In this
intricate task of cross-cultural recipe adaptation, we evaluate the performance
of various methods, including GPT-4 and other LLMs, traditional machine
translation, and information retrieval techniques. Our comprehensive analysis
includes both automatic and human evaluation metrics. While GPT-4 exhibits
impressive abilities in adapting Chinese recipes into English, it still lags
behind human expertise when translating English recipes into Chinese. This
underscores the multifaceted nature of cultural adaptations. We anticipate that
these insights will significantly contribute to future research on
culturally-aware language models and their practical application in culturally
diverse contexts.
|
Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li Zhou, Megan Dare, Lucia Donatelli, Daniel Hershcovich
|
2023-10-26T12:39:20Z
|
http://arxiv.org/abs/2310.17353v1
|
# Cultural Adaptation of Recipes
###### Abstract
Building upon the considerable advances in Large Language Models (LLMs), we are now equipped to address more sophisticated tasks demanding a nuanced understanding of cross-cultural contexts. A key example is recipe adaptation, which goes beyond simple translation to include a grasp of ingredients, culinary techniques, and dietary preferences specific to a given culture. We introduce a new task involving the translation and cultural adaptation of recipes between Chinese and English-speaking cuisines. To support this investigation, we present CulturalRecipes, a unique dataset comprised of automatically paired recipes written in Mandarin Chinese and English. This dataset is further enriched with a human-written and curated test set. In this intricate task of cross-cultural recipe adaptation, we evaluate the performance of various methods, including GPT-4 and other LLMs, traditional machine translation, and information retrieval techniques. Our comprehensive analysis includes both automatic and human evaluation metrics. While GPT-4 exhibits impressive abilities in adapting Chinese recipes into English, it still lags behind human expertise when translating English recipes into Chinese. This underscores the multifaceted nature of cultural adaptations. We anticipate that these insights will significantly contribute to future research on culturally-aware language models and their practical application in culturally diverse contexts.
## 1 Introduction
Cooking recipes are a distinct form of procedural text whose accurate interpretation depends on several factors. Familiarity with ingredients and measurement units, common sense about the cooking environment, and reasoning about how tools and actions affect intermediate products in the cooking process are necessary to successfully craft a recipe. Such knowledge varies by culture and language, as a result of geography, history, climate, and economy (Albala, 2012). These factors impact the frequency of ingredient usage, the available forms and cost of heat for cooking, common taste profiles, written recipe style, etc. (§2).
Identifying and adapting to cultural differences in language use is important and challenging (Hershcovich et al., 2022). Recipe translations with current machine translation technology may gloss over culture-specific phraseology or yield mistranslations due to a lack of grounding in the physical and cultural space. Literal translations are often opaque or odd: a Chinese dish, 夫妻肺片 (literally, 'husband and wife lung slices'), can be adapted in translation to 'Sliced Beef in Chili Sauce' for English-speaking cooks. Structural patterns in recipes in different cultures (e.g., _mise en place_1) additionally make straightforward recipe translation difficult: cuisines differ in dish preparation methods, and temporal dependencies between actions complicate the disentanglement of recipe actions (Kiddon et al., 2015; Yamakata et al., 2017).
Footnote 1: In French cooking, _mise en place_ is the practice of measuring out and cutting all ingredients in advance.
In this work, we introduce the task of adapting cooking recipes across languages and cultures. Beyond direct translation, this requires adaptation with respect to style, ingredients, measurement units, tools, techniques, and action order preferences. Focusing on recipes in Chinese and English, we automatically match pairs of recipes for the same dish drawn from two monolingual corpora, and train text generation models on these pairs. We evaluate our methodology with human judgments and a suite of automatic evaluations on a gold standard test set that we construct. We provide ample evidence that recipe adaptation amounts
to more than mere translation and find that models finetuned on our dataset can generate grammatical, correct, and faithful recipes, appropriately adapted across cultures. Intriguingly, Large Language Models (LLMs) outperform our finetuned models in both automatic and human evaluations, even without training on our paired dataset. This unexpected result opens multiple avenues for future research, including how large-scale pre-training could complement our dataset and nuanced evaluation metrics that could better capture the complexities of recipe adaptation. Our contributions are as follows:
(a) We introduce the task of cross-cultural recipe adaptation and build a bidirectional Chinese-English dataset for it, **CulturalRecipes** (§3).
(b) We experiment with various sequence-to-sequence approaches to adapt the recipes, including machine translation models and multilingual language models (§6).
(c) We evaluate and analyze the differences between Chinese and English-speaking cultures as reflected in the subcorpora (§4) and in the translation and adaptation of recipes (§6).
Our dataset, code, and trained models will be freely available upon publication.
## 2 Cultural Differences in Recipes
Extensive cross-cultural culinary research reveals compelling differences in ingredients, measurement units, tools, and actions, each reflecting historical, geographical, and economic influences unique to each culture (Albala, 2012). For example, the historical reliance on open flame cooking in China has cultivated an array of oil-based cooking techniques exclusive to Chinese cuisine. Further complexities arise from culture-specific terminologies for cooking methods and dish names, which pose formidable challenges to translation and adaptation (Rebecchi and da Silva, 2017). Additionally, the visual presentation of online recipes exhibits striking contrasts across different cultural contexts (Zhang et al., 2019). Delving deeper, culinary preferences also demonstrate regional patterns in flavor profiles; Western cuisines tend to combine ingredients that share numerous flavor compounds, while East Asian cuisines often intentionally avoid such shared compounds (Ahn et al., 2011). These intricate cultural nuances underscore the complexity and diversity inherent in global culinary practices, thereby emphasizing the intricacy involved in adapting recipes across different cultures.
**Examples.** Figure 1 presents a Mandarin Chinese recipe and its human-authored adaptation to American English, highlighting key differences:
(1) _Ingredients._ Distinct ingredients feature prominently in each recipe; the Chinese version highlights 'rice wine' and 'red beans', among others. Beyond ingredients, the two versions also differ in their cooking actions: preparation steps specific to the Chinese recipe, such as those used
to remove unwanted flavors, are rarely found in English recipes. These differences highlight the subtle cultural nuances in similar recipes.
**Over-generalization and bias.** In a study of cultural adaptation, it is important to recognize that the concept of "culture" is multifaceted and complex. When we refer to Chinese- and English-speaking cultures throughout this work, we make the simplifying assumption that there are general features that characterize the cooking of these cultures and make them distinct in certain systematic ways. We recognize that there is enormous diversity within these simplistic categories,2 but as a first step towards the adaptation of recipes across cultures, we restrict ourselves to the coarse-grained level only.
Footnote 2: For example, southern and northern Chinese cuisines are vastly different, with rice and wheat as staples respectively.
To enable the development and benchmarking of recipe adaptation, we build a dataset for the task.
## 3 The CulturalRecipes Dataset
Our dataset, _CulturalRecipes_, builds on two existing large-scale recipe corpora in English and Chinese, respectively. We create two collections of automatically paired recipes, one for each direction of adaptation (English\(\rightarrow\)Chinese and Chinese\(\rightarrow\)English), which we use for training and validation in our recipe adaptation experiments (§6). Additionally, _CulturalRecipes_ incorporates a small test set of human adaptations expressly crafted for the task in each direction, serving as references in our experimental evaluation.
### Recipe Corpora
We source recipes from two monolingual corpora: **RecipeNLG**[10] and **XiaChuFang**[11].3 RecipeNLG consists of over 2M English cooking recipes. It is an extension of Recipe1M[12] and Recipe1M+[13], with improvements in data quality. XiaChuFang consists of 1.5M recipes from the Chinese recipe website xiachufang.com, split into a training and evaluation set. We use the training set and clean it by removing emojis,4 special symbols, and empty fields. We use the title, ingredients, and cooking steps fields of the recipes from both corpora. The recipes in RecipeNLG consist of nine ingredients and seven steps on average, and in XiaChuFang, of seven ingredients and seven steps. As these two corpora are independent and monolingual, discovering recipe equivalents between them is not trivial.
Footnote 3: For license details, please refer to [https://recipenlg.cs.put.poznan.pl/dataset](https://recipenlg.cs.put.poznan.pl/dataset) for RecipeNLG and [https://xiachufang.com/principle](https://xiachufang.com/principle) for XiaChuFang
Footnote 4: Despite their potential significance, we remove emojis since they occur only in a few XiaChuFang recipes.
### Recipe Matching Rationale
Our recipe matching procedure relies on the following assumption: if two recipes have the same title, they describe the same dish. This assumption can be applied even in a monolingual context: if two recipes are both titled 'Veggie Lasagna', we can assume that they describe the same dish [10, 11]. It is permissible that there is some mismatch in the set of ingredients, in the number and sequence of steps, in the measurement units and exact amounts, etc. The same assumption can be said to hold for a recipe with a slightly different, but semantically equivalent title, e.g., 'Vegetable Lasagna'. Similarly, if we take a Chinese recipe title, translate it to 'Cabbage tomato beef soup', and find a recipe with a very similar title in English, e.g., 'Cabbage beef soup', we can assume that these two recipes describe the same dish. The degree to which this assumption holds depends on the quality of translation of recipe titles from one language into the other, on the measure of similarity, and on how much distance we allow for between two recipe titles before they are no longer considered semantically equivalent. These factors guide our approach to building a silver-standard dataset for the task, further described below, with the procedure also visualized in Figure 2, and the statistics of the resulting datasets reported in Table 1.5
Figure 2: Training and validation (left) and test (right) silver-standard data compilation in the direction Chinese\(\rightarrow\)English. The process is analogous for the opposite direction.
Footnote 5: Prior to the procedure described below, we filter out recipes longer than 512 subword tokens (arbitrarily using the mT5 tokenizer; Xue et al., 2021) to facilitate using the neural approaches described in §6.
### Silver-standard Data
**Training and validation sets.** We obtain training recipe pairs by (1) automatically translating all recipe titles in the Chinese corpus to English using a pre-trained machine translation model (Tiedemann and Thottingal, 2020);6 (2) encoding all English and translated Chinese titles with the MPNet sentence encoder (Song et al., 2020)7 to obtain two embedding spaces; and (3) in each direction (English\(\rightarrow\)Chinese and Chinese\(\rightarrow\)English), retrieving up to \(k=10\) nearest neighbors per source title from the target space, and filtering out any neighbors that have a cosine similarity against the source title lower than 0.85.8 The resulting sets, one in each direction, contain multiple reference targets for each source recipe. We further split the matches into training and validation sets.
Footnote 6: Helsinki-NLP/opus-mt-zh-en
Footnote 7: sentence-transformers/all-mpnet-base-v2
Footnote 8: The similarity threshold for retrieval was chosen through manual inspection of the quality of retrieved pairs.
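A minimal Python sketch of this matching pipeline, using the translation and encoder checkpoints named in the footnotes, could look as follows; the toy title lists and variable names are placeholders, and the full pipeline additionally splits the retained matches into training and validation sets.

```python
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# 1) Translate Chinese recipe titles to English.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
zh_titles = ["番茄牛肉汤", "红烧肉"]                      # toy examples
zh_titles_en = [out["translation_text"] for out in translator(zh_titles)]

# 2) Encode translated Chinese titles and English corpus titles.
encoder = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
en_titles = ["Cabbage beef soup", "Braised pork belly", "Veggie Lasagna"]
zh_emb = encoder.encode(zh_titles_en, convert_to_tensor=True)
en_emb = encoder.encode(en_titles, convert_to_tensor=True)

# 3) Retrieve up to k = 10 nearest neighbours and keep pairs whose cosine
#    similarity is at least 0.85.
hits = util.semantic_search(zh_emb, en_emb, top_k=10)
pairs = [(zh_titles[i], en_titles[h["corpus_id"]], h["score"])
         for i, row in enumerate(hits) for h in row if h["score"] >= 0.85]
print(pairs)
```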
We recognize that the aforementioned procedure can be susceptible to various sources of noise due to the translation of titles, the encoder representations, and the fixed similarity threshold. We trust that the signal-to-noise ratio should still be sufficient to enable model learning, but for evaluation we need cleaner, more representative data.
**Test set.** We are able to eliminate one of the aforementioned sources of noise by collecting manual translations of Chinese recipe titles into English and vice versa from websites that explicitly mention the original dish name when presenting an adapted version.9 This should resolve issues like dish names being translated literally by an automatic MT system (see §1). To supplement these titles with a corresponding list of ingredients and steps, we look up each title in the recipe corpus of the corresponding language and find the most similar title within, allowing for different capitalization, punctuation and slight differences in word choice and order, e.g., 'Rice with caramelized leeks' and 'Caramelized Leek Rice' (we manually inspect candidate matches to ensure semantic equivalence).
Footnote 9: For Chinese\(\rightarrow\)English we use _Easy Chinese Recipes_, _Recipes_ Archives, _Asian Food Archives_, _Authentic Chinese Recipes_; for English\(\rightarrow\)Chinese, _Christine’s Recipes_ and Wikipedia. We convert any traditional Chinese text to simplified Chinese using zhconv to match our other data sources.
The resulting test set closely resembles the training data, thus allowing us to determine how well the models we train do in the setting they were trained for (mapping between automatically matched recipes). In order to evaluate the models' ability to perform the true task we want to solve, i.e. adapting specific recipes from one culture to another, we also construct a gold-standard test set.
### Gold-standard Test Data
We include human-written adaptations in our dataset as the ground truth for reference-based evaluations (§5.1, §5.2) and as a point of comparison
\begin{table}
\begin{tabular}{l l|c c|c c} \hline \hline & & \multicolumn{2}{c|}{**\# Recipes**} & \multicolumn{2}{c}{**Mean \# Tokens**} \\ & & **Source** & **Target** & **Source** & **Target** \\ \hline \multirow{2}{*}{**Training**} & _zh\(\rightarrow\)en_ & 44.5k & 144.6k & 159.1 & 140.2 \\ & _en\(\rightarrow\)zh_ & 43.8k & 120.7k & 117.1 & 164.8 \\ \hline \multirow{2}{*}{**Silver test**} & _zh\(\rightarrow\)en_ & 82 & 82 & 140.5 & 144.7 \\ & _en\(\rightarrow\)zh_ & 52 & 52 & 122.7 & 153.3 \\ \hline \multirow{2}{*}{**Gold test**} & _zh\(\rightarrow\)en_ & 25 & 25 & 139.8 & 97.1 \\ & _en\(\rightarrow\)zh_ & 41 & 41 & 115.7 & 176.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of (many-to-many) training, (one-to-one) silver-standard and gold-standard (human-written) evaluation sets for both directions. _zh_: Chinese. _en_: English. We count tokens with whitespace tokenization for English and jieba text segmentation for Chinese.
Figure 3: Screenshot from our human recipe adaptation platform, demonstrating the English\(\rightarrow\)Chinese direction, with the source recipe on the left. On the right, participants should adapt the title, ingredients and steps based on their culinary knowledge and cultural habits.
in human evaluations (§5.3). We select 41 English recipes and 25 Chinese recipes manually from the silver test sets to adapt each to the other culture.
We develop an in-house web application as our recipe writing platform, illustrated in Figure 3. Our guidelines encourage participants to adapt recipes based on their culinary knowledge and cultural customs. We give participants the option to skip a recipe if they are not able to confidently adapt it. Six native Chinese speakers proficient in English with experience in both Chinese and Western cooking volunteered for the task, spending 6.4 minutes on average to adapt a recipe. Subsequently, three of the authors, fluent in both English and Chinese, who have substantial cooking experience, hand-corrected and improved all adapted recipes, including filtering incomplete source recipes, and correcting grammatical errors, spelling mistakes, and non-executable recipe expressions.
## 4 Corpus Analysis
Here, we perform a data-driven analysis to investigate how the cultural differences discussed in §2 are realized in English and Chinese recipe corpora through the lens of distributional semantics.
### Embedding Alignment
In this analysis, we train static monolingual word embeddings on English and Chinese recipe data, respectively, as a means of capturing their distributional properties. While the global geometry of English and Chinese distributional spaces is similar [17], we hypothesize that cultural differences would lead to mismatches in the local geometry of the two spaces [21]. We test this hypothesis through cross-lingual embedding alignment, wherein the English and Chinese embeddings are aligned through a linear mapping to obtain a cross-lingual embedding space, in which semantic equivalents between the two languages should occupy a similar position.
We train monolingual word embeddings using Word2Vec with the skip-gram model of [14] on the entire English and Chinese corpora (§3.3),10 and align them using VecMap [15] with weak supervision from a seed dictionary of 15 culturally neutral word pairs we manually curate.11
Footnote 10: We train 300-dimensional embeddings for 5 epochs using a minimum frequency count of 10, window size of 5, and 10 negative samples. Chinese text is tokenized with jieba.
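For concreteness, the embedding training step can be sketched as follows; the corpus file names are illustrative assumptions, while the hyperparameters follow footnote 10.

```python
# Sketch of the monolingual embedding training step; file names are assumed,
# hyperparameters follow footnote 10. Requires gensim and jieba.
import jieba
from gensim.models import Word2Vec

def load_corpus(path, lang):
    """Yield token lists: whitespace tokens for English, jieba tokens for Chinese."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield line.split() if lang == "en" else jieba.lcut(line)

en_sentences = list(load_corpus("recipes_en.txt", "en"))   # hypothetical corpus dumps
zh_sentences = list(load_corpus("recipes_zh.txt", "zh"))

params = dict(vector_size=300, window=5, min_count=10, sg=1, negative=10, epochs=5)
en_model = Word2Vec(sentences=en_sentences, **params)
zh_model = Word2Vec(sentences=zh_sentences, **params)

# The saved .vec files can then be aligned with VecMap using the
# 15-pair seed dictionary described in the text.
en_model.wv.save_word2vec_format("en.vec")
zh_model.wv.save_word2vec_format("zh.vec")
```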
### Analysis
We use the top 100 most common Chinese content words in the XiaChuFang dataset (not included in our seed dictionary) as query terms and retrieve their five nearest neighbors in the English embedding space, thus inducing a bilingual lexicon from the cross-lingual embedding space [16]. We manually evaluate this dictionary for correct literal translations and report performance in terms of \(\mathrm{Precision@}5\): the ratio of query words for which the correct translation is among the word's five nearest neighbors in the target space [17]. Formally, \(\mathrm{Precision@}k\) is defined as:
\[\mathrm{Precision@}k=\frac{N@k}{N}\]
where \(N@k\) is the number of pairs with the correct literal translation in top \(k\) nearest neighbors and \(N\) is the total number of pairs.
The result is 68% (i.e. 68 of 100 query words were correctly mapped), which indicates that (a) the global geometry of the two embedding spaces is indeed similar and VecMap has successfully aligned them using a seed lexicon of just 15 word pairs; and that (b) in the majority of the cases there is a 1:1 match between the Chinese and English words. More interesting, however, are the 32 words without a literal match. Here we find that 26 map onto what can be considered a cultural equivalent, while the other six can be considered accidental errors (due to lacking quality in the monolingual embeddings and/or inaccuracies in the alignment). We provide qualitative examples in Table 2.
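A minimal sketch of this evaluation is given below, assuming the embeddings have already been mapped into a shared cross-lingual space; the argument names and the gold dictionary are illustrative assumptions.

```python
# Sketch of Precision@k for bilingual lexicon induction from an aligned
# cross-lingual embedding space. Inputs are placeholders.
import numpy as np

def precision_at_k(query_vecs, query_words, target_vecs, target_words, gold, k=5):
    """gold maps each query word to the set of acceptable literal translations."""
    # Cosine similarity between every query word and every target word.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    t = target_vecs / np.linalg.norm(target_vecs, axis=1, keepdims=True)
    sims = q @ t.T
    hits = 0
    for i, word in enumerate(query_words):
        top_k = np.argsort(-sims[i])[:k]
        neighbors = {target_words[j] for j in top_k}
        if neighbors & gold.get(word, set()):
            hits += 1
    return hits / len(query_words)   # N@k / N
```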
A successful word match can be exemplified by 'fruit', which correctly aligns with its English
\begin{table}
\begin{tabular}{l l l} \hline \hline Source & Target & Nearest Neighbors \\ \hline \hline 水果 & fruit & fruit, fruits, kiwi, strawberry, seasonal \\ 沙拉 & salad & feta, lebanese, bruschetta, tabbouleh, caesar \\ 豆腐 & tofu & boiled, ham, sausage, bacon, kielbasi \\ 淀粉 & starch & flour, beaten, salt, shortening, pwdr \\ 筷子 & chopstick & fork, spatula, toothpick, wooden, knives \\ 蒸 & steam & bake, 350, pans, boil, oblong \\ \hline \hline \end{tabular}
\end{table}
Table 2: Top-5 examples from bilingual lexicon induction with underlined literal matches, mismatches, and matches that can be attributed to cultural differences.
equivalent 'fruit' among the top five nearest neighbors. An instance of an inadvertent misalignment, however, can be observed with 沙拉 'salad'. It is mapped closer to salad ingredients, other side dishes, and particular salad types, rather than precisely corresponding to the English term 'salad'.
Certain instances of misalignment can be attributed to cultural differences between English and Chinese culinary practices. Take for instance the ingredient 'tofu', a staple protein source in Chinese cuisine, which aligns with 'ham', 'sausage', and 'bacon'--protein-rich food items prevalent in English-speaking cuisines. Similarly, 'starch' is matched with 'flour'. In terms of kitchen utensils, 'chopsticks' corresponds to 'fork', 'spatula', and 'toothpick', which perform comparable functions in Western cuisine settings. Furthermore, the cooking technique 'steam' maps onto 'bake', a heat-processing method more frequently used in English recipes. These examples underscore the cultural discrepancies between English and Chinese recipes, emphasizing that recipe adaptation goes beyond mere translation.
## 5 Cross-cultural Recipe Adaptation Task
We propose the task of cross-cultural recipe adaptation, which extends the task of machine translation with the requirement of divergence from the source text semantics in order to address cultural differences in the target culture. While translation studies have long considered culture Bassnett (2007), this is not yet explored in machine translation. Our matched cross-lingual corpora allow us to inform recipe adaptation by both language and culture simultaneously. In §6 we adopt an end-to-end sequence-to-sequence approach to the task to establish a set of baselines since this is the dominant approach in machine translation.
The evaluation of cultural adaptation should prioritize meaning preservation while allowing divergences in meaning as long as they stem from cross-cultural differences. This subjective criterion is challenging to implement, as cross-cultural differences, and by extension, the task itself, are not well-defined. As common in text generation tasks, we first adopt reference-based automatic evaluation metrics (§5.1). Furthermore, to capture structural similarity between references and predictions, we employ meaning representations for evaluation (§5.2). Crucially, since reference-based metrics are often unreliable for subjective tasks Reiter (2018), we additionally perform human evaluation (§5.3).
### Surface-based Automatic Evaluation
We assess the similarity between the generated and reference recipes with several metrics. We use three overlap-based metrics: BLEU Papineni et al. (2002), a precision-oriented metric based on token \(n\)-gram overlap that is commonly used in machine translation evaluation; ChrF Popovic (2015), a character-level F-score metric that does not depend on tokenization;12 and ROUGE-L Lin (2004), a recall-oriented metric based on longest common subsequences that is widely used in summarization evaluation;13 as well as one representation-based metric, BERTScore Zhang et al. (2019), which is based on cosine similarity of contextualized token embeddings14 and has been shown to correlate better with human judgments than the above metrics in various tasks.
Footnote 12: For BLEU and ChrF, we use SacreBLEU Post (2018) version 2.3.1 with default parameter settings.
Footnote 13: For evaluation, we replace newlines with spaces in all reference and generated recipes. We segment Chinese text to words with jieba.
Footnote 14: We rely on bert-base-uncased for representing English text and bert-base-chinese for Chinese text.
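For concreteness, a minimal sketch of this surface-based evaluation with the sacrebleu, rouge-score and bert-score Python packages is shown below; the hypothesis and reference strings are placeholders.

```python
# Sketch of the surface-based evaluation, assuming the sacrebleu,
# rouge-score and bert-score packages. Inputs are placeholder strings;
# Chinese text would first be segmented with jieba and scored with
# bert-base-chinese (footnotes 13-14).
import sacrebleu
from rouge_score import rouge_scorer
from bert_score import score as bert_score

hyps = ["Boil the red beans for 30 minutes."]       # model outputs
refs = ["Simmer the red beans for half an hour."]   # gold adaptations

bleu = sacrebleu.corpus_bleu(hyps, [refs]).score
chrf = sacrebleu.corpus_chrf(hyps, [refs]).score

scorer = rouge_scorer.RougeScorer(["rougeL"])
rouge_l = sum(scorer.score(r, h)["rougeL"].fmeasure for r, h in zip(refs, hyps)) / len(hyps)

_, _, f1 = bert_score(hyps, refs, model_type="bert-base-uncased")
print(bleu, chrf, rouge_l, f1.mean().item())
```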
### Structure-aware Automatic Evaluation
Standard metrics may not effectively capture semantic similarity between texts due to sensitivity to surface form. To address this, we employ graph representations, a favored choice for capturing the flow of cooking actions, tool usage, and ingredient transformations in recipes Mori et al. (2014); Kiddon et al. (2015); Jermsurawong and Habash (2015); Yamakata et al. (2016). These allow for an examination of structural differences influenced by language and culture Wein et al. (2022). Here, we leverage Abstract Meaning Representation (AMR; Banarescu et al., 2013), a general-purpose graph meaning representation, to represent recipes.
To generate AMR graphs, we employ XAMR Cai et al. (2021),15 a state-of-the-art cross-lingual AMR parser that can parse text from five different languages into their corresponding AMR graphs. It is based on a sequence-to-sequence model, utilizing mBART Liu et al. (2020) for both encoder and decoder initialization.
Footnote 15: We use the trained AMR parser model from [https://github.com/jcyk/XAMR](https://github.com/jcyk/XAMR).
To assess the similarity between model-generated and reference texts' AMRs, we use the _Smatch_ metric Cai and Knight (2013), which
aligns both graphs and computes the F1 score that measures normalized triple overlap.
### Human Evaluation
While the above automatic metrics provide quantifiable results, they inherently suffer from the limitation of depending on a fixed reference set. In reality, there exist multiple legitimate ways to adapt a recipe. To address this, we propose four criteria for human evaluation, which we conduct on the gold-standard test set.
We have evaluators assess the outputs from all methods, including the human-written adaptations, on four dimensions key to the cultural adaptation of recipes: (1) _Grammar_--The generated recipe is grammatically sound and fluent; (2) _Consistency_--The output aligns with the format of a fully executable recipe encompassing coherent title, ingredients, and cooking steps; (3) _Preservation_--The adapted recipe largely retains the essence of the source recipe, producing a dish akin to the original; (4) _Cultural Appropriateness_--The generated recipe integrates well with the target cooking culture, aligning with the evaluator's culinary knowledge and recipe style expectations. Evaluators mark each dimension on a 7-point Likert scale (Likert, 1932), where a higher score indicates superior performance. A single evaluator rates each recipe pair separately and independently.
Crowdsourcing Evaluation. We recruit evaluators on Prolific16 and deploy our evaluation platform on the same in-house web application used for human recipe writing (§3.4). To ensure the evaluation validity, we require participants to be native speakers of the target language and proficient in the source language for each adaptation direction. Additionally, participants must successfully undergo a comprehension check, guided by our evaluation tutorial. Each evaluator is required to evaluate two example recipes for the comprehension check and three recipes for our tasks. This rigorous screening process secures the reliability and accuracy of the evaluations conducted for our study.
Footnote 16: [https://www.prolific.co/](https://www.prolific.co/)
## 6 Experiments
Here we describe our recipe adaptation experiments and results, using the CulturalRecipes dataset introduced in §3. Due to their success in machine translation, we experiment with three classes of end-to-end sequence-to-sequence models to adapt recipes across cultures: (finetuned) machine translation models, finetuned multilingual encoder-decoder models, and prompt-based (zero-shot) multilingual language modeling. Additionally, we evaluate the automatic matching approach used in our dataset construction. These will serve as baselines for future work on this task.
### Experimental Setup
We use our silver training set for finetuning in each direction and evaluate on both the silver and gold test sets. We represent a recipe as a concatenation of title, ingredients, and steps, each section prefixed with a heading ('Title:', 'Ingredients:' and 'Steps:', for both English and Chinese recipes).17
Footnote 17: We treat these headings as language-invariant meta-text, which is removed in post-processing prior to evaluation.
Automatic matching. Since the source recipes used in the creation of the gold-standard test set are a subsample of the ones found in the silver-standard test set, we have matches for them in the target language retrieved based on title similarity (see §3.3 for a reminder of how the silver-standard test set was constructed). We evaluate these retrieved matches against the gold-standard human-written references, to determine whether title-based retrieval is a viable method for recipe adaptation.
Machine translation. Recognizing the intrinsic translation component of recipe adaptation between languages, we leverage pre-trained machine translation systems in our experiments. We experiment with opus-mt models Tiedemann and Thottingal (2020),18 which show a strong performance in machine translation. We first evaluate them in zero-shot mode (MT-zs), that is, purely as machine translation models, and additionally after finetuning using our training and validation sets (MT-ft).
Footnote 18: Helsinki-NLP/opus-mt-{zh-en/en-zh}
Multilingual language modeling. We finetune multilingual encoder-decoder pre-trained language models on the CulturalRecipes dataset. Such models perform well on translation tasks Tang et al. (2020) and are generally trained on abundant monolingual as well as parallel data, so they could prove more suitable for the recipe domain and for our ultimate goal, recipe adaptation. We choose mT5-base Xue et al. (2021),19 a multilingual multi-task text-to-text transformer pre-trained on a Common Crawl-based dataset containing 101 languages,
and mBART50 (Tang et al., 2020),20 a variant of mBART (Liu et al., 2020) based on a multilingual autoencoder finetuned for machine translation.
Footnote 20: facebook/mbart-large-50
Prompting LLMs. Building on the remarkable performance of multilingual LLMs in zero-shot translation without additional finetuning or in-context learning (Wang et al., 2021), we explore their recipe translation and adaptation capabilities.
We use BLOOM (Scao et al., 2022), an LLM trained on the multilingual ROOTS corpus (Laurencon et al., 2022).21 Using the ROOTS search tool (Piktus et al., 2023), we find it does not contain our recipe corpora. As BLOOM is an autoregressive language model trained to continue text, we prompt as follows for English\(\rightarrow\)Chinese:
Footnote 21: bigscience/bloom-7b1, a 7B-parameter model with a 2k-token length limit. Preliminary experiments showed poor results with BLOOMZ-7B, mT0-xxl-mt and FLAN-T5-xxl (Chung et al., 2022), which are finetuned on multitask multilingual prompts (Muennighoff et al., 2022)—they are biased towards short outputs, prevalent in their training tasks.
Footnote 22: gpt-4-0314 via the OpenAI API (8k-token length limit).
\[\texttt{[English recipe]}\ \ \texttt{[Chinese cue for the adapted recipe]}\]
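A minimal sketch of this zero-shot setup with the Hugging Face transformers library is given below; the Chinese continuation cue and the decoding settings are illustrative assumptions rather than the exact prompt used.

```python
# Sketch of zero-shot recipe adaptation with BLOOM via transformers.
# The Chinese cue appended to the prompt is an assumed stand-in for the
# actual cue used in the paper, and decoding settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-7b1"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

english_recipe = "Title: Red bean soup\nIngredients: ...\nSteps: ..."
prompt = english_recipe + "\n\n中文食谱：\n"   # assumed continuation cue

inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Strip the prompt tokens and keep only the generated continuation.
generated = out[0][inputs["input_ids"].shape[1]:]
print(tok.decode(generated, skip_special_tokens=True))
```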
with diverse models excelling in different criteria. Structure-aware automatic evaluation results generally match other automatic results: MT-ft performs best on Chinese\(\rightarrow\)English, while mT5-base performs best on English\(\rightarrow\)Chinese.
**Automatic evaluation on the gold test sets.** Moving to the gold-standard test set results in Table 4, we gain further intriguing insights. The significant performance gap between MT-zs and MT-ft reemphasizes that the recipe pairs in our dataset are not merely translations of each other. Moreover, it underscores that the systematic patterns in the matched pairs within our training corpus (reflecting the cultural adaptation of recipes) can indeed be learned via finetuning on retrieved recipes. In this scenario, the LLMs BLOOM, ChatGLM2 and GPT-4 outperform the finetuned methods. Particularly in the Chinese\(\rightarrow\)English direction, LLMs consistently match or surpass the performance of the next best finetuned approach. Notably, a comparison of the average length of model predictions shows a tendency of LLMs to produce longer predictions than their counterparts, with GPT-4 generating double the number of tokens compared to other methods. Interestingly, the retrieval method scores are comparable to the finetuned models in both directions and sometimes even surpass them. Despite this, LLMs continue to prove more effective overall. _Smatch_ scores show performance differences consistent with BERTScore across models for both silver and gold-standard test sets, with the exception that BLOOM slightly outperforms GPT-4 in Chinese\(\rightarrow\)English.
**Human evaluation.** Table 5 showcases the results of human evaluation, with abbreviations GRA, CON, PRE, and CUL representing Grammar, Consistency, Preservation, and Cultural Appropriateness, respectively.26 GPT-4 excels significantly across all metrics in the Chinese\(\rightarrow\)English direction, even surpassing explicit human adaptation. Recipes retrieved from popular websites are a close second in GRA and CON, reflecting their high quality. However, the targeted adaptations, written by humans who were explicitly instructed to adapt the source recipe to the target culture, perform better in PRE and CUL. For English\(\rightarrow\)Chinese, GPT-4 remains the top performer only in CUL, while mT5 parallels the retrieved recipes in this metric. Notably, ChatGLM2 surpasses even human writers in CON and PRE, but not in GRA.
Footnote 26: We exclude mBART50 due to its architectural and performance similarity to mT5.
**Correlation of automatic metrics with humans.** To determine the reliability of automatic metrics in assessing the quality of recipe adaptations, we examine their correlation with human evaluations across the four metrics and their average. We use Kendall correlation, which is the official meta-evaluation metric used by WMT22 metric shared
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline \hline Method & **BLEU** & **ChrF** & **R-L** & **B-Sc** & **Smatch** & **\# Tok.** \\ \hline \hline \multicolumn{7}{c}{**Chinese \(\rightarrow\) English**} \\ \hline MT-zs\({}^{\dagger}\) & 5.3 & 29.1 & 22.4 & 59.4 & 30.6 & 77.5 \\ MT-ft & **28.0** & 42.5 & 19.6 & 59.9 & 28.1 & 103.6 \\ mT5 & 14.0 & 31.6 & 17.8 & 59.5 & 25.5 & 87.4 \\ mBART50 & 10.2 & 33.9 & 19.7 & 60.5 & 27.3 & 93.2 \\ BLOOM\({}^{\dagger}\) & 22.3 & 48.3 & 29.5 & 62.5 & **33.7** & 110.0 \\ ChatGLM2 & 18.3 & 41.8 & 26.8 & 61.9 & 28.8 & 174.3 \\ GPT-4\({}^{\dagger}\) & **28.0** & **50.3** & **30.8** & **66.5** & 33.4 & 216.6 \\ Retrieval\({}^{\dagger}\) & 16.8 & 37.8 & 20.5 & 61.7 & 26.6 & 150.7 \\ \hline \hline \multicolumn{7}{c}{**English \(\rightarrow\) Chinese**} \\ \hline MT-zs\({}^{\dagger}\) & 10.6 & 6.9 & 60.8 & 69.8 & 29.4 & 108.0 \\ MT-ft & 13.6 & 28.3 & 53.8 & 70.5 & 24.5 & 88.5 \\ mT5 & 16.6 & 28.1 & 53.4 & 70.7 & 25.3 & 78.6 \\ mBART50 & 11.8 & 25.4 & 54.8 & 69.7 & 23.5 & 100.3 \\ BLOOM\({}^{\dagger}\) & 20.0 & 11.5 & 50.8 & 66.4 & 28.6 & 154.7 \\ ChatGLM2 & 22.4 & 11.0 & 54.3 & 75.2 & 28.8 & 153.2 \\ GPT-4\({}^{\dagger}\) & 21.1 & 21.9 & **61.0** & **77.8** & **29.6** & 213.3 \\ Retrieval\({}^{\dagger}\) & **32.8** & **33.6** & 52.9 & 68.4 & 25.0 & 130.3 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Automatic reference-based evaluation results on the gold-standard human test sets. \({}^{\dagger}\) indicates methods without training for the task (zero-shot).
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & **GRA** & **CON** & **PRE** & **CUL** \\ \hline \multicolumn{5}{c}{**Chinese \(\rightarrow\) English**} \\ \hline MT-zs & 2.6 \(\pm\)1.5 & 2.4 \(\pm\)1.7 & 2.3 \(\pm\)1.4 & 2.7 \(\pm\)1.6 \\ MT-ft & 4.5 \(\pm\)1.8 & 3.7 \(\pm\)2.0 & 3.0 \(\pm\)2.1 & 4.3 \(\pm\)2.1 \\ mT5 & 4.1 \(\pm\)2.1 & 3.8 \(\pm\)2.1 & 3.2 \(\pm\)2.2 & 3.7 \(\pm\)2.2 \\ BLOOM & 3.3 \(\pm\)2.0 & 3.3 \(\pm\)2.0 & 3.4 \(\pm\)2.0 & 2.8 \(\pm\)1.8 \\ ChatGLM2 & 4.1 \(\pm\)2.4 & 4.3 \(\pm\)2.2 & 4.6 \(\pm\)2.1 & 4.0 \(\pm\)2.3 \\ GPT-4 & **6.0** \(\pm\)1.2 & **6.1** \(\pm\)1.3 & **5.9** \(\pm\)1.0 & **6.0** \(\pm\)1.2 \\ Human & 4.2 \(\pm\)2.1 & 4.4 \(\pm\)1.9 & 4.5 \(\pm\)1.9 & 4.6 \(\pm\)1.9 \\ Retrieval & 5.1 \(\pm\)1.7 & 4.9 \(\pm\)2.0 & 4.3 \(\pm\)2.3 & 3.8 \(\pm\)2.0 \\ \hline \hline \multicolumn{5}{c}{**English \(\rightarrow\) Chinese**} \\ \hline MT-zs & 2.3 \(\pm\)1.6 & 2.7 \(\pm\)2.0 & 3.5 \(\pm\)2.2 & 2.3 \(\pm\)1.7 \\ MT-ft & 4.8 \(\pm\)2.2 & 3.1 \(\pm\)2.2 & 2.5 \(\pm\)1.9 & 3.2 \(\pm\)2.0 \\ mT5 & 4.3 \(\pm\)2.0 & 3.4 \(\pm\)2.1 & 2.8 \(\pm\)2.0 & 3.5 \(\pm\)1.9 \\ BLOOM & 3.8 \(\pm\)2.1 & 4.2 \(\pm\)2.1 & 4.6 \(\pm\)1.9 & 3.0 \(\pm\)1.6 \\ ChatGLM2 & 5.4 \(\pm\)1.7 & **5.3** \(\pm\)1.7 & **5.7** \(\pm\)1.6 & 4.1 \(\pm\)2.3 \\ GPT-4 & 5.3 \(\pm\)2.0 & 5.1 \(\pm\)2.0 & 5.2 \(\pm\)1.9 & **4.4** \(\pm\)2.0 \\ Human & **5.8** \(\pm\)1.1 & 5.1 \(\pm\)1.9 & 5.5 \(\pm\)1.6 & 4.3 \(\pm\)1.8 \\ Retrieval & 4.5 \(\pm\)1.9 & 3.9 \(\pm\)2.0 & 3.3 \(\pm\)2.0 & 3.5 \(\pm\)1.7 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Human evaluation results on the gold-standard test sets: average and standard deviation across recipes for each method and metric, ranging from 1 to 7. Note that different participants manually adapted (“Human”) and evaluated the recipes.
task (Freitag et al., 2022).
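A minimal sketch of this correlation analysis is shown below, assuming per-recipe human ratings and metric scores have been collected into parallel lists; the values are placeholders.

```python
# Sketch of segment-level Kendall correlation between one automatic metric
# and one human criterion; the score lists below are placeholder values.
from scipy.stats import kendalltau

human_cul = [6, 5, 3, 7, 2, 4]                    # per-recipe CUL ratings (1-7)
metric = [0.71, 0.64, 0.40, 0.77, 0.35, 0.52]     # e.g. per-recipe BERTScore F1

tau, p_value = kendalltau(human_cul, metric)
print(f"Kendall tau = {tau:.3f}, p = {p_value:.3f}")
```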
As illustrated in Table 6, all cases exhibit a positive correlation, albeit with varying strengths from weak to moderate, and with inconsistent performance between the two adaptation directions. For Chinese\(\rightarrow\)English, ChrF and BERTScore indicate the strongest correlation with the average of all criteria. BERTScore further stands out by demonstrating the highest correlation with each individual criterion. On the other hand, for English\(\rightarrow\)Chinese, BLEU performs comparably well, thus highlighting that the effectiveness of these metrics can vary based on the direction of adaptation. ROUGE-L, however, displays a significantly lower correlation, suggesting its limitations in evaluating recipe adaptations. Finally, we observe that _Smatch_ is not significantly correlated with human judgments, possibly due to noise introduced by parsing errors.27
Footnote 27: Inspecting XAMR outputs, we notice recurrent errors in both languages, likely attributable to the unique recipe genre. Common culinary actions are often incorrectly represented or overlooked: in English, actions like ‘oil’ or ‘grease’ are treated as objects. Similarly in Chinese, many actions are often omitted or associated with unrelated concepts.
CUL presents the weakest correlation with most automatic metrics, underscoring the current limitations of automated evaluations in assessing the cultural alignment of recipes, and highlighting the essential role of human evaluators. Notably, correlations for English\(\rightarrow\)Chinese generally exhibit greater strength than Chinese\(\rightarrow\)English. This discrepancy is likely due to the variation in sample sizes between the two directions.
## 7 Analysis and Discussion
Our findings reinforce previous research asserting the cultural bias of LLMs--specifically GPT-4--towards Western, English-speaking, U.S. culture, as exemplified in the food domain (Cao et al., 2023; Naous et al., 2023; Keleg and Magdy, 2023; Palta and Rudinger, 2023). However, our results also offer a more nuanced perspective. While GPT-4 demonstrates an exceptional ability to adapt to Chinese cuisine, its linguistic and semantic capabilities are outperformed by ChatGLM2 in English\(\rightarrow\)Chinese. To delve deeper into these intriguing results, this section examines the strategies these models employ in the adaptation task.
Quantitative analysis. Referring back to the analysis from §4, we choose a subset of six words and examine how they are handled by four models (MT-zs, MT-ft, mT5, and GPT-4). Specifically, we measure the rate of literal translation of these concepts by each model, in the context of the recipes from the silver-standard test set of CulturalRecipes.28 For instance, in adapting from English to Chinese, we identify _baking_ as an English-specific concept. We count the appearances of related terms such as 'bake', 'roast', 'broil', and 'oven' in English source recipes, denoted as \(c_{source}\). For each instance, we tally the occurrences of the direct translation, 烤, in the corresponding Chinese recipes, denoted as \(c_{target}\), from either model predictions or retrieved references. We calculate the literal translation rate as \(\frac{c_{target}}{c_{source}}\). Figure 4 visualizes the results for five culturally-specific concepts and a universally applicable concept, 'oil'.
Footnote 28: We use the silver-standard test set rather than the gold-standard test set for its comparatively larger size.
We include 'oil' as a sanity check and indeed see that the literal rate of translation is high in both the references and in all model predictions.
The references show a low to medium rate of literal translations for the remaining five concepts, confirming their cultural specificity. MT-zs often translates these concepts literally, as could be expected from a machine translation model designed for near-literal translation--the difference is especially noticeable for the concepts 'steam' and 'cheese'. The finetuned models MT-ft and mT5, on the other hand, learn to avoid literal translation, presumably opting for culturally-appropriate alternatives instead--for 'steam', for example, none of the 12 occurrences of the concept in the source Chi
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & **BLEU** & **ChrF** & **R-L** & **B-Sc** & **Smatch** \\ \hline \multicolumn{6}{c}{**Chinese \(\rightarrow\) English**} \\ \hline
**GRA** & 0.135 & 0.250* & 0.135 & 0.257* & 0.021 \\
**CON** & 0.151 & 0.268* & 0.180 & 0.294* & 0.065 \\
**PRE** & 0.174 & 0.312* & 0.261* & 0.260* & 0.176 \\
**CUL** & 0.120 & 0.216* & 0.189 & 0.237* & 0.071 \\ _avg._ & 0.153 & 0.255* & 0.202* & 0.277* & 0.079 \\ \hline \multicolumn{6}{c}{**English \(\rightarrow\) Chinese**} \\ \hline
**GRA** & 0.286* & 0.353* & 0.201* & 0.278* & 0.070 \\
**CON** & 0.227* & 0.232* & 0.183* & 0.217* & 0.116 \\
**PRE** & 0.268* & 0.180* & 0.218* & 0.247* & 0.124 \\
**CUL** & 0.216* & 0.268* & 0.155 & 0.219* & 0.081 \\ _avg._ & 0.290* & 0.295* & 0.221* & 0.272* & 0.117 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Kendall correlation of human evaluation results with automatic metrics. Statistically significant correlations are marked with *, with a confidence level of \(\alpha\) = 0.05 before adjusting for multiple comparisons using the Bonferroni correction (Bonferroni, 1936).
nese recipe are literally translated in the predictions of MT-ft and mT5.
An interesting trend emerges in GPT-4 predictions, where literal translations are found at a high rate for all concepts, often close to 100%. While this seems counter-intuitive considering the goal of adapting the culturally-specific ingredients and cooking methods, in the next section we find that GPT-4 employs a slightly different strategy than just substituting these ingredients and methods.
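A simplified sketch of the literal-translation-rate measurement described above is given below; the term lists and the example recipe pair are illustrative assumptions, and counting is done by simple substring matching rather than proper tokenization.

```python
# Simplified sketch of the literal translation rate c_target / c_source for
# one concept; term lists and the example recipe pair are placeholders.
def literal_translation_rate(pairs, source_terms, literal_translations):
    """pairs: list of (source_recipe, predicted_or_reference_target) strings."""
    c_source, c_target = 0, 0
    for src, tgt in pairs:
        n = sum(src.lower().count(t) for t in source_terms)
        if n == 0:
            continue                      # concept absent from this source recipe
        c_source += n
        c_target += sum(tgt.count(t) for t in literal_translations)
    return c_target / c_source if c_source else 0.0

baking_terms = ["bake", "roast", "broil", "oven"]   # English-specific concept
literal_zh = ["烤"]                                  # assumed literal translation
pairs = [("Bake at 350F in the oven for 20 minutes.", "放入蒸锅蒸20分钟。")]
print(literal_translation_rate(pairs, baking_terms, literal_zh))   # -> 0.0
```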
Qualitative Analysis. We present a qualitative analysis highlighting the adaptation strategies adopted by models, specifically MT-zs, MT-ft, and GPT-4. The analysis centers on the Chinese recipe shown in Figure 1, with model predictions shown in Table 7. The translation from **MT-zs** directly incorporates Chinese ingredients not common in English recipes, accompanied by numerous spelling and grammatical errors. The prevalence of errors can be attributed to a dearth of recipe domain representations in the machine translation training data of MT-zs. In contrast, **MT-ft** offers a notably improved recipe rendition, albeit a wholly different red bean soup from the source recipe. Although this results in minimal content retention, it can be viewed as an extreme cultural adaptation, given the infrequent appearance of sweet red bean soup in Western cuisine. However, MT-ft sporadically manifests consistency errors, exemplified in this case by duplicating beans in the ingredient list and parsley in the steps. These findings confirm that the generation of coherent recipes continues to be a challenging endeavor for sequence-to-sequence models, corroborating the findings of prior work (Li et al., 2022).29 **GPT-4**, on the other hand, generates a recipe more closely aligned with the source than the human-generated reference (refer to Figure 1). This model also incorporates thoughtful cultural adaptations: it quantifies ingredient amounts, unlike the source which vaguely indicates "适量" (_moderate amount_), and it provides alternative names or substitutions for uniquely Chinese ingredients. The recipe instructions retain the crucial details from the source recipe, whilst maintaining fluency and appropriateness for Western-style recipes.
Footnote 29: Similar behavior is observed in the other sequence-to-sequence models trained on our training set and in the automatically matched (retrieved) recipe.
## 8 Related Work
Cultural adaptation of text. Cultural adaptation overlaps with style transfer, where the goal is to change the style of text while preserving the meaning (Jin et al., 2022). In addition to style, cultural adaptation also concerns common ground, values and topics of interest (Hershcovich et al., 2022). Particularly in culture-loaded tasks, it becomes crucial to consider cultural differences (Zhou et al., 2023, 2023). While semantic divergences are usually treated as errors in machine translation (Briakou and Carpuat, 2021), cross-cultural translation often requires adaptations that change the meaning, e.g., by adapting entities (Peskov et al., 2021) or by adding explanations (Kementchedjhieva et al., 2020). We share the motivation of this line of work, but for the first time focus on recipes, where cultural adaptation is grounded in clear goals (accessi
\begin{table}
\begin{tabular}{l} \hline
**MT-zs:** Table: Red beans soup, ingredients: red beans, rice wine, justice,bigg,ägps, Sops, Chyspie, and beans in the issue, soaked for eight hours, soaked in the pot, boiled in fire, oilned into one unified fire for 30 minutes. \\ \hline
**MT-ft:** The Red bean soup, ingredients: 2-haliques soups off all 1: large onion, chopped 2 garlic clones, mined 1 season, middle organized 1 seasonped 1 seasonped 2 seasonped off label 1 seasonped 2 capn 1 seasonped 2 capn 1 kidney beans, rinsed and drained 1 (14 120 anne) cell detected, unmiluted 1 (15 0.500) cell delayed hours, then, obligated 1 (14 0 anne) cell beeroth 1 capn **water 14** cup chopped fresh patchy, Stripe \\ \hline \hline \end{tabular}
\end{table}
Table 7: Model predictions for the red bean soup recipe shown in Figure 1.
bility to the cook and quality of the resulting dish).
Recipe generation. Van Erp et al. (2021) outline potential cross-disciplinary approaches involving NLP and food science, claiming that the analysis of digital recipes is a promising but challenging task. Marin et al. (2019) introduce the Recipe1M dataset (see §3) and H. Lee et al. (2020) finetune GPT-2 (Radford et al., 2019) on it to create a large English language model, RecipeGPT, capable of generating cooking instructions from titles and ingredients or ingredients from instructions and titles. Majumder et al. (2019) introduce a dataset of 180K English recipes from the website Food.com and a neural model to generate recipes according to user preferences inferred from historical interactions. Contrary to these, we focus on recipe adaptation, where generation is conditioned on a source recipe.
Recipe adaptation. Donatelli et al. (2021) align recipes for the same dish on the action level using recipe graphs (Yamakata et al., 2016), aiming to adapt recipes to users of different levels of expertise. Morales-Garzon et al. (2021, 2021, 2022) propose an unsupervised method to adapt recipes according to dietary preferences by proposing ingredient substitutions using domain-specific word and sentence embeddings. However, they do not modify the recipe steps beyond simple ingredient substitution. Li et al. (2022) build a dataset of 83K automatically-matched recipe pairs for the task of editing recipes to satisfy dietary restrictions. They train a supervised model to perform controlled generation, outperforming RecipeGPT. They identify the remaining challenge of "controllable recipe editing using more subtle traits such as cuisines (e.g., making a Chinese version of meatloaf)", which we address here. Antognini et al. (2023), in contrast, propose addressing the same task _without_ paired data, utilizing an unsupervised critiquing module and also outperforming RecipeGPT in both automatic and human evaluation. Liu et al. (2022) present a dataset of 1.5M Chinese recipes and evaluate compositional generalization in neural models in the task of counterfactual generation of recipes with substituted ingredients. They find recipe adaptation to be a challenging task: language models often generate incoherent recipes or fail to satisfy the stated constraints. In contrast, we find that after finetuning pre-trained models on our dataset, the models succeed in the task of cultural adaptation.
## 9 Conclusion and Future Work
In this work, we studied the task of adapting cooking recipes across cultures. We identified dimensions relevant to this task through a data-driven analysis, including differences in ingredients, tools, methods, and measurement units. We introduced CulturalRecipes, a dataset of paired Chinese and English recipes, and evaluated various adaptation methods. Through our experiments and analysis, we show that models can learn to consider cultural aspects, including style, when adapting recipes across cultures, with some challenges remaining in the level of detail and consistency between the different components of a recipe.
We envision our dataset and baselines will be useful for both downstream applications and further studies of cultural adaptation within and beyond NLP. Automatically adapting recipes from one culture to another could facilitate cross-cultural cross-pollination and broaden the horizons of potential users, serving as a bridge between people through food, and being useful to both novice and experienced cooks. Furthermore, our dataset is a challenging benchmark for language models: besides the complex compositional generalization ability required for recipe adaptation (Liu et al., 2022), it assesses the ability of multilingual language models to adapt to target cultural characteristics, and to construct well-formed and faithful recipes. Lastly, our cross-cultural comparative analysis can be extended to sociological and anthropological research.
Future work. As acknowledged in §2, the cultural categories we assume are highly simplistic. Future work will expand our dataset to treat finer-grained differences, as well as broaden it to more languages and cultures. It will further investigate the factors that impact recipe adaptation and develop more sophisticated modeling approaches to consider them, beyond the sequence-to-sequence approaches we experimented with here. Finally, our dataset can provide a starting point for related tasks, including recipe classification and retrieval.
Cultural categorization can be a sensitive topic so we have been careful to approach it with respect for the communities involved; we encourage future research in the area to maintain this practice. We hope that our research can contribute to a greater understanding and appreciation of diverse cultural traditions and practices related to food and cooking.
## Acknowledgments
The authors extend their sincere gratitude to the reviewers and action editors for their invaluable feedback, which significantly contributed to the improvement of this work. Special thanks are also due to Laura Cabello and Nicolas Garneau for their insightful comments and to Qinghua Zhao and Jingcun Huang for their valuable assistance during our initial human evaluations. The authors gratefully acknowledge the HPC RIVR consortium (www.hpc-rivr.si) and EuroHPC JU (eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (www.izum.si). Yong Cao and Li Zhou gratefully acknowledge financial support from the China Scholarship Council (CSC No. 202206070002 and No. 202206160052).
|
2305.03190
|
Quantum Velocity Limits for Multiple Observables: Conservation Laws,
Correlations, and Macroscopic Systems
|
How multiple observables mutually influence their dynamics has been a crucial
issue in statistical mechanics. We introduce a new concept, "quantum velocity
limits," to establish a quantitative and rigorous theory for non-equilibrium
quantum dynamics for multiple observables. Quantum velocity limits are
universal inequalities for a vector the describes velocities of multiple
observables. They elucidate that the speed of an observable of our interest can
be tighter bounded when we have knowledge of other observables, such as
experimentally accessible ones or conserved quantities, compared with the
conventional speed limits for a single observable. We first derive an
information-theoretical velocity limit in terms of the generalized correlation
matrix of the observables and the quantum Fisher information. The velocity
limit has various novel consequences: (i) conservation law in the system, a
fundamental ingredient of quantum dynamics, can improve the velocity limits
through the correlation between the observables and conserved quantities; (ii)
speed of an observable can be bounded by a nontrivial lower bound from the
information on another observable; (iii) there exists a notable non-equilibrium
tradeoff relation, stating that speeds of uncorrelated observables, e.g.,
anti-commuting observables, cannot be simultaneously large; (iv) velocity
limits for any observables on a local subsystem in locally interacting
many-body systems remain convergent even in the thermodynamic limit. Moreover,
we discover another distinct velocity limit for multiple observables on the
basis of the local conservation law of probability current, which becomes
advantageous for macroscopic transitions of multiple quantities.
|
Ryusuke Hamazaki
|
2023-05-04T22:20:33Z
|
http://arxiv.org/abs/2305.03190v4
|
Quantum Velocity Limits for Multiple Observables: Conservation Laws, Correlations, and Macroscopic Systems
###### Abstract
How multiple observables mutually influence their dynamics has been a crucial issue in statistical mechanics. We here introduce a new concept, "quantum velocity limits," to establish a quantitative and rigorous theory for non-equilibrium quantum dynamics for multiple observables. Quantum velocity limits are universal inequalities for a vector that describes the velocities of multiple observables. They elucidate that the speed of an observable of our interest can be bounded more tightly when we have knowledge of other observables, such as experimentally accessible ones or conserved quantities, compared with the conventional speed limits for a single observable. We first derive an information-theoretical velocity limit in terms of the generalized correlation matrix of the observables and the quantum Fisher information. The velocity limit has various novel consequences: (i) conservation law in the system, a fundamental ingredient of quantum dynamics, can improve the velocity limits through the correlation between the observables and conserved quantities; (ii) speed of an observable can be bounded by a nontrivial lower bound from the information on another observable, while most of the previous speed limits provide only upper bounds; (iii) there exists a notable non-equilibrium tradeoff relation, stating that speeds of uncorrelated observables, e.g., anti-commuting observables, cannot be simultaneously large; (iv) velocity limits for any observables on a local subsystem in locally interacting many-body systems remain convergent even in the thermodynamic limit, unlike the naive application of the conventional speed limits. Moreover, we discover another distinct velocity limit for multiple observables on the basis of the local conservation law of probability current, which becomes advantageous for macroscopic transitions of multiple quantities. Our newly found velocity limits ubiquitously apply not only to unitary quantum dynamics but to classical and quantum stochastic dynamics, offering a key step towards universal theory of far-from-equilibrium dynamics for multiple observables.
## I Introduction
Mutual influence of multiple observables has played a pivotal role in statistical mechanics. As a classic example, correlations between heat and electric currents are widely recognized as the thermoelectric effect [1]. As another famous example, a special type of observables, i.e., conserved quantities, lead to anomalous quantum transport properties for other observables, which is understood through the Mazur-Suzuki bound [2; 3; 4]. Investigating such an interplay of multiple observables has now become an active area of research in various contexts, from the generalized Gibbs ensemble describing stationary states of isolated systems with many conserved quantities [5] to the thermodynamic uncertainty relations [6; 7; 8] with multiple observables [9] for the stationary dynamics in classical stochastic systems. Besides the fundamental interest, establishing a theory of non-equilibrium dynamics for multiple observables results in practical advantages; one can understand the behavior of an observable from the knowledge of other observables, which are easy to evaluate theoretically or experimentally. However, previous studies mainly focused on systems near equilibrium or stationary states. Therefore, despite its importance, universal theory that governs far-from-equilibrium (or stationary) quantum dynamics for multiple observables has remained largely unexplored.
Recently, universal and rigorous theories on non-equilibrium state transitions have been developed in the context of quantum speed limits (QSLs). The first seminal work was put forward by Mandelstam and Tamm in 1945 [10], who derived that the time for an initial state to evolve into an orthogonal state under the unitary time evolution is lower bounded using the inverse of the energy fluctuation. Since then, such QSLs have been generalized and refined by numerous studies [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25] with experimental verifications [26; 27; 28]. Indeed, speed limits are generalized to open quantum systems [29; 30; 31; 32; 33; 34; 35], classical systems [36; 37; 38; 39; 40; 41; 42], and even nonlinear population dynamics [43; 44; 45]. Moreover, refined bounds are obtained in light of the geometry of states [46; 47; 48; 49; 50; 38], (quantum) information theory [40; 39; 51], local conservation law of probability and optimal transport theory [52; 53; 54; 55; 56; 57; 58; 59; 60]. Besides, speed limits also turn out to provide constraints in controlling non-equilibrium systems [61; 62; 63; 64; 65; 66; 67; 68; 69; 70], which is crucial for practical applications represented as quantum technology.
While many previous studies discussed QSLs of a state in light of metrics of the Hilbert space, recent works start to focus on the speed of the expectation value of a physical observable \(\hat{A}\)[40; 39; 51; 57; 71]. Indeed, such observable-based speed limits provide a better bound than the metric-based one for an observable of our interest, which is more directly relevant to experiments than the quantum state itself [51]. Interestingly, the first observable-based QSL was already obtained in the Mandelstam-Tamm paper [10]: they derived that the
instantaneous speed of \(\langle\hat{A}(t)\rangle:=\text{Tr}[\hat{A}\hat{\rho}(t)]\) for a quantum state \(\hat{\rho}(t)\) at time \(t\) is bounded as \(\left|\frac{d\langle\hat{A}(t)\rangle}{dt}\right|\leq\frac{2}{\hbar}\Delta A\Delta H\) for a unitary dynamics whose Hamiltonian is \(\hat{H}\), where \(\Delta A=\sqrt{\langle(\hat{A}-\langle\hat{A}\rangle)^{2}\rangle}\) is the quantum fluctuation of \(\hat{A}\). As recently discussed in Ref. [51], this QSL can be generalized and tightened for general dynamics as \(\left|\frac{d\langle\hat{A}(t)\rangle}{dt}\right|\leq\Delta A\sqrt{I_{Q}}\), where \(I_{Q}\) is the quantum Fisher information.
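As a quick numerical illustration (a sketch, not taken from the original papers), the bound \(\left|\frac{d\langle\hat{A}(t)\rangle}{dt}\right|\leq 2\Delta A\Delta H\) can be checked for a randomly drawn pure qubit state, for which \(I_{Q}=4\Delta H^{2}\):

```python
# Numerical check of |d<A>/dt| <= 2*dA*dH (hbar = 1) for a random pure qubit.
# For unitary dynamics, d<A>/dt = i<[H, A]>.
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())            # pure state
H, A = rand_herm(2), rand_herm(2)

def mean(op): return np.trace(rho @ op).real
def std(op):  return np.sqrt(mean(op @ op) - mean(op) ** 2)

speed = abs(np.trace(rho @ (1j * (H @ A - A @ H))).real)   # |d<A>/dt|
bound = 2 * std(A) * std(H)
assert speed <= bound + 1e-12
print(speed, bound)
```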
Although observable-based QSLs have attracted growing attention, they are discussed only for every single observable. However, we may be able to evaluate the speed of observables of our interest better when we already have some knowledge on the dynamical behavior of other observables. For example, we expect that the speed will be slowed down if we know some fundamental structures of dynamics, e.g., conserved quantities and locality of the interactions of the Hamiltonians. Discovering a quantitative and rigorous theory that justifies such an expectation is crucial for our understanding of far-from-equilibrium quantum dynamics.
### Summary of the achievements
In this paper, we make the first step towards the theory of far-from-equilibrium quantum dynamics for multiple observables by introducing the concept of quantum velocity limit (QVL). Quantum velocity limits are universal inequalities concerning the out-of-equilibrium dynamics for expectation values of multiple observables. As a notable observation crucially different from the conventional QSLs, we notice that the dynamics of the set of observables define a vector for the _velocity_ instead of the (scalar) speed for a single observable. Our fundamental QVLs, illustrated in Eqs. (10), (12), and (94), become tighter as we increase the number of observables. In particular, our bounds are tighter than the conventional QSLs for a single observable [51]. Furthermore, QVLs
Figure 1: Schematic illustrations of our achievements. (a) We consider the time evolution of a set of multiple observables simultaneously. From the trajectory of the expectation values of the observables \(\vec{\mathfrak{B}}\), we introduce a velocity vector \(\vec{B}\). This treatment enables us to understand mutual influence of the observables, unlike (b) the treatment of the conventional quantum speed limit for a single observable. (c) The velocity vector satisfies two distinct quantum velocity limits (QVLs), the one based on quantum information theory (Sec. II) and the one following from the continuity equation of probability (Sec. VII). (d) Our QVLs lead to many novel applications that the conventional speed limits cannot address. In Sec. III, we elucidate how symmetry and conservation law \(\hat{P}\) of the dynamics improve our ability to evaluate the rate of the transitions. In Sec. IV, we show that the QVLs can result in nontrivial lower bounds on the speed. This is because our bounds (blue) on the actual speed (black) are asymmetric, unlike conventional speed limits (dotted purple). In Sec. V, we discover novel non-equilibrium tradeoff relations, some of which are unique to quantum systems. For example, we show that anti-commuting observables cannot be simultaneously fast. In Sec. VI, we also elucidate how locally interacting structures in quantum many-body systems (e.g., the local Hamiltonian given by \(\hat{H}=\hat{H}_{SI}+\hat{H}_{B}\)) are crucial in evaluating the speed of local observables. Inequalities presented in (d) are representative examples of our findings for the case of unitary dynamics, where \(\phi_{AB}=\frac{\text{cov}(\hat{A},\hat{B})}{\Delta A\Delta B}\). However, we discuss more generalized versions of these inequalities and other fruitful applications in the main text.
have conceptually distinct consequences from the previous speed limits: they enable us to evaluate the rate of the dynamics under the knowledge of fundamental structures of systems, e.g., conservation law and correlations of operators. Therefore, QVLs offer practical advantages as well as being a novel and fundamental concept towards a universal understanding of far-from-equilibrium quantum dynamics for multiple observables. Our achievements are illustrated in Fig. 1, which are summarized in the following.
#### ii.1.1 Information-theoretical quantum velocity limit
We begin by presenting the information-theoretical QVLs in Sec. II. The essential observations are threefold: i) defining a velocity vector \(\vec{B}=\left(\frac{d\langle\hat{A}_{1}\rangle}{dt},\cdots,\frac{d\langle\hat{A}_{K}\rangle}{dt}\right)\) for a set of \(K\) time-independent observables (Fig. 1(a)); ii) defining a set of \(M\) special observables, called invariant observables, which are closely related to conserved quantities; iii) constructing a generalized correlation matrix \(D\), which is a correlation matrix of \(\{\hat{A}_{k}\}\) after optimally removing the effect of invariant components. Then, we find information-theoretical QVLs as a novel matrix inequality (Eq. (10)) and a scalar inequality (Eq. (12)), the latter of which reads (Fig. 1(c))
\[\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\leq I_{Q}. \tag{1}\]
For the unitary dynamics, we further have \(I_{Q}\leq 4\Delta H^{2}\) (setting \(\hbar=1\)). We argue that the QVLs are refined versions of many conventional speed limits and improve as we increase the number of observables (\(K\) and \(M\)). Furthermore, our QVLs provide hitherto unknown generalizations of the quantum Cramer-Rao bound, which is the fundamental inequality of quantum information theory, by considering the effect of invariant observables.
Despite their conciseness, our QVLs lead to many distinct applications (Fig. 1(d)), which conventional QSL approaches cannot address. Indeed, as summarized below, our work reveals the fundamental and rigorous relationship between far-from-equilibrium dynamics and crucial structures that govern it, such as symmetry and conservation laws, correlations of operators and their connection with quantum non-commutativity, and local interactions of the many-body Hamiltonian. In addition, the QVLs can provide nontrivial _lower bounds_ on the speed, unlike most of the conventional QSLs that only give upper bounds.
While most of the concrete examples in this manuscript are for unitary quantum dynamics, we stress that one can readily apply our QVLs to quantum stochastic systems. Furthermore, the velocity limits also apply to classical stochastic systems and even nonlinear population dynamics [43]. Therefore, velocity limits introduced in this paper offer a universal framework for understanding non-equilibrium dynamics concerning multiple observables.
#### ii.1.2 Symmetry and conservation law
Our bound elucidates, for the first time, a fundamental relation between the speed of observables and the conservation laws of a system (Sec. III). We show that invariant observables, which are related to conserved quantities of dynamics, tighten the velocity and speed limits (see (27) for the speed limit). Our result rigorously and quantitatively demonstrates how conservation laws can slow down the dynamics, which is consistent with our naive expectations.
For the unitary dynamics, a symmetry operator \(\hat{P}\) satisfying \([\hat{H},\hat{P}]=0\) becomes an invariant observable, and the following universal bound is obtained:
\[\left|\frac{d\langle\hat{A}\rangle}{dt}\right|\leq\Delta A\sqrt{I_{Q}}\sqrt{ 1-\phi_{AP}^{2}}\leq 2\Delta H\Delta A\sqrt{1-\phi_{AP}^{2}}, \tag{2}\]
where \(\phi_{AB}=\frac{\mathrm{cov}(\hat{A},\hat{B})}{\Delta A\Delta B}\) with \(\mathrm{cov}(\hat{A},\hat{B})=\frac{\langle\hat{A}\hat{B}\rangle+\langle\hat {B}\hat{A}\rangle}{2}-\langle\hat{A}\rangle\,\langle\hat{B}\rangle\) being the symmetrized covariance. Thus, the correlation between the observable and the symmetry reduces the bound by the factor \(\sqrt{1-\phi_{AP}^{2}}\) compared with the previously discussed bound [51]. Note that we can generalize the inequality to a more complicated situation with multiple conserved quantities.
Notably, the Hamiltonian itself always becomes an invariant observable for the unitary dynamics. In particular, taking \(\hat{P}=\hat{H}\) in the above inequality, we always obtain a speed limit stronger than the previous ones [51; 10] for _any_ unitary quantum dynamics. Importantly, the newly found bound is not only a quantitative but also a qualitative improvement in that it enables us to achieve the equality condition for much broader situations. Indeed, we show that our bound satisfies the equality condition for any pure initial state and Hamiltonian in a single spin-1/2 system, unlike the previously known bounds [51; 10].
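A minimal numerical sketch of the bound in Eq. (2) with \(\hat{P}=\hat{H}\) is given below, restricted to a pure state so that \(I_{Q}=4\Delta H^{2}\); the Hilbert-space dimension and random seed are arbitrary choices.

```python
# Numerical check of the symmetry-tightened bound (2) with P = H for a
# random pure state: |d<A>/dt| <= 2*dA*dH*sqrt(1 - phi_AH^2), with hbar = 1.
import numpy as np

rng = np.random.default_rng(1)
d = 4

def rand_herm(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
H, A = rand_herm(d), rand_herm(d)

def ev(op):  return np.trace(rho @ op).real
def dev(op): return np.sqrt(ev(op @ op) - ev(op) ** 2)

cov_AH = ev((A @ H + H @ A) / 2) - ev(A) * ev(H)    # symmetrized covariance
phi = cov_AH / (dev(A) * dev(H))

speed = abs(np.trace(rho @ (1j * (H @ A - A @ H))).real)   # |d<A>/dt|
old_bound = 2 * dev(A) * dev(H)
new_bound = old_bound * np.sqrt(1 - phi ** 2)
assert speed <= new_bound + 1e-10
assert new_bound <= old_bound + 1e-10
print(speed, new_bound, old_bound)
```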
#### ii.1.3 Nontrivial lower bounds
As shown in Sec. IV, the QVL for two observables leads to a unique asymmetric lower and upper bound of the velocity of one observable \(\hat{A}\) from the knowledge of the other one \(\hat{B}\), which can lead to the nontrivial lower bound for the speed. This is in stark contrast with many conventional speed limits, which only provide the upper bound. More concretely, our bound takes the form (see (44) for the general expression), e.g.,
\[v_{B}\phi_{AB}-f\sqrt{1-\phi_{AB}^{2}}\leq v_{A}\leq v_{B}\phi_ {AB}+f\sqrt{1-\phi_{AB}^{2}}, \tag{3}\]
where \(v_{X}=\frac{1}{\Delta X}\frac{d\langle\hat{X}\rangle}{dt}\) and \(f=\sqrt{I_{Q}-\left(\frac{1}{\Delta B}\frac{d\langle\hat{B}\rangle}{dt} \right)^{2}}\). Thus, when \(v_{B}\phi_{AB}>f\sqrt{1-\phi_{AB}^{2}}\), our lower bound indicates the nontrivial lower bound for the speed, \(\left|\frac{d\langle\hat{A}\rangle}{dt}\right|\).
Our bound relies on the knowledge of the velocity of the other reference observable \(\frac{d\langle\hat{B}\rangle}{dt}\) and the correlation between the two observables \(\phi_{AB}\). Notably, our inequality indicates that we can precisely determine the velocity of the observable of interest as \(v_{A}\simeq v_{B}\phi_{AB}\), if we know that the QSL for the reference observable \(\left|\frac{d\left\langle\hat{B}\right\rangle}{dt}\right|\leq\sqrt{I_{Q}}\Delta B\) is tight. We demonstrate that this situation actually occurs using a single spin-1/2 system.
#### ii.1.4 Non-equilibrium tradeoff relations
Our QVL indicates a new non-equilibrium tradeoff relation for uncorrelated observables; that is, the speeds of uncorrelated observables cannot be simultaneously fast (see (49) in Sec. V). More concretely, we show the additivity principle that the sum of the squares of the normalized speeds of the observables is upper bounded by the quantum Fisher information.
As a remarkable example, we discover that the tradeoff relation is caused by anti-commutativity, a nontrivial quantum property of certain operators. In particular, if we take a set of observables \(\hat{A}_{1},\cdots,\hat{A}_{K}\) satisfying the anti-commutation relation \(\hat{A}_{i}\hat{A}_{j}+\hat{A}_{j}\hat{A}_{i}=2\delta_{ij}\), we obtain a ubiquitous bound
\[\sum_{k=1}^{K}\left|\frac{d\left\langle\hat{A}_{k}\right\rangle}{dt}\right|^{ 2}\leq I_{Q} \tag{4}\]
for _any_ quantum states and dynamics. As physically important examples, we discuss the cases where the operators \(\{\hat{A}_{k}\}\) are a set of anti-commuting Pauli strings or Majorana fermions.
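A small numerical sketch of the tradeoff (4) for the anti-commuting set \(\{\hat{\sigma}_{x},\hat{\sigma}_{y},\hat{\sigma}_{z}\}\) on a random pure qubit state is given below; here \(I_{Q}=4\Delta H^{2}\), and for a single qubit the bound is in fact saturated.

```python
# Check of the tradeoff (4) for the anti-commuting set {sigma_x, sigma_y, sigma_z}
# on a random pure qubit state; I_Q = 4*dH^2 for a pure state.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(2)
h = rng.normal(size=3)
H = h[0] * sx + h[1] * sy + h[2] * sz

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

def ev(op): return np.trace(rho @ op).real

speeds_sq = sum(np.trace(rho @ (1j * (H @ s - s @ H))).real ** 2 for s in (sx, sy, sz))
fisher = 4 * (ev(H @ H) - ev(H) ** 2)     # I_Q for a pure state
assert speeds_sq <= fisher + 1e-10
print(speeds_sq, fisher)   # equal up to numerical error for a single qubit
```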
Our result implies hitherto unknown tradeoff relations due to anti-commutativity, reminiscent of the famous uncertainty relations in quantum mechanics. While the standard uncertainty relation states that quantum fluctuations of two non-commuting observables cannot be simultaneously small, our tradeoff relation states that speeds of multiple anti-commuting observables cannot be large simultaneously. Therefore, this tradeoff relation demonstrates that nontrivial commutativity properties of observables can even affect their dynamics, as well as their fluctuations.
#### ii.1.5 Locally interacting many-body systems
We also discover inequalities that dramatically improve the evaluation of the velocities (or speeds) of local observables in quantum many-body systems (see (74) in Sec. VI). Many conventional speed limits, such as the Mandelstam-Tamm bound [10] and the Margolus-Levitin bound [15], typically become meaningless in large quantum many-body systems [57; 27; 65]. In contrast, using our method, we show that the speed of an observable \(\hat{A}\) in a local subsystem under unitary dynamics is bounded using an energy fluctuation \(\Delta H_{SI}\) only in the subsystem,
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq 2\Delta A\Delta H _{SI}\sqrt{1-\phi_{H_{SI}H_{B}}^{2}}, \tag{5}\]
where we have decomposed the total Hamiltonian as \(\hat{H}=\hat{H}_{SI}+\hat{H}_{B}\) with \(\hat{H}_{SI}\) being the Hamiltonian nontrivially acting on the subsystem.
Importantly, our bound elucidates the fundamental relation between local structures of the system and the speed of an observable. Indeed, the right-hand side does not increase with the total system size since \(\Delta H_{SI}\) is convergent for locally interacting systems, in contrast to the total energy fluctuation \(\Delta H\). Our results provide a rigorous and valuable bound for arbitrary observables in the local subsystem, unlike the previous approaches [57; 65; 27; 60].
Furthermore, as another notable consequence of considering the QVL, our bound in (5) indicates that the correlation \(\phi_{H_{SI}H_{B}}\) between the Hamiltonian acting on the subsystem and the rest nontrivially improves the bound. We argue that this factor becomes especially crucial when the size of the bath is small, which is relevant for experiments in artificial quantum systems, e.g., trapped ions.
#### ii.1.6 Bound based on the conservation law of probability
In addition to the above QVL based on the quantum Fisher information, we also derive distinct speed limits using the local conservation law of probability (see (94) in Sec. VII). This is advantageous for macroscopic transitions of multiple observables, improving the recently found bound for a single observable [57]. This velocity limit relies on the (generalized) correlation matrix of the _gradient_ of observables and the local probability current. We argue that this velocity limit leads to distinct consequences that are not obtained by the velocity limit based on quantum Fisher information. As a remarkable example of a single-particle transport, we demonstrate the nontrivial tradeoff relation between the speeds of the position and the even-odd probability density of the particle.
As exemplified by the discovery of the two distinct types of the QVLs, our method provides a general framework to derive a wide variety of velocity limits as generalizations of different types of speed limits. Indeed, speed limits whose proof relies on the Cauchy-Schwarz inequality, such as that for classically chaotic systems [72], will be extended to velocity limits with our general procedure.
### Organization of this paper
The rest of this paper is organized as follows. In Sec. II, we show our information-theoretical velocity limit for multiple observables on the basis of the quantum Fisher
information and a generalized correlation matrix. We also illustrate that our bound is regarded as a generalization of many previously obtained inequalities. In Sec. III, we discuss how invariant observables of the system tighten the speed limit by showing several important applications. In Sec. IV, we derive the asymmetric upper and lower bound of the speed of an observable and explain its meaning. In Sec. V, we demonstrate the non-equilibrium tradeoff relation among the speeds of uncorrelated observables, especially anti-commuting observables. In Sec. VI, we argue that our velocity limit can be applied to obtain useful inequalities in quantum many-body systems. In Sec. VII, we discuss a different type of velocity limit based on the local conservation law of probability. After a formulation for the single-observable case, which slightly generalizes the treatment in Ref. [57], we discuss the multiple-observable case and its application. In Sec. VIII, we conclude the paper with a future outlook.
## II Information-theoretical velocity limits for multiple observables
### Setup
We consider general quantum dynamics, where a density matrix \(\hat{\rho}(t)\) at time \(t\) follows an equation of motion \(d\hat{\rho}(t)/dt=\mathcal{L}[\hat{\rho}(t)]\) with some (generally time-dependent) super-operator \(\mathcal{L}\). For unitary dynamics, we have \(\mathcal{L}[\hat{\rho}]=-i[\hat{H}(t),\hat{\rho}]\), where \(\hbar\) is set to unity in the following. For quantum stochastic dynamics, we can consider the Liouvillian of, e.g., the Gorini-Kossakowski-Sudarshan-Lindblad equation [73; 74] as \(\mathcal{L}\). We can also treat classical stochastic systems with our setup by focusing only on diagonal elements of \(\hat{\rho}(t)\) and their transitions.
Now, the dynamics can be rewritten as [75]
\[\frac{d\hat{\rho}}{dt}=\frac{1}{2}\{\hat{\rho},\hat{L}\}, \tag{6}\]
where \(\hat{L}\) is the symmetric logarithm derivative (SLD) and \(\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}\) is the anti-commutator.
We focus on a set of linearly independent observables \(\hat{A}_{k}\) (\(k=1,\cdots,K\)) and define the velocity vector \(\vec{B}\) as
\[\vec{B}(\{\hat{A}_{k}\})=\left(\frac{d\left\langle\hat{A}_{1}\right\rangle}{ dt},\cdots,\frac{d\left\langle\hat{A}_{K}\right\rangle}{dt}\right)^{\mathsf{T}}. \tag{7}\]
For simplicity, we assume that these observables are independent of time, \(d\hat{A}_{k}/dt=0\), although the generalization to the time-dependent case is straightforward in a manner similar to Refs. [40; 51]. In this case, Eq. (6) leads to
\[\frac{d\left\langle\hat{A}_{k}\right\rangle}{dt}=\left\langle\hat{A}_{k},\hat {L}\right\rangle, \tag{8}\]
where \(\left\langle\hat{A},\hat{B}\right\rangle=\frac{1}{2}\text{Tr}\left[\hat{\rho} (t)\{\hat{A},\hat{B}\}\right]\) is the symmetrized correlation function.
Besides \(\{\hat{A}_{k}\}\), we also identify a set of (generally time-dependent) observables \(\hat{\Lambda}_{\mu}\) (\(1\leq\mu\leq M\)) that satisfy
\[\frac{d\left\langle\hat{\Lambda}_{\mu}\right\rangle}{dt}-\left\langle\frac{d \hat{\Lambda}_{\mu}}{dt}\right\rangle=\left\langle\hat{\Lambda}_{\mu},\hat{L} \right\rangle=0 \tag{9}\]
for all \(\mu\). We call \(\hat{\Lambda}_{\mu}\) invariant observables. For example, time-independent operators that are conserved during the time evolution can be taken as \(\hat{\Lambda}_{\mu}\). Without loss of generality, we can apply the Gram-Schmidt orthonormalization and assume \(\left\langle\hat{\Lambda}_{\nu},\hat{\Lambda}_{\mu}\right\rangle=\delta_{\nu\mu}\). We also assume that \(\hat{A}_{1},\cdots,\hat{A}_{K},\hat{\Lambda}_{1},\cdots,\hat{\Lambda}_{M}\) are linearly independent.
As shown below, by distinguishing the invariant observables \(\{\hat{\Lambda}_{\mu}\}\) from the other observables \(\{\hat{A}_{k}\}\), we can obtain the concise inequality of \(\vec{B}\) where the role of invariant observables is evident.
### Quantum velocity limit
Under the above setup, we show the following QVL in the form of the matrix inequality as our main result:
\[\vec{B}\vec{B}^{\mathsf{T}}\preceq I_{Q}D, \tag{10}\]
where \(A\preceq B\) means that the operator \(B-A\) is positive semi-definite. Here, we define the (SLD) quantum Fisher information \(I_{Q}=\left\langle\hat{L},\hat{L}\right\rangle\) and a \(K\times K\) generalized correlation matrix \(D=D(\{A_{k}\};\{\Lambda_{\mu}\})\), whose matrix elements are given by
\[D(\{A_{k}\};\{\Lambda_{\mu}\})_{kl}\] \[=\left\langle\hat{A}_{k}-\sum_{\mu=1}^{M}\left\langle\hat{A}_{k}, \hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{A}_{l}-\sum_{\mu=1}^{M }\left\langle\hat{A}_{l},\hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu}\right\rangle\] \[=\left\langle\hat{A}_{k},\hat{A}_{l}\right\rangle-\sum_{\mu=1}^{M }\left\langle\hat{A}_{k},\hat{\Lambda}_{\mu}\right\rangle\left\langle\hat{ \Lambda}_{\mu},\hat{A}_{l}\right\rangle. \tag{11}\]
Note that \(D\) is generally positive semi-definite. In the following, we assume that \(D\) is also positive definite and has an inverse [76].
With this assumption, Eq. (10) leads to the following QVL in the form of the scalar inequality as our second main result:
\[\mathcal{K}(\{A_{k}\};\{\Lambda_{\mu}\}):=\vec{B}^{\mathsf{T}}D^{-1}\vec{B} \leq I_{Q}. \tag{12}\]
Importantly, for unitary quantum dynamics, we have \(I_{Q}\leq 4\Delta H^{2}\), where the equality condition is achieved for, e.g., pure states. Then we have the bound based on the energy fluctuation of the system, \(\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\leq I_{Q}\leq 4\Delta H^{2}\).
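As a concreteness check, the following minimal Python sketch (our own illustration, not part of the original analysis; the Hamiltonian, state, and observables are arbitrary choices) evaluates both sides of Eqs. (10) and (12) for a single qubit with \(M=1\) and \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), so that \(D\) reduces to the covariance matrix. It uses \(d\langle\hat{A}\rangle/dt=\langle i[\hat{H},\hat{A}]\rangle\) for unitary dynamics and \(I_{Q}=4\Delta H^{2}\) for a pure state.

```python
import numpy as np

# Pauli matrices and a generic pure qubit state (arbitrary illustrative choices)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1.0, 0.5 + 0.3j], dtype=complex)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

H = 0.8 * sx + 0.3 * sz               # an arbitrary Hamiltonian
obs = [sz, sy]                        # K = 2 observables; M = 1 with Lambda_1 = identity

def expect(X):
    return np.real(np.trace(rho @ X))

def sym_corr(X, Y):                   # <X, Y> = (1/2) Tr[rho {X, Y}]
    return np.real(0.5 * np.trace(rho @ (X @ Y + Y @ X)))

# velocity vector: B_k = d<A_k>/dt = <i[H, A_k]> for unitary dynamics
B = np.array([expect(1j * (H @ A - A @ H)) for A in obs])

# D reduces to the covariance matrix C for M = 1, Lambda_1 = identity
K = len(obs)
D = np.array([[sym_corr(obs[k], obs[l]) - expect(obs[k]) * expect(obs[l])
               for l in range(K)] for k in range(K)])

IQ = 4 * (sym_corr(H, H) - expect(H) ** 2)   # I_Q = 4 (Delta H)^2 for a pure state

# scalar QVL (12): B^T D^{-1} B <= I_Q
print("B^T D^-1 B =", B @ np.linalg.solve(D, B), " <= I_Q =", IQ)
# matrix QVL (10): I_Q D - B B^T should be positive semi-definite
print("eigenvalues of I_Q D - B B^T:", np.linalg.eigvalsh(IQ * D - np.outer(B, B)))
```

The chosen state is not an eigenstate of either observable, so \(D\) is invertible; for states where \(D\) becomes singular, only the matrix form (10) applies.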
The proofs of Eqs. (10) and (12) are given in Appendix A. There, we also discuss that Eqs. (10) and (12)
are optimal under the knowledge of invariant observables \(\{\hat{\Lambda}_{\mu}\}\) in the following sense: if we consider a matrix \(D^{f}\) whose matrix elements read
\[D^{f}_{kl}=\left\langle\hat{A}_{k}-\sum_{\mu=1}^{M}f_{k\mu}\hat{ \Lambda}_{\mu},\hat{A}_{l}-\sum_{\mu=1}^{M}f_{l\mu}\hat{\Lambda}_{\mu}\right\rangle \tag{13}\]
for a set of real variables \(\{f_{k\mu}\}\), we have
\[D\preceq D^{f}\quad\text{ and }\quad\vec{B}^{\mathsf{T}}(D^{f})^{-1}\vec{B}\leq \vec{B}^{\mathsf{T}}D^{-1}\vec{B}, \tag{14}\]
where the equality condition is given by \(f_{k\mu}=\langle\hat{A}_{k},\hat{\Lambda}_{\mu}\rangle\) for all \(k\) and \(\mu\).
We stress that our velocity limits are for the velocity vector of the expectation values of multiple observables and should not be confused with the standard speed limits for the state vector \(\vec{p}\) and the density matrix \(\hat{\rho}\)[24; 25]. As discussed throughout the manuscript, our velocity limits enable us to better evaluate the dynamics of an observable from the knowledge of other observables, unlike the previous speed limits.
### Connections with previous literature
Let us discuss connections and distinctions with previous literature. First of all, most of the previous information-theoretical speed limits consider the case of \(M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\). In contrast, as seen in the next subsection, our bounds become tighter if we include more invariant observables, if they exist.
Even when we consider the case of \(M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), our result has novel consequences. In this case, \(D\) reduces to the covariance matrix \(C\), whose matrix elements are given by
\[C_{kl}=\langle\hat{A}_{k},\hat{A}_{l}\rangle-\langle\hat{A}_{k} \rangle\left\langle\hat{A}_{l}\right\rangle. \tag{15}\]
Then, our bound can be regarded as the multi-dimensional quantum Cramer-Rao inequality generalized to arbitrary observables (i.e., not restricted to unbiased estimators [75]). The application of the general multi-dimensional quantum Cramer-Rao inequality to dynamics has seldom been discussed previously.
Note that the classical multi-dimensional Cramer-Rao bound for vector-valued observables has recently been used to understand classical stochastic dynamics [39; 77; 9]. However, our QVL (12) is more general in that it can be used even in quantum systems, where noncommutativity comes into play. If we assume that the off-diagonal matrix elements of \(\hat{\rho}\) and \(\hat{A}_{k}\) do not appear during dynamics, Eq. (12) reduces to the classical multi-dimensional Cramer-Rao bound for vector-valued observables, \(\vec{B}^{\mathsf{T}}C_{C}^{-1}\vec{B}\leq I_{C}\), where \(C_{C}=\left\langle A_{k}A_{l}\right\rangle-\left\langle A_{k}\right\rangle \left\langle A_{l}\right\rangle\) is the classical covariance matrix and \(I_{C}\) is the classical Fisher information. As detailed later, by considering Eq. (12) in quantum dynamics, we obtain many conceptually different consequences from the previous literature.
To obtain the QSL for a single observable obtained previously, we again take \(M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\) and \(K=1\). Then, we have
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq \Delta A\sqrt{I_{Q}}, \tag{16}\]
which is equivalent to the QSL obtained in Ref. [51] when \(\hat{A}\) is independent of time. For unitary dynamics, we have \(I_{Q}\leq 4\Delta H^{2}\), and thus
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq \mathcal{B}_{\text{MT}}:=2\Delta A\Delta H, \tag{17}\]
which is the Mandelstam-Tamm bound for an observable \(\hat{A}\). As mentioned in the next subsection, the QVL (12) for multiple observables becomes tighter than that in Eq. (16) for a single observable by increasing the number of observables \(K\) (as well as that of invariant observables \(M\)).
Finally, if we consider \(M\geq 2\), QVLs in (10) and (12) can be regarded as a hitherto unknown generalized version of the quantum Cramer-Rao bound, where conserved quantities (or, more generally, invariant observables) are taken into account.
### Better bounds from more observables
Our velocity limits in Eqs. (10) and (12) become tighter if we include more observables. In particular, when \(\{A_{k}\}\subset\{A^{\prime}_{k}\}\), we have
\[\mathcal{K}(\{A_{k}\};\{\Lambda_{\mu}\})\leq\mathcal{K}(\{A^{ \prime}_{k}\};\{\Lambda_{\mu}\})\leq I_{Q}. \tag{18}\]
Likewise, when \(\{\Lambda_{\mu}\}\subset\{\Lambda^{\prime}_{\mu}\}\), we have a matrix inequality
\[D(\{A_{k}\};\{\Lambda^{\prime}_{\mu}\})\preceq D(\{A_{k}\};\{ \Lambda_{\mu}\}) \tag{19}\]
and thus
\[\mathcal{K}(\{A_{k}\};\{\Lambda_{\mu}\})\leq\mathcal{K}(\{A_{k}\}; \{\Lambda^{\prime}_{\mu}\})\leq I_{Q}. \tag{20}\]
We skip the proof of (18) since it is essentially equivalent to Eqs. (7)-(12) in Ref. [9]. Instead of the classical covariance matrix treated there, we can perform a similar discussion for the quantum generalized correlation matrix \(D\).
To prove \(D(\{A_{k}\};\{\Lambda^{\prime}_{\mu}\})\preceq D(\{A_{k}\};\{\Lambda_{\mu}\})\), it is sufficient that we show the case with \(\{\Lambda^{\prime}_{\mu}\}_{\mu=1}^{M+1}=\{\Lambda_{\mu}\}_{\mu=1}^{M}\cup\{ \Lambda_{M+1}\}\). In this case, we have
\[\sum_{kk^{\prime}}a_{k}a_{k^{\prime}}\left[D(\{A_{k}\};\{\Lambda_{ \mu}\})_{kk^{\prime}}-D(\{A_{k}\};\{\Lambda^{\prime}_{\mu}\})_{kk^{\prime}}\right]\] \[=\left|\sum_{k}a_{k}\left\langle\hat{A}_{k},\hat{\Lambda}_{M+1} \right\rangle\right|^{2}\geq 0. \tag{21}\]
for any nonzero vector \(\{a_{k}\}\). Thus, we have \(D(\{A_{k}\};\{\Lambda^{\prime}_{\mu}\})\preceq D(\{A_{k}\};\{\Lambda_{\mu}\})\), which is known to lead to \(D(\{A_{k}\};\{\Lambda^{\prime}_{\mu}\})^{-1}\succeq D(\{A_{k}\};\{\Lambda_{ \mu}\})^{-1}\). Consequently, inequality (20) follows.
### Bound for finite time interval
While we mainly consider the instantaneous speed of the expectation value of multiple observables in this paper, we can also find the corresponding inequality for a finite time interval. For this purpose, we focus on a time interval \(t\in[0,T]\) and define the displacement vector of observables of our interest (see Fig. 1(a)),
\[\vec{\mathfrak{B}}=\left(\left\langle\hat{A}_{1}(T)\right\rangle-\left\langle \hat{A}_{1}(0)\right\rangle,\cdots,\left\langle\hat{A}_{K}(T)\right\rangle- \left\langle\hat{A}_{K}(0)\right\rangle\right). \tag{22}\]
In this case, we have a matrix inequality (see Appendix B for proof)
\[\vec{\mathfrak{B}}\vec{\mathfrak{B}}^{\mathsf{T}}\preceq T^{2}\overline{DI_{Q}}, \tag{23}\]
where \(\overline{X}:=\frac{1}{T}\int_{0}^{T}dtX(t)\) is the average over time. Assuming that \(D\) is positive definite for all \(t\in[0,T]\), we find that \(\overline{D}\) is also positive definite. Then, we have the following scalar inequality, which relates the displacement for multiple observables and the time interval:
\[\sqrt{\frac{\vec{\mathfrak{B}}^{\mathsf{T}}\,\overline{D}^{-1}\,\vec{\mathfrak{B}}}{\overline{I_{Q}}}}\leq T. \tag{24}\]
For the time-independent unitary dynamics, we further have
\[\frac{\sqrt{\vec{\mathfrak{B}}^{\mathsf{T}}\overline{D}^{-1}\vec{\mathfrak{B}} }}{2\Delta H}\leq T. \tag{25}\]
We will discuss some concrete applications of this result in Sec. V.2.
For \(K=M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), it reduces to
\[\frac{\left|\left\langle\hat{A}(T)\right\rangle-\left\langle\hat{A}(0)\right\rangle\right|}{2\sqrt{\overline{\Delta A^{2}}}\,\Delta H}\leq T. \tag{26}\]
Again, our inequality for multiple observables, such as Eq. (25), is better than the inequality for a single observable, such as Eq. (26).
## III Speed limit and invariant observables
In the following sections, we demonstrate that the QVL obtained in the previous section is not just a theoretical generalization but has various notable consequences. In Table 1, we summarize the applications in each section with the corresponding numbers of observables, \(K\) and \(M\). When \(K=1\), we can call the bound a speed limit (or QSL) instead of a velocity limit (or QVL).
### Speed limit with invariant observables and its meaning
As a first application, we discuss the improved QSL under invariant quantities, which are also related to symmetry and the conservation law of the system. Let us focus on \(K=1\) in Eq. (12), which leads to
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\sqrt{\left\langle \hat{A}^{2}\right\rangle-\sum_{\mu=1}^{M}\left\langle\hat{A},\hat{\Lambda}_{ \mu}\right\rangle^{2}}\sqrt{I_{Q}}. \tag{27}\]
This inequality can also be understood as follows. Since \(\left\langle\hat{\Lambda}_{\mu},\hat{L}\right\rangle=0\), we have \(d\left\langle\hat{A}\right\rangle/dt=\left\langle\hat{A}-\sum_{\mu}f_{\mu}\hat{\Lambda}_{\mu},\hat{L}\right\rangle\) for any \(f_{\mu}\in\mathbb{R}\). Applying the Cauchy-Schwarz inequality and minimizing \(\left\langle\hat{A}-\sum_{\mu}f_{\mu}\hat{\Lambda}_{\mu},\hat{A}-\sum_{\mu}f_{\mu}\hat{\Lambda}_{\mu}\right\rangle\) over \(f_{\mu}\) yield \(f_{\mu}=\left\langle\hat{A},\hat{\Lambda}_{\mu}\right\rangle\) and hence the above inequality. Geometrically, subtracting \(\sum_{\mu}\left\langle\hat{A},\hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu}\) from \(\hat{A}\) means that we focus only on the component of the observable projected onto the operator space orthogonal to \(\{\hat{\Lambda}_{\nu}\}\). Indeed, we have
\[\left\langle\hat{A}-\sum_{\mu}\left\langle\hat{A},\hat{\Lambda}_{\mu}\right\rangle \hat{\Lambda}_{\mu},\hat{\Lambda}_{\nu}\right\rangle=0 \tag{28}\]
for any \(\nu\) (see Fig. 2 for the case with \(M=1\)). By this orthogonal decomposition of \(\hat{A}\), we can optimize the Cauchy-Schwarz inequality under the knowledge of \(\{\hat{\Lambda}_{\nu}\}\) and tighten the speed limit.
Interestingly, the factor \(-\sum_{\mu=1}^{M}\left\langle\hat{A},\hat{\Lambda}_{\mu}\right\rangle^{2}\) in Eq. (27) also appears in the Mazur bound [2] if we consider \(\hat{\Lambda}_{\mu}\) that are conserved at all times. The Mazur bound is a bound on the long-time average of the temporal auto-correlation function near equilibrium. Our speed limit (27) (or, more generally, the velocity limit in Eq. (12)) demonstrates that the overlap of an observable with conserved quantities affects even the transient dynamics far from equilibrium. Note, however, that \(\hat{\Lambda}_{\mu}\) in our bounds need not be a conserved quantity at all times. Instead, it is sufficient that \(\hat{\Lambda}_{\mu}\), which may depend on \(t\) itself, satisfies Eq. (9) at each \(t\). We also note that similar orthogonal decompositions have recently been developed in different contexts, i.e., improving the bound on the estimation of parameters under some constraints in quantum metrology [78] and evaluating the effect of local conserved quantities on the eigenstate thermalization hypothesis [79].

Table 1: Summary of the applications of our velocity limits described in each of the sections, with the corresponding numbers of observables \(K\) and \(M\). Note that conventional quantum speed limits for an observable correspond to \(K=M=1\).

| Section | Application | \(K\) | \(M\) |
| --- | --- | --- | --- |
| III | Speed limits under invariant observables | 1 | Arbitrary |
| IV | Asymmetric upper and lower bound | 2 | Arbitrary |
| V | Tradeoff relation for uncorrelated observables | Arbitrary | Arbitrary |
| VI | Many-body systems | Arbitrary | Arbitrary |
| | (Conventional speed limits) | 1 | 1 |
Inequality (27) becomes better as we increase \(M\). Correspondingly, the set of observables that satisfy the equality condition becomes larger for larger \(M\). Indeed, let us consider a set \(\mathcal{A}_{M}^{\text{eq}}\) of observables satisfying the equality condition of (27) at a fixed time \(t\). Then, we have
\[\mathcal{A}_{M+M^{\prime}}^{\text{eq}}\supseteq\left\{\hat{A}+\sum_{\mu=M+1}^{M+M^{\prime}}f_{\mu}\hat{\Lambda}_{\mu}\,|\,\hat{A}\in\mathcal{A}_{M}^{\text{eq}},f_{\mu}\in\mathbb{R}\right\}\supset\mathcal{A}_{M}^{\text{eq}}, \tag{29}\]
where \(M^{\prime}\geq 1\). As seen from the following example, the equality condition for (27) is universally satisfied for a single spin-\(1/2\) system.
### Examples for unitary dynamics
#### iii.2.1 Single spin system
As a first example, let us consider a single spin system that undergoes unitary time evolution with a Hamiltonian \(\hat{H}=g\hat{\sigma}^{x}\). We can generally parametrize the initial state as \(\ket{\psi(0)}=\cos(\theta/2)\ket{\uparrow}+e^{i\phi}\sin(\theta/2)\ket{\downarrow}\) (\(0\leq\theta\leq\pi\), \(0\leq\phi<2\pi\)), where \(\ket{\uparrow}/\ket{\downarrow}\) is the eigenstate of \(\hat{\sigma}^{z}\) with the eigenvalue \(+1/-1\). We then have \(\langle\hat{\sigma}^{x}(t)\rangle=\sin\theta\cos\phi\), \(\langle\hat{\sigma}^{y}(t)\rangle=\cos 2gt\sin\theta\sin\phi-\sin 2gt\cos\theta\), \(\langle\hat{\sigma}^{z}(t)\rangle=\sin 2gt\sin\theta\sin\phi+\cos 2gt\cos\theta\), and \(\Delta H=g\sqrt{1-\sin^{2}\theta\cos^{2}\phi}\). Then, \(\hat{A}=c_{I}\hat{\mathbb{I}}+c_{y}\hat{\sigma}^{y}+c_{z}\hat{\sigma}^{z}\) \(\left(c_{I},c_{y},c_{z}\in\mathbb{R}\right)\) satisfies the equality condition of the previous speed limit (17) (\(M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\)) for arbitrary later times if \(\theta=0,\pi\). However, the equality condition for (17) is not satisfied for a more general observable \(\hat{A}=c_{I}\hat{\mathbb{I}}+c_{x}\hat{\sigma}^{x}+c_{y}\hat{\sigma}^{y}+c_{z}\hat{\sigma}^{z}\) with \(c_{x}\neq 0\), even when \(\theta=0,\pi\).
In contrast, since \(\hat{\sigma}^{x}\) is conserved, we can obtain a tighter bound using the above general method. Indeed, our inequality (27) with \(M=2\), \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), and \(\hat{\Lambda}_{2}=\frac{\hat{\sigma}^{x}-\langle\hat{\sigma}^{x}\rangle}{\sqrt{\langle(\hat{\sigma}^{x}-\langle\hat{\sigma}^{x}\rangle)^{2}\rangle}}\) still satisfies the equality condition, in accordance with Eq. (29). Notably, the equality condition holds for any \(\theta\) and \(\phi\) in this case.
In Fig. 3, we show one example that our bound \(\mathcal{B}_{X}=2\Delta H\sqrt{\langle\hat{A}^{2}\rangle-\sum_{\mu=1}^{2} \langle\hat{A},\hat{\Lambda}_{\mu}\rangle^{2}}\) discussed above satisfies the equality condition, while the Mandelstam-Tamm bound \(\mathcal{B}_{\text{MT}}\) in (17) does not.
More generally, we obtain the following striking fact
for a single spin-\(1/2\) system (or a two-level system): For _any_ Hamiltonians, observables, and initial pure states, our speed limit (27) with \(M=2,\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), and \(\hat{\Lambda}_{2}=\frac{\hat{H}-\langle\hat{H}\rangle}{\sqrt{\langle(\hat{H}- \langle\hat{H}\rangle)^{2}\rangle}}\) (corresponding to (32) given later) satisfies the equality condition (see Appendix C for a proof). Therefore, our inequalities attain the equality condition in much broader situations than the previous QSLs for an observable [10; 51]. Given that the attainability of the equality condition is a crucial subject on speed limits [17], our inequalities are regarded as providing qualitative improvement (not to mention quantitative improvement) in evaluating the speed of observables.
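This saturation is easy to check numerically. The following hedged sketch (our own illustration, with randomly drawn Hamiltonians, observables, and pure states) evaluates the speed \(|d\langle\hat{A}\rangle/dt|\) and the bound \(2\Delta A\Delta H\sqrt{1-\phi_{AH}^{2}}\) of Eq. (32); for a single spin-\(1/2\) they should coincide up to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def random_hermitian():
    c = rng.normal(size=4)
    return c[0] * np.eye(2) + c[1] * sx + c[2] * sy + c[3] * sz

for _ in range(5):
    H, A = random_hermitian(), random_hermitian()
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)                     # random pure state
    rho = np.outer(psi, psi.conj())

    ev = lambda X: np.real(np.trace(rho @ X))
    cov = lambda X, Y: np.real(0.5 * np.trace(rho @ (X @ Y + Y @ X))) - ev(X) * ev(Y)

    speed = abs(ev(1j * (H @ A - A @ H)))           # |d<A>/dt|
    dA, dH = np.sqrt(cov(A, A)), np.sqrt(cov(H, H))
    phi = cov(A, H) / (dA * dH)                     # Pearson correlation coefficient
    bound = 2 * dA * dH * np.sqrt(max(1 - phi ** 2, 0.0))   # right-hand side of Eq. (32)
    print(f"|d<A>/dt| = {speed:.8f}   bound (32) = {bound:.8f}")
```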
#### iii.2.2 Hamiltonian and symmetry as invariant observables
In the following, we show more complicated examples beyond the single spin system. As one of the primary examples of Eq. (27), let us consider a general system driven by a Hamiltonian \(\hat{H}(t)\), i.e., \(\mathcal{L}[\hat{\rho}(t)]=-i[\hat{H}(t),\hat{\rho}(t)]\). In this case, we can find some invariant operators, such as the power of \(\hat{H}(t)\) or \(\hat{\rho}(t)\), the projection to the eigenstates of \(\hat{H}(t)\) or \(\hat{\rho}(t)\), and nontrivial symmetries commuting with \(\hat{H}(t)\).
Let us first take the powers of the shifted Hamiltonian, \(\hat{\mathbb{I}},\delta\hat{H},\delta\hat{H}^{2},\cdots,\) where \(\delta\hat{H}:=\hat{H}-\langle\hat{H}\rangle\) (\(t\) is omitted for brevity). By the Gram-Schmidt orthonormalization, we have, e.g., \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), \(\hat{\Lambda}_{2}=\frac{\delta\hat{H}}{\sqrt{m_{2}}}\), and
\[\hat{\Lambda}_{3}=\frac{\delta\hat{H}^{2}-\frac{m_{3}}{m_{2}}\delta\hat{H}-m_{2}}{\sqrt{m_{4}-\frac{m_{3}^{2}}{m_{2}}-m_{2}^{2}}}, \tag{30}\]
where \(m_{z}=\langle\delta\hat{H}^{z}\rangle\) is the \(z\)th central moment of the Hamiltonian. Applying Eq. (27), we find the conventional speed limit \(\left|d\left\langle\hat{A}\right\rangle/dt\right|\leq\Delta A\sqrt{I_{Q}}\) for \(M=1\), as we have seen in the previous section. Furthermore, setting \(M=2\) leads to
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right| \leq \sqrt{\Delta A^{2}-\frac{\text{cov}(\hat{A},\hat{H})^{2}}{\Delta H ^{2}}}\sqrt{I_{Q}} \tag{31}\] \[= \Delta A\sqrt{I_{Q}}\sqrt{1-\phi_{AH}^{2}},\]
where \(\text{cov}(\hat{X},\hat{Y})=\left\langle\hat{X},\hat{Y}\right\rangle-\left\langle \hat{X}\right\rangle\left\langle\hat{Y}\right\rangle\) is the symmetrized covariance, \(\phi_{XY}=\frac{\text{cov}(\hat{X},\hat{Y})}{\Delta X\Delta Y}\) is the quantum version of the Pearson correlation coefficient, and we have used \(m_{2}=\Delta H^{2}\). Inequality (31) means that the knowledge about the correlation between the observable and the Hamiltonian improves the speed limit. We stress that this inequality ubiquitously holds for _any_ unitary quantum dynamics without further assumptions.
When \(\hat{\rho}\) is a pure state, inequality (31) becomes
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\mathcal{B}_{H}: =2\Delta A\Delta H\sqrt{1-\phi_{AH}^{2}}. \tag{32}\]
Note that this can also directly be obtained from the Schrodinger uncertainty relation [80], which states \(\left|\frac{[\hat{X},\hat{Y}]}{2i}\right|^{2}+|\text{cov}(\hat{X},\hat{Y})|^{ 2}\leq\Delta X^{2}\Delta Y^{2}\) for two Hermitian operators \(\hat{X}\) and \(\hat{Y}\). However, we stress that (31) generally goes beyond this uncertainty relation since \(I_{Q}\leq 4\Delta H^{2}\) for general mixed states. Furthermore, we can obtain tighter inequality even for pure states by including higher-order invariant observables, \(\hat{\Lambda}_{3},\cdots\).
Another example for which our inequality is relevant is where the system possesses some additional symmetry \(\hat{P}\), or conservation law, which commutes with the Hamiltonian, \([\hat{H},\hat{P}]=0\). If we choose \(M=2\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\) and \(\hat{\Lambda}_{2}=(\hat{P}-\left\langle\hat{P}\right\rangle)/\Delta P\), we obtain
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\Delta A\sqrt{I_ {Q}}\sqrt{1-\phi_{AP}^{2}}, \tag{33}\]
which becomes
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\mathcal{B}_{P}: =2\Delta A\Delta H\sqrt{1-\phi_{AP}^{2}} \tag{34}\]
for pure states.
To confirm the advantage of our inequalities in Eqs. (32) and (34), we consider a spin-\(1/2\) system on \(L=2\) lattice sites, whose Hamiltonian is given by
\[\hat{H}=J\hat{\sigma}_{1}^{z}\hat{\sigma}_{2}^{z}+J^{\prime}(\hat{\sigma}_{1} ^{x}\hat{\sigma}_{2}^{x}+\hat{\sigma}_{1}^{y}\hat{\sigma}_{2}^{y})+g(\hat{ \sigma}_{1}^{x}+\hat{\sigma}_{2}^{x})+h(\hat{\sigma}_{1}^{z}+\hat{\sigma}_{2}^{ z}). \tag{35}\]
This Hamiltonian respects a permutation symmetry given by
\[\hat{P}=\frac{1}{2}\sum_{\alpha=x,y,z}\hat{\sigma}_{1}^{\alpha}\hat{\sigma}_{2} ^{\alpha}+\frac{1}{2}\hat{\mathbb{I}}. \tag{36}\]
Figure 4 shows the speed of \(\hat{A}=\hat{\sigma}_{1}^{z}\) and the speed limits. Assuming an initial pure state, we compare the speed with \(\mathcal{B}_{\text{MT}}\) in Eq. (17), \(\mathcal{B}_{H}\) in Eq. (32), and \(\mathcal{B}_{P}\) in Eq. (34). We can see that \(\mathcal{B}_{H}\) and \(\mathcal{B}_{P}\) are better than \(\mathcal{B}_{\text{MT}}\), and that no hierarchy exists between \(\mathcal{B}_{H}\) and \(\mathcal{B}_{P}\).
Note that, since
\[\mathcal{B}_{P}=2\Delta H\sqrt{1-\left\langle\hat{\sigma}_{1}^{z}\right\rangle^{2}-\frac{\left(\frac{1}{2}\left(\left\langle\hat{\sigma}_{1}^{z}\right\rangle+\left\langle\hat{\sigma}_{2}^{z}\right\rangle\right)-\left\langle\hat{\sigma}_{1}^{z}\right\rangle\left\langle\hat{P}\right\rangle\right)^{2}}{\Delta P^{2}}}, \tag{37}\]
it is obtained only by the measurement of the single-site expectation values at time \(t\) and the conserved quantities \(\Delta H,\left\langle\hat{P}\right\rangle\), and \(\Delta P\), which are obtained from the initial state. This is especially relevant when we do not have enough resolution to directly measure \(d\left\langle\hat{\sigma}_{1}^{z}\right\rangle/dt\) because of, e.g., the lack of time resolution or impossibility of taking the time derivative due to temporal noise on \(\hat{\sigma}_{1}^{z}\).
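A minimal sketch of this comparison is given below (the coupling constants, the initial product state, and the sampled times are illustrative choices of ours, not the parameters behind Fig. 4). It builds the Hamiltonian (35) and the permutation operator (36), evolves a pure state exactly, and evaluates \(|d\langle\hat{\sigma}_{1}^{z}\rangle/dt|\) together with \(\mathcal{B}_{\text{MT}}\), \(\mathcal{B}_{H}\), and \(\mathcal{B}_{P}\).

```python
import numpy as np

pauli = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}
I2 = np.eye(2, dtype=complex)
op = lambda a, site: np.kron(a, I2) if site == 0 else np.kron(I2, a)

# Hamiltonian (35) and permutation operator (36); couplings are illustrative choices
J, Jp, g, h = 1.0, 0.5, 0.7, 0.3
H = (J * op(pauli['z'], 0) @ op(pauli['z'], 1)
     + Jp * (op(pauli['x'], 0) @ op(pauli['x'], 1) + op(pauli['y'], 0) @ op(pauli['y'], 1))
     + g * (op(pauli['x'], 0) + op(pauli['x'], 1))
     + h * (op(pauli['z'], 0) + op(pauli['z'], 1)))
P = 0.5 * sum(op(pauli[a], 0) @ op(pauli[a], 1) for a in 'xyz') + 0.5 * np.eye(4)

psi0 = np.kron([1.0, 1.0], [1.0, -0.5]).astype(complex)   # a product state (illustrative)
psi0 /= np.linalg.norm(psi0)
E, V = np.linalg.eigh(H)

A = op(pauli['z'], 0)                                      # observable sigma_1^z
for t in (0.0, 0.5, 1.0):
    psi = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))  # exact unitary evolution
    rho = np.outer(psi, psi.conj())
    ev = lambda X: np.real(np.trace(rho @ X))
    cov = lambda X, Y: np.real(0.5 * np.trace(rho @ (X @ Y + Y @ X))) - ev(X) * ev(Y)

    speed = abs(ev(1j * (H @ A - A @ H)))
    dA, dH, dP = np.sqrt(cov(A, A)), np.sqrt(cov(H, H)), np.sqrt(cov(P, P))
    B_MT = 2 * dA * dH
    B_H = 2 * dA * dH * np.sqrt(max(1 - (cov(A, H) / (dA * dH)) ** 2, 0.0))
    B_P = 2 * dA * dH * np.sqrt(max(1 - (cov(A, P) / (dA * dP)) ** 2, 0.0))
    print(f"t={t:.1f}: |d<A>/dt|={speed:.4f}  B_MT={B_MT:.4f}  B_H={B_H:.4f}  B_P={B_P:.4f}")
```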
Another advantage of Eqs. (32) and (34) is that when we can measure \(\left|d\left\langle\hat{A}\right\rangle/dt\right|\), \(\Delta A\), and the conserved quantities, we have upper bounds of the covariances, i.e.,
\[\left|\mathrm{cov}(\hat{A},\hat{H})\right|\leq\sqrt{\Delta A^{2} \Delta H^{2}-\frac{1}{4}\left(\frac{d\left\langle\hat{A}\right\rangle}{dt} \right)^{2}},\] \[\left|\mathrm{cov}(\hat{A},\hat{P})\right|\leq\Delta P\sqrt{ \Delta A^{2}-\frac{1}{4\Delta H^{2}}\left(\frac{d\left\langle\hat{A}\right\rangle }{dt}\right)^{2}}, \tag{38}\]
which may be difficult to measure directly in general. Note that these inequalities are strictly tighter than the usual Cauchy-Schwarz inequality, e.g., \(\left|\mathrm{cov}(\hat{A},\hat{P})\right|\leq\Delta P\Delta A\).
#### iii.2.3 Tighter bounds due to the purity conservation
As yet another interesting example of (27), we consider the conservation law of any power of the density matrix \(\hat{\rho},\hat{\rho}^{2},\hat{\rho}^{3},\cdots\) for the unitary dynamics. This purity conservation is also regarded as the conservation of the projection to the basis of \(\hat{\rho}\). We can thus take
\[\hat{\Lambda}_{\mu}=\frac{\left|\rho_{\mu}\right\rangle\left\langle\rho_{\mu} \right|}{\sqrt{\rho_{\mu}}} \tag{39}\]
for a general mixed state \(\hat{\rho}\), which is diagonalized as \(\hat{\rho}=\sum_{\mu}\rho_{\mu}\left|\rho_{\mu}\right\rangle\left\langle\rho_ {\mu}\right|.\) In this case, (27) leads to
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\sqrt{\sum_{\mu} \rho_{\mu}\Delta A_{\mu}^{2}}\sqrt{I_{Q}}=\sqrt{\sum_{\mu\neq\nu}\rho_{\mu} \left|\left\langle\rho_{\mu}\right|\hat{A}|\rho_{\nu}\right\rangle|^{2}}\sqrt {I_{Q}}, \tag{40}\]
where
\[\Delta A_{\mu}^{2}=\left\langle\rho_{\mu}|\hat{A}^{2}|\rho_{\mu}\right\rangle- \left\langle\rho_{\mu}|\hat{A}|\rho_{\mu}\right\rangle^{2} \tag{41}\]
is the fluctuation for each basis \(|\rho_{\mu}\rangle\). Note that this bound is tighter than the conventional bound \(\left|d\left\langle\hat{A}\right\rangle/dt\right|\leq\Delta A\sqrt{I_{Q}}\), since \(\left(\sum_{\mu}\rho_{\mu}\left\langle\rho_{\mu}|\hat{A}|\rho_{\mu}\right\rangle \right)^{2}\leq\sum_{\mu}\rho_{\mu}\left\langle\rho_{\mu}|\hat{A}|\rho_{\mu} \right\rangle^{2}\) and thus
\[\sum_{\mu}\rho_{\mu}\Delta A_{\mu}^{2}\leq\Delta A^{2}. \tag{42}\]
We stress that this inequality ubiquitously holds for _any_ unitary quantum dynamics without further assumptions, as in (31).
As an elementary example where our bound is advantageous, let us consider the state whose diagonal basis coincides with that of \(\hat{A}=\sum_{\mu}a_{\mu}\left|a_{\mu}\right\rangle\left\langle a_{\mu}\right|\), i.e., \(\hat{\rho}=\sum_{\mu}\rho_{\mu}\left|a_{\mu}\right\rangle\left\langle a_{\mu}\right|\). Then, we have \(\left|d\left\langle\hat{A}\right\rangle/dt\right|=\left|\mathrm{Tr}[\left[\hat{\rho},\hat{A}\right]\hat{H}]\right|=0\). In this case, the right-hand side of Eq. (40) vanishes, as desired (i.e., the equality condition is satisfied). In contrast, the previous bound \(\Delta A\sqrt{I_{Q}}\) does not vanish in general.
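As an independent numerical illustration (the mixed state, Hamiltonian, and observable below are arbitrary choices of ours), one can compute the SLD quantum Fisher information from the spectral decomposition of \(\hat{\rho}\) and compare the actual speed with the bound (40) and with the conventional bound \(\Delta A\sqrt{I_{Q}}\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# arbitrary illustrative choices of a mixed state, Hamiltonian, and observable
rho = np.array([[0.78, 0.07], [0.07, 0.22]], dtype=complex)
H = 0.8 * sx + 0.3 * sz
A = 0.6 * sz + 0.4 * sy

drho = -1j * (H @ rho - rho @ H)                 # unitary equation of motion
p, U = np.linalg.eigh(rho)                        # spectral decomposition of rho
drho_eig = U.conj().T @ drho @ U
A_eig = U.conj().T @ A @ U

# SLD quantum Fisher information: I_Q = sum_{mu,nu} 2 |<mu|drho/dt|nu>|^2 / (p_mu + p_nu)
IQ = sum(2 * abs(drho_eig[m, n]) ** 2 / (p[m] + p[n])
         for m in range(2) for n in range(2) if p[m] + p[n] > 1e-12)

ev = lambda X: np.real(np.trace(rho @ X))
speed = abs(np.real(np.trace(drho @ A)))          # |d<A>/dt|
varA = np.real(np.trace(rho @ A @ A)) - ev(A) ** 2

# Eq. (40): sum_{mu != nu} p_mu |<mu|A|nu>|^2, which equals sum_mu p_mu (Delta A_mu)^2
proj_var = sum(p[m] * abs(A_eig[m, n]) ** 2
               for m in range(2) for n in range(2) if m != n)

print(f"|d<A>/dt|          = {speed:.6f}")
print(f"bound (40)         = {np.sqrt(proj_var * IQ):.6f}")
print(f"conventional bound = {np.sqrt(varA * IQ):.6f}")
```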
Finally, we note that other choices of invariant observables can also be considered. In Appendix D, we show several other applications of Eq. (27).
## IV Asymmetric upper and lower bound
Inequality (12) provides a general relation concerning multiple observables. As an application, let us take \(K=2\) observables. In this case, we explicitly have
\[\frac{D_{22}\left|\frac{d\left\langle\hat{A}_{1}\right\rangle}{ dt}\right|^{2}-2D_{12}\frac{d\left\langle\hat{A}_{1}\right\rangle}{dt}\frac{d \left\langle\hat{A}_{2}\right\rangle}{dt}+D_{11}\left|\frac{d\left\langle \hat{A}_{2}\right\rangle}{dt}\right|^{2}}{D_{11}D_{22}-|D_{12}|^{2}}\leq I_{Q}, \tag{43}\]
where the right-hand side is upper bounded by \(4\Delta H^{2}\) for the unitary evolution. Note that, when we take \(M=1\) and \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), we have \(D=C\), and Eq. (43) gives the quantum generalization of the result presented in Ref. [39].
After straightforward calculations from this inequality, we find the nontrivial asymmetric lower and upper bound for the velocity,
\[\chi V_{2}-\sqrt{(1-\chi^{2})(I_{Q}-V_{2}^{2})} \leq V_{1}\] \[\leq\chi V_{2}+\sqrt{(1-\chi^{2})(I_{Q}-V_{2}^{2})}. \tag{44}\]
Here, we have introduced the normalized velocity
\[V_{k}:=\frac{1}{\sqrt{D_{kk}}}\frac{d\left\langle\hat{A}_{k}\right\rangle}{dt} \tag{45}\]
and the generalized Pearson correlation coefficient
\[\chi:=\frac{D_{12}}{\sqrt{D_{11}D_{22}}}\quad(|\chi|\leq 1), \tag{46}\]
which reduces to \(\phi_{A_{1}A_{2}}\) for \(D=C\). Note that \(V_{2}^{2}\leq I_{Q}\) is ensured because of the single-observable speed limit for \(\hat{A}_{2}\). We also note that \(\left|V_{1}-\chi V_{2}\right|\) is upper bounded by \(\sqrt{(1-\chi^{2})(4\Delta H^{2}-V_{2}^{2})}\) (\(\leq 2\Delta H\sqrt{1-\chi^{2}}\)) for the unitary evolution.
The inequality (43) provides both nontrivial upper and lower bounds for the velocity of \(\hat{A}_{1}\) (i.e., \(V_{1}\)), given the correlation \(\chi\) and the velocity \(V_{2}\) of the other observable \(\hat{A}_{2}\). Such asymmetric bounds for the velocity have seldom been obtained in previous literature. In particular, when \(\chi V_{2}-\sqrt{(1-\chi^{2})(I_{Q}-V_{2}^{2})}>0\), our inequality indicates the nontrivial lower bound of the speed \(\left|V_{1}\right|\), while many previous speed limits only indicate the upper bounds.
Furthermore, when the single-observable speed limit for \(\hat{A}_{2}\) becomes tighter (i.e., \(\left|V_{2}^{2}-I_{Q}\right|\) is small), (43) also becomes tight and \(V_{1}\) becomes close to \(\chi V_{2}\). Interestingly, if we know that an observable \(\hat{A}_{2}\) satisfies the equality condition for the single-observable speed limit, i.e., \(V_{2}=\sqrt{I_{Q}}\), the speed of another arbitrary observable \(\hat{A}_{1}\) is precisely determined as
\[\frac{d\left\langle\hat{A}_{1}\right\rangle}{dt}=\frac{D_{12}}{D_{22}}\frac{d \left\langle\hat{A}_{2}\right\rangle}{dt}. \tag{47}\]
In the following examples, we take \(M=1\) and \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), which leads to \(V_{k}=(d\left\langle\hat{A}_{k}\right\rangle/dt)/\Delta A_{k}\) and \(\chi=\phi_{A_{1}A_{2}}=\mathrm{cov}(\hat{A}_{1},\hat{A}_{2})/(\Delta A_{1}\Delta A_{2})\). As the first example, let us consider the single spin system as in Sec. III.2.1. For simplicity, we here take \(\hat{A}_{2}=\hat{\sigma}^{y}\). In this case, when \(\theta=0\) or \(\pi\), Eq. (47) holds. For example, if we take \(\hat{A}_{1}=\hat{\sigma}^{z}\), Eq. (47) reduces to \(\frac{d\left\langle\hat{\sigma}^{z}\right\rangle}{dt}=\frac{-\left\langle\hat{\sigma}^{z}\right\rangle\left\langle\hat{\sigma}^{y}\right\rangle}{1-\left\langle\hat{\sigma}^{y}\right\rangle^{2}}\frac{d\left\langle\hat{\sigma}^{y}\right\rangle}{dt}\), which indeed holds true. Even when \(\theta\) is not exactly \(0\) or \(\pi\), \(I_{Q}-V_{2}^{2}\) becomes small when \(\theta\) is close to those values, and inequality (44) provides a good evaluation for other observables. Figure 5 demonstrates this fact: the upper and lower bounds (\(\mathcal{B}_{\mathrm{L}}\) and \(\mathcal{B}_{\mathrm{U}}\), respectively) on \(\frac{d\left\langle\hat{\sigma}^{z}\right\rangle}{dt}\) indicated by (44) are tighter than the standard single-observable speed limit for \(\hat{\sigma}^{z}\), i.e., \(-2\Delta H\Delta\sigma^{z}\leq\frac{d\left\langle\hat{\sigma}^{z}\right\rangle}{dt}\leq 2\Delta H\Delta\sigma^{z}\).
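A short sketch for this single-spin case (the values of \(g\), \(\theta\), \(\phi\), and \(t\) below are illustrative choices of ours) evaluates the asymmetric bounds of Eq. (44) for \(\hat{A}_{1}=\hat{\sigma}^{z}\) and \(\hat{A}_{2}=\hat{\sigma}^{y}\) and compares them with the symmetric single-observable bound.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g, theta, phi0, t = 1.0, 0.2, 0.4, 0.7          # illustrative parameters
H = g * sx
E, W = np.linalg.eigh(H)
psi0 = np.array([np.cos(theta / 2), np.exp(1j * phi0) * np.sin(theta / 2)])
psi = W @ (np.exp(-1j * E * t) * (W.conj().T @ psi0))
rho = np.outer(psi, psi.conj())

ev = lambda X: np.real(np.trace(rho @ X))
cov = lambda X, Y: np.real(0.5 * np.trace(rho @ (X @ Y + Y @ X))) - ev(X) * ev(Y)
vel = lambda X: ev(1j * (H @ X - X @ H))        # d<X>/dt for unitary dynamics

A1, A2 = sz, sy
IQ = 4 * cov(H, H)                               # pure state
V1 = vel(A1) / np.sqrt(cov(A1, A1))              # normalized velocities, Eq. (45)
V2 = vel(A2) / np.sqrt(cov(A2, A2))
chi = cov(A1, A2) / np.sqrt(cov(A1, A1) * cov(A2, A2))   # Eq. (46) with D = C

root = np.sqrt(max((1 - chi ** 2) * (IQ - V2 ** 2), 0.0))
print(f"{chi * V2 - root:.4f} <= V1 = {V1:.4f} <= {chi * V2 + root:.4f}   (Eq. (44))")
print(f"symmetric single-observable bound: |V1| <= sqrt(I_Q) = {np.sqrt(IQ):.4f}")
```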
To demonstrate our bound in a more complicated setup, we next consider two coupled spins whose Hamiltonian is given by Eq. (35). As observables, we choose \(\hat{A}_{1}=\hat{\sigma}_{1}^{z}\) and \(\hat{A}_{2}=\hat{\sigma}_{2}^{x}\). Figure 6 shows the lower and upper bounds \(\mathcal{B}_{\mathrm{L}}\) and \(\mathcal{B}_{\mathrm{U}}\) of \(d\left\langle\hat{A}_{1}\right\rangle/dt\) obtained from Eq. (44), i.e., \(\mathcal{B}_{\mathrm{L}}\leq d\left\langle\hat{A}_{1}\right\rangle/dt\leq \mathcal{B}_{\mathrm{U}}\), the Mandelstam-Tamm bound \(-\mathcal{B}_{\mathrm{MT}}\leq d\left\langle\hat{A}_{1}\right\rangle/dt\leq \mathcal{B}_{\mathrm{MT}}=2\Delta A\Delta H\), and the bound based on the Hamiltonian conservation in Eq. (32), \(-\mathcal{B}_{H}\leq d\left\langle\hat{A}_{1}\right\rangle/dt\leq\mathcal{B}_{H}=2\Delta A\Delta H\sqrt{1-\phi_{AH}^{2}}\). The bounds \(\mathcal{B}_{\mathrm{L}}\) and \(\mathcal{B}_{\mathrm{U}}\) can be tighter than the other bounds, while there is no absolute hierarchy between \(\mathcal{B}_{\mathrm{L}}/\mathcal{B}_{\mathrm{U}}\) and \(\mathcal{B}_{H}\).
## V Tradeoff relation for uncorrelated observables
### Additivity principle
We next show a new non-equilibrium tradeoff relation between the speeds of uncorrelated observables. We assume that \(K\) observables of our interest are uncorrelated with one another in the generalized sense that
\[D_{kl}=0\;(k\neq l). \tag{48}\]
Then, using Eq. (12), we have a stronger inequality than the one for a single observable,
\[\sum_{k=1}^{K}\frac{1}{D_{kk}}\left|\frac{d\left\langle\hat{A}_{k}\right\rangle}{ dt}\right|^{2}=\sum_{k=1}^{K}V_{k}^{2}\leq I_{Q}. \tag{49}\]
Namely, we have a simple additivity principle that the sum of the squares of the (normalized) speeds becomes the lower bound of the quantum Fisher information.
If Eq. (49) holds true during the finite time interval \(t\in[0,T]\) of our interest, we can discuss the additivity principle for the displacement \(\vec{\mathfrak{B}}\) by applying Eq. (24). Indeed, we obtain
\[\sum_{k}\frac{\left|\left\langle\hat{A}_{k}(T)\right\rangle-\left\langle\hat{A }_{k}(0)\right\rangle\right|^{2}}{\overline{D_{kk}I_{Q}}}\leq T^{2}. \tag{50}\]
As the first application of Eq. (49), if we assume \(M=1\) and \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), we have
\[\sum_{k=1}^{K}\frac{1}{\Delta A_{k}^{2}}\left|\frac{d\left\langle\hat{A}_{k} \right\rangle}{dt}\right|^{2}\leq I_{Q}, \tag{51}\]
given that \(C_{kl}=\text{cov}(\hat{A}_{k},\hat{A}_{l})=0\) for \(k\neq l\). Defining the characteristic speed of \(\hat{A}_{k}\) as \(v_{k}=\left|V_{k}\right|=\left|d\left\langle\hat{A}_{k}\right\rangle/dt\right|/\Delta A_{k}\) [40], this can simply be written as \(\sum_{k}v_{k}^{2}\leq I_{Q}\), which is regarded as a quantum extension of the result obtained in Ref. [81].
### Additivity for anti-commuting observables
The uncorrelated structure, which is necessary for the above additivity principle, naturally appears under certain situations. As a notable case, let us choose \(M=0\) and consider observables for which \(\left\langle\hat{A}_{k},\hat{A}_{l}\right\rangle=0\) for \(k\neq l\). This holds true for any state \(\hat{\rho}\) when the observables anti-commute, i.e., \(\{\hat{A}_{k},\hat{A}_{l}\}=0\) for \(k\neq l\). In this case, we have the tradeoff inequality
\[\sum_{k=1}^{K}\frac{1}{\left\langle\hat{A}_{k}^{2}\right\rangle}\left|\frac{d \left\langle\hat{A}_{k}\right\rangle}{dt}\right|^{2}\leq I_{Q} \tag{52}\]
for _any_ state and dynamics. We also find its finite-time version
\[\sum_{k}\frac{\left|\left\langle\hat{A}_{k}(T)\right\rangle-\left\langle\hat{ A}_{k}(0)\right\rangle\right|^{2}}{\left\langle\hat{A}_{k}^{2}\right\rangle I _{Q}}\leq T^{2}, \tag{53}\]
since the anti-commutation condition holds at any time.
These non-equilibrium tradeoff relations, stating that two (or more than two) anti-commuting observables cannot have large speeds simultaneously, are reminiscent of the standard uncertainty relation that two non-commuting observables cannot have small variances simultaneously. Therefore, our inequalities offer a fundamental and useful principle in non-equilibrium dynamics caused by the nontrivial commutativity property [82]. Note that, applying this to unitary quantum dynamics, our result leads to
\[\sum_{k}\frac{\left\langle i[\hat{X}_{k},\hat{Y}]\right\rangle^{2}}{\left\langle\hat{X}_{k}^{2}\right\rangle}\leq 4\Delta Y^{2}, \tag{54}\]
for the Hamiltonian \(\hat{Y}\) generating the unitary dynamics and any set of operators satisfying the anti-commutation relation \(\{\hat{X}_{k},\hat{X}_{l}\}=2\delta_{kl}\hat{X}_{k}^{2}\).
#### v.2.1 Majorana fermions
The anti-commutation condition is profoundly connected to quantum particle statistics. For example, let us first consider a system composed of multiple Majorana fermions labeled by \(k\). Such Majorana fermion operators \(\{\hat{\gamma}_{k}\}\) satisfy \(\hat{\gamma}_{k}^{\dagger}=\hat{\gamma}_{k}\) and \(\{\hat{\gamma}_{k},\hat{\gamma}_{l}\}=2\delta_{kl}\). Then, we readily have
\[\sum_{k=1}^{K}\left|\frac{d\left\langle\hat{\gamma}_{k}\right\rangle}{dt} \right|^{2}\leq I_{Q} \tag{55}\]
and
\[\sum_{k=1}^{K}\left|\left\langle\hat{\gamma}_{k}(T)\right\rangle-\left\langle \hat{\gamma}_{k}(0)\right\rangle\right|^{2}\leq T^{2}\overline{I_{Q}}, \tag{56}\]
for _any_ dynamics of Majorana fermions. If we consider time-independent unitary dynamics, such as the dynamics described by the Sachdev-Ye-Kitaev model [83], we further have
\[\sqrt{\sum_{k=1}^{K}\left|\left\langle\hat{\gamma}_{k}(T)\right\rangle-\left \langle\hat{\gamma}_{k}(0)\right\rangle\right|^{2}}\leq T\sqrt{I_{Q}}\leq 2T \Delta H. \tag{57}\]
#### v.2.2 Anti-commuting Pauli strings
As another notable example, we next consider any spin-1/2 system and Pauli strings \(\{\hat{\Sigma}_{q}\}\), where \(\hat{\Sigma}_{q}=\prod_{l}\hat{\sigma}_{l}^{\alpha_{l}}\) with \(\alpha_{l}=0,x,y,z\) (\(\hat{\sigma}_{l}^{0}=\hat{\mathbb{I}}_{l}\)). Taking a set \(\mathcal{P}_{A}\) (\(\left|\mathcal{P}_{A}\right|=K\)) of mutually anti-commuting Pauli strings, we find
\[\sum_{\hat{\Sigma}_{q}\in\mathcal{P}_{A}}^{K}\left|\frac{d\left\langle\hat{ \Sigma}_{q}\right\rangle}{dt}\right|^{2}\leq I_{Q} \tag{58}\]
for _arbitrary_ dynamics.
As a first application of inequality (58), let us consider the unitary dynamics in the single spin-1/2 system (or a two-level system). Notably, in this case, for _any_ Hamiltonians, observables, and initial pure states, our velocity limit (58) with \(\mathcal{P}_{A}=\{\hat{\sigma}^{x},\hat{\sigma}^{y},\hat{\sigma}^{z}\}\) satisfies the equality condition. That is,
\[\left|\frac{d\left\langle\hat{\sigma}^{x}\right\rangle}{dt}\right|^{2}+\left| \frac{d\left\langle\hat{\sigma}^{y}\right\rangle}{dt}\right|^{2}+\left|\frac{d \left\langle\hat{\sigma}^{z}\right\rangle}{dt}\right|^{2}=4\Delta H^{2}=I_{Q}, \tag{59}\]
always holds true (see Appendix E).
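The identity (59) is easy to verify numerically; the sketch below (our own illustration, with a randomly drawn Hamiltonian and pure state) sums the squared speeds of the three Pauli operators and compares the result with \(4\Delta H^{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

c = rng.normal(size=3)
H = c[0] * sx + c[1] * sy + c[2] * sz            # random single-spin Hamiltonian
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)                      # random pure state
rho = np.outer(psi, psi.conj())

ev = lambda X: np.real(np.trace(rho @ X))
speed2 = sum(ev(1j * (H @ s - s @ H)) ** 2 for s in (sx, sy, sz))
varH = np.real(np.trace(rho @ H @ H)) - ev(H) ** 2
print(f"sum of squared speeds = {speed2:.8f}   4 (Delta H)^2 = {4 * varH:.8f}")
```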
To demonstrate our inequality for a more complicated setup, we next consider a unitary dynamics for a pure state in the two-spin system whose Hamiltonian is given by Eq. (35). We especially consider \(\left\langle\hat{\sigma}_{1}^{x}\right\rangle\) as an observable of our interest and compare the Mandelstam-Tamm bound \(\mathcal{B}_{\text{MT}}\) in Eq. (17), \(\mathcal{B}_{H}\) in Eq. (32), and a new bound obtained from Eq. (58) with \(\mathcal{P}_{A}=\{\hat{\sigma}_{1}^{x},\hat{\sigma}_{1}^{y},\hat{\sigma}_{1}^ {z}\}\),
\[\left|\frac{d\left\langle\hat{\sigma}_{1}^{x}\right\rangle}{dt} \right|\leq\mathcal{B}_{\text{Pauli}}=\sqrt{4\Delta H^{2}-\left|\frac{d\left \langle\hat{\sigma}_{1}^{y}\right\rangle}{dt}\right|^{2}-\left|\frac{d\left\langle \hat{\sigma}_{1}^{z}\right\rangle}{dt}\right|^{2}} \tag{60}\]
Figure 7 shows that \(\mathcal{B}_{\text{Pauli}}\) can provide a better bound than \(\mathcal{B}_{\text{MT}}\) and \(\mathcal{B}_{H}\).
Finally, we note that as in Eqs. (56) and (57), we have a finite-time version of (58), i.e.,
\[\sum_{\hat{\Sigma}_{q}\in\mathcal{P}_{A}}^{K}\left|\left\langle\hat{\Sigma}_{ q}(T)\right\rangle-\left\langle\hat{\Sigma}_{q}(0)\right\rangle\right|^{2}\leq T ^{2}\overline{I_{Q}} \tag{61}\]
for general dynamics and
\[\sqrt{\sum_{\hat{\Sigma}_{q}\in\mathcal{P}_{A}}^{K}\left|\left\langle\hat{ \Sigma}_{q}(T)\right\rangle-\left\langle\hat{\Sigma}_{q}(0)\right\rangle \right|^{2}}\leq T\sqrt{I_{Q}}\leq 2T\Delta H \tag{62}\]
for time-independent unitary dynamics.
#### v.2.3 Comparison with the previous speed limit based on the metric
Interestingly, the inequalities (57) and (62) can be verified from the information at the initial and final times alone. Furthermore, if we include a sufficient number of anti-commuting observables, the inequalities become better than conventional speed limits based on the metric of the quantum state. To see this, we recall the general inequality for a single observable
\[\left|\left\langle\hat{A}(T)\right\rangle-\left\langle\hat{A}(0)\right\rangle\right|^{2}\leq\frac{\Delta_{A}^{2}}{4}\|\hat{\rho}(T)-\hat{\rho}(0)\|_{1}^{2}\leq\Delta_{A}^{2}(1-F(T)), \tag{63}\]
where \(\Delta_{A}=\max_{v}\left\langle v|\hat{A}|v\right\rangle-\min_{v}\left\langle v|\hat{A}|v\right\rangle\) is the spectral width of \(\hat{A}\), \(\|X\|_{1}=\text{Tr}\sqrt{X^{\dagger}X}\) is the trace norm, and \(F(T)=\left(\text{Tr}\left[\sqrt{\sqrt{\hat{\rho}(0)}\hat{\rho}(T)\sqrt{\hat{\rho}(0)}}\right]\right)^{2}\) is the mixed-state fidelity. Here, we have used the Fuchs-van de Graaf inequality [84]. For time-independent unitary dynamics, we have a well-known speed limit based on the metric of the state [24], \(\arccos(\sqrt{F(T)})\leq T\Delta H\) for \(0\leq T\Delta H\leq\frac{\pi}{2}\). We then have
\[\left|\left\langle\hat{A}(T)\right\rangle-\left\langle\hat{A}(0)\right\rangle \right|^{2}\leq\Delta_{A}^{2}\sin^{2}(T\Delta H). \tag{64}\]
Using this inequality for \(K\) times, we have
\[\sqrt{\sum_{k=1}^{K}\left|\left\langle\hat{A}_{k}(T)\right\rangle-\left\langle \hat{A}_{k}(0)\right\rangle\right|^{2}}\leq\sqrt{\sum_{k=1}^{K}\Delta_{A_{k}}^ {2}}|\sin(T\Delta H)| \tag{65}\]
for \(0\leq T\Delta H\leq\frac{\pi}{2}\).
While Eq. (65) holds for general observables, our bounds for anti-commuting observables, such as (57) and (62), can be tighter for large \(K\). For example, if we consider anti-commuting Pauli strings, we have \(\Delta_{\Sigma_{q}}=2\), and Eq. (65) becomes
\[\sqrt{\sum_{\hat{\Sigma}_{q}\in\mathcal{P}_{A}}^{K}\left|\left\langle\hat{ \Sigma}_{q}(T)\right\rangle-\left\langle\hat{\Sigma}_{q}(0)\right\rangle \right|^{2}}\leq 2\sqrt{K}|\sin(T\Delta H)|. \tag{66}\]
Thus, inequality (62) provides a better bound when
\[K>\min\Bigg{\{}\left(\frac{T\Delta H}{\sin(T\Delta H)}\right)^{2},\left(\frac {\pi}{2}\right)^{2}\Bigg{\}}. \tag{67}\]
In particular, if we take \(K\geq 3\), (62) always gives a better bound than (66) (defined for \(0\leq T\Delta H\leq\frac{\pi}{2}\)). We have a similar result for the Majorana fermion case.
### Remark on the coherent and incoherent speed limits
Before ending this section, we briefly remark on the coherent and incoherent speed limits discussed in Ref. [51]. To explain these speed limits in our context, let us diagonalize the density matrix at a fixed time \(t\) as \(\hat{\rho}=\sum_{\mu}\rho_{\mu}\ket{\rho_{\mu}}\bra{\rho_{\mu}}\). Then, we can decompose an observable \(\hat{A}\) as \(\hat{A}=\hat{A}_{C}+\hat{A}_{I}\), where \(\hat{A}_{C}=\sum_{\mu\neq\nu}\bra{\rho_{\mu}}\hat{A}|\rho_{\nu}\rangle\ket{ \rho_{\mu}}\bra{\rho_{\nu}}\) and \(\hat{A}_{I}=\sum_{\mu}\bra{\rho_{\mu}}\hat{A}|\rho_{\mu}\rangle\ket{\rho_{\mu}} \bra{\rho_{\mu}}\). Here, we consider that \(\hat{A}_{C}\) and \(\hat{A}_{I}\) are fixed and independent of time. Then, at time \(t\), we can show speed limits separately for \(\hat{A}_{C}\) and \(\hat{A}_{I}\). Indeed, we have \(\left|\frac{d\langle\hat{A}_{C}\rangle}{dt}\right|\leq\Delta A_{C}\sqrt{I_{QC}}\) and \(\left|\frac{d\langle\hat{A}_{I}\rangle}{dt}\right|\leq\Delta A_{I}\sqrt{I_{QI}}\), where \(I_{QC}\) and \(I_{QI}\) are coherent and incoherent parts of the quantum Fisher information, respectively (see Ref. [51] for their explicit expressions). Importantly, \(I_{Q}=I_{QC}+I_{QI}\).
Now, it is straightforward to see that \(\text{cov}(\hat{A}_{C},\hat{A}_{I})=0\). Thus, we can use our general discussion in (51) to obtain
\[\frac{1}{\Delta A_{C}^{2}}\left|\frac{d\left\langle\hat{A}_{C}\right\rangle}{dt}\right|^{2}+\frac{1}{\Delta A_{I}^{2}}\left|\frac{d\left\langle\hat{A}_{I}\right\rangle}{dt}\right|^{2}\leq I_{Q}, \tag{68}\]
which is consistent with the coherent and incoherent speed limits. While inequality (68) has less information than the separate speed limits, we stress that it will be improved if we can find additional uncorrelated observables and include them in the left-hand side of (51).
## VI Application to quantum many-body systems
In this section, we show that useful QVLs can be derived and result in tighter inequalities than the previous ones for many-body systems. As seen below, our results provide meaningful convergent bounds even for large systems, in contrast with a naive application of previous speed limits, which leads to divergent bounds due to, e.g., the divergence of \(\Delta H\). While this fact has been pointed out before [65, 27, 57], previous results are not satisfactory in the following sense: Refs. [65, 27] relied on an unproven conjecture that applies only to limited situations in which quantum systems are controlled by the change of external parameters; Ref. [27] mentioned the problem but did not explicitly consider many-body situations; results in Refs. [60, 57] can only be applied to a limited type of observables [85]. In stark contrast, we rigorously show that the speed of general local observables is bounded by the local energy fluctuation (for the case of unitary dynamics), which does not diverge even in the thermodynamic limit and provides a qualitatively tighter bound. Moreover, we also clarify how we can tighten the bound when the "bath" of the system is finite.
We also note that, while several other approaches exist to bound the speed of many-body dynamics, such as the Lieb-Robinson bound and its applications [86, 87, 88, 89, 90, 91, 92, 93, 94, 95], our bounds have the advantage that they have an information-theoretical meaning. Furthermore, for unitary dynamics, our bounds include the energy fluctuation of part of the Hamiltonian, which is in accordance with the original spirit of Mandelstam and Tamm, i.e., the tradeoff inequality between energy fluctuation and time.
### Quantum velocity limit for decomposed dynamics
For the above purpose, we first illustrate a general inequality as a variant of the QVL discussed in Eq. (12). Let \(\mathcal{L}\) be decomposed as \(\mathcal{L}=\mathcal{L}_{1}+\mathcal{L}_{2}\), where \(\mathcal{L}_{2}[\hat{\rho}]\) does not change the expectation value of \(K\) projected observables \(\hat{A}_{k}-\sum_{\mu}\bra{\hat{A}_{k},\hat{\Lambda}_{\mu}}\hat{\Lambda}_{\mu}\) of our interest. That is, defining the SLD \(\hat{L}_{1,2}\) for \(\mathcal{L}_{1,2}\), we assume
\[\left\langle\hat{A}_{k}-\sum_{\mu}\bra{\hat{A}_{k},\hat{\Lambda}_{\mu}}\hat{ \Lambda}_{\mu},\hat{L}_{2}\right\rangle=0 \tag{69}\]
for all \(1\leq k\leq K\). Under this condition, we can show (see Appendix F for proof)
\[\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\leq\mathcal{F}_{11}-\frac{\mathcal{F}_{12}^ {2}}{\mathcal{F}_{22}}=\frac{\text{Det}[\mathcal{F}]}{\mathcal{F}_{22}}. \tag{70}\]
Here,
\[\mathcal{F}_{zz^{\prime}} =\left\langle\hat{L}_{z}-\sum_{\mu=1}^{M}\bra{\hat{L}_{z},\hat{ \Lambda}_{\mu}}\hat{\Lambda}_{\mu},\hat{L}_{z^{\prime}}-\sum_{\mu=1}^{M}\bra{ \hat{L}_{z^{\prime}},\hat{\Lambda}_{\mu}}\hat{\Lambda}_{\mu}\right\rangle\] \[=\mathcal{I}_{zz^{\prime}}-\sum_{\mu=1}^{M}\bra{\hat{L}_{z},\hat{ \Lambda}_{\mu}}\bra{\hat{\Lambda}_{\mu},\hat{L}_{z^{\prime}}} \tag{71}\]
is the modified quantum Fisher information matrix with \(\mathcal{I}_{zz^{\prime}}=\left\langle\hat{L}_{z},\hat{L}_{z^{\prime}}\right\rangle\) being the standard quantum Fisher information matrix (see also Appendix A). As discussed in Appendix F, the bound on the right-hand side of (70) can be replaced simply with \(\mathcal{I}\):
\[\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\leq\mathcal{I}_{11}-\frac{\mathcal{I}_{12}^ {2}}{\mathcal{I}_{22}}=\frac{\text{Det}[\mathcal{I}]}{\mathcal{I}_{22}}. \tag{72}\]
As a primary example, let us consider the Hamiltonian dynamics \(\hat{H}=\hat{H}_{1}+\hat{H}_{2}\) and a pure state. In this case, we have \(\mathcal{I}_{11}=4\Delta H_{1}^{2}\), \(\mathcal{I}_{22}=4\Delta H_{2}^{2}\), and \(\mathcal{I}_{12}=4\text{cov}(\hat{H}_{1},\hat{H}_{2})\). In particular, for \(M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), we obtain the following QVL:
\[\vec{B}^{\mathsf{T}}C^{-1}\vec{B}\leq 4\Delta H_{1}^{2}(1-\phi_{H_{1}H_{2}}^{2}), \tag{73}\]
provided that \(\langle[\hat{A}_{k},\hat{H}_{2}]\rangle=0\) for all \(k\). Furthermore, for a single observable \(\hat{A}\) (\(K=1\)), we have
\[\left|\frac{d\bra{\hat{A}}}{dt}\right|\leq\mathcal{B}_{\text{MB}}:=2\Delta A \Delta H_{1}\sqrt{1-\phi_{H_{1}H_{2}}^{2}}, \tag{74}\]
when \(\langle[\hat{A},\hat{H}_{2}]\rangle=0\).
### Bounds for many-body dynamics
As an important application, we consider quantum many-body spin systems on lattice sites \(\Omega\) and dynamics caused by local interactions. Let us focus on a set of observables \(\hat{A}_{k}\) that act on a subsystem \(S\subset\Omega\). The dynamics is then decomposed as \(\mathcal{L}=\mathcal{L}_{S}+\mathcal{L}_{I}+\mathcal{L}_{\Omega\setminus S}\), where \(\mathcal{L}_{S}\left(\mathcal{L}_{\Omega\setminus S}\right)\) acts nontrivially only on \(S\left(\Omega\backslash S\right)\) and \(\mathcal{L}_{I}\) represents the interaction between \(S\) and \(\Omega\backslash S\) (see Fig. 8(a)). Note that we can also regard \(\Omega\backslash S\) as the "bath" for the subsystem.
If we take \(M=0\), for which \(D_{kl}=\langle\hat{A}_{k},\hat{A}_{l}\rangle\), or \(M=1\) with \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\), for which \(D=C\), we can see that inequality (70) holds when setting \(\mathcal{L}_{1}=\mathcal{L}_{S}+\mathcal{L}_{I}\) and \(\mathcal{L}_{2}=\mathcal{L}_{\Omega\setminus S}\), since \(\langle\hat{A}_{k},\hat{L}_{2}\rangle=\mathrm{cov}(\hat{A}_{k},\hat{L}_{2})=0\). For a small subsystem \(S\), the bound in (70) [or (74)] becomes much tighter than Eq. (12) [or \(\mathcal{B}_{\mathrm{MT}}\)] because \(\mathcal{I}_{11}\ll I_{Q}\) in general. Furthermore, the second term on the right-hand side of (70) [or (73) and (74)] indicates the nontrivial consequence that the correlation between the subsystem and the rest suppresses the speed limit. For example, for unitary dynamics whose Hamiltonian is given by \(\hat{H}=\hat{H}_{S}+\hat{H}_{I}+\hat{H}_{B}=\hat{H}_{SI}+\hat{H}_{B}\) (note that \(\mathcal{L}_{\Omega\setminus S}=-i[\hat{H}_{B},\cdots]\)), we have
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq 2\Delta A\Delta H _{SI}\sqrt{1-\phi_{H_{SI}H_{B}}^{2}}, \tag{75}\]
which is convergent even for \(\Delta H\to\infty\) in the thermodynamic limit, since \(\Delta H_{SI}\) is convergent. Note that \(\left|\phi_{H_{SI}H_{B}}\right|\) can be larger as we decrease the system size if the correlation length between the subsystem and the rest is finite. This means that we can further tighten the bound compared with the naive bound \(2\Delta A\Delta H_{SI}\) using the correlation factor \(\sqrt{1-\phi_{H_{SI}H_{B}}^{2}}\), especially when the bath of the system is finite. Such a situation can occur in actual experiments using artificial quantum systems, e.g., trapped ions.
To demonstrate our bound (74) for spin-\(1/2\) many-body systems on \(L\) lattice sites, let us consider unitary dynamics whose Hamiltonian is given by
\[\hat{H}=\sum_{j=1}^{L-1}J\hat{\sigma}_{j}^{z}\hat{\sigma}_{j+1}^{z}+\sum_{j=1}^{L}\left(g\hat{\sigma}_{j}^{x}+h\hat{\sigma}_{j}^{z}\right) \tag{76}\]
for a pure state. We focus on the local magnetization at the first site, \(\hat{A}=\hat{\sigma}_{1}^{z}\). In this case, we can set \(\hat{H}_{1}=g\hat{\sigma}_{1}^{x}\) and \(\hat{H}_{2}=\hat{H}-\hat{H}_{1}\), since \([\hat{A},\hat{H}_{2}]=0\) (this choice is slightly different from the decomposition into \(\hat{H}_{SI}\) and \(\hat{H}_{B}\) discussed in the previous paragraph). Figure 8(b,c) shows the bound \(\mathcal{B}_{\mathrm{MB}}\) in Eq. (74), the slightly loose bound that neglects the correlation, i.e., \(\mathcal{B}_{\mathrm{MB}}^{\prime}=2\Delta A\Delta H_{1}\), and the Mandelstam-Tamm bound \(\mathcal{B}_{\mathrm{MT}}\) in Eq. (17). We find that \(\mathcal{B}_{\mathrm{MB}}\) and \(\mathcal{B}_{\mathrm{MB}}^{\prime}\) become much better than \(\mathcal{B}_{\mathrm{MT}}\), which tends to be loose for larger \(L\). We also find that \(\mathcal{B}_{\mathrm{MB}}\) becomes better than \(\mathcal{B}_{\mathrm{MB}}^{\prime}\) especially for relatively small \(L\), indicating that the correlation factor can capture finite-size corrections for the speed limit.
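The following sketch (with \(L=6\) and illustrative couplings and evolution time of our own choosing, rather than the parameters behind Fig. 8) evaluates \(\mathcal{B}_{\mathrm{MB}}\) for the chain (76) with \(\hat{H}_{1}=g\hat{\sigma}_{1}^{x}\) and \(\hat{H}_{2}=\hat{H}-\hat{H}_{1}\), and compares it with the Mandelstam-Tamm bound. We keep \(L\) small so that exact diagonalization of the full Hamiltonian remains cheap.

```python
import numpy as np
from functools import reduce

L = 6
J, g, h = 1.0, 0.9, 0.4                          # illustrative couplings
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
site = lambda op, j: reduce(np.kron, [op if k == j else I2 for k in range(L)])

H1 = g * site(sx, 0)                             # the part of H not commuting with A
H2 = (sum(J * site(sz, j) @ site(sz, j + 1) for j in range(L - 1))
      + sum(g * site(sx, j) for j in range(1, L))
      + sum(h * site(sz, j) for j in range(L)))
H = H1 + H2
A = site(sz, 0)                                  # local magnetization at the first site

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi0 = reduce(np.kron, [plus] * L)               # product state, all spins along +x
E, V = np.linalg.eigh(H)
psi = V @ (np.exp(-1j * E * 0.6) * (V.conj().T @ psi0))   # evolve to t = 0.6
rho = np.outer(psi, psi.conj())

ev = lambda X: np.real(np.trace(rho @ X))
cov = lambda X, Y: np.real(0.5 * np.trace(rho @ (X @ Y + Y @ X))) - ev(X) * ev(Y)

speed = abs(ev(1j * (H @ A - A @ H)))
dA, dH, dH1 = np.sqrt(cov(A, A)), np.sqrt(cov(H, H)), np.sqrt(cov(H1, H1))
phi = cov(H1, H2) / (dH1 * np.sqrt(cov(H2, H2)))
B_MB = 2 * dA * dH1 * np.sqrt(max(1 - phi ** 2, 0.0))       # Eq. (74)
print(f"|d<A>/dt| = {speed:.4f}   B_MB = {B_MB:.4f}   B_MT = {2 * dA * dH:.4f}")
```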
## VII Velocity limit based on local conservation law of probability
While we have discussed QVLs based on the Fisher information so far, we can obtain a distinct type of velocity limits based on the local conservation law of probability for multiple observables. These velocity limits, as for the recently found speed limits [57] for each single observable, are especially advantageous in discussing
macroscopic transitions. As in the case for the bounds in Eqs. (10) and (12), our multiple-observable velocity limits can provide better bounds than the speed limit obtained previously [57; 60].
### Review on the speed limit for single observables
We first review the speed limit for a single observable (see Eq. (80)) based on Ref. [57], with a slight modification that avoids the explicit introduction of the graph structure. Thanks to this modification, the results in this manuscript can be applied to a broader class of macroscopic systems, which were previously difficult to treat [57; 60] (see the final paragraph of this subsection).
To begin with, we consider a discrete system whose state space is described by the basis set \(\{i\}\) and a probability distribution \(\{p_{i}\}\) on it. We assume that the local conservation law of probability is satisfied, which leads to the continuity equation of probability,
\[\frac{dp_{i}}{dt}=-\sum_{j(\neq i)}J_{ji}, \tag{77}\]
where \(J_{ji}=-J_{ij}\) is the probability current from \(i\) to \(j\). Note that this equation and the following formalism apply to various systems. For example, for a classical stochastic system whose time evolution is given by \(dp_{i}/dt=\sum_{j}R_{ij}p_{j}\) with a transition rate matrix \(R\), \(J_{ij}=R_{ij}p_{j}-R_{ji}p_{i}\). For a unitary quantum dynamics with \(d\hat{\rho}/dt=-i[\hat{H},\hat{\rho}]\), we can take some fixed basis set \(\{\ket{i}\bra{i}\}\) and define \(p_{i}=\bra{i}\hat{\rho}\ket{i}\). Then, we find \(J_{ij}=-iH_{ij}\rho_{ji}+\text{c.c}\), where \(H_{ij}=\bra{i}\hat{H}\ket{j}\) and \(\rho_{ji}=\bra{j}\hat{\rho}\ket{i}\). We can also consider open quantum systems, as discussed in Ref. [57].
Let us consider an observable written as a function of \(\{i\}\). We first focus on quantum systems and a single observable given by \(\hat{A}=\sum_{i}a_{i}\ket{i}\bra{i}\) (for example, when \(\ket{i}\) represents the Fock basis, we can take, e.g., the sum of the particle positions and the on-site interactions as \(\hat{A}\)). Then, we have \(\frac{d\langle\hat{A}\rangle}{dt}=-\sum_{i\neq j}a_{i}J_{ji}=-\frac{1}{2}\sum_ {i\neq j}(a_{i}-a_{j})J_{ji}\), where \(J_{ij}=-J_{ji}\) is used. Now, we introduce \(r_{ij}\geq 0\), which satisfies \(r_{ij}>0\) if \(J_{ij}\neq 0\) (equivalently, \(J_{ij}=0\) if \(r_{ij}=0\)). Then
\[\frac{d\left\langle\hat{A}\right\rangle}{dt}=-\frac{1}{2}\sum_{i\neq j,r_{ij}>0}r_{ij}(a_{j}-a_{i})\frac{J_{ij}}{r_{ij}}=\left\langle\nabla A,\mathbf{u}\right\rangle_{r}, \tag{78}\]
where
\[\left\langle\mathbf{Y},\mathbf{Z}\right\rangle_{r}=\frac{1}{2}\sum_{i\neq j,r _{ij}>0}r_{ij}Y_{ij}Z_{ij}, \tag{79}\]
\((\nabla A)_{ij}=a_{i}-a_{j}\), and \((\mathbf{u})_{ij}=\frac{J_{ij}}{r_{ij}}\).
Now, we can use the Cauchy-Schwarz inequality to find
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\sqrt{\left\langle \nabla A,\nabla A\right\rangle_{r}}\sqrt{U}, \tag{80}\]
where we define
\[U=\left\langle\mathbf{u},\mathbf{u}\right\rangle_{r}. \tag{81}\]
This inequality leads to important consequences. First, the factor \(\left\langle\nabla A,\nabla A\right\rangle_{r}\) can be small even when \(\|\hat{A}\|_{\infty}\) or \(\Delta A\) is large (see the example below). Second, \(U\) is often bounded by a physically relevant quantity. For example, for unitary dynamics, by taking \(r_{ij}=|H_{ij}\rho_{ji}|\), we find [57]
\[U\leq 2\sum_{i\neq j}r_{ij}-\frac{2E_{\text{trans}}^{2}}{\sum_{i\neq j}r_{ij}} \leq 2C_{H}-\frac{2E_{\text{trans}}^{2}}{C_{H}}, \tag{82}\]
where
\[E_{\text{trans}}:=\sum_{i\neq j}H_{ij}\rho_{ji}=\left\langle\hat{H}\right\rangle-\sum_{i}H_{ii}p_{i} \tag{83}\]
is the transition part of the energy and
\[C_{H}:=\max_{i}\sum_{j(\neq i)}|H_{ij}| \tag{84}\]
is the strength of the transition, which is easily known from the Hamiltonian.
Instead, if we consider a classical stochastic system, \(U\) may be bounded by the entropy production rate \(\dot{\Sigma}\) (note that we consider a classical observable \(A=\{a_{i}\}\) instead of \(\hat{A}\) in this case). For example, if the system is attached to a single heat bath satisfying the detailed balance condition, we take \(r_{ij}=R_{ij}p_{j}+R_{ji}p_{i}\) and obtain [41]
\[U\leq\frac{\dot{\Sigma}}{2}, \tag{85}\]
where \(\dot{\Sigma}:=\sum_{i\neq j}R_{ij}p_{j}\ln\frac{R_{ij}p_{j}}{R_{ji}p_{i}}\). We note that this thermodynamic inequality has recently been found to be related to the optimal transport problem [96], where the lower bound is further bounded using the square of the order-2 Wasserstein distance [54; 59].
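The inequality (85) can be checked numerically; the sketch below (an illustrative random rate matrix and distribution, not from the paper) evaluates \(U=\frac{1}{2}\sum_{i\neq j}J_{ij}^{2}/r_{ij}\) with \(r_{ij}=R_{ij}p_{j}+R_{ji}p_{i}\) and compares it with \(\dot{\Sigma}/2\).

```python
# Numerical check (random illustrative instance) of U <= Sigma_dot / 2, Eq. (85),
# with r_ij = R_ij p_j + R_ji p_i for a classical master equation dp/dt = R p.
import numpy as np

rng = np.random.default_rng(1)
n = 6
R = rng.uniform(0.1, 1.0, size=(n, n))
np.fill_diagonal(R, 0.0)
np.fill_diagonal(R, -R.sum(axis=0))          # columns sum to zero (probability conserved)
p = rng.uniform(size=n); p /= p.sum()        # an arbitrary (generally non-stationary) state

U, Sdot = 0.0, 0.0
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        a, b = R[i, j] * p[j], R[j, i] * p[i]   # forward / backward probability fluxes
        U += 0.5 * (a - b) ** 2 / (a + b)       # 0.5 * J_ij^2 / r_ij
        Sdot += a * np.log(a / b)               # entropy production rate
print(f"U = {U:.4f}  <=  Sigma_dot/2 = {Sdot / 2:.4f}")
```

The check does not rely on detailed balance: the underlying logarithmic-mean inequality \((a-b)\ln(a/b)\geq 2(a-b)^{2}/(a+b)\) holds term by term for any positive fluxes.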
For a simple example, let us consider far-from-equilibrium transport of a quantum particle in one dimension, whose Hamiltonian is given by
\[\hat{H}=\hat{V}_{\text{pot}}+\sum_{i}\left(J_{h}\hat{b}_{i+1}^{\dagger}\hat{b}_{i}+\text{h.c.}\right), \tag{86}\]
where \(\hat{b}_{i}\) is the annihilation operator of the particle at site \(i\) and \(\hat{V}_{\text{pot}}\) describes an arbitrary on-site potential. Note that the basis \(\ket{i}\) is taken as the basis for the single-particle position. We focus on the position operator given by \(\hat{x}=\sum_{i}i\ket{i}\bra{i}\). For an infinitely large system, \(\Delta x\) diverges unboundedly in time (because of, e.g., diffusion),
which makes the inequality \(\left|d\left\langle\hat{x}\right\rangle/dt\right|\leq 2\Delta x\Delta H\) meaningless (note that \(\left|d\left\langle\hat{x}\right\rangle/dt\right|\) is convergent).
In contrast, if we apply (80), we find a convergent bound. If we take \(r_{ij}=|H_{ij}\rho_{ji}|=J_{h}(\delta_{i+1,j}+\delta_{i-1,j})|\rho_{ji}|\), we find \(\left\langle\nabla A,\nabla A\right\rangle_{r}=J_{h}\sum_{i}|\rho_{i,i+1}| \leq J_{h}\), where we have used \(|\rho_{i,i+1}|\leq\sqrt{p_{i}p_{i+1}}\leq(p_{i}+p_{i+1})/2\). We also have \(C_{H}=2J_{h}\) in Eq. (84). Then, using (82), we obtain
\[\left|\frac{d\left\langle\hat{x}\right\rangle}{dt}\right|\leq\sqrt{J_{h}U} \leq\sqrt{4J_{h}^{2}-E_{\rm trans}^{2}}, \tag{87}\]
which provides a convergent bound even for a macroscopic system. As discussed in Ref. [57], we can derive a similar speed limit useful for macroscopic transition even in (possibly interacting) many-particle systems.
Let us argue that the speed limit in (80) can be applied to a broader class of macroscopic systems, which were previously difficult to treat [57; 60]. This is because we can avoid the explicit introduction of the graph structure, which was done in Refs. [57; 60]. In Refs. [57; 60], the speed limit for an observable is described by, e.g., the graph analogue of its Lipschitz constant (\(\max_{(i,j)\in\mathcal{E}}|a_{i}-a_{j}|\), where \(\mathcal{E}\) denotes the edge set of the graph). This factor changes significantly if we alter the graph structure. However, such a change is physically unfavorable for a weakly perturbed system. For example, the QSLs for particles in Refs. [57; 60] must be loosened even if we include very small long-range hoppings, since they dramatically change the graph structure (especially \(\mathcal{E}\)). However, our bound in Eq. (80) does not have such a problem, i.e., such small long-range hoppings do not greatly change the bound. This is because we use \(\left\langle\nabla A,\nabla A\right\rangle_{r}\) instead of the Lipschitz constant of \(A\); the fact that the perturbation is weak is encoded through \(r\). For example, let us consider a single-particle quantum system with small long-range hopping amplitudes, where \(|H_{ij}|\) becomes nonzero but small for \(|i-j|\gg 1\). Then, \(r_{ij}=|H_{ij}\rho_{ji}|\) becomes automatically small for \(|i-j|\gg 1\), which does not alter the right-hand side of (80) much.
Finally, we mention that the above speed limit can be discussed in a continuous system. Let us assume that the space coordinate is given by \(\mathbf{x}\), and that we can define a probability distribution \(P(\mathbf{x},t)\) on it, which satisfies the continuity equation
\[\frac{\partial P}{\partial t}=-\nabla\cdot\mathbf{J} \tag{88}\]
for a probability current \(\mathbf{J}(\mathbf{x},t)\). We consider an observable \(A(\mathbf{x})\) whose expectation value is given by
\[\left\langle A(t)\right\rangle=\int d\mathbf{x}\,P(\mathbf{x},t)A(\mathbf{x}). \tag{89}\]
Assuming that \(P\to 0\) for \(|\mathbf{x}|\rightarrow\infty\), we obtain [57]
\[\frac{d\left\langle A\right\rangle}{dt}=\left\langle\nabla A,\mathbf{u}\right\rangle_{r}^{c}. \tag{90}\]
Here,
\[\left\langle\mathbf{Y},\mathbf{Z}\right\rangle_{r}^{c}:=\int_{r(\mathbf{x})>0 }d\mathbf{x}r(\mathbf{x})\mathbf{Y}(\mathbf{x})\cdot\mathbf{Z}(\mathbf{x}) \tag{91}\]
and \(\mathbf{u}(\mathbf{x})=\mathbf{J}(\mathbf{x})/r(\mathbf{x})\), where we assume \(r(\mathbf{x})>0\) if \(\mathbf{J}(\mathbf{x})\neq 0\). Then, the Cauchy-Schwarz inequality leads to
\[\left|\frac{d\left\langle A\right\rangle}{dt}\right|\leq\sqrt{\left\langle \nabla A,\nabla A\right\rangle_{r}^{c}}\sqrt{U_{c}}, \tag{92}\]
where \(U_{c}=\left\langle\mathbf{u},\mathbf{u}\right\rangle_{r}^{c}\), in analogy with Eq. (80). Because \(\left\langle\nabla A,\nabla A\right\rangle_{r}^{c}\) provides a small value compared with \(\Delta A\), Eq. (92) is useful for macroscopic transitions in continuous systems as in Eq. (80).
Importantly, \(\left\langle\mathbf{u},\mathbf{u}\right\rangle_{r}^{c}\) is often bounded by some physical quantities. For example, for the nonlinear Schrödinger equation that can describe, e.g., the mean-field dynamics of a Bose gas [97], we find \(\left\langle\mathbf{u},\mathbf{u}\right\rangle_{r}^{c}\leq 2E_{\rm kin}\) by taking \(r=P(\mathbf{x})\)[57]. Here, \(E_{\rm kin}=\int d\mathbf{x}P|\nabla\theta|^{2}h^{2}/(2m^{2})\) is the kinetic energy of the Bose gas per particle, where the (normalized) wave function is given by \(\psi(\mathbf{x})=\sqrt{P(\mathbf{x})}e^{i\theta(\mathbf{x})}\) with a quantum phase \(\theta(\mathbf{x})\). Another example is the thermodynamic Fokker-Planck equation [52; 98], where we find \(\left\langle\mathbf{u},\mathbf{u}\right\rangle_{r}^{c}\leq\mu T\dot{\Sigma}\) by taking \(r=P(\mathbf{x})\). Here, \(\mu\) is the mobility of the particle, \(T\) is the temperature, and \(\dot{\Sigma}=\frac{\left\langle\mathbf{J}^{2}/P^{2}\right\rangle}{\mu T}\) is the entropy production rate for the Fokker-Planck system.
### Velocity limit for multiple observables
We now discuss our new velocity limit for multiple observables based on the local conservation law of probability. Similar to the derivation in Eqs. (10) and (12), we obtain the matrix inequality (see Appendix G)
\[\vec{B}\vec{B}^{\rm T}\preceq U\mathcal{D} \tag{93}\]
and the scalar inequality
\[\vec{B}^{\rm T}\mathcal{D}^{-1}\vec{B}\leq U \tag{94}\]
for a set of observables \(\{\hat{A}_{k}\}\) given by \(\hat{A}_{k}=\sum_{i}(a_{k})_{i}\left|i\right\rangle\left\langle i\right|\) for all \(k\). Here, we define a \(K\times K\) matrix \(\mathcal{D}\) whose components are given by
\[\mathcal{D}_{kl}=\left\langle\nabla A_{k},\nabla A_{l}\right\rangle_{r}-\sum_ {\mu}\left\langle\nabla A_{k},\nabla\Lambda_{\mu}\right\rangle_{r}\left\langle \nabla\Lambda_{\mu},\nabla A_{l}\right\rangle_{r}, \tag{95}\]
where \(\hat{\Lambda}_{\mu}\) are invariant observables that are assumed to have the form \(\hat{\Lambda}_{\mu}=\sum_{i}(\lambda_{\mu})_{i}\left|i\right\rangle\left\langle i\right|\) and satisfy the orthonormalization condition \(\left\langle\nabla\Lambda_{\mu},\nabla\Lambda_{\mu^{\prime}}\right\rangle_{r}=\delta_{\mu\mu^{\prime}}\). We have also assumed that \(\mathcal{D}\) has an inverse. For \(K=1\) and \(M=0\), we recover Eq. (80).
The velocity limit based on the probability current has many nontrivial consequences not obtained by the velocity limit based on the Fisher information. Below, we detail examples of an asymmetric lower and upper bound and a tradeoff relation for observables whose gradients are uncorrelated.
We first discuss the asymmetric lower and upper bound. Choosing \(K=2\) in (94), we have
\[\frac{\mathcal{D}_{22}\left|\frac{d\left\langle\hat{A}_{1}\right\rangle}{dt} \right|^{2}-2\mathcal{D}_{12}\frac{d\left\langle\hat{A}_{1}\right\rangle}{dt} \frac{d\left\langle\hat{A}_{2}\right\rangle}{dt}+\mathcal{D}_{11}\left|\frac{d \left\langle\hat{A}_{2}\right\rangle}{dt}\right|^{2}}{\mathcal{D}_{11} \mathcal{D}_{22}-|\mathcal{D}_{12}|^{2}}\leq U. \tag{96}\]
To gain physical insight, we here focus on the transport dynamics of a single particle on an extended one-dimensional lattice, while generalization to many-particle systems and higher dimensions is possible (as in the single-observable case). We first consider the unitary dynamics whose Hamiltonian is given in Eq. (86). We focus on two observables \(\hat{A}_{1,2}=\sum_{i}(a_{1,2})_{i}\ket{i}\bra{i}\). After some calculation, we have an asymmetric lower and upper bound for the velocity of \(\hat{A}_{1}\) in terms of that of \(\hat{A}_{2}\) (see Appendix G.2),
\[\tilde{\chi}\mathcal{V}_{2}-\sqrt{(1-\tilde{\chi}^{2})(J_{h}U-\mathcal{V}_{2}^{2})}\leq\mathcal{V}_{1}\leq\tilde{\chi}\mathcal{V}_{2}+\sqrt{(1-\tilde{\chi}^{2})(J_{h}U-\mathcal{V}_{2}^{2})}, \tag{97}\]
where \(U\) can be replaced with \(4J_{h}-E_{\rm trans}^{2}/J_{h}\) (see Eq. (82)). Here, \(\mathcal{V}_{k}:=(d\left\langle\hat{A}_{k}\right\rangle/dt)/\sqrt{\mathcal{C}_{kk}}\) and \(\tilde{\chi}:=\mathcal{C}_{12}/\sqrt{\mathcal{C}_{11}\mathcal{C}_{22}}\leq 1\) with
\[\mathcal{C}_{kl}=\sum_{i}\frac{p_{i}}{2}\{(\nabla A_{k})_{i,i+1}(\nabla A_{l})_{i,i+1}+(\nabla A_{k})_{i,i-1}(\nabla A_{l})_{i,i-1}\}. \tag{98}\]
This provides a tighter bound than that in Ref. [57] for a single observable, in light of the knowledge of another observable.
Instead of the unitary time evolution, we can discuss classical stochastic dynamics for single particle transport. If we assume the transition rate matrix as \(R_{ij}=R(\delta_{i,j+1}+\delta_{i,j-1})\) (\(i\neq j\)), we have
\[\tilde{\chi}\mathcal{V}_{2}-\sqrt{(1-\tilde{\chi}^{2})(2RU-\mathcal{V}_{2}^{2})}\leq\mathcal{V}_{1}\leq\tilde{\chi}\mathcal{V}_{2}+\sqrt{(1-\tilde{\chi}^{2})(2RU-\mathcal{V}_{2}^{2})}, \tag{99}\]
where \(U\) can be replaced with \(\dot{\Sigma}/2\).
Note that (98) is often easily calculated. For example, for \(\hat{A}_{1}=\hat{x}=\sum_{i}i\ket{i}\bra{i}\) and \(\hat{A}_{2}=\hat{x}^{2}=\sum_{i}i^{2}\ket{i}\bra{i}\), we find \(\mathcal{C}_{12}=\mathcal{C}_{21}=2\left\langle\hat{x}\right\rangle\), \(\mathcal{C}_{11}=1\), and \(\mathcal{C}_{22}=4\left\langle\hat{x}^{2}\right\rangle+1\).
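These entries follow directly from Eq. (98) with the convention \((\nabla A)_{ij}=a_{i}-a_{j}\); a short check:

\[(\nabla x)_{i,i\pm 1}=\mp 1,\qquad(\nabla x^{2})_{i,i+1}=-(2i+1),\qquad(\nabla x^{2})_{i,i-1}=2i-1,\]
\[\mathcal{C}_{12}=\sum_{i}\frac{p_{i}}{2}\left[(2i+1)+(2i-1)\right]=2\left\langle\hat{x}\right\rangle,\qquad\mathcal{C}_{22}=\sum_{i}\frac{p_{i}}{2}\left[(2i+1)^{2}+(2i-1)^{2}\right]=4\left\langle\hat{x}^{2}\right\rangle+1.\]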
As a special case for the above argument, we can find the additivity principle for two observables whose gradients are uncorrelated. For example, if we consider the position of the particle \(\hat{A}_{1}=\hat{x}=\sum_{i}i\ket{i}\bra{i}\) and the even-odd imbalance of the density \(\hat{A}_{2}=\hat{s}=\sum_{i}(4\lfloor i/2\rfloor-2i+1)\ket{i}\bra{i}\), we have \(\mathcal{C}_{11}=1,\mathcal{C}_{12}=\mathcal{C}_{21}=0,\mathcal{C}_{22}=4\), and \(\tilde{\chi}=0\). Then, substituting them into (97) shows that
\[\left|\frac{d\left\langle\hat{x}\right\rangle}{dt}\right|^{2}+\frac{1}{4}\left| \frac{d\left\langle\hat{s}\right\rangle}{dt}\right|^{2}\leq J_{h}U\leq 4J_{h}^{2}-E_{ \rm trans}^{2} \tag{100}\]
for unitary dynamics, which is stronger than Eq. (87). Similarly, the corresponding bound for classical stochastic systems reads
\[\left|\frac{d\left\langle\hat{x}\right\rangle}{dt}\right|^{2}+\frac{1}{4}\left|\frac{d\left\langle\hat{s}\right\rangle}{dt}\right|^{2}\leq 2RU\leq R\dot{\Sigma}. \tag{101}\]
Figure 9 verifies the bound obtained from Eq. (100), i.e., \(\left|d\left\langle\hat{x}\right\rangle/dt\right|\leq\mathcal{B}_{\rm xs}= \sqrt{J_{h}U-\left|d\left\langle\hat{s}\right\rangle/dt\right|^{2}/4}\) for a single-particle quantum system in Eq. (86). We also compare it with the bounds \(\mathcal{B}_{\rm x}=\sqrt{J_{h}U}\) in Eq. (87) and \(\mathcal{B}_{\rm MT}\) in Eq. (17). We find that \(\mathcal{B}_{\rm xs}\) provides a better bound than \(\mathcal{B}_{\rm x}\), meaning that the knowledge of the speed of the imbalance can tighten the bound for the speed of the position. We also find that \(\mathcal{B}_{\rm MT}\) becomes divergent for large \(t\).
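A minimal numerical sketch of this comparison (illustrative chain length, on-site potential, and Gaussian wave packet; not the original implementation) evaluates \(U\), \(\mathcal{B}_{\rm x}\), and \(\mathcal{B}_{\rm xs}\) for a single particle on a finite chain:

```python
# Minimal sketch (illustrative parameters, not the authors' code): check that
# B_xs = sqrt(J_h*U - |d<s>/dt|^2/4) and B_x = sqrt(J_h*U) bound |d<x>/dt|
# for a single particle hopping on a finite chain, cf. Eqs. (87) and (100).
import numpy as np

n, Jh = 60, 1.0
sites = np.arange(n)
H = np.diag(0.3 * np.cos(0.2 * sites)) \
  + Jh * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

psi = np.exp(-(sites - n / 2) ** 2 / 20.0 + 0.5j * sites)   # moving Gaussian packet
psi = psi / np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

x = np.diag(sites.astype(float))                 # position operator
s = np.diag((-1.0) ** sites)                     # even-odd imbalance operator

def speed(A):
    return np.trace(1j * (H @ A - A @ H) @ rho).real

r = np.abs(H * rho.T); np.fill_diagonal(r, 0.0)  # r_ij = |H_ij rho_ji|
Jc = ((-1j * H * rho.T) + (1j * H * rho.T.conj())).real   # J_ij = -i H_ij rho_ji + c.c.
np.fill_diagonal(Jc, 0.0)
mask = r > 1e-14
U = 0.5 * np.sum(Jc[mask] ** 2 / r[mask])        # U = <u, u>_r

B_x = np.sqrt(Jh * U)
B_xs = np.sqrt(Jh * U - speed(s) ** 2 / 4.0)
print(f"|d<x>/dt| = {abs(speed(x)):.4f}  <=  B_xs = {B_xs:.4f}  <=  B_x = {B_x:.4f}")
```

Even though \(\Delta x\) of the spreading packet grows in time, both bounds stay finite, and the knowledge of \(d\left\langle\hat{s}\right\rangle/dt\) tightens \(\mathcal{B}_{\rm x}\) into \(\mathcal{B}_{\rm xs}\).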
We can obtain a similar velocity limit for multiple observables in continuous systems. For this purpose, we introduce the velocity vector \(\vec{B}_{c}=\{B_{k}\}_{k=1}^{K}\) with \(B_{k}=d\left\langle A_{k}\right\rangle/dt\) from \(A_{k}({\bf x})\) and the invariant observables \(\{\Lambda_{\mu}({\bf x})\}_{\mu=1}^{M}\) satisfying \(\left\langle\nabla\Lambda_{\mu},\nabla\Lambda_{\nu}\right\rangle_{r}^{c}=\delta_ {\mu\nu}\). We then have \(\vec{B}_{c}\vec{B}_{c}^{\intercal}\preceq U_{c}\mathcal{D}_{c}\) and
\[\vec{B}_{c}^{\intercal}\mathcal{D}_{c}^{-1}\vec{B}_{c}\leq U_{c}, \tag{102}\]
where \(\mathcal{D}_{c}\) is obtained by replacing \(\left\langle\cdots,\cdots\right\rangle_{r}\) in Eq. (95) with \(\left\langle\cdots,\cdots\right\rangle_{r}^{c}\). Again, we note that \(U_{c}\) is bounded by physical quantities, e.g., the kinetic energy for the nonlinear Schrödinger equation and the entropy production rate for the Fokker-Planck equation.
## VIII Conclusion and outlook
In this paper, we have introduced the notion of quantum velocity limits for multiple observables for the first time. We have shown that the velocity limits provide tighter bounds than the conventional speed limits for a single observable by exploiting the knowledge of other observables. As a first type of quantum velocity limit, we have derived the universal information-theoretical bound for the velocity vector of observables ((10) and (12)). Remarkably, this velocity limit has various consequences, including the speed limits tightened by conserved quantities (27), the nontrivial lower bound for the velocity of an observable (44), the tradeoff relation between uncorrelated observables (49) and its relation to anti-commutativity, and the convergent bound for many-body systems (74). We have also shown a distinct type of multiple-observable velocity limit based on the local conservation law of probability (94). These velocity limits serve as a hitherto unknown concept toward a universal theory of far-from-equilibrium quantum dynamics of multiple observables.
While we have discussed our velocity limits for the expectation values of observables, it is intriguing to extend the concept to other quantities. Recently, speed limits have been investigated for, e.g., correlations [99; 100; 101; 102], entropies [103; 104; 105; 106; 107; 108], and operator complexity [109; 102].
Even for those quantities, the knowledge of other observables, such as conserved quantities, may be used to tighten the bounds. Furthermore, it is also interesting to investigate how unique quantum properties affect the dynamics of those quantities, like the non-equilibrium tradeoff relation for nontrivially anti-commuting observables found in this work.
Since practical quantum velocity limits are obtained for many-body systems, we may be able to apply them to evaluate the timescale of quantum many-body dynamics [110; 111; 112; 113; 114; 115]. This is relevant for the problem of thermalization of isolated quantum systems [116], which offers a foundation of quantum statistical mechanics. Interestingly, our strategy of considering local observables in locally interacting systems, not the entire state, is analogous to the standard method in discussing this problem. We leave it as a future problem to discuss the detailed relation between our bounds and the timescale of thermalization.
Though we have mainly discussed examples in unitary quantum systems, the velocity limits can be universally applied to open quantum systems, classical stochastic systems, and even nonlinear systems, such as population dynamics. It is left as a future issue to examine such systems on the basis of our bounds. It is also essential to discuss how our velocity limits can be used to evaluate the controllability of quantum systems under the knowledge of, e.g., conserved quantities and Hamiltonian structures. This may be accomplished by combining our results with optimization techniques developed in other fields [117].
Finally, as seen in Sec. II, the information-theoretical quantum velocity limits can also be regarded as a generalized version of the quantum Cramer-Rao bound, the fundamental bound in quantum information theory. In contrast with the conventional quantum Cramer-Rao bound, our bound accounts for multiple observables of a system, such as conserved quantities. Then, our rigorous inequality will have profound implications even outside the context of non-equilibrium dynamics, e.g., quantum metrology under the knowledge of other observables. Indeed, a similar motivation has recently been appreciated in multi-parameter quantum metrology [78]. Thus, our formalism may be beneficial for such an application, too.
###### Acknowledgements.
We are grateful to Francesco Albarelli for letting us know relevant references in the field of quantum metrology. The numerical calculations were carried out with the help of QUSPIN [118; 119].
## Appendix A Details of Eqs. (10) and (12) in the main text
### Proof of Eqs. (10) and (12)
We first give a proof of Eqs. (10) and (12) in the main text. To keep the derivation as general as possible, we consider the case with multiple observables and multi-parameters simultaneously. More precisely, we focus on
a set of \(K\) observables \(\hat{A}_{1},\cdots,\hat{A}_{K}\) and a set of \(Z\) parameters \(y_{1},\cdots,y_{Z}\). We take \(Z=1\) and \(y_{1}=t\) at the end to obtain the formula in the main text. We define
\[B_{kz}:=\partial_{y_{z}}\left\langle\hat{A}_{k}\right\rangle= \left\langle\hat{A}_{k},\hat{L}_{z}\right\rangle, \tag{104}\]
where \(\hat{L}_{z}\) is the symmetric logarithmic derivative (SLD) for the parameter \(y_{z}\).
We assume that the set of \(M\) invariant observables \(\hat{\Lambda}_{\mu}\) exists and satisfies \(\partial_{y_{z}}\left\langle\hat{\Lambda}_{\mu}\right\rangle=0\) for all \(z\). As discussed in the main text, we can assume the orthonormalization condition \(\left\langle\hat{\Lambda}_{\mu},\hat{\Lambda}_{\nu}\right\rangle=\delta_{\mu\nu}\). Then, \(B_{kz}\) is given by
\[B_{kz}=\left\langle\hat{A}_{k}-\sum_{\mu}f_{k\mu}\hat{\Lambda}_{ \mu},\hat{L}_{z}\right\rangle \tag{105}\]
for arbitrary \(f_{k\mu}\), which we assume real.
Now, we introduce two real vectors \((a_{1},\cdots,a_{K})\) and \((b_{1},\cdots,b_{Z})\) and consider
\[\sum_{kz}a_{k}b_{z}B_{kz}=\left\langle\sum_{k}a_{k}\left(\hat{A}_ {k}-\sum_{\mu}f_{k\mu}\hat{\Lambda}_{\mu}\right),\sum_{z}b_{z}\hat{L}_{z} \right\rangle. \tag{106}\]
Using the Cauchy-Schwarz inequality, we have
\[\sum_{kzk^{\prime}z^{\prime}}a_{k}a_{k^{\prime}}b_{z}b_{z^{\prime }}B_{kz}B_{k^{\prime}z^{\prime}}\leq\sum_{kzk^{\prime}z^{\prime}}a_{k}a_{k^{ \prime}}b_{z}b_{z^{\prime}}D^{f}_{kk^{\prime}}\mathcal{I}_{zz^{\prime}}, \tag{107}\]
where
\[D^{f}_{kk^{\prime}}:=\left\langle\hat{A}_{k}-\sum_{\mu}f_{k\mu} \hat{\Lambda}_{\mu},\hat{A}_{k^{\prime}}-\sum_{\mu}f_{k^{\prime}\mu}\hat{ \Lambda}_{\mu}\right\rangle \tag{108}\]
and
\[\mathcal{I}_{zz^{\prime}}:=\left\langle\hat{L}_{z},\hat{L}_{z^{ \prime}}\right\rangle \tag{109}\]
is the SLD quantum Fisher information matrix.
We can choose \(f_{k\mu}\) as \(\left\langle\hat{A}_{k},\hat{\Lambda}_{\mu}\right\rangle\) to optimize the right-hand side of inequality (107), for which \(D^{f}\) becomes \(D\) introduced in the main text. In fact, we can prove the matrix inequality \(D\preceq D^{f}\) for all \(f_{k\mu}\) since
\[\sum_{kk^{\prime}}a_{k}a_{k^{\prime}}(D^{f}_{kk^{\prime}}-D_{kk^{\prime}})=\sum_{\mu}\left|\sum_{k}a_{k}(\left\langle\hat{A}_{k},\hat{\Lambda}_{\mu}\right\rangle-f_{k\mu})\right|^{2}\geq 0. \tag{110}\]
We thus obtain
\[\sum_{kzk^{\prime}z^{\prime}}a_{k}a_{k^{\prime}}b_{z}b_{z^{\prime }}B_{kz}B_{k^{\prime}z^{\prime}}\leq\sum_{kzk^{\prime}z^{\prime}}a_{k}a_{k^{ \prime}}b_{z}b_{z^{\prime}}D_{kk^{\prime}}\mathcal{I}_{zz^{\prime}}. \tag{111}\]
Inequality (111) gives the inequality generalizing our results discussed in the main text. We here take \(Z=1\), with which we have \(\mathcal{I}_{11}=I_{Q}\). Then, (111) reduces to the matrix inequality in Eq. (10) in the main text,
\[\vec{B}\vec{B}^{\mathsf{T}}\preceq DI_{Q}. \tag{112}\]
Now, since the similarity transformation does not change the eigenvalues, we have \(D^{-1}\vec{B}\vec{B}^{\mathsf{T}}D\preceq DI_{Q}\). We then have \(\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\vec{B}^{\mathsf{T}}D\vec{B}\leq\vec{B}^{ \mathsf{T}}D\vec{B}I_{Q}\) and finally obtain the scalar inequality in Eq. (12) in the main text,
\[\mathcal{K}(\{A_{k}\};\{\Lambda_{\mu}\}):=\vec{B}^{\mathsf{T}}D ^{-1}\vec{B}\leq I_{Q}. \tag{113}\]
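As an illustration of this scalar inequality (a random qutrit instance, not from the paper), the sketch below uses the symmetrized inner product \(\langle\hat{X},\hat{Y}\rangle=\mathrm{Tr}(\hat{\rho}\{\hat{X},\hat{Y}\})/2\), the invariant observables \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\) and \(\hat{\Lambda}_{2}=(\hat{H}-\langle\hat{H}\rangle)/\Delta H\), and assumes the standard pure-state value \(I_{Q}=4\Delta H^{2}\) for unitary dynamics:

```python
# Numerical sketch (random qutrit instance, not the authors' code) of the scalar
# velocity limit B^T D^{-1} B <= I_Q for K = 2 observables, with invariants
# Lambda_1 = identity and Lambda_2 = (H - <H>)/Delta_H, and I_Q = 4*Delta_H^2
# assumed for a pure state under unitary dynamics.
import numpy as np

rng = np.random.default_rng(3)
d = 3

def rand_herm():
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

H, A1, A2 = rand_herm(), rand_herm(), rand_herm()
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

def inner(X, Y):                                  # <X, Y> = Tr(rho {X, Y}) / 2
    return 0.5 * np.trace(rho @ (X @ Y + Y @ X)).real

Id = np.eye(d)
dH = np.sqrt(inner(H, H) - inner(H, Id) ** 2)
Lam = [Id, (H - inner(H, Id) * Id) / dH]          # orthonormal invariant observables

A = [A1, A2]
B = np.array([np.trace(rho @ (1j * (H @ a - a @ H))).real for a in A])   # B_k = d<A_k>/dt
D = np.array([[inner(A[k], A[l])
               - sum(inner(A[k], L) * inner(L, A[l]) for L in Lam)
               for l in range(2)] for k in range(2)])
print(f"B^T D^-1 B = {B @ np.linalg.solve(D, B):.4f}  <=  I_Q = {4 * dH ** 2:.4f}")
```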
Note that, if we instead consider \(K=1\) and \(Z\neq 1\), we have a matrix inequality
\[\vec{\mathsf{B}}\vec{\mathsf{B}}^{\mathsf{T}}\preceq\mathcal{I} \left(\left\langle\hat{A}^{2}\right\rangle-\sum_{\mu}\left\langle\hat{A},\hat{ \Lambda}_{\mu}\right\rangle^{2}\right) \tag{114}\]
and a scalar inequality
\[\vec{\mathsf{B}}^{\mathsf{T}}\mathcal{I}^{-1}\vec{\mathsf{B}}\leq \left(\left\langle\hat{A}^{2}\right\rangle-\sum_{\mu}\left\langle\hat{A},\hat{ \Lambda}_{\mu}\right\rangle^{2}\right), \tag{115}\]
where \(\vec{\mathsf{B}}=(B_{11},\cdots,B_{1Z})^{\mathsf{T}}\) and we have assumed that \(\mathcal{I}\) has an inverse.
## Appendix B Proof of the bound for a finite time interval
We here show Eq. (23) in the main text. For this purpose, we take an arbitrary real vector \(\{a_{k}\}\) and consider
\[\sum_{k}a_{k}\mathfrak{B}_{k}=\int_{0}^{T}\sum_{k}a_{k}B_{k}(t)dt. \tag{116}\]
We then have
\[\sum_{kk^{\prime}}a_{k}a_{k^{\prime}}\mathfrak{B}_{k}\mathfrak{B }_{k^{\prime}} =\left|\sum_{k}a_{k}\mathfrak{B}_{k}\right|^{2}\] \[\leq T\int_{0}^{T}\left(\sum_{k}a_{k}B_{k}(t)\right)^{2}dt\] \[\leq T\int_{0}^{T}\sum_{kk^{\prime}}a_{k}a_{k^{\prime}}D(t)_{kk^{ \prime}}I_{Q}dt\] \[\leq T^{2}\sum_{kk^{\prime}}a_{k}a_{k^{\prime}}\overline{D_{kk^{ \prime}}I_{Q}}. \tag{117}\]
Here, we have used the Cauchy-Schwarz inequality from the first to the second line and the instantaneous velocity limit from the second to the third line. Since this inequality holds for all \(\{a_{k}\}\), we have
\[\mathfrak{B}\mathfrak{B}^{\mathsf{T}}\preceq\overline{DI_{Q}}. \tag{118}\]
## Appendix C Proof of the equality condition for a single spin-1/2 system
Here, we discuss the proof of the equality conditions discussed in Sec. III.2.1. We first consider the case with \(\hat{H}=g\hat{\sigma}^{x}\) and \(\hat{A}=c_{I}\hat{\mathbb{I}}+\sum_{\alpha=x,y,z}c_{\alpha}\hat{\sigma}^{\alpha}\). A straightforward calculation leads to
\[\frac{d\left\langle\hat{A}\right\rangle}{dt} =2g(-c_{y}\left\langle\hat{\sigma}^{z}\right\rangle+c_{z}\left\langle \hat{\sigma}^{y}\right\rangle)\] \[\left\langle\hat{A}^{2}\right\rangle-\sum_{\mu=1,2}\left\langle \hat{A},\hat{\Lambda}_{\mu}\right\rangle^{2} =c_{y}^{2}+c_{z}^{2}-\frac{(c_{y}\left\langle\hat{\sigma}^{y} \right\rangle+c_{z}\left\langle\hat{\sigma}^{z}\right\rangle)^{2}}{1-\left\langle \hat{\sigma}^{x}\right\rangle^{2}}\] \[\Delta H^{2} =g^{2}(1-\left\langle\hat{\sigma}^{x}\right\rangle^{2}), \tag{100}\]
where \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\) and
\[\hat{\Lambda}_{2}=\frac{\hat{H}-\left\langle\hat{H}\right\rangle}{\sqrt{ \left\langle(\hat{H}-\left\langle\hat{H}\right\rangle)^{2}\right\rangle}}. \tag{101}\]
Thus, the equality
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|=2\Delta H\sqrt{ \left\langle\hat{A}^{2}\right\rangle-\sum_{\mu=1,2}\left\langle\hat{A},\hat{ \Lambda}_{\mu}\right\rangle^{2}} \tag{102}\]
holds true, where we have used \(\left\langle\hat{\sigma}^{x}\right\rangle^{2}+\left\langle\hat{\sigma}^{y} \right\rangle^{2}+\left\langle\hat{\sigma}^{z}\right\rangle^{2}=1\).
Now, we consider a more general Hamiltonian \(\hat{H}\). For two-level systems, the Hamiltonian can always be represented as
\[\hat{H}=g\hat{V}^{\dagger}\hat{\sigma}^{x}\hat{V}+h\hat{\mathbb{I}}, \tag{103}\]
where \(\hat{V}\) is a unitary operator. Then, for the state \(|\psi\rangle\), we have
\[\frac{d\left\langle\hat{A}\right\rangle}{dt}=\left\langle i[\hat{H},\hat{A}] \right\rangle=\left\langle\psi^{\prime}|i[g\hat{\sigma}^{x},\hat{A}^{\prime}]| \psi^{\prime}\right\rangle, \tag{104}\]
where \(|\psi^{\prime}\rangle=\hat{V}\left|\psi\right\rangle\) and \(\hat{A}^{\prime}=\hat{V}\hat{A}\hat{V}^{\dagger}\). Using the above discussion, we have
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|=2\Delta H^{\prime} \sqrt{\left\langle\hat{A}^{\prime 2}\right\rangle^{\prime}-\sum_{\mu=1,2}\left\langle \hat{A}^{\prime},\hat{\lambda}_{\mu}\right\rangle^{\prime 2}}, \tag{105}\]
where \(\left\langle\cdots\right\rangle^{\prime}=\left\langle\psi^{\prime}|\cdots| \psi^{\prime}\right\rangle\) and
\[\Delta H^{\prime 2}=g^{2}(1-\left\langle\hat{\sigma}^{x}\right\rangle^{\prime 2}), \tag{106}\]
\(\hat{\lambda}_{1}=\hat{\mathbb{I}}\), and
\[\hat{\lambda}_{2}=\frac{\hat{\sigma}^{x}-\left\langle\hat{\sigma}^{x}\right \rangle^{\prime}}{\sqrt{\left\langle\left(\hat{\sigma}^{x}-\left\langle\hat{ \sigma}^{x}\right\rangle^{\prime}\right)^{2}\right\rangle}}. \tag{107}\]
Now, a straightforward calculation leads to \(\Delta H^{\prime}=\Delta H\), \(\left\langle\hat{A}^{\prime 2}\right\rangle^{\prime}=\left\langle\hat{A}^{2}\right\rangle\), and \(\left\langle\hat{A}^{\prime},\hat{\lambda}_{\mu}\right\rangle^{\prime}=\left\langle\hat{A},\hat{\Lambda}_{\mu}\right\rangle\). Then, we find that the equality condition in the form of Eq. (102) holds even in this general case.
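A quick numerical confirmation of this equality (a random qubit instance, not the original code), using the symmetrized inner product \(\langle\hat{X},\hat{Y}\rangle=\langle\{\hat{X},\hat{Y}\}\rangle/2\) so that \(\langle\hat{A},\hat{\Lambda}_{1}\rangle^{2}=\langle\hat{A}\rangle^{2}\) and \(\langle\hat{A},\hat{\Lambda}_{2}\rangle^{2}=\mathrm{Cov}(\hat{A},\hat{H})^{2}/\Delta H^{2}\):

```python
# Numerical check (random qubit instance): for a single spin-1/2 pure state,
# |d<A>/dt| equals 2*Delta_H*sqrt(<A^2> - <A>^2 - Cov(A,H)^2/Delta_H^2),
# i.e. the equality condition of Appendix C holds for generic H and A.
import numpy as np

rng = np.random.default_rng(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

h = rng.normal(size=4)
H = h[0] * I2 + h[1] * sx + h[2] * sy + h[3] * sz     # generic qubit Hamiltonian
c = rng.normal(size=4)
A = c[0] * I2 + c[1] * sx + c[2] * sy + c[3] * sz     # generic observable

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def ev(op_):
    return np.vdot(psi, op_ @ psi).real

dH2 = ev(H @ H) - ev(H) ** 2
cov = 0.5 * ev(H @ A + A @ H) - ev(H) * ev(A)         # symmetrized covariance
lhs = abs(ev(1j * (H @ A - A @ H)))                   # |d<A>/dt|
rhs = 2 * np.sqrt(dH2) * np.sqrt(ev(A @ A) - ev(A) ** 2 - cov ** 2 / dH2)
print(f"|d<A>/dt| = {lhs:.6f}   bound = {rhs:.6f}   (equal up to rounding)")
```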
## Appendix D Other bounds based on invariant observables
Here, we discuss two additional applications of our bounds based on invariant observables in Eq. (27) in the main text, assuming the unitary time evolution \(\frac{d\hat{\rho}(t)}{dt}=-i[\hat{H}(t),\hat{\rho}(t)]\). The first one is to consider the (instantaneous) energy eigenstates of \(\hat{H}(t)\), \(|E_{\alpha}\rangle\) (\(\alpha=1,2,\cdots\)). We can take
\[\hat{\Lambda}_{\mu}=\frac{\left|E_{\mu}\right\rangle\left\langle E_{\mu} \right|}{\sqrt{\left\langle E_{\mu}\right|\hat{\rho}|E_{\mu}\rangle}}, \tag{108}\]
which satisfy the orthonormalization condition. We then have
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\sqrt{\left\langle\hat{A}^{2}\right\rangle-\sum_{\mu}\frac{\left\langle E_{\mu}\right|\{\hat{\rho},\hat{A}\}\left|E_{\mu}\right\rangle^{2}}{4\left\langle E_{\mu}\right|\hat{\rho}\left|E_{\mu}\right\rangle}}\sqrt{I_{Q}}. \tag{109}\]
Another example is to take powers of \(\hat{\rho}\) (\(=\hat{\rho}(t)\)) assuming a mixed state, since \(d\left\langle\hat{\rho}^{n}\right\rangle/dt=0\) due to the conservation of purity. We here consider the simplest case \(\hat{\Lambda}_{1}=\hat{\mathbb{I}}\) and \(\hat{\Lambda}_{2}=(\hat{\rho}-\left\langle\hat{\rho}\right\rangle)/\sqrt{\left\langle(\hat{\rho}-\left\langle\hat{\rho}\right\rangle)^{2}\right\rangle}\). In this case, we find
\[\left|\frac{d\left\langle\hat{A}\right\rangle}{dt}\right|\leq\Delta A\sqrt{I_{Q }}\sqrt{1-\phi_{A\rho}^{2}}. \tag{110}\]
## Appendix E Proof of Eq. (59)
We express the Hamiltonian as
\[\hat{H}=h_{I}\hat{\mathbb{I}}+h_{x}\hat{\sigma}^{x}+h_{y}\hat{\sigma}^{y}+h_{z} \hat{\sigma}^{z}. \tag{111}\]
We have
\[\Delta H^{2}=h_{x}^{2}(1-\left\langle\hat{\sigma}^{x}\right\rangle^{2})+h_{y}^{ 2}(1-\left\langle\hat{\sigma}^{y}\right\rangle^{2})+h_{z}^{2}(1-\left\langle \hat{\sigma}^{z}\right\rangle^{2})\] \[-2h_{x}h_{y}\left\langle\hat{\sigma}^{x}\right\rangle\left\langle \hat{\sigma}^{y}\right\rangle-2h_{x}h_{z}\left\langle\hat{\sigma}^{x}\right\rangle \left\langle\hat{\sigma}^{z}\right\rangle-2h_{z}h_{y}\left\langle\hat{\sigma}^{z} \right\rangle\left\langle\hat{\sigma}^{y}\right\rangle \tag{112}\]
and
\[\left|\frac{d\left\langle\hat{\sigma}^{x}\right\rangle}{dt}\right|^{2} =4(h_{y}\left\langle\hat{\sigma}^{z}\right\rangle-h_{z}\left\langle \hat{\sigma}^{y}\right\rangle)^{2}\] \[\left|\frac{d\left\langle\hat{\sigma}^{y}\right\rangle}{dt}\right|^ {2} =4(h_{x}\left\langle\hat{\sigma}^{z}\right\rangle-h_{z}\left\langle \hat{\sigma}^{x}\right\rangle)^{2}\] \[\left|\frac{d\left\langle\hat{\sigma}^{z}\right\rangle}{dt}\right|^ {2} =4(h_{y}\left\langle\hat{\sigma}^{x}\right\rangle-h_{x}\left\langle \hat{\sigma}^{y}\right\rangle)^{2}. \tag{113}\]
Using \(\left\langle\hat{\sigma}^{x}\right\rangle^{2}+\left\langle\hat{\sigma}^{y}\right\rangle ^{2}+\left\langle\hat{\sigma}^{z}\right\rangle^{2}=1\), we obtain Eq. (59).
## Appendix F Proof of Eq. (70)
To prove Eq. (70), we first note that
\[B_{k} =\left\langle\hat{A}_{k}-\sum_{\mu}\left\langle\hat{A}_{k},\hat{ \Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{L}\right\rangle\] \[=\left\langle\hat{A}_{k}-\sum_{\mu}\left\langle\hat{A}_{k},\hat{ \Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{L}_{1}\right\rangle\] \[=\left\langle\hat{A}_{k}-\sum_{\mu}\left\langle\hat{A}_{k},\hat{ \Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{L}_{1}-\sum_{\mu}\left\langle \hat{L}_{1},\hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu}\right\rangle \tag{101}\]
for \(1\leq k\leq K\).
We next consider \((K+1)\)-dimensional vector \(\vec{B}^{\prime}\), which is given by
\[\vec{B}^{\prime}=\left(\vec{B},\left\langle\hat{L}_{2}-\sum_{\mu}\left\langle\hat{L}_{2},\hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{L}_{1}-\sum_{\mu}\left\langle\hat{L}_{1},\hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu}\right\rangle\right)^{\mathsf{T}}. \tag{102}\]
In a manner similar to Appendix A, we have
\[(\vec{B}^{\prime})^{\mathsf{T}}(D^{\prime})^{-1}\vec{B}^{\prime}\leq\mathcal{ F}_{11}, \tag{103}\]
where \(\mathcal{F}_{zz^{\prime}}\) is given in Eq. (71) and \(D^{\prime}\) is block-diagonalized as
\[D^{\prime}=\begin{pmatrix}D&\vec{0}\\ \vec{0}^{\mathsf{T}}&\mathcal{F}_{22}\end{pmatrix}, \tag{104}\]
where we have used the relation
\[0 =\left\langle\hat{A}_{k}-\sum_{\mu}\left\langle\hat{A}_{k},\hat{ \Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{L}_{2}\right\rangle\] \[=\left\langle\hat{A}_{k}-\sum_{\mu}\left\langle\hat{A}_{k},\hat{ \Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu},\hat{L}_{2}-\sum_{\mu}\left\langle \hat{L}_{2},\hat{\Lambda}_{\mu}\right\rangle\hat{\Lambda}_{\mu}\right\rangle. \tag{105}\]
Then we have
\[(D^{\prime})^{-1}=\begin{pmatrix}D^{-1}&\vec{0}\\ \vec{0}^{\mathsf{T}}&\mathcal{F}_{22}^{-1}\end{pmatrix}. \tag{106}\]
From Eq. (104), we immediately obtain Eq. (70):
\[\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\leq\mathcal{F}_{11}-\frac{\mathcal{F}_{12}^ {2}}{\mathcal{F}_{22}}=\frac{\text{Det}[\mathcal{F}]}{\mathcal{F}_{22}}. \tag{107}\]
While we have subtracted \(\sum_{\mu}\left\langle\hat{L}_{k},\hat{\Lambda}_{\mu}\right\rangle\hat{ \Lambda}_{\mu}\) from \(\hat{L}_{k}\) above, a similar discussion can be made without this subtraction. In this case, we obtain
\[\vec{B}^{\mathsf{T}}D^{-1}\vec{B}\leq\mathcal{I}_{11}-\frac{\mathcal{I}_{12}^ {2}}{\mathcal{I}_{22}}=\frac{\text{Det}[\mathcal{I}]}{\mathcal{I}_{22}}. \tag{108}\]
instead.
Appendix G Velocity limit for multiple observables based on the local conservation law of probability
Here, we discuss the detail of the velocity limit for multiple observables based on the local conservation law of probability.
### Derivation of the case for multiple observables
While we can consider both classical and quantum systems, we here use the quantum description to keep generality. We assume that (invariant) observables are diagonal in the \(i\)-basis, i.e.,
\[\hat{A}_{k}=\sum_{i}(a_{k})_{i}\ket{i}\bra{i} \tag{109}\]
and
\[\hat{\Lambda}_{\nu}=\sum_{i}(\lambda_{\nu})_{i}\ket{i}\bra{i}. \tag{110}\]
We also assume the orthonormalization condition for \(\nabla\Lambda_{\nu}\), instead of \(\hat{\Lambda}_{\nu}\). That is,
\[\left\langle\nabla\Lambda_{\mu},\nabla\Lambda_{\nu}\right\rangle_{r}=\delta_{ \mu\nu}. \tag{111}\]
The proof of the multiple-observable bound is similar to that in Appendix A. Let us consider the general case with multi-parameters \(y_{1},\cdots,y_{Z}\) as in Appendix A and assume the continuity equation
\[\partial_{y_{z}}p_{i}=-\sum_{j(\neq i)}J_{ji}^{z}. \tag{112}\]
We first notice that \(B_{kz}\) in Eq. (102) is given by
\[B_{kz}=\left\langle\nabla A_{k},\mathbf{u}_{z}\right\rangle_{r} \tag{113}\]
with \((\mathbf{u}_{z})_{ij}=J_{ij}^{z}/r_{ij}\). As we assume that the invariant operators satisfy \(\partial_{y_{z}}\left\langle\hat{\Lambda}_{\mu}\right\rangle=0\) for all \(z\) and \(\mu\), we have \(\left\langle\nabla\Lambda_{\mu},\mathbf{u}_{z}\right\rangle_{r}=0.\) Then we have
\[\sum_{kz}a_{k}b_{z}B_{kz}=\left\langle\sum_{k}a_{k}\left(\nabla A_{k}-\sum_{\mu}f_{k\mu}^{\prime}\nabla\Lambda_{\mu}\right),\sum_{z}b_{z}\mathbf{u}_{z}\right\rangle_{r} \tag{114}\]
for arbitrary real vectors \((a_{1},\cdots,a_{K})\) and \((b_{1},\cdots,b_{Z})\) and real coefficients \(\{f_{k\mu}^{\prime}\}\). As in Appendix A, we find that \(f_{k\mu}^{\prime}=\left\langle\nabla A_{k},\nabla\Lambda_{\mu}\right\rangle_{r}\) provides a tighter bound than the other choices, so we assume this choice in the following.
Using the Cauchy-Schwarz inequality, we have
\[\sum_{kzk^{\prime}z^{\prime}}a_{k}a_{k^{\prime}}b_{z}b_{z^{\prime}}B_{kz}B_{k^{ \prime}z^{\prime}}\leq\sum_{kzk^{\prime}z^{\prime}}a_{k}a_{k^{\prime}}b_{z}b_{z^ {\prime}}\mathcal{D}_{kk^{\prime}}\mathcal{U}_{zz^{\prime}}, \tag{115}\]
where
\[\mathcal{D}_{kk^{\prime}}:=\left\langle\nabla A_{k},\nabla A_{k^{\prime}}\right\rangle _{r}-\sum_{\mu}\left\langle\nabla A_{k},\nabla\Lambda_{\mu}\right\rangle_{r} \left\langle\nabla\Lambda_{\mu},\nabla A_{k^{\prime}}\right\rangle_{r} \tag{107}\]
and
\[\mathcal{U}_{zz^{\prime}}:=\left\langle\mathbf{u}_{z},\mathbf{u}_{z^{\prime} }\right\rangle_{r}. \tag{108}\]
If we take \(Z=1\), (106) indicates
\[\vec{B}\vec{B}^{\mathsf{T}}\preceq\mathcal{D}U \tag{109}\]
and
\[\vec{B}^{\mathsf{T}}\mathcal{D}^{-1}\vec{B}\leq U, \tag{110}\]
which we have presented in the main text. If we instead consider \(K=1\) and \(Z\neq 1\), we have a matrix inequality
\[\vec{\mathsf{B}}\vec{\mathsf{B}}^{\mathsf{T}}\preceq\mathcal{U}\left(\left\langle \nabla A,\nabla A\right\rangle_{r}-\sum_{\mu}\left\langle\nabla A,\nabla\Lambda _{\mu}\right\rangle_{r}^{2}\right) \tag{111}\]
and a scalar inequality
\[\vec{\mathsf{B}}^{\mathsf{T}}\mathcal{U}^{-1}\vec{\mathsf{B}}\leq\left(\left \langle\nabla A,\nabla A\right\rangle_{r}-\sum_{\mu}\left\langle\nabla A, \nabla\Lambda_{\mu}\right\rangle_{r}^{2}\right), \tag{112}\]
where we have assumed that \(\mathcal{U}\) has an inverse.
### Derivation of Eq. (97) in the main text
We derive Eq. (97) in the main text, assuming the single-particle quantum system whose Hamiltonian is given in Eq. (86). We take \(M=0\), i.e., neglect the term concerning \(\nabla\Lambda_{\mu}\).
We first show that, if \(r_{ij}\leq r_{ij}^{\prime}\) for all \(i\) and \(j\), we have a matrix inequality \(\mathcal{D}\preceq\mathcal{D}^{\prime}\), where \(\mathcal{D}_{kk^{\prime}}:=\left\langle\nabla A_{k},\nabla A_{k^{\prime}} \right\rangle_{r}\) and \(\mathcal{D}^{\prime}_{kk^{\prime}}:=\left\langle\nabla A_{k},\nabla A_{k^{ \prime}}\right\rangle_{r^{\prime}}\). Indeed, \(\mathcal{D}^{\prime}-\mathcal{D}\) has matrix elements given by
\[\mathcal{D}^{\prime}_{kk^{\prime}}-\mathcal{D}_{kk^{\prime}}=\left\langle \nabla A_{k},\nabla A_{k^{\prime}}\right\rangle_{r^{\prime}-r}, \tag{113}\]
which is positive semidefinite. Thus, \(\mathcal{D}\preceq\mathcal{D}^{\prime}\) holds, and then
\[\vec{B}^{\mathsf{T}}\mathcal{D}^{\prime}{}^{-1}\vec{B}\leq\vec{B}^{\mathsf{ T}}\mathcal{D}^{-1}\vec{B}\leq U. \tag{114}\]
We now take
\[r_{ij}=|H_{ij}\rho_{ji}|\leq r_{ij}^{\prime}=|H_{ij}|\frac{p_{i}+p_{j}}{2} \tag{115}\]
with \(H_{ij}=J_{h}(\delta_{i,j+1}+\delta_{i,j-1})\). In this case,
\[\mathcal{D}^{\prime}{}_{kk^{\prime}}:=\] \[J_{h}\sum_{i}\frac{p_{i}}{2}\left((\nabla A_{k})_{i,i+1}(\nabla A _{k^{\prime}})_{i,i+1}+(\nabla A_{k})_{i,i-1}(\nabla A_{k^{\prime}})_{i,i-1} \right), \tag{116}\]
where we neglect the boundary contribution, assuming that \(p_{i}\) vanishes for \(i\rightarrow\pm\infty\).
For \(K=2\), we especially have
\[\frac{\mathcal{D}^{\prime}_{22}|\frac{d\langle\hat{A}_{1}\rangle}{dt}|^{2}-2\mathcal{D}^{\prime}_{12}\frac{d\langle\hat{A}_{1}\rangle}{dt}\frac{d\langle\hat{A}_{2}\rangle}{dt}+\mathcal{D}^{\prime}_{11}|\frac{d\langle\hat{A}_{2}\rangle}{dt}|^{2}}{\mathcal{D}^{\prime}_{11}\mathcal{D}^{\prime}_{22}-|\mathcal{D}^{\prime}_{12}|^{2}}\leq U, \tag{117}\]
where \(U\leq 2C_{H}-2E_{\mathrm{trans}}^{2}/C_{H}\) (see (82)). Noting that \(C_{H}=2J_{h}\) and introducing \(\mathcal{C}=\mathcal{D}^{\prime}/J_{h}\), we have
\[\frac{\mathcal{C}_{22}|\frac{d\langle\hat{A}_{1}\rangle}{dt}|^{2}-2\mathcal{C}_{12}\frac{d\langle\hat{A}_{1}\rangle}{dt}\frac{d\langle\hat{A}_{2}\rangle}{dt}+\mathcal{C}_{11}|\frac{d\langle\hat{A}_{2}\rangle}{dt}|^{2}}{\mathcal{C}_{11}\mathcal{C}_{22}-|\mathcal{C}_{12}|^{2}}\leq J_{h}U. \tag{118}\]
Finally, straightforward calculation leads to Eq. (97) in the main text:
\[|\mathcal{V}_{1}-\bar{\chi}\mathcal{V}_{2}|\leq\sqrt{(1-\bar{\chi}^{2})(J_{h}U-\mathcal{V}_{2}^{2})}\leq\sqrt{(1-\bar{\chi}^{2})(4J_{h}^{2}-E_{\mathrm{trans}}^{2}-\mathcal{V}_{2}^{2})}, \tag{119}\]
where \(\mathcal{V}_{k}:=\frac{d\langle\hat{A}_{k}\rangle}{dt}/\sqrt{\mathcal{C}_{kk}}\), and \(\bar{\chi}:=\mathcal{C}_{12}/\sqrt{\mathcal{C}_{11}\mathcal{C}_{22}}\leq 1\).
As an example, let us consider the position operator \(\hat{A}_{1}=\hat{x}=\sum_{i}i\left|i\right\rangle\!\left\langle i\right|\) and the imbalance operator for odd-even density, \(\hat{s}=\sum_{i}(4\lfloor i/2\rfloor-2i+1)\left|i\right\rangle\!\left\langle i\right|\). We then have
\[(\nabla x)_{i,i+1}=1,\quad(\nabla s)_{i,i+1}=-2(-1)^{i}, \tag{120}\]
and \(\mathcal{C}_{11}=1,\mathcal{C}_{12}=\mathcal{C}_{21}=0\), and \(\mathcal{C}_{22}=4\). Thus, \(\bar{\chi}=0\) and we have
\[\left|\partial_{t}\left\langle\hat{x}\right\rangle\right|^{2}+\frac{\left| \partial_{t}\left\langle\hat{s}\right\rangle\right|^{2}}{4}\leq 4J_{h}^{2}-E_{ \mathrm{trans}}^{2}. \tag{121}\]
|
2309.12348
|
ChatGPT impacts in programming education: A recent literature overview
that debates ChatGPT responses
|
This paper aims at a brief overview of the main impact of ChatGPT in the
scientific field of programming and learning/education in computer science. It
lists, covers and documents from the literature the major issues that have been
identified for this topic, such as applications, advantages and limitations,
ethical issues raised. Answers to the above questions were solicited from
ChatGPT itself, the responses were collected, and then the recent literature
was surveyed to determine whether or not the responses are supported. The paper
ends with a short discussion on what is expected to happen in the near future.
A future that can be extremely promising if humanity manages to have AI as a
proper ally and partner, with distinct roles and specific rules of cooperation
and interaction.
|
Christos-Nikolaos Anagnostopoulos
|
2023-08-30T06:41:57Z
|
http://arxiv.org/abs/2309.12348v2
|
ChatGPT impacts in programming education: A recent literature overview that debates ChatGPT responses
###### Abstract
This paper aims at a brief overview of the main impact of ChatGPT in the scientific field of programming and learning/education in computer science. It lists, covers and documents from the literature the major issues that have been identified for this topic, such as applications, advantages and limitations, ethical issues raised. Answers to the above questions were solicited from ChatGPT itself, the responses were collected, and then the recent literature was surveyed to determine whether or not the responses are supported. The paper ends with a short discussion on what is expected to happen in the near future. A future that can be extremely promising if humanity manages to have AI as a proper ally and partner, with distinct roles and specific rules of cooperation and interaction.
ChatGPT, Programming, Computer Science, Overview, Survey, Education, Ethics.
## 1 Introduction
Recently, in the field of human-computer communication and natural language processing, AI-based language models such as ChatGPT (versions 3 and 4) have shown excellent performance in translation, question answering, inference estimation, text evaluation and summarization, but also in the field of automatic code generation and programming functions. These models have the capability to generate code and assist programmers in completing tasks more efficiently. However, there are challenges associated with AI-generated code, such as syntax errors that can hinder comprehension and functionality. Despite these challenges, the integration of AI in computer science and code programming has the potential to enhance productivity, accessibility, and problem-solving capabilities.
This paper attempts to provide a brief and recent overview of the literature that supports or disputes the points that ChatGPT itself identifies as its most important impacts, advantages and disadvantages. Moreover, the paper organizes the major impacts in computer science (programming and education) into structural categories, as identified by ChatGPT and as discussed in over 40 recent research papers (most of them published in 2022 and 2023).
The major goal is to provide a brief source of reference for researchers involved in the above scientific fields. This would support systematic performance evaluation in the scientific community worldwide and allow developers to get familiar with technologies that are still immature for the time being, but are here to stay. It is now time not to be afraid of these technologies, but to adapt to them and use them as assistive tools.
## 2 Impacts of AI generated content and ChatGPT in programming
The following subsections discuss the main implications of AI-generated content and ChatGPT in the field of computer science programming and programmer training. The main applications, advantages, limitations, future prospects, and ethical considerations are evaluated, with recent documentation and sources that support or dispute those outcomes. To this end, the following five questions were addressed to ChatGPT ver. 3.5, in order to identify the opinion of ChatGPT on these issues:
_Question 1: What are the most important applications of ChatGPT in programming and training of programmers?_
_Question 2: What are the major advantages of ChatGPT in programming and training of programmers?_
_Question 3: What are the most important limitations of ChatGPT in programming and training of programmers?_
_Question 4: What are the future prospects of ChatGPT in programming and training of programmers?_
_Question 5: What are ethical considerations of ChatGPT in programming and training of programmers?_
In the following five sections, the responses of ChatGPT to each of the above five questions are given in bullets in the respective tables. Each table lists the recent papers identified in the literature that have a positive, negative or partially positive/negative view. The tables are then followed by a brief discussion.
### Question 1: Applications
Table I. The question about "most important applications" addressed to ChatGPT and the responses, highlighted in grey.

**Question: What are the most important applications of ChatGPT in programming and training of programmers?**
**a. Assistance in code management and production:** Automatically generated AI content helps developers by showing suggestions for code or code snippets, syntax checks and debugging help. Indeed, ChatGPT is an effective tool for generating computer language code snippets (Sallam, 2023; Devlin, 2018), but also for conducting comprehensive literature reviews and queries from the literature, thus helping scientists even in creating content for research proposals (Liu et al., 2019; Li, 2023). In any case, if one wants an AI tool specifically tailored for programming, GitHub Copilot is an assistant built on AI principles for exactly that purpose (Prather, 2023).
**b. Programming tutoring:** Content generated using AI is an excellent aid for users who are first learning computer programming languages, as it is possible to develop explanatory dialogues for various concepts and answer basic questions. ChatGPT offers process assistance for writing code, improving training in programming languages and also supporting users in programming tasks. ChatGPT has even been compared to other tools for automated programming code generation and has been found to have unique
capabilities (Zambrano, 2023). In the context of research and education, ChatGPT has been found to be a tool that assists researchers in writing research papers, as mentioned above. However, it is particularly critical to keep in mind that ChatGPT often requires the user to validate and verify the accuracy of the information it provides. The quality of ChatGPT's responses, particularly in literature reviews, literature reports and data sources (Ferrouhi, 2023; Hopkins et al., 2023), has been debated, and specific doubts have been raised about its accuracy.
In the field of computer programming education, ChatGPT has also been used to assist teachers in related tasks, such as grading and assessment. However, just as is the case for learners/students, there are serious questions about the risk of reducing instructors' critical thinking if they become overly dependent on ChatGPT (Iskender, 2023; Uddin et al., 2023). It is no coincidence, then, that the overall conclusion is that ChatGPT can damage the critical thinking ability of its users (both students and instructors) and their drive for continuous self-improvement if they do not understand the appropriate degree of its use and instead treat it as the main tool of the educational process.
**c. Natural Language Processing (NLP):** ChatGPT finds particularly important applications in the field of natural language processing (NLP), which include computer science education as mentioned above, but also its assessment, automatic question answering systems, automated text generation, as well as translation between texts in different languages and types (Serdaliyev & Zhunissov, 2023).
In the broader field of NLP, ChatGPT found immediate applications in chatbots and virtual assistants (Serdaliyev & Zhunissov, 2023; Gilson et al., 2023), which is not surprising as it is itself a form of chatbot. Moreover, ChatGPT is proposed for automated paper writing applications, and text generation (text from scratch or revised versions) for research papers and articles (Gilson et al., 2023; Temsah et al., 2023).
Beyond writing papers, ChatGPT is reported to be used for research paper editing, academic instruction, and knowledge-based assessments (Stokel-Walker & Noorden, 2023; Buriak et al., 2023; Banerjee et al., 2023; Ali et al., 2023), as well as a tool for writing research proposals for funding and for academic theses (Qasem, 2023). However, as will be discussed in the section on ethical issues, a great concern is identified about an increase in plagiarism and over-reliance on ChatGPT (Qasem, 2023), leading to limitations in analytical capabilities (Stokel-Walker & Noorden, 2023); for this reason, the necessity of human verification before submitting proposals or a thesis is also emphasized (Sallam, 2023).
**d. Technical Documentation:** Artificial intelligence text generation models are capable of assisting in the completion of technical documentation
and explanatory text for projects or software applications. Demonstrated success in this area brings to the fore the importance of design choices and raises questions about the source of recent improvements in the performance of language models (Liu et al., 2019). The attention mechanism, as presented in (Vaswani et al., 2017), is the one that has played the most crucial role in the development of these language models. Therefore, by incorporating the power of AI language models, software projects and applications can benefit from the automated production of the technical documentation that should always accompany a software project, thus increasing its sustainability and keeping new releases fully documented.
### Question 2: Advantages
Table II. The question about "major advantages" of ChatGPT and the responses are highlighted in grey.
**a. Increase of software developer productivity and rapid idea prototyping:** AI-generated content can speed up development tasks, reduce coding errors, and enhance the overall productivity of software engineers. It also enables individuals with little or no programming experience to engage with computer science concepts and coding. In particular, ChatGPT can be used to speed up software development tasks and enhance the overall productivity of engineers engaged in these tasks (Sallam, 2023; Haque & Li, 2023). As mentioned in the previous section, ChatGPT helps in computer code generation or software prototypes, and with its proper utilization, software
engineers can save time from low-level tasks that require significant effort and focus on higher-level tasks and experimental design (Sallam, 2023), get explanations, feedback and acquire realistic virtual simulations that may contribute to hands-on learning experiences (Qadir, 2022). A very good example is that by using ChatGPT, developers can take advantage of its ability to easily produce code at a fairly satisfactory level, fill in any missing parts and/or update it, or even develop it in different programming languages (Kreitmeir & Raschky, 2023). Moreover, an important capability of ChatGPT is the provision of explanations and suggestions that can help reduce coding errors and optimize the quality of the code being programmed (Haque & Li, 2023).
**b. Continuous Learning:** ChatGPT can also assist in the formation of collaborative workflows during the implementation of an IT project or during the production of software, forming an enhanced human-machine interaction (Nascimento, 2023). At the same time, ChatGPT's capabilities in natural language processing, beyond the creation of new content, have been reported to offer significant potential in information retrieval, cataloging and indexing, and metadata creation (Lund & Wang, 2023). In addition to the above, it is important to add its flexibility and interactivity that has been found to increase the productivity of scientists and their ability to access new knowledge and skills (Irons, 2023).
### Question 3: Limitations
Table III. The question about "most important limitations" of ChatGPT and the responses are highlighted in grey.
**Question: What are the most important limitations of ChatGPT in programming and training of programmers?**

_ChatGPT 3.5 response in bullets below:_

* a. Lack of Context: Wang et al. (2021); Khoshafah (2023); Aljanabi et al. (2023); Buriak et al. (2023)
* b. Security concerns: Flanagin (2023); Buriak et al. (2023)
**a. Lack of Context:** This is the most important limitation of AI-generated content, since AI still cannot fully understand the context of specific codebases or assets, leading to potential inaccuracies and inappropriate suggestions.
Wang et al. (2021) discuss the limitations of current methods in processing code snippets, neglecting the special characteristics of programming languages and token types, which can result in suboptimal understanding and generation tasks. Khoshafah (2023) notes issues related to the translation accuracy of ChatGPT and highlights limitations in understanding domain-specific terminology and cultural context, which can affect the accuracy of translations. Aljanabi et al. (2023) mention that ChatGPT may not fully understand the nuances of security vulnerabilities, reverse engineering, or malware analysis, potentially leading to inaccurate or less useful information in coding. Additionally, they mention that ChatGPT may not be able to handle certain types of queries, such as mathematical calculations.
**b. Security Concerns:** AI models like ChatGPT might inadvertently generate code with security vulnerabilities, which could be exploited by malicious actors. Security concerns arise in the use of ChatGPT in computer science applications. The potential for misuse, such as using ChatGPT to cheat on assignments, write essays, or take exams, including computer science and programming exams, has been identified. The inclusion of ChatGPT as a bylined author in scientific articles has raised concerns about the integrity of scientific publication and the indexing of nonhuman "authors". The lack of accountability and responsibility associated with AI tools like ChatGPT has prompted the development of policies by journals and organizations to address these issues (Flanagin, 2023). Additionally, the limitations of ChatGPT, such as its lack of context and inability to understand new information or generate deep analysis, restrict its utility in writing up-to-date reviews, perspectives, and introductions (Buriak et al., 2023). It is crucial to address these security concerns and ensure responsible use of ChatGPT in computer science to maintain the integrity of scientific research and programming education.
**c. Overconfidence or excessive trust in AI:** Dependence on AI-generated code without proper understanding would certainly affect developers' growth negatively. While AI technologies like low-code development platforms offer efficiency and productivity benefits, relying solely on AI-generated code may limit developers' understanding of underlying concepts and principles. This lack of understanding can hinder their ability to troubleshoot, debug, and optimize code, resulting in less robust and maintainable solutions. Additionally, the lack of comprehension may impede
developers' growth and hinder their ability to adapt to new technologies and challenges (Tang, 2023). Therefore, it is important to strike a balance between implementing AI technologies and ensuring developers' comprehensive understanding of code and underlying principles.
### Question 4: Future Prospects
**a. AI language model customization:** Future iterations of AI language models would certainly be tailored to specific programming languages and frameworks, providing more specialized support. As an example, Devlin (2018) introduced BERT (Bidirectional Encoder Representations from Transformers), a language representation model that can be fine-tuned for various tasks. BERT was designed to pretrain deep bidirectional representations from unlabeled text, and it has been shown to achieve state-of-the-art results on multiple natural language processing tasks (Devlin, 2018). Additionally, Liu et al. (2019) presented RoBERTa, a robustly optimized BERT pretraining approach. While the latter does not directly address the customization of AI language models for specific programming languages and
frameworks, it provides insights into the advancements and optimizations in BERT-based models. More recently, Brown et al. (2020) discuss the scaling up of language models and its impact on task-agnostic, few-shot performance, providing evidence of the performance improvements achieved by scaling up language models.
**b. Collaborative operations:** AI models may facilitate collaborative programming, enabling developers to work together more effectively. Back in 2018, Mikhaylov et al. (2018) discussed the challenges of cross-sector collaboration in the public sector when adopting AI and data science and programming. The authors described challenges such as information silos, lack of resources, and collaborative culture. In addition, the authors highlighted the divergent approaches to managing risk in the public and private sectors. Since the political and market risks may not easily align, challenges may be created in collaborative programming efforts using AI models. Moreover, they continue mentioning that cross-sector collaborations, although popular, entail serious management challenges that hinder their success and may impact the effectiveness of collaborative programming facilitated by AI models.
**c. Self-Correcting Models:** AI models would eventually become better at recognizing and fixing their mistakes, leading to more accurate responses. AI language models have shown remarkable progress in various NLP tasks, but their ability to recognize and fix mistakes is an ongoing challenge. However, with the exploration of decoding strategies, fine-tuning approaches, and reward learning, there is potential for self-correcting models in ChatGPT. By addressing the limitations and leveraging advancements in the field, AI models could improve their error recognition and correction capabilities, ultimately leading to more accurate responses. Specifically, to address the challenge of mistake recognition, researchers have explored different decoding strategies for text generation from language models. (Holtzman et al., 2019) propose Nucleus Sampling, a method that draws higher quality text from neural language models compared to traditional decoding strategies. By truncating the unreliable tail of the probability distribution and sampling from the dynamic nucleus of tokens, Nucleus Sampling aims to avoid text degeneration and generate more coherent and diverse responses.
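For readers unfamiliar with this decoding strategy, the following Python sketch illustrates the core idea of nucleus (top-p) sampling; it is a minimal illustration of the published idea, not code from ChatGPT or any OpenAI system, and the toy logits are arbitrary.

```python
import numpy as np

def nucleus_sample(logits, p=0.9, rng=np.random.default_rng()):
    """Sample one token id with nucleus (top-p) sampling: keep the smallest set
    of most-probable tokens whose cumulative mass exceeds p, then sample from it."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax over the vocabulary
    order = np.argsort(probs)[::-1]            # tokens sorted by probability
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    nucleus = order[:cutoff]                   # the dynamic nucleus of tokens
    return int(rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum()))

# Toy example with a five-token vocabulary and arbitrary logits.
print(nucleus_sample(np.array([2.0, 1.5, 0.2, -1.0, -3.0]), p=0.9))
```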
In addition, recent studies have examined the fine-tuning of language models by exploiting user preferences and reinforcement learning to improve the accuracy and quality of responses. For instance, Ziegler (2019) proposes reward-based learning in natural language dialogues, building a reward model from human preferences elicited by asking users questions. This approach is a successful application of reinforcement learning in real-world dialogue scenarios, where rewards are determined by human judgment; such incorporation of human preferences and feedback could be adopted in future versions of ChatGPT.
In support of the above, Brown et al. (2020) emphatically highlighted the weaknesses that NLP systems face when performing new linguistic tasks given only a few examples or simple instructions. In their article they argue that AI models are not yet at the same level of error recognition and correction as a human user, but they acknowledge that this may change radically in the near future with improved decoding strategies, such as nucleus sampling, and the incorporation of sophisticated adaptation and reward learning.
### Question 5: Ethical Considerations
Table V. The question about "ethical considerations" of ChatGPT and the responses are highlighted in grey.
**a. Plagiarism, proper citation and acknowledgement:** This is one of the most important ethical issues that arises: code generated by AI should be acknowledged as automatically generated, and programmers should not appropriate these results without critical processing (Taddeo and Floridi, 2018).
To this end, Li (2022) addresses the issue of source code intellectual property and the necessity of accurately attributing code to its owner/developer, issues that are crucial in the software and application implementation cycle and in the broader fields of software forensics and software quality analysis. In the same article, the author acknowledges that, with current methods of attributing code authorship, code can be copied by other users, which means that the same phenomenon can occur with ChatGPT. Furthermore, Gibea and Uszkai (2023) summarize in their article the growing concern of the research community about the development and deployment of AI systems and point out the need for a common code of ethics to be established by international organizations in response to these concerns. In addition, Svetlova (2022) argues that AI ethics should also take into account all systemic implications arising from the use of AI, especially the systemic effects on developers' codes of conduct. All of the above argues that the proper attribution of automatically generated code is critical to address potential systemic risks, but also to ensure the appropriate development of AI itself.
Similarly, the article by Jobin and Vayena (2019) highlights the new landscape of ethical guidelines for AI with both meaningful analysis and appropriate implementation strategies. The importance of adhering to ethical considerations, among which is the appropriate attribution of the rights to a code segment, is re-emphasized, also for the development of AI systems. In the same vein, Taddeo and Floridi (2018) stress the importance of having an ethical framework for exploiting the potential of AI, but one where control remains strictly with the human agent. Furthermore, Minkkinen et al. (2022) present for the first time the use of AI in process evaluations in administrative, social and governmental issues, proposing standardized metrics for the responsible use of AI, including the proper attribution of AI-generated code.
The important question "Does AI own intellectual property rights in content produced by it?" is answered in the article by Sharma & Sony (2020). The article highlights the uncertainty that arises on this issue, but again clearly supports the need for appropriate attribution. In addition, Diaz-Noci (2020) went a little further into the question and presented the legal implications of news produced using AI systems and what happens to intellectual property rights. Although the article is mainly concerned with journalistic practices, it highlights the challenges in determining the rights of the author, and again strongly supports the need for appropriate attribution of AI-generated content.
Also of interest is the article by Almarzoqi and Albakjaji (2022), who explored the possibility of patenting products resulting from AI and examined the challenges posed by intellectual property laws in this context. As in the previous article, the need for appropriate attribution to the creator (AI) is emphasized. However, it is recognized that the legal frameworks that describe and regulate intellectual property in innovations and new inventions are not ready to patent something that is purely generated by AI.
More recently, Lu et al. (2023) propose the idea of collecting patterns to assist in the design of responsible AI systems. Their article highlights the need to address the challenges of responsible AI operation, part of which is the proper attribution of AI-generated code. Similarly, Haonan et al. (2023) discuss copyright protection and accountability of creative AI. Although they focus on intellectual property rights, their work highlights the need for attribution and accountability in the context of AI-generated works.
**b. Bias and Fairness:** Another important ethical aspect is the presence of bias in the training data. In programming, this would lead to biased or discriminatory code suggestions, and it is important to focus on efforts to mitigate such issues.
It should be emphasized that the impact of bias in training data, which can lead to discriminatory suggestions in AI-generated text or code, was recognized early by the scientific community. For instance, Caliskan et al. (2017) discuss how text corpora contain recoverable and accurate imprints of historic biases, including biases towards race or gender. This suggests that biases present in the training data can be reflected in the output of AI models, potentially leading to biased or discriminatory code suggestions. In line with the above, Dixon et al. (2018) demonstrated how imbalances in training data can lead to unintended bias in the resulting models. This implies that if the training data contains biased or discriminatory patterns, the AI model's code suggestions may also exhibit such biases.
Moreover, Garg et al. (2018) highlighted the presence of gender and ethnic stereotypes in word embeddings trained on text data. Word embeddings, which are widely used in natural language processing tasks, can amplify biases present in the training data. This suggests that biases in training data can affect the behavior of AI models, including code suggestions.
To sum up, ChatGPT has shown potential as a code assistance tool, aiding in generating computer code and supporting researchers in coding tasks. Developers should nevertheless be aware, when interacting with AI systems, of the model's limitations, so that they can make informed decisions and take responsibility for thoroughly reviewing and testing the output. To this end, Amodei et al. (2016) focus on AI safety and the potential risks associated with forward-looking applications of AI. While they do not directly address the need for developers to understand AI model limitations, their work highlights the importance of considering safety in the design and deployment of AI systems.
## 3 Discussion - outlook for the future
The next versions of ChatGPT, which will certainly be more comprehensive instances of AI-generated content tools, will significantly improve the automatic chat capabilities. As an AI-based language model, ChatGPT is designed to be trained by its interactions with humans and appropriately adapt its responses. As such, it is reasonable to expect that future versions of ChatGPT will be more comprehensive and reliable, as they will have gathered a greater amount of data and experience. This accumulation of knowledge will allow them to provide even more accurate and appropriate answers for the subjects for which questions are asked, both in general discussions and specifically in areas such as computer science education and code generation.
On a technical level, the ChatGPT developers will certainly incorporate
new updates that will include improvements in natural language processing, increased ability to understand user queries and the ability to generate responses that are more coherent and closer to everyday human communication in natural language. In addition, it is reasonable to assume that the developers have addressed any limitations or biases that were identified in previous versions, ensuring an ever-increasing impartiality of ChatGPT.
In addition, the next era for ChatGPT could include the integration of new features or functions. As a representative example, a feature has already been launched where a feedback mechanism is incorporated that allows users to provide feedback on the quality of responses. This feedback loop continuously improves ChatGPT based on user feedback, resulting in a more personalized and satisfying user experience. It is also expected that ChatGPT will be integrated with different AI models through artificial agents. Such collaborations will lead to a more diverse range of answers in computer science to create code, increase the knowledge base and improve problem-solving capabilities (Neil et al., 2022) and (Beiqi et al., 2023).
However, the human factor must play an important role in the final decisions and judgements. In particular, in the field of computer science, programmers should not rely solely on the answers provided by an automated system but should instead combine them with their own expertise and experience, and should continuously cultivate their knowledge, improve their skills and remain actively involved in continuous, lifelong learning (Christen et al., 2023).
**Data availability statement**
Zenodo: Anagnostopoulos C.N. (2023). ChatGPT impacts in programming education: A short list of input questions and output answers from ChatGPT [Data set]. Zenodo. [https://doi.org/10.5281/zenodo.8375014](https://doi.org/10.5281/zenodo.8375014)
This project contains the following underlying data: Questions and answers from ChatGPT concerning ChatGPT impacts in programming education (pdf file). Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
|
2303.10504
|
Optimization-based Constrained Funnel Synthesis for Systems with
Lipschitz Nonlinearities via Numerical Optimal Control
|
This paper presents a funnel synthesis algorithm for computing controlled
invariant sets and feedback control gains around a given nominal trajectory for
dynamical systems with locally Lipschitz nonlinearities and bounded
disturbances. The resulting funnel synthesis problem involves a differential
linear matrix inequality (DLMI) whose solution satisfies a Lyapunov condition
that implies invariance and attractivity properties. Due to these properties,
the proposed method can balance maximization of initial invariant funnel size,
i.e., size of the funnel entry, and minimization of the size of the attractive
funnel for attenuating the effect of disturbance. To solve the resulting funnel
synthesis problem with the DLMI as constraints, we employ a numerical optimal
control approach that uses a multiple shooting method to convert the problem
into a finite dimensional semidefinite programming problem. This framework does
not require piecewise linear system matrices and funnel parameters, which is
typically assumed in recent related work. We illustrate the proposed funnel
synthesis method with a numerical example.
|
Taewan Kim, Purnanand Elango, Taylor P. Reynolds, Behçet Açıkmeşe, Mehran Mesbahi
|
2023-03-18T21:34:14Z
|
http://arxiv.org/abs/2303.10504v2
|
Optimization-based Constrained Funnel Synthesis for Systems with Lipschitz Nonlinearities via Numerical Optimal Control
###### Abstract
This paper presents a funnel synthesis algorithm for computing controlled invariant sets and feedback control gains around a given nominal trajectory for dynamical systems with locally Lipschitz nonlinearities and bounded disturbances. The resulting funnel synthesis problem involves a differential linear matrix inequality (DLMI) whose solution satisfies a Lyapunov condition that implies invariance and attractivity properties. Due to these properties, the proposed method can balance maximization of initial invariant funnel size, i.e., size of the _funnel entry_, and minimization of the size of the attractive funnel for disturbance attenuation. To solve the resulting funnel synthesis problem with the DLMI as one of the problem constraints, we employ a numerical optimal control approach that uses a multiple shooting method to convert the problem into a finite dimensional semidefinite programming problem. This framework avoids the need for piecewise linear system matrices and funnel parameters, which are typically assumed in recent related work. We illustrate the proposed funnel synthesis method with a numerical example.
## I Introduction
A funnel, also referred to as a tube, represents a finite-time controlled invariant region of the state space for a closed-loop system equipped with an associated feedback control law around a given nominal trajectory [1]. Funnel synthesis refers to a procedure for computing both the controlled invariant set and the corresponding feedback control law. Once we compute a library of funnels along different nominal trajectories, the resulting funnels can be used for different purposes such as real-time motion planning [2] and feasible trajectory generation [3].
The studies in funnel synthesis can be separated into two categories depending on whether they aim to maximize [3, 4, 5] or minimize the size of the funnel [2, 6, 7]. The funnel computation inherently aims to maximize the size of the funnel to have a larger controlled invariant set in the state space. On the other hand, when it comes to systems under uncertainty or disturbances, the funnel has been computed in a way that minimizes the size of the funnel to bound the effect of the uncertainty. For example, the work in [2] minimizes the size of the funnel to prohibit collision with obstacles instead of imposing obstacle avoidance constraints directly. However, minimizing the size of the funnel is against the original purpose of having a large controlled invariant set in the state space. In this work, we provide a funnel synthesis algorithm that balances maximizing the size of the funnel and minimizing the effect of the bounded disturbance. To this end, we exploit invariance and attractivity conditions derived from Lyapunov theory [8] by solving linear matrix inequalities (LMIs) [9, 10] and imposing state and input constraints directly on the funnel.
When employing the Lyapunov condition, the resulting optimization problem has a differential inequality of the Lyapunov function in continuous time over a finite-time interval. Since it is intractable to satisfy the inequality for all time in the given interval, many approaches focus on checking the differential inequality at a finite number of node points [2, 4, 5]. When a quadratic Lyapunov function with a time-varying positive definite (PD) matrix is employed, the resulting differential inequality ends up as a differential linear matrix inequality (DLMI). To solve the resulting DLMI, one can assume that first-order approximations (Jacobian matrices) of the nonlinear dynamics computed around the nominal trajectory are continuous piecewise linear in time. By applying the same piecewise linear parametrization to the PD matrix in the Lyapunov function, one can obtain a finite number of LMIs whose feasibility is a sufficient condition for the original DLMI [3, 11]. The main downside of this approach is that the assumption of piecewise linear system matrices may have large errors and applying the same parametrization on the PD matrix can be conservative.
In this paper, we provide a constrained funnel synthesis algorithm for locally Lipschitz nonlinear systems under bounded disturbance. To this end, we express the closed-loop system around the given nominal trajectory as a linear time-varying system having uncertain terms. Then, the DLMI is derived based on the Lyapunov condition that guarantees the invariance and the attractivity conditions. With the Lyapunov condition, the continuous-time funnel optimization problem maximizes the size of the funnel entry and minimizes the attractive funnel for disturbance attenuation. Furthermore, the proposed method can satisfy linear state and control constraints in a way that the resulting funnel around the given nominal trajectory remains inside the feasible sets of states and controls. For instance, this can be mathematically written as \(\{\bar{x}\}\oplus\mathcal{E}_{c}\subseteq\mathcal{P}\) where \(\bar{x}\) is the state in the nominal trajectory, \(\mathcal{E}_{c}\) is the state funnel, \(\mathcal{P}\) is feasible state space, and the operation \(\oplus\) is Minkowski sum. To convert the funnel synthesis problem into a finite dimensional semidefinite programming (SDP) problem, we employ a numerical optimal control approach with a multiple shooting method [12]. Consequently, the proposed work does not depend on the piecewise linear assumption of the system matrices and the PD matrix in the Lyapunov function.
### _Contributions_
First, the proposed constrained funnel synthesis approach provides an optimization framework that can balance maximizing the size of the funnel entry and minimizing the effect of the disturbance for locally Lipschitz nonlinear systems while guaranteeing the constraint satisfaction of the funnel. This balancing has not been tackled in the relevant funnel work. Second, we provide an approach based on multiple shooting in numerical optimal control for solving the DLMI. As a consequence, the assumption that the system matrices and the PD matrix in the Lyapunov function are continuous piecewise linear is not necessary.
### _Notation_
The notations \(\mathbb{R}\), \(\mathbb{R}_{+}\), \(\mathbb{R}_{++}\), and \(\mathbb{R}^{n}\) denote the sets of real, nonnegative, and positive numbers, and the \(n\)-dimensional Euclidean space, respectively. The set \(\mathcal{N}_{q}^{r}\) is a finite set of consecutive nonnegative integers, i.e., \(\{q,q+1,\ldots,r\}\). The notation \(Q=Q^{\top}\succ(\succeq)\,0\) means that \(Q\) is a PD (PSD) matrix, and \(\mathbb{S}_{++}^{n}\) (\(\mathbb{S}_{+}^{n}\)) denotes the set of all PD (PSD) matrices of size \(n\times n\). The symbol \(\otimes\) is the Kronecker product. The notation * denotes the symmetric part of a matrix, i.e., \(\left[\begin{array}{cc}a&b^{\top}\\ b&c\end{array}\right]=\left[\begin{array}{cc}a&*\\ b&c\end{array}\right]\). The square root of a PSD matrix \(A\) is defined as \(A^{\frac{1}{2}}\) such that \(A=A^{\frac{1}{2}}A^{\frac{1}{2}}\). We omit the time argument \(t\) if it is clear from context.
## II Constrained Funnel Synthesis
### _Locally Lipschitz Nonlinear Systems_
Consider the following continuous-time dynamics:
\[\dot{x}(t)=f(t,x(t),u(t),w(t)),\quad t\in[t_{0},t_{f}], \tag{1}\]
where \(x(t)\in\mathbb{R}^{n_{x}}\) is the state and \(u(t)\in\mathbb{R}^{n_{u}}\) is the input. The vector-valued function \(w(t)\in\mathbb{R}^{n_{w}}\) represents a bounded disturbance such that \(\|w(\cdot)\|_{\infty}\leq 1\) where \(\|w(\cdot)\|_{\infty}\coloneqq\sup_{t\in[t_{0},t_{f}]}\|w(t)\|\), and \(t_{0}\) and \(t_{f}\) are the initial and final times, respectively. The function \(f:\mathbb{R}_{+}\times\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{u}}\times\mathbb{R}^{n_{w}}\to\mathbb{R}^{n_{x}}\) is assumed to be continuously differentiable. Suppose that a nominal trajectory \((\bar{x}(\cdot),\bar{u}(\cdot),\bar{w}(\cdot))\) is a solution of the system (1). Then, we can express the system in the form of a Lur'e system [13] as
\[\dot{x}(t) =A(t)x(t)+B(t)u(t)+F(t)w(t)+Ep(t),\] \[p(t) =\phi(t,q(t)),\] \[q(t) =Cx(t)+Du(t)+Gw(t),\]
where \(p(t)\in\mathbb{R}^{n_{p}}\) is a lumped nonlinearity represented by a nonlinear function \(\phi(t)\) and its argument \(q(t)\in\mathbb{R}^{n_{q}}\). The matrices \(A(t)\), \(B(t)\), and \(F(t)\) are chosen as first-order approximations of the nonlinear dynamics (1) around the nominal trajectory. The matrices \(E\in\mathbb{R}^{n_{x}\times n_{p}}\), \(C\in\mathbb{R}^{n_{q}\times n_{x}}\), \(D\in\mathbb{R}^{n_{q}\times n_{u}}\), and \(G\in\mathbb{R}^{n_{q}\times n_{w}}\) are assumed to be time-invariant. Particularly, we choose a zero disturbance for the nominal trajectory, that is \(\bar{w}(t)=0\) for all \(t\in[t_{0},t_{f}]\).
With the state difference \(\eta\coloneqq x-\bar{x}\), the difference dynamics can be derived as
\[\dot{\eta}(t) =f(t,x,u,w)-f(t,\bar{x},\bar{u},0),\] \[=A(t)\eta(t)+B(t)\xi(t)+F(t)w(t)+E\delta p(t),\] \[\delta p(t) =\phi(t,q(t))-\phi(t,\bar{q}(t)),\] \[\delta q(t) =C\eta(t)+D\xi(t)+Gw(t),\]
where \(\xi\coloneqq u-\bar{u}\) and \(\delta q\coloneqq q-\bar{q}\) with \(\bar{q}=C\bar{x}+D\bar{u}\). Since continuously differentiable functions are locally Lipschitz, \(f\) and \(\phi\) are locally Lipschitz. It follows that for all \(t\in[t_{0},t_{f}]\)
\[\|p-\bar{p}\|_{2}\leq\gamma(t)\|q-\bar{q}\|_{2},\quad\forall\,q,\bar{q}\in \mathcal{Q},\]
where \(\gamma(t)\in\mathbb{R}_{+}\) is a Lipschitz constant for each \(t\) and \(\mathcal{Q}\subseteq\mathbb{R}^{n_{q}}\) is any compact set. By employing the linear feedback controller, that is \(\xi(t)=K(t)\eta(t)\), the closed-loop system can be written as
\[\dot{\eta} =(A+BK)\eta+Fw+E\delta p, \tag{2a}\] \[\delta q =C\eta+D\xi+Gw,\] (2b) \[\|\delta p\|_{2} \leq\gamma\|\delta q\|_{2},\quad\|w(\cdot)\|_{\infty}\leq 1. \tag{2c}\]
### _Lyapunov Conditions_
With a continuously differentiable positive definite matrix-valued function \(Q:\mathbb{R}_{+}\to\mathbb{S}_{++}^{n_{x}}\), the Lyapunov function is defined as
\[V(t,\eta)\coloneqq\eta^{\top}(t)Q^{-1}(t)\eta(t). \tag{3}\]
Here we aim to impose the following Lyapunov condition for the closed-loop system (2):
\[\dot{V}(t,\eta)\leq-\alpha V(t,\eta), \tag{4a}\]
for all
\[\|\delta p(t)\|_{2}\leq\gamma(t)\|\delta q(t)\|_{2}, \tag{4b}\]
and
\[V(t,\eta)\geq\|w(t)\|_{2}^{2},\quad\forall\,t\in[t_{0},t_{f}], \tag{4c}\]
where \(\alpha\in\mathbb{R}_{++}\) is a decay rate. With the above Lyapunov condition, we can establish the following lemma.
**Lemma 1**.: _Suppose that the Lyapunov condition (4) holds with a positive definite matrix-valued continuous function \(Q(t)\), then the time-varying ellipsoid defined as_
\[\mathcal{E}(t)=\{\eta\mid\eta^{\top}Q(t)^{-1}\eta\leq 1\}, \tag{5}\]
_is invariant for the closed-loop system (2), that is, if \(\eta(\cdot)\) is any solution with \(\eta(t_{0})\in\mathcal{E}(t_{0})\), then \(\eta(t)\in\mathcal{E}(t)\) for all \(t\in[t_{0},t_{f}]\). Furthermore, the ellipsoid \(\mathcal{E}(t)\) is attractive such that for any solution \(\eta(\cdot)\), the following holds:_
\[V(t,\eta(t))\leq\max\{e^{-\alpha(t-t_{0})}V(t_{0},\eta(t_{0})),1\}, \tag{6}\]
_for all \(t\) in \([t_{0},t_{f}]\)._
The above lemma can be deduced from [10, Lemma B10] and [14, Lemma 1], so here we skip the proof.
Now we define an _invariant state funnel_ with a pair of \(Q\) in (3) and a continuous scalar-valued function \(c:\mathbb{R}_{+}\to(0,1]\) as
\[\mathcal{E}_{c}(t)\coloneqq\left\{\eta\left|\,\eta^{\top}Q(t)^{-1}\eta\leq \frac{1}{c(t)}\right.\right\}, \tag{7}\]
where the function \(c(t)\) satisfies the following condition:
\[\frac{1}{c(t)}\geq\max\left\{1,e^{-\alpha(t-t_{0})}\frac{1}{c(t_{0})}\right\}, \tag{8}\]
with \(0<c(t_{0})\leq 1\). With the ellipsoid \(\mathcal{E}_{c}(t)\) having \(1/c(t)\) as the support value, we show the invariance property of \(\mathcal{E}_{c}(t)\) in the following lemma.
**Lemma 2**.: _The ellipsoid \(\mathcal{E}_{c}(t)\) defined in (7) with (8) is invariant for the closed-loop system (2) such that if \(\eta(\cdot)\) is any solution with \(\eta(t_{0})\in\mathcal{E}_{c}(t_{0})\), then \(\eta(t)\in\mathcal{E}_{c}(t)\) for all \(t\in[t_{0},t_{f}]\)._
Proof.: If the solution \(\eta(\cdot)\) satisfies \(\eta(t_{0})\in\mathcal{E}(t_{0})\), it is trivial to prove the invariance of \(\mathcal{E}_{c}(t)\) since \(\mathcal{E}(t)\) is invariant and \(\mathcal{E}(t)\subset\mathcal{E}_{c}(t)\). Consider the solution \(\eta(\cdot)\) such that \(\eta(t_{0})\in\mathcal{E}_{c}(t_{0})\setminus\mathcal{E}(t_{0})\). By the attractivity condition (6), we have \(V(t,\eta(t))\leq\max\{e^{-\alpha(t-t_{0})}V(t_{0},\eta(t_{0})),1\}\). It follows from \(V(t_{0},\eta(t_{0}))\leq 1/c(t_{0})\) and \(0<c(t_{0})\leq 1\) that \(V(t,\eta(t))\leq\max\{e^{-\alpha(t-t_{0})}\frac{1}{c(t_{0})},1\}\leq 1/c(t)\) for \(t\in[t_{0},t_{f}]\). This completes the proof.
The illustration of both the ellipsoids \(\mathcal{E}(t)\) and \(\mathcal{E}_{c}(t)\) is given in Figure 1. Any solution \(\eta(\cdot)\) of the closed-loop system (2) starting at \(\mathcal{E}_{c}(t_{0})\) remains in the state funnel \(\mathcal{E}_{c}(t)\) for all \(t\in[t_{0},t_{f}]\) because of the invariance condition of \(\mathcal{E}_{c}\) derived in Lemma 2. Furthermore, the solution \(\eta(\cdot)\) starting at \(\mathcal{E}_{c}(t_{0})\) converges to the ellipsoid \(\mathcal{E}\) if \(t_{f}\) is sufficiently large because of the attractivity of \(\mathcal{E}\) given in Lemma 1. Since we use the attractivity condition of \(\mathcal{E}\) as a key property for our funnel generation, we refer to \(\mathcal{E}\) in (5) as an _attractive funnel_.
Additionally, with the linear feedback control \(\xi=K\eta\), the condition \(\eta\in\mathcal{E}_{c}\) implies that \(\xi\) is in the following ellipsoid:
\[\mathcal{E}_{u}=\{(KQK^{\top})^{\frac{1}{2}}y\mid\|y\|_{2}\leq 1/\sqrt{c},y \in\mathbb{R}^{n_{u}}\}, \tag{9}\]
where we omit the time argument \(t\) in \(\mathcal{E}_{u}\), \(K\), \(Q\) and \(c\). The set \(\mathcal{E}_{u}\) represents the ellipsoid inside which the input deviation \(\xi\) remains, so we refer to \(\mathcal{E}_{u}\) as an _invariant input funnel_.
**Theorem 1**.: _Suppose that there exist \(Q:[t_{0},t_{f}]\to\mathbb{S}^{n_{x}}_{++}\), \(Y:[t_{0},t_{f}]\to\mathbb{R}^{n_{u}\times n_{x}}\), \(\nu:[t_{0},t_{f}]\to\mathbb{R}_{++}\), \(0<\alpha\), and \(0<\lambda_{w}\) such that the following differential matrix inequality holds for all \(t\in[t_{0},t_{f}]\):_
\[H\coloneqq\left[\begin{array}{cccc}M-\dot{Q}&*&*&*\\ \nu E^{\top}&-\nu I&*&*\\ F^{\top}&0&-\lambda_{w}I&*\\ CQ+DY&0&G&-\nu\frac{1}{\gamma^{2}}I\end{array}\right]\preceq 0, \tag{10}\] \[M\coloneqq QA^{\top}+Y^{\top}B^{\top}+AQ+BY+\alpha Q+\lambda_{w}Q.\]
_Then, the Lyapunov condition (4) holds for the closed-loop system (2) with \(K=YQ^{-1}\). Thus, with \(Q(t)\) and \(K(t)\) satisfying the DLMI (10), the ellipsoid \(\mathcal{E}(t)\) in (5) is invariant and attractive, and \(\mathcal{E}_{c}(t)\) in (7) is invariant by Lemma 1 and Lemma 2._
Proof.: By the definition of positive definiteness and the S-procedure [13], a sufficient condition for the Lyapunov condition (4) is that there exist scalars \(\lambda_{p}>0\) and \(\lambda_{w}>0\) such that
\[\left[\begin{array}{ccc}\bar{M}-Q^{-1}\dot{Q}Q^{-1}&*&*\\ E^{\top}Q^{-1}&0&*\\ F^{\top}Q^{-1}&0&0\end{array}\right]+\lambda_{p}\left[\begin{array}{ccc}C_{cl}&0&G\\ 0&I&0\end{array}\right]^{\top}\left[\begin{array}{cc}\gamma^{2}I&0\\ 0&-I\end{array}\right]\left[\begin{array}{ccc}C_{cl}&0&G\\ 0&I&0\end{array}\right]+\lambda_{w}\left[\begin{array}{ccc}Q^{-1}&0&0\\ 0&0&0\\ 0&0&-I\end{array}\right]\preceq 0,\]
where \(\bar{M}\coloneqq A_{cl}^{\top}Q^{-1}+Q^{-1}A_{cl}+\alpha Q^{-1}\) with \(A_{cl}\coloneqq A+BK\) and \(C_{cl}\coloneqq C+DK\). Applying the Schur complement and then multiplying on both sides by \(\text{diag}\{Q,\lambda_{p}^{-1}I,I,I\}\) completes the proof with \(\nu\coloneqq\lambda_{p}^{-1}\).
Notice that the above differential matrix inequality (10) is linear in \(\dot{Q}\), \(Q\), \(Y\), and \(\nu\) once \(\lambda_{w}\) and \(\alpha\) are fixed.
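As an illustration of this linearity, the sketch below assembles the blocks of (10) at a single node with CVXPY. The dimensions and system matrices are placeholders, `Qdot` is only a stand-in decision variable for \(\dot{Q}\) at that node (the paper handles \(\dot{Q}\) through the DME and multiple shooting introduced later), and the snippet shows one way such an LMI could be posed rather than the authors' implementation.

```python
import numpy as np
import cvxpy as cp

# Illustrative dimensions and placeholder node matrices; the actual A(t), B(t),
# F(t), E, C, D, G, gamma come from the linearization of the system at hand.
nx, nu, nw, np_, nq = 3, 2, 2, 3, 3
A, B, F = np.zeros((nx, nx)), np.zeros((nx, nu)), np.zeros((nx, nw))
E, C, D, G = np.eye(nx, np_), np.eye(nq, nx), np.zeros((nq, nu)), np.zeros((nq, nw))
alpha, lam_w, gamma = 0.7, 0.5, 1.0

Q    = cp.Variable((nx, nx), symmetric=True)   # funnel shape matrix Q at the node
Y    = cp.Variable((nu, nx))                   # Y = K Q at the node
nu_k = cp.Variable(nonneg=True)                # multiplier nu at the node
Qdot = cp.Variable((nx, nx), symmetric=True)   # stand-in for dQ/dt at the node

M = Q @ A.T + Y.T @ B.T + A @ Q + B @ Y + (alpha + lam_w) * Q
H = cp.bmat([
    [M - Qdot,       nu_k * E,            F,                    (C @ Q + D @ Y).T],
    [nu_k * E.T,    -nu_k * np.eye(np_),  np.zeros((np_, nw)),  np.zeros((np_, nq))],
    [F.T,            np.zeros((nw, np_)), -lam_w * np.eye(nw),  G.T],
    [C @ Q + D @ Y,  np.zeros((nq, np_)), G,                    -(nu_k / gamma**2) * np.eye(nq)],
])
# H is symmetric by construction, so (10) at this node is the single LMI below.
constraints = [H << 0, Q >> 1e-6 * np.eye(nx)]
```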
### _Feasibility of State and Input Funnels_
The funnel synthesis of the proposed work aims to be not only invariant but also feasible, so constraints on the invariant state and input funnels should be satisfied. The feasible sets for the state and input funnels can be described as
\[\mathcal{X} =\{x\mid h_{i}(x)\leq 0,\quad i=1,\ldots,m_{x}\},\] \[\mathcal{U} =\{u\mid g_{j}(u)\leq 0,\quad j=1,\ldots,m_{u}\},\]
where \(h_{i}:\mathbb{R}^{n_{x}}\to\mathbb{R}\) and \(g_{j}:\mathbb{R}^{n_{u}}\to\mathbb{R}\) are assumed to be at least once differentiable (possibly nonconvex) functions. Since it is not tractable to impose the nonconvex constraints on the ellipsoidal funnel, we linearize the above constraints around the nominal trajectory, resulting in the following polyhedral constraint sets:
\[\mathcal{P}_{x} =\{x\mid(a_{i}^{h})^{\top}x\leq b_{i}^{h},\quad i=1,\ldots,m_{x}\},\] \[\mathcal{P}_{u} =\{u\mid(a_{j}^{g})^{\top}u\leq b_{j}^{g},\quad j=1,\ldots,m_{u}\},\]
where \((a_{i}^{h},b_{i}^{h})\) and \((a_{j}^{g},b_{j}^{g})\) are first-order approximations of \(h_{i}\) and \(g_{j}\), respectively. The inclusions \(\mathcal{P}_{x}\subseteq\mathcal{X}\) and \(\mathcal{P}_{u}\subseteq\mathcal{U}\) hold if the function \(h_{i}\) is a concave function, such as ellipsoidal obstacle avoidance constraints.
Now we aim to design \(\mathcal{E}_{c}\) in (7) and \(\mathcal{E}_{u}\) in (9) with \(Q\) and \(K\) such that \(\{\bar{x}\}\oplus\mathcal{E}_{c}\subseteq\mathcal{P}_{x}\), \(\{\bar{u}\}\oplus\mathcal{E}_{u}\subseteq\mathcal{P}_{u}\). These
conditions could be equivalently written as
\[\|(Q/c)^{\frac{1}{2}}a_{i}^{h}\|_{2} \leq b_{i}^{h}-(a_{i}^{h})^{\top}\bar{x},\quad i=1,\ldots,m_{x},\] \[\|(K(Q/c)K^{\top})^{\frac{1}{2}}a_{j}^{g}\|_{2} \leq b_{j}^{g}-(a_{j}^{g})^{\top}\bar{u},\quad j=1,\ldots,m_{u}.\]
Squaring both sides and applying Schur complement equivalently generates
\[0 \preceq\left[\begin{array}{cc}\left(b_{i}^{h}-(a_{i}^{h})^{\top}\bar{x}\right)^{2}c&(a_{i}^{h})^{\top}Q\\ Qa_{i}^{h}&Q\end{array}\right], \tag{11}\] \[0 \preceq\left[\begin{array}{cc}\left(b_{j}^{g}-(a_{j}^{g})^{\top}\bar{u}\right)^{2}c&(a_{j}^{g})^{\top}Y\\ Y^{\top}a_{j}^{g}&Q\end{array}\right], \tag{12}\] \[i =1,\ldots,m_{x},\quad j=1,\ldots,m_{u}.\]
The feasibility conditions (11) and (12) are linear in \(Q\), \(Y\), and \(c\).
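A minimal CVXPY sketch of conditions (11) and (12) for one linearized state constraint and one linearized input constraint is given below. The halfspace data, the nominal point, and the node-wise variables are hypothetical placeholders used only to show how the Schur-complement form could be encoded.

```python
import numpy as np
import cvxpy as cp

nx, nu = 3, 2
Q = cp.Variable((nx, nx), symmetric=True)      # funnel shape matrix at a node
Y = cp.Variable((nu, nx))                      # Y = K Q at the same node
c = cp.Variable(nonneg=True)                   # support value c at the node

# One illustrative state halfspace (a^h)^T x <= b^h and input halfspace
# (a^g)^T u <= b^g, with a placeholder nominal point (x_bar, u_bar).
a_h, b_h = np.array([[1.0, 0.0, 0.0]]), 2.0
a_g, b_g = np.array([[0.0, 1.0]]), 2.0
x_bar, u_bar = np.zeros((nx, 1)), np.zeros((nu, 1))

mh = float(b_h - a_h @ x_bar)                  # b^h - (a^h)^T x_bar
mg = float(b_g - a_g @ u_bar)                  # b^g - (a^g)^T u_bar
lmi_state = cp.bmat([[cp.reshape(mh**2 * c, (1, 1)), a_h @ Q],
                     [Q @ a_h.T,                     Q]])
lmi_input = cp.bmat([[cp.reshape(mg**2 * c, (1, 1)), a_g @ Y],
                     [Y.T @ a_g.T,                   Q]])
constraints = [lmi_state >> 0, lmi_input >> 0]
```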
### _Objectives_
Here we illustrate the objectives of the funnel synthesis problem. The funnel synthesis aims to maximize the invariant funnel entry from which the system can remain feasible and converge to the attractive funnel. While maximizing the funnel entry \(\mathcal{E}_{c}(t_{0})\), we also minimize the attractive funnel \(\mathcal{E}\) for disturbance attenuation. Thus, the cost function is designed to balance maximizing the size of \(\mathcal{E}_{c}(t_{0})\) and minimizing that of \(\mathcal{E}(t)\) for all \(t\) in \([t_{0},t_{f}]\).
The volume of funnel entry \(\mathcal{E}_{c}(t_{0})\) can be approximated as follows:
\[\text{vol}(\mathcal{E}_{c}(t_{0})) \approx\log\det\frac{Q(t_{0})}{c(t_{0})}\] \[=-n_{x}\log c(t_{0})+\log\det Q(t_{0})\]
Hence, to maximize the volume of the funnel entry, we minimize \(c(t_{0})\) and \(-\log\det Q(t_{0})\). On the other hand, the volume of the set \(\mathcal{E}\) is proportional to \(\log\det Q\), which is concave, so it is not tractable to minimize this function. Instead, we can minimize the maximum eigenvalue of \(Q\), which is the square of the maximum radius, by introducing slack variables \(v^{Q}\). In summary, the funnel synthesis aims to minimize a cost function \(J\) given as
\[J=w_{c}c(t_{0})-w_{Q_{0}}\log\det Q(t_{0})+\int_{t_{0}}^{t_{f}}w _{Q}v^{Q}(t)\text{dt}, \tag{13}\] \[\text{with }Q(t)\preceq v^{Q}(t)I,\quad\forall\,t\in[t_{0},t_{f}], \tag{14}\]
where \(v^{Q}(t)\in\mathbb{R}_{++}\) is a slack variable introduced to minimize the maximum eigenvalue of \(Q\), and \(w_{c},w_{Q_{0}},w_{Q}\in\mathbb{R}_{++}\) are user-defined weight parameters.
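The sketch below poses the cost (13) with the slack condition (14) in CVXPY, approximating the integral by a simple rectangle rule over the node points. The dimensions, horizon, and weights are illustrative placeholders, not the paper's tuning.

```python
import numpy as np
import cvxpy as cp

nx, N = 3, 30
dt = 5.0 / N                              # uniform node spacing (illustrative)
w_c, w_Q0, w_Q = 1e3, 0.1, 0.1            # user-defined weights (illustrative)
Qs = [cp.Variable((nx, nx), symmetric=True) for _ in range(N + 1)]
c0 = cp.Variable(nonneg=True)             # c(t_0)
vQ = cp.Variable(N + 1, nonneg=True)      # slack bounding the max eigenvalue of Q(t_k)

# (14): Q(t_k) <= v^Q(t_k) I at every node point.
constraints = [Qs[k] << vQ[k] * np.eye(nx) for k in range(N + 1)]
# (13): trade off a large funnel entry against a small attractive funnel.
objective = cp.Minimize(w_c * c0 - w_Q0 * cp.log_det(Qs[0]) + w_Q * dt * cp.sum(vQ))
```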
### _Continuous-time funnel synthesis problem_
The continuous-time funnel synthesis problem can be formulated as follows:
\[\underset{Q(t),Y(t),c(t),\nu(t),v^{Q}(t)}{\text{minimize}}\quad(13) \tag{15a}\]
\[\text{subject to}\quad\forall\,t\in[t_{0},t_{f}], \tag{15b}\]
\[(8),\,(10),\,(11),\,(12),\,(14), \tag{15c}\]
\[Q(t_{0})\succeq c(t_{0})Q_{i}, \tag{15d}\]
\[Q(t_{f})\preceq c(t_{f})Q_{f}, \tag{15e}\]
where the matrices \(Q_{i}\in\mathbb{S}_{++}^{n_{x}}\) and \(Q_{f}\in\mathbb{S}_{++}^{n_{x}}\) are constant parameters used for the boundary conditions.
Additionally, we may need the following state and input funnel constraints to prevent the problem (15) from being unbounded:
\[0\prec Q(t)\preceq c(t)Q_{max}, \tag{16}\] \[0\preceq\left[\begin{array}{cc}c(t)R_{max}&Y(t)\\ Y(t)^{\top}&Q(t)\end{array}\right], \tag{17}\]
where \(Q_{max}\in\mathbb{S}_{++}^{n_{x}}\) and \(R_{max}\in\mathbb{S}_{++}^{n_{u}}\). Notice that the condition (17) is equivalent to \(KQK^{\top}\preceq R_{max}\) by Schur complement. One can easily set these matrices sufficiently large not to be conservative.
## III Optimizing Funnel via Optimal Control
The problem formulated in (15) is an infinite-dimensional continuous-time optimization problem, so it is not straightforward to solve numerically. Here we discuss a way to transform the problem into a finite-dimensional discrete-time convex problem.
### _Changing a DLMI to a DME_
In this subsection, we illustrate how the funnel synthesis problem (15) can be interpreted as a continuous-time optimal control problem. Observe that the DLMI in (10) can be equivalently written as a differential matrix equality (DME) by introducing a PSD-valued slack variable \(Z(t)\in\mathbb{S}_{+}^{n_{z}}\) with \(n_{z}=n_{x}+n_{p}+n_{w}+n_{q}\) as follows:
\[H+\underbrace{\left[\begin{array}{cccc}Z^{11}&*&*&*\\ Z^{21}&Z^{22}&*&*\\ Z^{31}&Z^{32}&Z^{33}&*\\ Z^{41}&Z^{42}&Z^{43}&Z^{44}\end{array}\right]}_{\coloneqq Z\succeq 0}=0, \tag{18}\]
where \(H\) is defined in (10) and \(Z^{ij}(t)\) have appropriate sizes for all \(i,j\in\{1,\ldots,4\}\). The first-row and first-column block has the following form:
\[\dot{Q}(t)=M(t)+Z^{11}(t). \tag{19}\]
with \(M(t)\) defined in (10). The DME (19) can be interpreted as a differential equation for a linear dynamical system where \(Q\) is a state, and \(Y\) and \(Z^{11}\) are control inputs.
To derive further, we define the following vectors using the vectorization operation:
\[q\coloneqq\text{vec}(Q),y\coloneqq\text{vec}(Y),z^{11}\coloneqq\text{vec}(Z^ {11}), \tag{20}\]
where the operation \(\text{vec}(\cdot)\) stacks the columns to make a single vector. Then the DME (19) can be equivalently expressed with the vector variables in (20) as
\[\dot{q}(t) =A_{q}(t)q(t)+B_{q}(t)y(t)+S_{q}(t)z^{11}(t), \tag{21}\]
with
\[A_{q} =(I\otimes A)+(A\otimes I)+(\alpha+\lambda_{w})(I\otimes I),\] \[B_{q} =(I\otimes B)+(B\otimes I)K^{c},\quad S_{q}=(I\otimes I),\]
where \(K^{c}\in\mathbb{R}^{n_{x}n_{u}\times n_{x}n_{u}}\) is a commutation matrix [15] such that \(K^{c}\text{vec}(N)=\text{vec}(N^{\top})\) for any arbitrary matrix \(N\in\mathbb{R}^{n_{u}\times n_{x}}\).
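The sketch below builds the commutation matrix and the vectorized operators \(A_{q}\), \(B_{q}\), \(S_{q}\) with NumPy and numerically checks them against the matrix form (19); the system matrices and dimensions are random placeholders, not the paper's values.

```python
import numpy as np

def vec(M):
    """Column-stacking vectorization."""
    return M.flatten(order="F")

def commutation_matrix(m, n):
    """K^c such that K^c @ vec(N) = vec(N.T) for any m-by-n matrix N."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

# Placeholder sizes and node matrices.
nx, nu = 3, 2
rng = np.random.default_rng(0)
A, B = rng.standard_normal((nx, nx)), rng.standard_normal((nx, nu))
alpha, lam_w = 0.7, 0.5
I = np.eye(nx)
Kc = commutation_matrix(nu, nx)

A_q = np.kron(I, A) + np.kron(A, I) + (alpha + lam_w) * np.kron(I, I)
B_q = np.kron(I, B) + np.kron(B, I) @ Kc
S_q = np.eye(nx * nx)

# Numerical check of the vectorized form (21) against the matrix form (19).
Q = rng.standard_normal((nx, nx))
Y, Z11 = rng.standard_normal((nu, nx)), rng.standard_normal((nx, nx))
lhs = A_q @ vec(Q) + B_q @ vec(Y) + S_q @ vec(Z11)
rhs = vec(Q @ A.T + Y.T @ B.T + A @ Q + B @ Y + (alpha + lam_w) * Q + Z11)
assert np.allclose(lhs, rhs)
```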
### _Multiple Shooting Numerical Optimal Control_
To transform (15) into the finite-dimensional discrete-time optimal control problem, we first choose uniform time grids as
\[t_{k}=t_{0}+\frac{k}{N}(t_{f}-t_{0}),\quad k\in\mathcal{N}_{0}^{N}.\]
The decision variables and the Lipschitz constant \(\gamma\) at each node point are set as \(\triangle_{k}=\triangle(t_{k})\) where a placeholder \(\triangle\) represents \(Q,Y,Z,c,\nu\), and \(\gamma\).
We apply continuous piecewise linear interpolation for \(Y\), \(Z\), \(\nu\) and \(c^{-1}\) for each \(k\in\mathcal{N}_{0}^{N-1}\) as follows:
\[\square(t) =\lambda_{k}^{m}(t)\square_{k}+\lambda_{k}^{p}(t)\square_{k+1}, \quad\forall\,t\in[t_{k},t_{k+1}],\] \[\lambda_{k}^{m}(t) =\frac{t_{k+1}-t}{t_{k+1}-t_{k}},\quad\lambda_{k}^{p}(t)=\frac{t- t_{k}}{t_{k+1}-t_{k}},\]
where a placeholder \(\square\) stands for \(Y,\nu,Z,c^{-1}\). Notice that we apply the first-order-hold (FOH), i.e., piecewise linear, interpolation to the inverse of \(c\), that is \(c^{-1}\), not \(c\) itself. With this interpolation and additional conditions, we can show that \(c(t)\) satisfies the condition (8) for all \(t\in[t_{0},t_{f}]\).
**Proposition 1**.: _Suppose that for each subinterval \(c(t)\) satisfies_
\[c(t)=\frac{c_{k}c_{k+1}}{\lambda_{k}^{m}(t)c_{k+1}+\lambda_{k}^{p}(t)c_{k}}, \forall\,t\in[t_{k},t_{k+1}],\forall\,k\in\mathcal{N}_{0}^{N-1},\]
_and_
\[0<c_{k}\leq 1,\quad e^{-\alpha(t_{k}-t_{0})}c_{k}\leq c_{0},\quad\forall\,k \in\mathcal{N}_{0}^{N}. \tag{22}\]
_Then, \(c(t)\) satisfies the condition (8)._
Proof.: We want to show that \(\frac{1}{c(t)}\geq\max\{1,e^{-\alpha(t-t_{0})}\frac{1}{c(t_{0})}\}\) for all \(t\in[t_{0},t_{f}]\). The condition (22) implies \(1/c_{k}\geq\max\{1,e^{-\alpha(t_{k}-t_{0})}\frac{1}{c(t_{0})}\}\) for all \(k\in\mathcal{N}_{0}^{N}\). Notice that \(\max\{1,e^{-\alpha(t-t_{0})}\frac{1}{c(t_{0})}\}\) is convex in \(t\), and \(1/c(t)\) is the convex combination of two points \(1/c_{k}\) and \(1/c_{k+1}\) for \(t\in[t_{k},t_{k+1}]\). Thus, it follows from the definition of the convex function that \(\frac{1}{c(t)}\geq\max\{1,e^{-\alpha(t-t_{0})}\frac{1}{c(t_{0})}\}\) for \(t\in[t_{k},t_{k+1}]\). Since this holds for all \(k\in\mathcal{N}_{0}^{N-1}\), we complete the proof.
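A quick numerical check of Proposition 1 on a single subinterval, with illustrative values of \(\alpha\), \(c_{k}\), \(c_{k+1}\), and \(c(t_{0})\) chosen to satisfy (22), is sketched below.

```python
import numpy as np

alpha, t0 = 0.7, 0.0
tk, tk1 = 1.0, 2.0
ck, ck1, c0 = 0.6, 0.8, 0.5     # satisfy (22): 0 < c_k <= 1 and exp(-alpha (t_k - t0)) c_k <= c0

for t in np.linspace(tk, tk1, 201):
    lam_m = (tk1 - t) / (tk1 - tk)
    lam_p = (t - tk) / (tk1 - tk)
    c_t = ck * ck1 / (lam_m * ck1 + lam_p * ck)        # FOH interpolation of 1/c
    # Condition (8): 1/c(t) >= max{1, exp(-alpha (t - t0)) / c(t0)}.
    assert 1.0 / c_t >= max(1.0, np.exp(-alpha * (t - t0)) / c0) - 1e-12
```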
The PD-valued function \(Q(t)\) is not assumed to be piecewise linear, so \(q(t)\) is not. Instead, \(q(t)\) is the solution of the ordinary differential equation in (21). To this end, we impose the following continuity condition:
\[q_{k+1}=q_{k}+\int_{t_{k}}^{t_{k+1}}\dot{q}(\tau)\text{d}\tau.\]
Since the system (21) is linear, we can equivalently express the above constraint through the discretization using the state transition matrix [16], resulting in
\[\begin{split} q_{k+1}&=A_{k}^{q}q_{k}+B_{k}^{-}y_{ k}+B_{k}^{+}y_{k+1}+S_{k}^{-}z_{k}^{11}+S_{k}^{+}z_{k+1}^{11},\\ &\forall\,k\in\mathcal{N}_{0}^{N-1},\end{split} \tag{23}\]
where \(q_{k}=\text{vec}(Q_{k})\), \(y_{k}=\text{vec}(Y_{k})\), and \(z_{k}^{11}=\text{vec}(Z_{k}^{11})\). The other blocks in (18) are imposed as
\[\begin{split}&\nu_{k}E^{\top}+Z_{k}^{21}=0,\quad-\nu_{k}I+Z_{k}^{22}=0,\\ &F_{k}^{\top}+Z_{k}^{31}=0,\quad Z_{k}^{32}=0,\quad-\lambda_{w}I+Z_{k}^{33}=0,\\ &CQ_{k}+DY_{k}+Z_{k}^{41}=0,\quad Z_{k}^{42}=0,\quad G+Z_{k}^{43}=0,\\ &-\nu_{k}\frac{1}{\gamma_{k}^{2}}I+Z_{k}^{44}=0,\qquad Z_{k}\succeq 0,\quad\forall\,k\in\mathcal{N}_{0}^{N}.\end{split}\]
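One possible way to compute the discrete matrices in the continuity condition (23) is to integrate the state transition matrix together with the first-order-hold input maps over each subinterval. The sketch below does this with SciPy; it is only an illustration of the idea, with placeholder callables for \(A_{q}(t)\), \(B_{q}(t)\), and \(S_{q}(t)\) rather than the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def foh_discretize(Aq, Bq, Sq, tk, tk1, n, m, s):
    """Compute A_k, B_k^-, B_k^+, S_k^-, S_k^+ of (23) over [tk, tk1] for
    q_dot = A_q(t) q + B_q(t) y + S_q(t) z with piecewise linear y and z."""
    lam_m = lambda t: (tk1 - t) / (tk1 - tk)
    lam_p = lambda t: (t - tk) / (tk1 - tk)

    def rhs(t, x):
        # x stacks [Phi, Bm, Bp, Sm, Sp] as one flat vector.
        Phi = x[:n * n].reshape(n, n)
        Bm = x[n * n:n * n + n * m].reshape(n, m)
        Bp = x[n * n + n * m:n * n + 2 * n * m].reshape(n, m)
        Sm = x[n * n + 2 * n * m:n * n + 2 * n * m + n * s].reshape(n, s)
        Sp = x[n * n + 2 * n * m + n * s:].reshape(n, s)
        A, B, S = Aq(t), Bq(t), Sq(t)
        return np.concatenate([
            (A @ Phi).ravel(),                 # dPhi/dt = A(t) Phi
            (A @ Bm + B * lam_m(t)).ravel(),   # dB^-/dt = A(t) B^- + B(t) lam^-
            (A @ Bp + B * lam_p(t)).ravel(),   # dB^+/dt = A(t) B^+ + B(t) lam^+
            (A @ Sm + S * lam_m(t)).ravel(),
            (A @ Sp + S * lam_p(t)).ravel(),
        ])

    x0 = np.concatenate([np.eye(n).ravel(), np.zeros(2 * n * m + 2 * n * s)])
    xf = solve_ivp(rhs, (tk, tk1), x0, rtol=1e-8, atol=1e-8).y[:, -1]
    A_k = xf[:n * n].reshape(n, n)
    Bm = xf[n * n:n * n + n * m].reshape(n, m)
    Bp = xf[n * n + n * m:n * n + 2 * n * m].reshape(n, m)
    Sm = xf[n * n + 2 * n * m:n * n + 2 * n * m + n * s].reshape(n, s)
    Sp = xf[n * n + 2 * n * m + n * s:].reshape(n, s)
    return A_k, Bm, Bp, Sm, Sp

# Tiny usage example with constant placeholder operators over one subinterval.
n, m, s = 9, 6, 9
A_k, Bm, Bp, Sm, Sp = foh_discretize(lambda t: -np.eye(n), lambda t: np.ones((n, m)),
                                     lambda t: np.eye(n), 0.0, 5.0 / 30, n, m, s)
```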
## IV Numerical Simulation
For the numerical simulation, we consider a unicycle model with additive disturbances written as
\[\left[\begin{array}{c}\dot{r}_{x}\\ \dot{r}_{y}\\ \dot{\theta}\end{array}\right]=\left[\begin{array}{c}u_{v}\cos\theta\\ u_{v}\sin\theta\\ u_{\theta}\end{array}\right]+\left[\begin{array}{c}0.1w_{1}\\ 0.1w_{2}\\ 0\end{array}\right], \tag{26}\]
where \(r_{x}\), \(r_{y}\), and \(\theta\) are the \(x\)-axis position, the \(y\)-axis position, and the yaw angle, respectively, \(u_{v}\) is the velocity, and \(u_{\theta}\) is the angular velocity. The values \(w_{1}\) and \(w_{2}\) are disturbances such that \(w=[w_{1},w_{2}]^{\top},\|w\|\leq 1\). We consider \(N=30\) nodes evenly distributed over a time horizon of \(5\) seconds with \(t_{0}=0\) and \(t_{f}=5\). The boundary parameters \(Q_{i}\) and \(Q_{f}\) are \(\text{diag}([0.08~{}0.08~{}0.06])\). We consider two obstacle avoidance constraints, leading to the nonconvex state constraints illustrated in Fig. 2. The input constraints are given as \(0\leq u_{v}\leq 2\) and \(|u_{\theta}|\leq 2\). A total of 100 samples are used to estimate the Lipschitz constant \(\gamma_{k}\) for all \(k\) in \(\mathcal{N}_{0}^{N}\), following the procedure given in [3]. The weights \(w_{c}\), \(w_{Q_{0}}\), and \(w_{Q}\) are \(10^{3}\), \(0.1\), and \(0.1\), respectively. The parameters \(\alpha\) and \(\lambda_{w}\) are \(0.7\) and \(0.5\), respectively. The simulation can be reproduced using the code at [https://github.com/taewankiml/funnel_synthesis_multiple_shooting](https://github.com/taewankiml/funnel_synthesis_multiple_shooting).
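For reference, a short sketch of the unicycle dynamics (26) and the Jacobians that would serve as \(A(t)\), \(B(t)\), and \(F(t)\) about a nominal point is given below; the nominal point in the usage line is an arbitrary placeholder, not the paper's nominal trajectory.

```python
import numpy as np

def unicycle_f(x, u, w):
    """Unicycle dynamics (26) with additive position disturbances."""
    rx, ry, th = x
    uv, uth = u
    return np.array([uv * np.cos(th) + 0.1 * w[0],
                     uv * np.sin(th) + 0.1 * w[1],
                     uth])

def unicycle_jacobians(x_bar, u_bar):
    """First-order approximations A, B, F of (26) about a nominal point."""
    _, _, th = x_bar
    uv, _ = u_bar
    A = np.array([[0.0, 0.0, -uv * np.sin(th)],
                  [0.0, 0.0,  uv * np.cos(th)],
                  [0.0, 0.0,  0.0]])
    B = np.array([[np.cos(th), 0.0],
                  [np.sin(th), 0.0],
                  [0.0,        1.0]])
    F = np.array([[0.1, 0.0],
                  [0.0, 0.1],
                  [0.0, 0.0]])
    return A, B, F

A, B, F = unicycle_jacobians(np.array([0.0, 0.0, np.pi / 4]), np.array([1.0, 0.0]))
```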
The results of the proposed work are given in Fig. 2 and Fig. 3. We can see that the generated funnel satisfies both state constraints (obstacle avoidance) and input constraints at each node point. Also, the resulting support value \(1/c\) satisfies the constraint (8). To test the invariance and attractivity conditions, we take a total of 100 samples, 50 from the surface of \(\mathcal{E}(t_{0})\) and 50 from that of \(\mathcal{E}_{c}(t_{0})\). We propagate each sample through the model (26) with a randomly chosen disturbance \(w\) such that \(\|w\|=1\). In the bottom figure of Fig. 2, the value of the Lyapunov function for each sample trajectory is plotted. We can see that the invariance conditions of both \(\mathcal{E}\) and \(\mathcal{E}_{c}\) hold well, and the samples starting from the surface of \(\mathcal{E}_{c}\) converge to the attractive funnel \(\mathcal{E}\) due to the attractivity condition.
## V Conclusions
This paper presents a funnel synthesis method for locally Lipschitz nonlinear systems in the presence of bounded disturbances. The proposed funnel synthesis approach aims to maximize the funnel entry while minimizing the attractive funnel to bound the effect of the disturbances. To solve the continuous-time funnel optimization problem having the DLMI, we apply the direct multiple shooting optimal control method. In the numerical evaluation with the unicycle model, the results show that the generated funnel satisfies both invariance and feasibility properties.
|
2305.04827
|
Modeling glycemia in humans by means of Grammatical Evolution
|
Diabetes mellitus is a disease that affects to hundreds of millions of people
worldwide. Maintaining a good control of the disease is critical to avoid
severe long-term complications. In recent years, several artificial pancreas
systems have been proposed and developed, which are increasingly advanced.
However there is still a lot of research to do. One of the main problems that
arises in the (semi) automatic control of diabetes, is to get a model
explaining how glycemia (glucose levels in blood) varies with insulin, food
intakes and other factors, fitting the characteristics of each individual or
patient. This paper proposes the application of evolutionary computation
techniques to obtain customized models of patients, unlike most of previous
approaches which obtain averaged models. The proposal is based on a kind of
genetic programming based on grammars known as Grammatical Evolution (GE). The
proposal has been tested with in-silico patient data and results are clearly
positive. We present also a study of four different grammars and five objective
functions. In the test phase the models characterized the glucose with a mean
percentage average error of 13.69\%, modeling well also both hyper and
hypoglycemic situations.
|
J. Ignacio Hidalgo, J. Manuel Colmenar, José L. Risco-Martín, Alfredo Cuesta-Infante, Esther Maqueda, Marta Botella, José Antonio Rubio
|
2023-04-27T14:33:52Z
|
http://arxiv.org/abs/2305.04827v1
|
# Modeling Glycemia in Humans by Means of Grammatical Evolution
###### Abstract
Diabetes mellitus is a disease that affects hundreds of millions of people worldwide. Maintaining a good control of the disease is critical to avoid severe long-term complications. In recent years, several artificial pancreas systems have been proposed and developed, which are increasingly advanced. However, there is still a lot of research to do. One of the main problems that arises in the (semi) automatic control of diabetes is to get a model explaining how glycemia (glucose levels in blood) varies with insulin, food intakes and other factors, fitting the characteristics of each individual or patient. This paper proposes the application of evolutionary computation techniques to obtain customized models of patients, unlike most previous approaches, which obtain averaged models. The proposal is based on a kind of genetic programming based on grammars known as Grammatical Evolution (GE). The proposal has been tested with in-silico patient data and results are clearly positive. We present also a study of four different grammars and five objective functions. In the test phase the models characterized the glucose with a mean percentage average error of 13.69%, modeling both hyper- and hypoglycemic situations well.
keywords: Grammatical Evolution, Diabetes, Glucose Level, Modeling
Footnote †: journal: Applied Soft Computing
## 1 Introduction
Diabetes mellitus is a disease caused by a defect in either the secretion or in the action of insulin, which is essential for the control of blood glucose
levels. Both defects prevent cells from assimilating sugar and, as a consequence, blood glucose levels rise (hyperglycemia). The several types of diabetes differ in their origin. According to the ADA (American Diabetes Association) we can distinguish four types of diabetes:
* Type 1 Diabetes (T1DM): Cells do not produce insulin because of an autoimmune process. It currently requires the person to inject insulin or wear an insulin pump.
* Type 2 Diabetes (T2DM): Results from insulin resistance, where cells fail to use insulin properly, sometimes combined with an absolute insulin deficiency.
* Gestational Diabetes: appears during the gestation period in one out of ten pregnant women. Pregnancy changes the body's metabolism, since the fetus uses the mother's energy for food, oxygen and other needs. This causes a decrease in the secretion of insulin from the mother.
* Other Types: such as defects of the \(\beta\)-cells, genetic defects affecting insulin action, drug-induced diabetes, genetic syndromes, etc.
In most cases, diabetic patients with a long evolution of the disease need exogenous insulin, either injected in various doses or delivered by an insulin pump. It is important to maintain good glycemic control to prevent not only the acute complications specific to diabetes (diabetic ketoacidosis and hypoglycemia, defined as a blood glucose value below \(70mg/dl\)), but also a set of chronic complications associated with diabetic patients: nephropathy, retinopathy, microangiopathy and macroangiopathy.
In recent years, it has been shown that a strict glycemic control in critically ill patients improves performance and reduces medical costs [1][2]. Glucose level control is a demanding and difficult task for both patients and their families. To keep good levels of blood glucose, the patient must have some capacity of prediction, i.e., know what glucose level they would have if they ingested a certain amount of food or injected a certain quantity of a given kind of insulin. In fact, the objective is to avoid not only long periods of hyperglycemia (glucose levels \(\geq 120mg/dl\)) but also episodes of severe hypoglycemia (glucose levels \(\leq 40mg/dl\)), which can lead to patient death.
One of the aspects that make it difficult to control blood glucose level is the lack of a general model of response to both insulin and the various factors
mentioned above, due to the particularities of each patient [3]. Models in the literature apply classical modeling techniques, resulting in linear equations, defined profiles, or models with a limited set of inputs. Here we propose a novel technique that involves obtaining the patient model using genetic programming (GP). GP eliminates barriers in building the model, such as linearity or limitations on the input parameters.
Evolutionary techniques such as GP have certain characteristics that make them particularly suited to address optimization problems and complex modeling. First, they are conceptually simple in their application but have a well-defined and widely studied theoretical basis. GP has demonstrated its applicability to many real problems, and is intrinsically parallelizable since it works with a set of solutions. Furthermore, evolutionary algorithms (EAs) have great potential to incorporate knowledge about the domain and to incorporate other search mechanisms (not necessarily evolutionary).
One of the best known applications of GP is symbolic regression, and the application of one of its variants, Grammatical Evolution (GE), allows solutions that incorporate non-linear terms to be obtained. GE is an evolutionary computation technique established in 1998 by Conor Ryan's group at the University of Limerick (Ireland) [4]. GP aims to find an executable program or function that responds to the reference data. The key advantage is that GE applies genetic operators to a whole string, which simplifies applying the search in different programming languages. In addition, there are no memory problems, unlike in GP, where the tree representation can suffer from the well-known problem of bloating (an excessive growth of the computer structures in memory). Hence, we propose to apply GE to find a custom model that describes and predicts the blood glucose level of a patient. Our method takes the historic data of a patient, consisting of previous glucose levels, ingested carbohydrates and injected insulin, and obtains an expression that can be used to predict near-future glucose values. The contributions of this work are:
* We propose a method based on GE to obtain individualized and customized glycemia (glucose level in blood) models in humans.
* We have tested this proposal with five in-silico patients taken from AIDA simulator [5].
* We present a study of four different grammars and five objective functions.
* We have selected the best models for each patient and run a test phase with a new dataset. In the test phase the models characterized the glucose with a mean percentage average error of 13.69%, reflecting also a good representation of both hyper and hypoglycemic situations.
The rest of the paper is organized as follows. Section 2 describes the related work. Section 3 details how grammatical evolution can be applied to this problem. Section 4 shows the general model we propose, as well as the grammars, particular models and objective functions we have studied for the glucose estimation problem. Section 5 is devoted to the experimental setup, while Section 6 presents the results obtained in both training and test phases. Finally, Section 7 explains the conclusions and the future work.
## 2 Related Work
Glucose level control is a very demanding and difficult task for both patients and their families. Trying to keep good control of blood glucose involves performing regular blood glucose measurements (which require at least one puncture per measurement, or the use of a continuous monitoring system during some periods), estimating insulin doses, estimating carbohydrates, analyzing that information somehow, and having some capacity of prediction that allows the patient to know what glucose level they would have if they ingested a certain amount of food or injected a certain quantity of a given kind of insulin.
As we have already mentioned, one of the main problems in controlling and predicting blood glucose levels is the lack of reliable models of the response to both insulin and the various factors involved. Although there are some general approximations, there are hardly any adapted to the particularities of each patient [6][3]. The models in the literature apply classical modeling techniques, resulting in linear equations, defined profiles, or models with a limited set of inputs. There are other factors that make good control hard to achieve [7]. For instance, there is a significant delay between insulin administration and the appearance of insulin in the blood stream when subcutaneous (SC) insulin is used. This delay limits the achievable control performance of subcutaneous insulin administration.
In [6] the authors propose the use of models to maintain margins of robustness when there is a mismatch between the model and the patient. The approach used there is personalized using a priori known (i.e., easily accessible) information about patients to limit conservatism. However, this approach only applies to linear models and cannot incorporate other factors such as exercise or stress that clearly affect the expected glucose levels. Models based on data for individual subjects are often inaccurate, since clinical data in T1DM are not extensive enough to identify the exact models [8][9]. To obtain continuous series of data, glucose levels should be measured using a subcutaneous continuous glucose monitoring (CGM) system. To calculate the dose of insulin, the patient or the physician may use different mechanisms and control algorithms. Hence, we can also find some personalized control approaches [10][11][12][13] corresponding to clinical practice. Current treatment for subjects with T1DM uses basal insulin delivery rates, insulin-to-carbohydrate (CHO) ratios and individual correction factors, typically derived from the observations of the specialist.
There are also some models used in artificial pancreas systems or in closed-loop control [14][15]. The main risk is hypoglycemia as a result of excessive insulin administration. However, we know that it is possible to reach good control with approximate models, provided that the model is related to the control objective [16][17]. Again, the most important factor for the focus of this paper is the lack of accurate individualized models. If there is an accurate model of the subject's response to insulin, the design of the controller is relatively simple using classical control techniques. Autoregressive (AR) models may be applied to overcome problems of identifiability [18][19], although they are not useful for control since they do not have an exogenous input. Some protocols have also been proposed to improve the reliability of the models [9][14][20], but the possibilities for the design of experiments are limited due to the strict security requirements and limitations of clinical protocols.
There have been also different approaches to facilitate the diabetes control from commercial companies. However, most of them have been designed only for specific glucometers and when providing insulin recommendations, the model is not available. _Glucofacts Deluxe_ by _Bayern_[21], _CoPilot Health Management System_ by _Abbot_[22], and _Mena Diab_[23] by _Menarini_ are some of them.
Although there are many works that use control models, to date the modeling problem has not been addressed with evolutionary computation techniques which, as mentioned, have a high potential to incorporate into the model factors that are difficult to quantify, in other words, to capture the system dynamics. The main new aspect is the use of individualized models, i.e. we obtain a solution of the problem for each set of data from a single patient or individual. This approach has not been seen to date, since it is too complex for traditional methods, but it is affordable with evolutionary methods.
## 3 Evolutionary Approach
The aim of this work is to find an expression to model the glucose level of a diabetic patient. This expression should be obtained from previously collected data of glucose, carbohydrates and insulin. Therefore, we deal with a kind of Symbolic Regression (SR) problem. SR tries to obtain a mathematical expression that reproduces a set of discrete data. Genetic Programming (GP) has proven effective in a number of SR problems, although it has some limitations that often stem from the representation, such as bloating. Another point to be considered is that, in GP, evolution acts on the phenotype of the individual and not on its representation (genotype). In recent years, variants of GP like Grammatical Evolution (GE) have appeared, proposing different evaluation approaches. GE allows the generation of computer programs in an arbitrary language. This is achieved by using grammars to specify the rules for obtaining the programs. Specifically, we will use grammars expressed in Backus-Naur form (BNF).
In contrast to genetic algorithms, which work with representation of solutions, GE works (evolves) with a genetic code that determines the production process of this solution. The code translation process is determined by grammars represented as BNF.
BNF is a notation technique for expressing context-free grammars. The BNF can be any specification of a complete language or a subset of a problem-oriented language. A BNF specification is a set of derivation rules, expressed in the form:
<symbol> ::= <expression>
The rules are composed of sequences of terminals and non-terminals. Symbols that appear at the left are non-terminals while terminals never appear on a left side. In this case we can affirm that <symbol> is a non-terminal and, although this is not a complete BNF specification, we can affirm also that <expression> will be also a non-terminal since those are always enclosed between the pair <>. So, in this case the non-terminal <symbol> will be replaced (indicated by ::=) by an expression. The rest of the grammar must indicate the different possibilities.
A grammar is represented by the 4-tuple {N, T, P, S}, where N is the non-terminal set, T is the terminal set, P is the set of production rules for the assignment of elements of N and T, and S is the start symbol, which must appear in N. The options within a production rule are separated by the "|" symbol.
Figure 1 represents an example of a grammar in BNF designed for symbolic regression. The code that represents an expression will consist of elements of the set of terminals T. These have been combined with the rules of the grammar, as will be explained below.
Besides, grammars can be adapted to bias the search of the evolutionary process because there is a finite number of options on each production rule, which limits the search space.
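For readers who prefer a programmatic view, the 4-tuple {N, T, P, S} can be written down directly as data. The following Python sketch encodes a symbolic-regression grammar similar to (but not necessarily identical to) the one in Figure 1; the exact rule contents are our assumption, not a transcription of the figure.

```python
# Hypothetical encoding of the 4-tuple {N, T, P, S} for a symbolic regression
# grammar in the style of Figure 1 (rule contents are illustrative).
N = {"<expr>", "<op>", "<preop>", "<var>"}            # non-terminal set
T = {"+", "-", "*", "/", "Sin", "Cos", "Exp", "Abs",
     "X", "1.0", "(", ")"}                            # terminal set
P = {                                                 # production rules
    "<expr>":  ["<expr><op><expr>", "<preop>(<expr>)", "<var>"],   # rule I
    "<op>":    ["+", "-", "*", "/"],                               # rule II
    "<preop>": ["Sin", "Cos", "Exp", "Abs"],                       # rule III
    "<var>":   ["X", "1.0"],                                       # rule IV
}
S = "<expr>"                                          # start symbol
```

Each key of P is a non-terminal and each list holds the choices separated by "|" in the BNF notation, so the finite number of options per rule is explicit.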
### Mapping Process
As we have mentioned above, we will use an EA to evolve genotypes, i.e. strings of integer values. We use the individual genotype to map the start symbol onto terminals by reading codons which, in our work, are 8 bits long. The process is similar to the one explained in the previous section but, instead of making random choices, decisions are taken by reading the individual genotype. Each codon is represented by an integer value on the
Figure 1: Example of a grammar in BNF format designed for symbolic regression.
genotype, which is processed by the following mapping function:
\[Choice_{i}=(CIV)\ MOD\ (\#\ of\ choices_{i})\]
where \(Choice_{i}\) is the choice selected for non-terminal \(i\), \(CIV\) is the codon integer value we are decoding, \(MOD\) is the modulo function, and \(\#\ of\ choices_{i}\) is the number of possible choices in the rule for non-terminal \(i\).

The mapping function was proposed in [4]; it takes the integer value of the chromosome, computes the modulo function with respect to the number of choices of a rule, and selects the choice according to that result. Given that the modulo function returns values from 0 to \((\#\ of\ choices_{i})-1\), the first choice will correspond to the value 0, the second to 1, and so on. Therefore, if a rule has only one possible choice, this choice will always be selected because \(k\ MOD\ 1=0\) for any integer value \(k\).
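As a minimal illustration, the mapping rule can be written as a one-line function; the name is ours and the snippet only restates the formula above.

```python
def choose(codon_integer_value, n_choices):
    """Choice_i = (CIV) MOD (# of choices_i)."""
    return codon_integer_value % n_choices

# For instance, codon 254 applied to a rule with 4 choices selects index 2,
# i.e. the third option, as in the worked example below.
print(choose(254, 4))  # -> 2
```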
We will illustrate the mapping process using the example grammar shown in Figure 1, designed for solving a symbolic regression problem, which is indeed a possible grammar for the glucose model problems (we only need to particularize the terminal set, as will be explained in the next section). An individual is composed of a set of integer genes. Each gene can take a numeric value from 0 to 255, since we are working with codons of 8 bits. Let us suppose we are mapping the following 7-gene individual:
\[12-55-23-47-38-254-2\]
The start symbol is S = { expr }, hence the solution expression will begin with this non-terminal:
\[Solution=\ \text{<expr>}\]

The first codon, 12, decodes <expr> with rule I, which has three choices:

\[12\ MOD\ 3=0\]

This value selects the first option, <expr><op><expr>:

\[Solution=\ \text{<expr><op><expr>}\]

The next codons, 55, 23, 47 and 38, decode the leftmost non-terminal at each step in the same way, producing the partial expression:

\[Solution=Abs(X)\ \text{<op>}\ \text{<expr>}\]
Non-terminal <op> is decoded with 254 and rule II:
\[254\ MOD\ 4=2\]
This value selects the third option, terminal *.
\[Solution=Abs(X)\ *\ \text{<expr>}\]
Next codon, 2, decodes <expr> with rule I:
\[2\ MOD\ 3=2\]
This value selects the third option, non-terminal <var>.
\[Solution=Abs(X)\ *\ \text{<var>}\]
At this point, the genotype-to-phenotype process has run out of codons. That is, once we have used all the genes or codons we have not arrived to an expression with terminals in all of its components.
The solution is to reuse codons starting from the first one, although this is not usual in other EA approaches. In fact, it is possible to reuse the codons more than once. This technique is known as _wrapping_ and mimics the gene-overlapping phenomenon of many organisms [24]. Reusing codons is not a problem since, in GE, a codon always generates the same integer value and, if applied to the same rule, it generates the same solution. However, if we use it with different rules we will obtain different phenotype parts. What GE grammars must guarantee is that an individual genotype will always produce the same phenotype. Under these conditions, wrapping is not a problem.
So, applying wrapping, the process goes back to the first gene, 12, which is used to decode <var> with rule IV:
\[12\ MOD\ 2=0\]
This value selects the first option, the terminal \(X\), giving the final expression of the phenotype.
\[Solution=Abs(X)\ *\ X\]
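The whole genotype-to-phenotype mapping, including wrapping, can be summarized in a short routine. The sketch below reproduces the worked example (genotype 12-55-23-47-38-254-2 decoded to Abs(X)*X); the grammar dictionary is ordered so that this example holds, but the contents of the <preop> and <var> rules are assumptions, since Figure 1 is not transcribed here.

```python
import re

GRAMMAR = {
    "<expr>":  ["<expr><op><expr>", "<preop>(<expr>)", "<var>"],  # rule I
    "<op>":    ["+", "-", "*", "/"],                              # rule II
    "<preop>": ["Sin", "Cos", "Exp", "Abs"],                      # assumed ordering
    "<var>":   ["X", "1.0"],                                      # assumed contents
}
NON_TERMINAL = re.compile(r"<[^<>]+>")

def decode(genotype, grammar=GRAMMAR, start="<expr>", max_wraps=3):
    """Genotype-to-phenotype mapping with wrapping."""
    phenotype, codon, wraps = start, 0, 0
    while True:
        match = NON_TERMINAL.search(phenotype)      # leftmost non-terminal
        if match is None:
            return phenotype                        # expression is complete
        if codon == len(genotype):                  # out of codons: wrap around
            codon, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("too many wraps: invalid individual")
        choices = grammar[match.group(0)]
        picked = choices[genotype[codon] % len(choices)]   # CIV MOD (# of choices)
        phenotype = phenotype[:match.start()] + picked + phenotype[match.end():]
        codon += 1

print(decode([12, 55, 23, 47, 38, 254, 2]))  # -> Abs(X)*X
```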
In the next section we describe how the four grammars under study represent different search spaces for expressions to model the blood glucose level.
## 4 Model Description
A model for glucose levels should be based on observable factors as well as on intrinsic non-observable features of the patient's body. Observable factors are those data that either the patient or a measuring device can collect, while non-observable factors must be inferred. Hence, we propose a model that considers all these factors, applying GE to infer an expression that characterizes the behavior of glucose in diabetic patients. In addition, we describe in this section the different objective functions we have studied to make GE evolve towards useful expressions for the model.
### Available Data and General Glucose Model
The actual level of glucose in the patient's blood depends on several factors, some of them intrinsic to its own organism functions [25]. The most important among these factors are the glucose level, the carbohydrates ingested and the insulin injected.
These factors are considered in the datasets of our in-silico patients, which were obtained with AIDA simulator [5]. Notice that, for real patients, these data are easy to collect. Actual glucose values are obtained from blood analyzers, carbohydrate units ingested are calculated based on the daily meals, and insulin injected, distinguished by insulin type, is also an information that the patient usually knows.
Therefore, we have developed our research based on collections of data that follow the previous idea. More precisely, our data series represent measures taken every 15 minutes throughout the day. Table 1 shows a 24-hours dataset of one of our in-silico patients, named Patient 1. For each time step, represented by one line of the table, \(k\) is the actual time, \(GL\) is the actual glucose level, \(CH\) is the carbohydrate units ingested, \(IS\) is the short effect insulin injected and \(IL\) is the long effect insulin injected.
In our dataset \(k\) represents the time step corresponding to a moment of the day. Then, \(k=1\) represents 12:00 AM, \(k=2\) represents 12:15 AM, and so on. As seen in the table, many of the time steps do not have any data about carbohydrates or insulin, whereas time steps surrounding the meal hours do provide that information.
The model we propose provides estimated glucose values, denoted as \(\widehat{GL}\). Hence, for each time step, estimated glucose is obtained by using previous estimated glucose values and actual carbohydrates and insulin units. A general form of this model should be similar to the following:
\[\widehat{GL}(k+1)=f(\widehat{GL},CH,IS,IL),1\leq k\leq N \tag{1}\]
where \(\widehat{GL}(k+1)\) is the next estimated glucose value, \(\widehat{GL}\) corresponds to previous estimated glucose values, \(CH\) corresponds to previously ingested carbohydrates and \(IS\) and \(IL\) correspond to previously injected insulin for both types, short and long effect. Therefore, the dataset provides input values for the variables in our glucose model proposal.
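As an illustration of how a candidate expression is used, the recursion in (1) can be rolled forward over a dataset. The function and variable names below are ours, and seeding the estimate with the first actual glucose measure is an assumption about the initialization, which the text does not specify.

```python
def simulate(model, dataset):
    """Roll a candidate model (Eq. 1) forward over a 24-hour dataset.

    `dataset` is a sequence of dicts with keys 'GL', 'CH', 'IS' and 'IL',
    one entry per 15-minute step; `model(gl_hat, ch, is_, il, k)` returns
    the estimate for step k+1 from the histories up to step k.
    """
    ch = [row["CH"] for row in dataset]
    is_ = [row["IS"] for row in dataset]
    il = [row["IL"] for row in dataset]
    gl_hat = [dataset[0]["GL"]]        # assumed seed: first actual measure
    for k in range(len(dataset) - 1):
        gl_hat.append(model(gl_hat, ch, is_, il, k))
    return gl_hat
```

Any evolved phenotype, once parsed into a callable, can be plugged in as `model`.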
In this way, the GE engine should be able to decide what \(f\) looks like. However, in order to guide the search of the evolutionary process, we do need a grammar that both limits the search space and represents the behavior of the blood glucose level. Next, we detail the grammars that we studied in this work.
### BNF Grammars for Modelling Glucose Levels
Following the general model shown in (1), we have designed four grammars where the estimated glucose depends on the observable factors. As shown in [26] the incorporation of some of the problem's knowledge into the grammar will improve the exploration performance. Therefore, we designed
Table 1: 24-hours dataset of in-silico Patient 1, with time step \(k\), glucose level \(GL\), carbohydrate units \(CH\), short effect insulin \(IS\) and long effect insulin \(IL\) for each 15-minute measure.
an expression for glucose which depends on previous glucose, carbohydrates and insulin. This expression is coded as rule I in all our grammars but, as seen next, is surrounded by different rules that are translated into different concrete models.
The grammars were designed by following the advice of the medical doctors in our research team. According to them, the expected behavior of the glucose depends on previous carbohydrates ingested and insulin injected, but it may vary along the day in a different way for each patient. In addition, glucose may be influenced at different degrees by each ingestion of carbohydrates and each injection of insulin. Therefore, we selected four different approaches that considered different degrees of influence, as well as different influence time windows.
_Grammar 1_
Given that it is well known that carbohydrate ingestion raises glucose while insulin injections lower it, we tried a grammar with such a behavior. The general model will be approximated with expressions similar to (2), where any previous values of glucose, carbohydrates and insulin may be used. Besides, carbohydrates are always added, while insulin values are always subtracted.
\[\widehat{GL}(k+1)=f_{gl}(\widehat{GL}(k-m))+f_{ch}(CH(k-m))-f_{in}(IS(k-m),IL( k-m)),0\leq m\leq k \tag{2}\]
The concrete form of \(f_{gl}\), \(f_{ch}\) and \(f_{in}\) will be determined by GE with the help of the grammar that we called Grammar 1, shown in Figure 2. The three terms <exprgluc>, <exprch> and <exprins> correspond to \(f_{gl}\), \(f_{ch}\) and \(f_{in}\), respectively, and they are expressions that may use prefix operands like those in rule IX, variables for each one of the terms, or combinations of them through the operators in rule VIII.
_Grammar 2_
This grammar is a particularization of the previous one in the sense that it does not allow arbitrarily old values of the variables. Instead, the grammar limits the values to just the two most recent data in time, that is, \(k\) and \(k-1\). The resulting model, whose only difference is the range allowed for \(m\), is shown in (3).
\[\widehat{GL}(k+1)=f_{gl}(\widehat{GL}(k-m))+f_{ch}(CH(k-m))-f_{in}(IS(k-m),IL( k-m)),0\leq m\leq 1 \tag{3}\]
Figure 3 shows the grammar, where the indexes are limited to 00 and 01 in rules III, V and VII, which means the current and previous values of each variable.
_Grammar 3_
In order to provide more freedom to the search, we decided to leave the connecting operands opened to any simple arithmetic operation. Therefore, the model changes as shown in (4), and \(f\) corresponds to the function that connects the three expressions.
\[\widehat{GL}(k+1)=f(f_{gl}(\widehat{GL}(k-m)),f_{ch}(CH(k-m)),f_{in}(IS(k-m), IL(k-m))),0\leq m\leq k \tag{4}\]
Figure 2: Grammar 1. Any previous carbohydrates and insulin; carbohydrates are added and insulin subtracted.
The grammar that defines this model is Grammar 3, which presents a slight modification of rule I of Grammar 1. It consists of changing the fixed \(+\) and \(-\) operands to the non-terminal <op>, which can be any of the four arithmetic operands in rule VIII. Figure 4 shows the grammar.
_Grammar 4_

The model here is the same as in Grammar 2, but giving freedom to the operands that connect the expressions for glucose, carbohydrates and insulin, as done in Grammar 3. The model is shown in (5), where \(f\) corresponds to the function that connects the three expressions.
\[\widehat{GL}(k+1)=f(f_{gl}(\widehat{GL}(k-m)),f_{ch}(CH(k-m)),f_{in}(IS(k-m), IL(k-m))),0\leq m\leq 1 \tag{5}\]
Figure 3: Grammar 2. Only two previous values for carbohydrates and insulin are allowed; carbohydrates are added and insulin subtracted.
Therefore, Grammar 4 is similar to Grammar 2, but giving freedom to operands in rule I. Figure 5 shows the grammar.
Once the grammars are presented, we next describe the fitness evaluation of an individual, as well as the different objectives studied in this work.
### Fitness Evaluation
Grammars limit the search space in GE, but fitness functions are responsible for guiding the evolution towards a good solution. So, in order to obtain the fitness of an individual, our evolutionary process first obtains the glucose values generated by the expression in the individual's phenotype. As explained before, these are the estimated glucose values, denoted as \(\widehat{GL}\). So, for each time step, estimated glucose is obtained by using previous estimated glucose
Figure 4: Grammar 3. Any previous carbohydrates and insulin; connector operators selected from rule VIII.
values and actual carbohydrates and insulin units.
Once \(\widehat{GL}\) is obtained, the absolute difference between the actual and the predicted glucose values is calculated for each time step. As in the general symbolic regression problem, this measure is called the error. The formula we apply is shown in (6), where \(GL\) is the actual glucose value and \(\widehat{GL}\) is the glucose value that the phenotype expression generates. We have studied five different objectives with their corresponding fitness functions, shown in Table 2.
\[e_{k}=|GL(k)-\widehat{GL}(k)|,1\leq k\leq N \tag{6}\]
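For completeness, the error series (6) and the five objectives of Table 2 translate directly into code. The sketch below uses NumPy and names of our own choosing.

```python
import numpy as np

def fitness(gl_actual, gl_estimated, objective):
    """Objectives of Table 2 computed from the error series e_k of Eq. (6)."""
    gl = np.asarray(gl_actual, dtype=float)
    gl_hat = np.asarray(gl_estimated, dtype=float)
    e = np.abs(gl - gl_hat)                      # e_k = |GL(k) - GL_hat(k)|
    if objective == "least_squares":             # F1
        return float(np.sum(e ** 2))
    if objective == "average_error":             # F2
        return float(np.mean(e))
    if objective == "maximum_error":             # F3
        return float(np.max(e))
    if objective == "rsme":                      # F4, as named in the paper
        return float(np.sqrt(np.mean(e ** 2)))
    if objective == "mad":                       # F5
        return float(np.mean(e / gl))
    raise ValueError(f"unknown objective: {objective}")
```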
Figure 5: Grammar 4. Only two previous values for carbohydrates and insulin are allowed; connector operators selected from rule VIII.
## 5 Experimental Setup
In this section we describe the characteristics of the five in-silico patients we dealt with, as well as the configuration of each set of experiments.
### In-silico Patients
We work with a set of in-silico patients obtained with AIDA simulator [5]. The website of the simulator offers several characterized patients from which we selected five of them. The glucose values for each patient were obtained by introducing different carbohydrates and insulin values and then running the simulator. The description of each one of the patients can be found on the website, but we replicate them here for the sake of clarity. The patients are the following:
**Patient 1**. This woman is on three injections of short and/or intermediate acting insulin each day, with a split-evening dose. She wants to start a family, but consistently has had quite high blood glucose levels in the early afternoon.
**Patient 2**. This 45 year old man was diagnosed as having diabetes at the age of 14. He is currently on a regimen of combined short and/or intermediate acting insulin preparations four times per day. As you can see from his home monitoring blood glucose measurements, he tends to higher blood glucose values overnight but has a low blood glucose in the mid-morning.
**Patient 3**. This man is a relatively newly diagnosed insulin-dependent (type 1) diabetic patient. He has had problems maintaining his blood glucose
\begin{table}
\begin{tabular}{|c|c|} \hline Objective & Fitness Function \\ \hline Least Squares & \(F_{1}=\sum_{k=1}^{N}{e_{k}}^{2}\) \\ Average Error & \(F_{2}=\frac{1}{N}\sum_{k=1}^{N}{e_{k}}\) \\ Maximum Error & \(F_{3}=max(e_{k}),1\leq k\leq N\) \\ RSME & \(F_{4}=\sqrt{\frac{1}{N}\sum_{k=1}^{N}{e_{k}}^{2}}\) \\ MAD & \(F_{5}=\frac{1}{N}\sum_{k=1}^{N}\frac{e_{k}}{GL(k)}\) \\ \hline \end{tabular}
\end{table}
Table 2: Fitness functions for the five objectives under study. \(N\) is the total number of measures.
profile on two and more recently three injections per day; so currently he is controlled on four injections per day. He tends to quite high blood glucose levels in the middle of the day, despite not eating excessively.
**Patient 4**. It has taken a lot of effort to stabilize this girl's blood glucose profile. However, she still often goes hypoglycemic in the middle of the day, especially between breakfast and lunch. She is on a slightly unusual regimen taking a short acting insulin preparation three times per day, with an intermediate acting preparation twice a day - at lunchtime and before bed.
**Patient 5**. This overweight 58 year old insulin-dependent (type 1) diabetic patient has had major problems losing weight. She is quite sensitive to insulin. In addition, she smokes and is at great risk of suffering a heart attack or stroke.
### Genetic Parameters
As with genetic programming, GE can use any search algorithm able to operate on integer or binary strings. We have selected a simple GA with one-point crossover and point mutation. Population initialization is carried out by randomly generating fixed-length integer strings. Table 3 shows the rest of the genetic and GE parameters.
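A minimal sketch of such a search engine, wired to the parameter values of Table 3, could look as follows; it is a plain generational GA written for clarity rather than efficiency, and the structure is ours rather than a transcription of the actual implementation.

```python
import random

def evolve(fitness_fn, pop_size=100, chrom_len=100, generations=2500,
           p_cross=0.6, p_mut=0.2, codon_max=255, tourn_size=2):
    """Integer-string GA used as the GE search engine (Table 3 values).
    `fitness_fn` maps a genotype (list of ints) to a value to be minimized."""
    pop = [[random.randint(0, codon_max) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = {id(ind): fitness_fn(ind) for ind in pop}   # evaluate once per individual
        def tournament():
            return min(random.sample(pop, tourn_size), key=lambda i: scores[id(i)])
        new_pop = []
        for _ in range(pop_size):
            child = tournament()[:]
            if random.random() < p_cross:          # one-point crossover
                mate = tournament()
                point = random.randrange(1, chrom_len)
                child = child[:point] + mate[point:]
            if random.random() < p_mut:            # point mutation
                child[random.randrange(chrom_len)] = random.randint(0, codon_max)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness_fn)
```

The fitness of a genotype is obtained by decoding it with the grammar, simulating the resulting expression over the training dataset and applying the chosen objective function.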
## 6 Results
Our experiments are divided into training and test phases. The objective of the training phase is to evaluate the performance of the proposed grammars
\begin{table}
\begin{tabular}{|l|c|} \hline Parameter & Value \\ \hline Max. wraps & 3 \\ Codon size & 256 \\ Chromosome length & 100 \\ Population size & 100 \\ Generations & 2500 \\ Crossover probability & 0.6 \\ Mutation probability & 0.2 \\ Tournament size & 2 \\ \hline \end{tabular}
\end{table}
Table 3: Parameters for GE experiments.
in combination with the fitness functions, as well as to obtain models that characterize the glucose behavior on each patient. In this phase, the training dataset is formed by the 24-hours records of five in-silico patients. We have executed 30 runs with the same configuration of grammar and objective for each patient. Given that we have studied four grammars and five different objectives, a total of 600 runs were performed for each one of the in-silico patients in the training phase. Hence, we have obtained 600 glucose models for each patient.
In order to validate the goodness of the models, we have performed the test phase, where no GE was applied. In this phase, a different set of 24-hours records was employed for the same five in-silico patients. Using this test dataset, we have calculated the glucose values of each patient applying the best models obtained in the training phase.
Next, we analyze the optimizations and describe the results obtained in both phases.
### Training Phase
In order to compare the performance of the grammars, we have obtained the average fitness for each set of optimization runs. We have grouped the runs by objective function, comparing the results for all patients.
Table 4 shows the mean and standard deviation fitness values for least squares objective. As seen, Grammar 2 obtains the best average fitness values for three of the patients, and is very close to the best value for Patient 5. Grammar 4 obtains values close to Grammar 2, but always worse.
The mean and standard deviation fitness values for the average error objective are shown in Table 5. Grammar 4 obtains here two out of five best results, being close to the best one in patient Patient 3. Grammar 2 also performs quite well, obtaining second best results where Grammar 4 wins.
Table 6 displays the mean and standard deviation fitness for the maximum error objective. Here, Grammars 3 and 4 obtain two best results each. However, Grammar 4 is very close to the best value for Patient 3, where Grammar 2 wins.
The mean and standard deviation fitness values for RSME objective are shown in Table 7. We can see here that Grammar 2 obtains four out of five best results. Grammar 4 obtains four second best values, which is also a good performance.
Table 8 presents mean and standard deviation fitness values for objective MAD. Here we found that Grammar 1 obtains two best results and one second best.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Patient & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Patient 1 & \(90707.25_{34144.06}\) & \(\mathbf{45248.19_{8007.10}}\) & \(107420.94_{22109.20}\) & \(45595.31_{7059.14}\) \\ Patient 2 & \(178148.928_{511.68}\) & \(\mathbf{163723.13_{92492.90}}\) & \(189369.64_{49435.68}\) & \(172541.57_{97498.73}\) \\ Patient 3 & \(83291.12_{20894.22}\) & \(\mathbf{50788.42_{10489.82}}\) & \(95872.31_{24151,13}\) & \(60035.69_{16633.91}\) \\ Patient 4 & \(89494.99_{10067.22}\) & \(97425.94_{11745.69}\) & \(\mathbf{87620.73_{11431.03}}\) & \(98039.99_{12244.67}\) \\ Patient 5 & \(\mathbf{46531.15_{10810.73}}\) & \(46826.41_{2588.46}\) & \(49618.53_{10649.51}\) & \(49502.43_{13359.12}\) \\ \hline \end{tabular}
\end{table}
Table 4: Mean and standard deviation fitness values of \(F_{1}\), **least squares**.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Patient & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Patient 1 & \(25.74_{4.38}\) & \(16.82_{1.82}\) & \(25.72_{3.95}\) & \(\mathbf{16.64_{1.73}}\) \\ Patient 2 & \(30.96_{1.46}\) & \(34.35_{8.27}\) & \(31.06_{2.25}\) & \(\mathbf{29.66_{9.04}}\) \\ Patient 3 & \(24.86_{3.30}\) & \(\mathbf{18.09_{2.47}}\) & \(24.85_{4.25}\) & \(19.19_{3.36}\) \\ Patient 4 & \(23.94_{1.82}\) & \(25.18_{2.19}\) & \(\mathbf{23.50_{1.52}}\) & \(24.54_{2.37}\) \\ Patient 5 & \(\mathbf{16.39_{2.02}}\) & \(17.13_{1.76}\) & \(16.74_{1.92}\) & \(17.28_{1.75}\) \\ \hline \end{tabular}
\end{table}
Table 5: Mean and standard deviation fitness values of \(F_{2}\), **average error**.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Patient & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Patient 1 & \(68.29_{7.99}\) & \(43.74_{1.72}\) & \(67.99_{9.72}\) & \(\mathbf{42.94_{1.87}}\) \\ Patient 2 & \(105.24_{9.56}\) & \(98,87_{18.35}\) & \(\mathbf{98.09_{11.95}}\) & \(106.20_{14.38}\) \\ Patient 3 & \(54.95_{3.47}\) & \(\mathbf{44.38_{3.70}}\) & \(56.82_{4.67}\) & \(44.40_{3.37}\) \\ Patient 4 & \(67.92_{3.37}\) & \(68.52_{3.04}\) & \(\mathbf{66.14_{5.32}}\) & \(68.49_{3.14}\) \\ Patient 5 & \(48.23_{3.71}\) & \(44.07_{3.60}\) & \(46.83_{5.10}\) & \(\mathbf{43.80_{6.55}}\) \\ \hline \end{tabular}
\end{table}
Table 6: Mean and standard deviation fitness values of \(F_{3}\), **maximum error**.
In general, the best objective-grammar combination will be the one that obtains best average fitness for any input data. In our experiments, none of the combinations reached this goal. Grammar 2 is close to it in least squares and RSME objectives, but the other grammars perform well in the other objectives.
Therefore, in order to complete this analysis, we next compare the quality of the solutions obtained for each one of the patients. Breaking down these results by objective will give the idea of which fitness function could be better.
### Analysis and Test Phase
Once we have seen the overall performance of the grammars and fitness functions, we analyze the results for our input datasets.
For each patient, we have calculated the percentage that the average error of each simulation run represents in the range of the patient glucose values. Hence, we have obtained the mean and standard deviation of the percentage average error for the 30 runs of each grammar and objective combination.
Then, we have run the test phase for the best grammar and objective combination of each patient's training. To this aim we needed different inputs which, in this case, consisted of different 24-hours datasets generated with the AIDA simulator. Hence, in order to obtain variations of the actual glucose values, we changed the parameters in the simulator, varying carbohydrates and/or insulin units to represent realistic situations like bigger or smaller meals and small changes in the insulin doses.
Next, we analyze the results for each patient dataset.
#### 6.2.1 Patient 1
Table 9 shows the mean and standard deviation of percentage average error for Patient 1. As seen, the best combination of grammar and objective is Grammar 2 optimizing MAD. Notice that the best average error is not obtained optimizing the average error objective, which also happens for Grammar 1 and least squares objective.
Figure 6a shows the glucose values obtained with the best grammar-objective combination of the training phase for Patient 1. The actual glucose curve of the patient (in blue), the glucose value generated with the best solution of this combination (in red) and the glucose value generated with the average of the 30 solutions (in yellow) are displayed in the figure. The best solution obtained a percentage average error value of 7.37%, and its expression was the following:
\[GL(k+1)=GL(k)+CH(k-1)-cos(IL(k-1))+tan(exp(IL(k-1)+cos(tan(exp(exp(cos(k)))))))\]
The test phase for Patient 1 was run with a dataset obtained by decreasing the 10 AM snack from 20 to 15 carbohydrate units, increasing lunch from 40 to 45 carbohydrate units and decreasing dinner from 30 to 25 carbohydrate units in the simulator. Insulin values were not modified. Figure 6b shows the actual glucose value given by the simulator, as well as the values given by the best and average solutions obtained from training. As seen, the first third of the best solution is close to the actual glucose, while in the rest the gap is bigger than in training. The best solution obtained a percentage average error value of 7.41% in the test phase.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Objective & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Least Squares & \(16.18_{3.7}\) & \(11.37_{1.38}\) & \(18.04_{2.23}\) & \(11.6_{1.26}\) \\ Average Error & \(16.68_{2.83}\) & \(10.9_{1.18}\) & \(16.67_{2.56}\) & \(10.79_{1.12}\) \\ Max. Error & \(19.22_{2.56}\) & \(15.29_{1.05}\) & \(19.37_{1.96}\) & \(15.11_{0.76}\) \\ RSME & \(17.21_{2.84}\) & \(11.24_{0.87}\) & \(18.04_{2.05}\) & \(11.61_{1.36}\) \\ MAD & \(17.72_{1.57}\) & \(\mathbf{10.68_{0.99}}\) & \(18.24_{2.03}\) & \(10.91_{0.82}\) \\ \hline \end{tabular}
\end{table}
Table 9: Mean and standard deviation of percentage average error, patient Patient 1.
Figure 6: Best combination for Patient 1: Grammar 2 and MAD.
#### 6.2.2 Patient 2
Statistics for Patient 2 are shown in Table 10. Here, the best percentage average error is obtained with Grammar 4 optimizing the average error. This objective also obtains the best value for Grammar 1, while for Grammar 2 it is not as good as least squares and RSME.
Figure 7a shows the actual glucose value obtained from the simulator, as well as the best solution and the average of the runs for Grammar 4 optimizing average error. It can be seen that the average glucose does not follow the actual glucose as well as the best solution does. This is due to the variability of the solutions in this combination, reflected in the standard deviation value of 4.07%, which is somewhat high. The best solution obtained a percentage average error value of 6.62%, and its expression was the following:
\[GL(k+1)=\frac{41.57*GL(k)}{43.24}+k*cos(exp(exp(sin(IS(k-1))-IL(k)-cos(IS(k-1)* IS(k-1)))))\]
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Objective & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Least Squares & \(14.25_{0.58}\) & \(14.07_{4.84}\) & \(14.68_{2.21}\) & \(14.9_{4.96}\) \\ Average Error & \(13.93_{0.65}\) & \(15.45_{3.72}\) & \(13.97_{1.01}\) & \(\mathbf{13.34_{4.07}}\) \\ Max. Error & \(21.21_{2.34}\) & \(19.27_{4.52}\) & \(19.25_{3.26}\) & \(21.18_{3.35}\) \\ RSME & \(14.06_{0.56}\) & \(14.9_{4.42}\) & \(14.15_{1.69}\) & \(16.36_{4.44}\) \\ MAD & \(14.18_{0.28}\) & \(17.09_{4.99}\) & \(13.83_{2.75}\) & \(15.26_{4.57}\) \\ \hline \end{tabular}
\end{table}
Table 10: Mean and standard deviation of percentage average error, patient Patient 2.
Figure 7: Best combination for Patient 2: Grammar 4 and average error.
The test phase for Patient 2 consisted of varying the insulin doses by one unit up or down. The most relevant change in the actual glucose in the test phase is the decrease around measure number 20, as shown in Figure 7b. This was caused by increasing the long effect insulin from 6 to 7 units. The other changes in the insulin were not so evident in the actual glucose. However, given that the best model is very dependent on the insulin, those changes moved the best solution curve further from the actual values than in training. As a result, the best solution obtained a percentage average error value of 11.33% in the test phase.
#### 6.2.3 Patient 3
The percentage average error values for Patient 3 are shown in Table 11. The best result is obtained with Grammar 2 optimizing the average error. Besides, we can see that Grammar 2 obtains the best average for all objectives except maximum error, where it is also very close indeed.
The plots for the glucose in the training phase are displayed in Figure 8a. Both the best and average solutions have a shape similar to the actual glucose. However, even though the shapes are similar, due to the short range of glucose values the best solution obtained a percentage average error value of 9.83%. Notice that this percentage is calculated with respect to the range of glucose of each patient. The expression of the best solution was the following:
\[GL(k+1)=GL(k)+CH(k)-tan(tan(sin(exp(tan(IS(k-1))))))-tan(k*14.33)\]
In the test phase of Patient 3 we increased the lunch carbohydrates from 30 to 40 units and the short insulin from 3 to 4 units. As seen in Figure 8b, this caused a lower peak between measures 50 and 60 in the actual glucose. We also decreased the 5 PM dinner from 30 to 25 carbohydrate units maintaining the insulin doses. Hence, we see that the end of the glucose curve
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Objective & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Least Squares & \(18.91_{3.15}\) & \(14.71_{1.73}\) & \(20.58_{3.59}\) & \(15.87_{2.33}\) \\ Average Error & \(19.26_{2.55}\) & \(\mathbf{14.01_{1.91}}\) & \(19.25_{3.29}\) & \(14.86_{2.6}\) \\ Max. Error & \(22.28_{0.98}\) & \(18.42_{2.35}\) & \(22.34_{1.82}\) & \(18.23_{1.97}\) \\ RSME & \(19.17_{2.94}\) & \(14.61_{2.14}\) & \(19.66_{3.24}\) & \(15.38_{2.21}\) \\ MAD & \(18.56_{2.55}\) & \(15.66_{3.52}\) & \(18.95_{2.93}\) & \(14.96_{3.25}\) \\ \hline \end{tabular}
\end{table}
Table 11: Mean and standard deviation of percentage average error, patient Patient 3.
is lower in the test plot. In this patient, these variations are not captured by the best and average models. In fact, the best solution obtained a percentage average error value of 11.53% in the test phase.
#### 6.2.4 Patient 4
Table 12 shows the percentage average error values for Patient 4. The best combination is Grammar 3 with the RSME objective. Once again, minimizing the average error objective does not obtain the best average results. Besides, for this patient grammars 1 and 3 obtain better results than the others. These grammars may consider any previous carbohydrate or insulin values, while grammars 2 and 4 may consider just the two previous data points. This behavior is caused by the shape of the actual glucose values, which resemble a sawtooth in the middle and are very difficult to imitate.
Figure 9a shows the peculiar shape of the actual glucose, as well as the even more peculiar squared shape of the best solution, and the flat average solution. The latter is similar to an interpolation, and does not help much.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Objective & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Least Squares & \(17.24_{1.46}\) & \(18.07_{1.34}\) & \(16.86_{1.35}\) & \(18.23_{1.38}\) \\ Average Error & \(16.91_{1.28}\) & \(17.79_{1.54}\) & \(16.6_{1.07}\) & \(17.34_{1.67}\) \\ Max. Error & \(20.46_{1.37}\) & \(20.57_{0.92}\) & \(20.24_{1.33}\) & \(20.59_{0.85}\) \\ RSME & \(17.36_{1.21}\) & \(17.79_{1.6}\) & \(\mathbf{16.46_{1.6}}\) & \(17.34_{1.78}\) \\ MAD & \(17.47_{0.94}\) & \(19.2_{1.14}\) & \(16.97_{1.38}\) & \(18.96_{1.56}\) \\ \hline \end{tabular}
\end{table}
Table 12: Mean and standard deviation of percentage average error, patient Patient 4.
Figure 8: Best combination for Patient 3: Grammar 2 and average error.
The best solution obtained a percentage average error value of 13.46%. The expression of the best solution was the following:
\[GL(k+1)=cos(GL(k-11))*46.98+15.45+96.93\]
The expression in this case is quite constant, and the cosine depends on the glucose estimated for a time step almost three hours earlier. For the test phase we decreased the short effect insulin at breakfast from 6 to 5 units, decreased the 10 AM snack from 20 to 10 carbohydrate units, increased dinner from 40 to 50 carbohydrate units, increased the short effect insulin at dinner from 3 to 4 units, and decreased the 10 PM snack from 20 to 10 carbohydrate units. These changes modified both the actual glucose in the test and the best solution values, as seen in Figure 9b. However, this is a difficult dataset, and the best solution obtained a percentage average error value of 26.40% in the test phase.
#### 6.2.5 Patient 5
Looking at the results for Patient 5 objective by objective, presented in Table 13, we see that there are no very significant differences between the grammars. However, the best result is obtained with Grammar 1 optimizing the average error objective.
Actual glucose values for training, together with the best and average solutions, are displayed in Figure 10a. As seen, the best solution follows
Figure 9: Best combination for Patient 4: Grammar 3 and RSME.
the actual glucose, while the average, once again, behaves like an interpolation. The best solution obtained a percentage average error value of 9.34%. The expression of the best solution was the following:
\[GL(k+1)=(89.91+k)+50.51-\frac{cos(37.82*k)}{tan(84.79)}\]
In this case it is clear that the best solution will obtain worse results in the test phase because its expression only depends on \(k\). So, for the test we reduced the snacks from 20 to 10 carbohydrate units, and increased lunch and dinner by 5 carbohydrate units. As seen in Figure 10b, the best solution remains the same as in training, while the actual glucose value changes. The best solution obtained a percentage average error value of 11.8% in this test phase.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Objective & Grammar 1 & Grammar 2 & Grammar 3 & Grammar 4 \\ \hline Least Squares & \(16.62_{1.93}\) & \(16.98_{1.44}\) & \(17.19_{1.72}\) & \(17.17_{2.46}\) \\ Average Error & \(\mathbf{16.11_{1.98}}\) & \(16.84_{1.73}\) & \(16.46_{1.88}\) & \(16.99_{1.72}\) \\ Max. Error & \(25.24_{2.66}\) & \(22.56_{2.6}\) & \(24.15_{3.47}\) & \(22.36_{4.37}\) \\ RSME & \(17.17_{1.43}\) & \(16.71_{1.63}\) & \(17.12_{1.92}\) & \(16.8_{2.24}\) \\ MAD & \(16.32_{1.95}\) & \(17.19_{1.47}\) & \(16.87_{1.75}\) & \(17.16_{1.74}\) \\ \hline \end{tabular}
\end{table}
Table 13: Mean and standard deviation of percentage average error, patient Patient 5.
Figure 10: Best combination for Patient 5: Grammar 1 and average error.
### Discussion
The results obtained in our experiments raised several issues related both with grammars and with objective functions.
Regarding the grammars, we have found that grammars 2 and 4, which consider only the data immediately previous to each time step, behave quite well for all the objectives under study. Therefore, the intuition that recent values summarize the previous history (glucose, meals, insulin) is validated here. In addition, the expressions obtained as best solutions with grammars 2 and 4 depend on the carbohydrate and insulin units. Therefore, these expressions consider the input values that a patient can collect and, as a consequence, they may behave well in test phases.
On the other hand, despite their good average error values, grammars 1 and 3 have provided expressions that are less useful because they are less parameterized by the patient's inputs. Moreover, the average solutions in these cases tend to interpolate the glucose values, while for the other grammars the average is similar to the actual value.
In terms of objectives, and given that we use the average error as a quality measure, this could be the best objective to choose. Nevertheless, most of the studied objectives behave quite well, with the exception of the maximum error. In fact, minimizing this objective does not tend to minimize the average error much, so it can be viewed as an opposing objective that could be useful for future multi-objective optimizations.
## 7 Conclusions and Future Work
In this paper we propose an evolutionary method based on GE that automatically obtains custom models of blood glucose levels in diabetic patients. To the best of our knowledge, this is the first proposal where GE is applied to obtain glucose models in diabetics.
The main advantages of our method are: (1) the model is obtained as a custom expression for each patient, which improves the individual treatment of a diabetic person; (2) the training dataset can be easily collected by a patient or by a simple system because models require values of previous glucose measures, carbohydrate units ingested and insulin doses injected; (3) this method may be integrated in a progressive optimization system where the model is generated and stored and, after several days of data gathering, the model can be updated using the new dataset.
In our work, we have studied four different grammars and five different objective functions for our optimization scheme on five in-silico patients. The grammars incorporated some knowledge about the problem, trying to limit the search space of the algorithm. We have concluded that grammars which consider previous data close to the current time step are better than those able to select any previous data. That is, the most recent data are more valuable than the older ones. In addition, these grammars obtained more useful models because their expressions depend on almost all of the involved variables. Besides, we saw that optimizing the average error objective obtains the best results, and we identified that the maximum error is an opposing objective that could be considered in future multi-objective optimizations.
Once the training phase finished, we selected the best model expressions for each patient and ran the test phase with a different dataset. The results showed a mean percentage average error of 13.69% for the best models in the test phase. In addition, the best models predicted quite well the dangerous situations of hyper- and hypoglycemia for all the patients.
In the future, we expect to manage datasets from real patients, which will allow the study of new variables in the models, like stress or exercise. This will require the refinement of the grammars. In addition, we will consider multi-objective optimization with both the average and maximum error objectives. We will also consider integrating fuzzy regression into the GP process [27][28].
|
2310.11722
|
Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical
Foundation Model: A Computational Analysis
|
Foundation Models (FMs) have the potential to revolutionize the way users
self-diagnose through search engines by offering direct and efficient
suggestions. Recent studies primarily focused on the quality of FMs evaluated
by GPT-4 or their ability to pass medical exams, no studies have quantified the
extent of self-diagnostic atomic knowledge stored in FMs' memory, which is the
basis of foundation models to provide factual and reliable suggestions. In this
paper, we first constructed a benchmark of Self-diagnostic Atomic Knowledge
(SdAK), including the most common types of atomic knowledge involved in
self-diagnostic queries, with 17 atomic types and a total of 14, 048 pieces of
atomic knowledge. Then, we evaluated both generic and open-source Chinese
medical FMs on the benchmark. The experimental results showcase that generic
FMs perform better than medical FMs in terms of self-diagnostic atomic
knowledge. Error analysis revealed that both generic and medical FMs are
sycophantic, e.g., always catering to users' claims when it comes to unknown
knowledge. We further explored different types of data commonly adopted for
fine-tuning medical FMs, i.e., real-world, semi-distilled, and distilled data,
and found that distilled data can benefit FMs most. The code and data are
available at https://github.com/FreedomIntelligence/SDAK.
|
Yaxin Fan, Feng Jiang, Benyou Wang, Peifeng Li, Haizhou Li
|
2023-10-18T05:42:22Z
|
http://arxiv.org/abs/2310.11722v3
|
Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis
###### Abstract
Foundation Models (FMs) have the potential to revolutionize the way users self-diagnose through search engines by offering direct and efficient suggestions. Recent studies primarily focused on the quality of FMs evaluated by GPT-4 or their ability to pass medical exams, no studies have quantified the extent of self-diagnostic atomic knowledge stored in FMs' memory, which is the basis of foundation models to provide factual and reliable suggestions. In this paper, we first constructed a benchmark of Self-diagnostic Atomic Knowledge (SdAK), including the most common types of atomic knowledge involved in self-diagnostic queries, with 17 atomic types and a total of 14, 048 pieces of atomic knowledge. Then, we evaluated both generic and open-source Chinese medical FMs on the benchmark. The experimental results showcase that generic FMs perform better than medical FMs in terms of self-diagnostic atomic knowledge. Error analysis revealed that both generic and medical FMs are sycophantic, e.g., always catering to users' claims when it comes to unknown knowledge. We further explored different types of data commonly adopted for fine-tuning medical FMs, i.e., real-world, semi-distilled, and distilled data, and found that distilled data can benefit FMs most. The code and data are available at [https://github.com/FreedomIntelligence/SDAK](https://github.com/FreedomIntelligence/SDAK).
## 1 Introduction
In the digital age, seeking health-related information on the Internet for self-diagnosis has become a common practice White and Horvitz (2009); Demner-Fushman et al. (2019); Farnood et al. (2020). The health-related information can assist users in making necessary medical decisions, such as self-treatment or going to the hospital for professional treatment. With the development of generative models Ouyang et al. (2022); Sun et al. (2021); OpenAI (2023), Foundation Models (FMs) hold the promise of revolutionizing the retrieval paradigm that seeks health-related suggestions via a search engine because they can provide more efficient suggestions.
Recently, more and more studies Wang et al. (2023); Zhang et al. (2023); Zhu and Wang (2023); Yang et al. (2023) attempt to enhance the medical capabilities of open-source FMs in Chinese by fine-tuning them on data from the medical field. To evaluate the medical abilities of FMs, most previous work focused on the quality of FMs as evaluated by GPT-4 Zhang et al. (2023); Yang et al. (2023) or on their ability to pass medical exams Umapathi et al. (2023); Wang et al. (2023). However, the results evaluated by GPT-4 are not fair and transparent because GPT-4 has problems with position bias Wang et al. (2023) and limited knowledge in the medical domain. Besides, passing an exam question requires FMs to remember, recall, and reason about the knowledge Zheng et al. (2023). Even if FMs give a wrong answer, it is unclear which part is responsible for it, making it difficult to move forward with the development of FMs in the medical domain.
Since memory is the basis of recalling and reasoning, some works Min et al. (2023); Chern et al. (2023) pay more attention to the memorization abilities of FMs and evaluate the factuality of the atomic knowledge contained in the long-form contexts generated by FMs. Atomic knowledge is a fundamental unit conveying a single piece of information Min et al. (2023), and the performance of FMs on atomic knowledge reflects their memorization ability. Nevertheless, there are two drawbacks to applying this approach in the medical domain. First, the method requires the use of GPT-4 to break long-form content into a series of atomic knowledge, but the limited medical capability of GPT-4 hinders its application in the medical field. Second, responses to the same query from different FMs may
contain different atomic knowledge, making it difficult to fairly compare the extent of self-diagnostic atomic knowledge in memory across open-source Chinese FMs.
To address the above limitations, we manually construct a benchmark of Self-diagnostic Atomic Knowledge (SdAK), which not only gets rid of the reliance on GPT-4 but also allows a fair comparison of the memorization abilities of Chinese medical FMs. The benchmark includes the most common types of atomic knowledge involved in real-world self-diagnostic queries, with 17 atomic types and a total of 14,048 pieces of atomic knowledge. Each piece of atomic knowledge consists of a factual claim and a non-factual claim in the form of an implication and a non-implication relation, respectively. An FM is considered to memorize a piece of atomic knowledge only if it both supports the factual claim and refutes the non-factual claim.
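A minimal sketch of this pair-wise scoring rule is shown below; `fm_judge` is a hypothetical placeholder for prompting a model and parsing its verdict, which the benchmark itself does not prescribe in this form.

```python
def memorizes(fm_judge, factual_claim, non_factual_claim):
    """Credit an FM with a piece of atomic knowledge only if it supports the
    factual claim and refutes the non-factual one."""
    return (fm_judge(factual_claim) == "support"
            and fm_judge(non_factual_claim) == "refute")

def benchmark_accuracy(fm_judge, claim_pairs):
    """`claim_pairs`: iterable of (factual, non_factual) claim strings."""
    results = [memorizes(fm_judge, f, nf) for f, nf in claim_pairs]
    return sum(results) / len(results)
```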
Then, we evaluate generic and Chinese medical FMs on the benchmark to explore the extent of self-diagnostic atomic knowledge stored in their memory. We observed that generic FMs perform better than medical FMs in terms of atomic knowledge and instruction-following ability. Error analysis showed that both generic and medical FMs are sycophantic, e.g., always catering to users' claims. Besides, generic FMs show stronger safety by providing more rigorous explanations of atomic knowledge, which medical FMs can learn through distilled data.
In addition, we analyze the performance of FMs on different types of atomic knowledge and find that generic FMs perform well on atomic types involving medical common sense and poorly on some more specialized atomic types, while the medical FMs perform worse than generic FMs on both general and specialized medical knowledge. This indicates that open-source Chinese medical FMs still require a great deal of attention.
Finally, we further explored the effect of the different types of data commonly adopted for supervised fine-tuning of open-source Chinese medical FMs, i.e., real-world, semi-distilled, and distilled data. The experimental results showed that distilled data contributes the most in terms of atomic knowledge and instruction-following capabilities, while real-world data contributes the least. This sheds light on the direction for enhancing the medical abilities of open-source FMs.
## 2 Related Work
We first introduce the open-source Chinese medical FMs, then introduce the commonly adopted evaluation method in medical field, and finally introduce the related work of fact-checking.
### Chinese Medical FMs
To enhance the medical capability of open-source FMs Du et al. (2022); Scao et al. (2022); Touvron et al. (2023); Baichuan (2023), recent work commonly fine-tunes the FMs on doctor-patient conversations, which are divided into three types according to the source of the doctor and the patient. The first data type is **Real-world**, where both doctor and patient are from the real world. These works Xiong et al. (2023); Chen et al. (2023); Xu (2023); Wang et al. (2023); (2023) collected the data from public resources and fine-tuned various types of FMs. The second type is **Semi-distilled**, where the doctor is played by advanced FMs (ChatGPT or GPT-4) and the patients are from the real world. These works Zhu and Wang (2023); Yang et al. (2023) collected the queries of patients from the Internet and fed the queries to advanced FMs to obtain answers. The last is **Distilled**, where both the doctor and the patient are played by ChatGPT or GPT-4. Zhang et al. (2023) proposed _HuatuoGPT_ by leveraging both distilled and real-world data to fine-tune Baichuan (Baichuan, 2023).
### Medical Evaluation Methods
Early efforts to evaluate the medical abilities of FMs focused on case studies Xu (2023); Xiong et al. (2023); Chen et al. (2023); Zhu and Wang (2023) or manual analysis Wang et al. (2023). However, case studies can hardly measure the medical capability of FMs fairly, and manual evaluation faces the problem of high cost. Then, some work utilized GPT-4 to evaluate medical FMs from multiple perspectives, such as politeness, professional ability, safety, etc. However, the limited medical capability of GPT-4 is still a concern. Recently, some studies Wang et al. (2023) have paid attention to the ability of FMs to pass medical exams. Nevertheless, passing a medical question requires the ability of FMs to memorize, recall, and reason about the knowledge Zheng et al. (2023). The opacity of failures on medical questions hinders the development of open-source FMs in the medical domain. Since memory is the basis of recalling and reasoning, we construct a benchmark of self-diagnostic atomic
knowledge manually in this paper, which not only gets rid of the reliance on GPT-4 but also allows a fair comparison of the memorization abilities of open-source Chinese medical FMs.
### Fact-checking
The fact-checking task Thorne et al. (2018); Guo et al. (2022) aims to determine whether claims are supported by the evidence provided, and it has been an active area of research in NLP. Previous work mainly focuses on the domains of society Thorne et al. (2018), politics Hanselowski et al. (2019), online rumors Augenstein et al. (2019), healthcare Wadden et al. (2020); Saakyan et al. (2021); Sarrouti et al. (2021); Mohr et al. (2022), etc. Recently, some researchers Liu et al. (2023); Min et al. (2023) have paid attention to the factuality evaluation of fine-grained atomic knowledge that conveys a single piece of information. Min et al. (2023) introduced FACTSCORE to conduct atomic evaluation by breaking a generation into a series of atomic claims. Chern et al. (2023) proposed FACTOOL for atomic evaluation, which utilizes GPT-4 to evaluate the factuality of atomic claims based on evidence retrieved by a search engine. In this paper, we adopt the task form of fact-checking to explore the extent of self-diagnostic atomic knowledge in FMs' memory. The difference is that we do not provide evidence, and FMs need to evaluate the factuality of atomic claims according to the atomic knowledge they have memorized.
## 3 Construction of Self-diagnostic Atomic Knowledge Benchmark
To explore the extent of atomic knowledge for self-diagnosis stored in open-source Chinese medical FMs, we first constructed a benchmark, which contains the most common types of atomic knowledge involved in users' queries for self-diagnosis. Figure 1 shows the process of benchmark construction. We first conducted the thematic analysis on self-diagnostic queries to identify the most common types of atomic knowledge. Then, the factual and non-factual claims for each atomic type are constructed. Finally, the manual evaluation is conducted to ensure the reliability of the atomic knowledge.
### Types of Atomic Knowledge
Since it is impossible to consider all of the self-diagnostic atomic knowledge in the world, we mainly focus on the most common types of atomic knowledge involved in self-diagnostic queries. To obtain the most common atomic types, we conducted a manual analysis of the KUAKE-QIC dataset (Zhang et al., 2022), which focuses on the intent classification of self-diagnostic queries issued via a search engine and covers 10 types of intents. Examples are shown in Table 6 in Appendix A.
For each query intent, we conducted a thematic analysis (Braun and Clarke, 2012) of 200 randomly selected samples to identify the types of atomic knowledge. Thematic analysis (Braun and Clarke, 2012; Zheng et al., 2023) is a manual method to identify themes within data through an inductive-then-deductive procedure. We first perform induction by assigning a preliminary type of atomic knowledge to each selected sample, focusing on medical-related knowledge such as 'Disease-Symptom', 'Medicine-Effect', etc. Then, we deduce the most common types of atomic knowledge by aggregating a preliminary type into a broader atomic type whenever more samples fall into it. Take the query with _Diagnosis_ intent in Figure 1 as an example: _'breast pain'_ and _'breast cancer'_ in this query are a symptom and a disease, respectively, so the atomic type involved in this query is 'Disease-Symptom'.
Figure 1: Benchmark construction of self-diagnostic atomic knowledge.
Table 1 shows the atomic types and percentages contained in the queries with various intents. We can find that over 80% of queries of each intent fall into different atomic types we deduced, indicating that atomic knowledge is a more fine-grained fundamental unit. Besides, the queries with different intents tend to involve the same type of atomic knowledge, e.g., queries with both _'Diagnosis'_ and _'Cause'_ intents involve the same atomic type of 'Disease-Symptom', which demonstrates the necessity and efficiency of evaluating FMs in terms of atomic knowledge.
It is worth noting that we discard the intent type '_Price_' because it is always about the cost of treatment, which is hard to quantify. In addition, we also drop the atomic type '_Disease-Hospital_' in the intent type '_Advice_', which is always related to recommended hospitals in different locations. Consequently, we obtain the 17 most common types of atomic knowledge from real-world self-diagnostic queries, as shown in Table 1.
### Construction of Atomic Claims
After obtaining the most common types of self-diagnostic atomic knowledge, we construct atomic claims that each convey a single piece of atomic knowledge for each atomic type. While some Chinese medical knowledge graphs, e.g., CMeKG (Odmaa et al., 2019), are direct sources of atomic knowledge, data contamination is a problem that cannot be ignored, because some Chinese medical FMs, e.g., BenTsao (Wang et al., 2023) and ChatGLM-Med (Wang et al., 2023), have already incorporated knowledge-graph data to empower the medical abilities of open-source FMs.
Hence, to fairly quantify the extent of self-diagnostic atomic knowledge stored in the memory of open-source Chinese medical FMs, we access structured medical content from public medical websites (Footnotes 1 and 2). On the one hand, the medical content is reliable because it is edited and verified by professional medical teams. On the other hand, these websites are also a main source of health-related information for self-diagnostic queries.
Footnote 1: [https://www.xiahoe.cn/medical](https://www.xiahoe.cn/medical)
Footnote 2: [https://www.120ask.com/disease](https://www.120ask.com/disease)
As shown in Figure 1, we extract the atomic knowledge for each disease-related atomic type from the structured disease content. For example, we extract the disease _Pancreatic tail cancer_ and the symptom _abdominal pain_ for the 'Disease-Symptom' atomic type. Then, we heuristically construct at most 1000 factual claims for each atomic type in the form of an implication relation, each conveying a single piece of atomic knowledge, as shown in Table 2. Similarly, we adopt the same method to construct factual claims for the examination- and medicine-related atomic types.
Given that FMs may exhibit a sycophantic bias (Wei et al., 2023; Du et al., 2023), i.e., they tend to support the user's claims, judging only factual claims is not a reliable test. To avoid this bias, we construct a non-factual atomic claim for each factual atomic claim by converting the 'implication' into a 'non-implication' relation, as shown in Table 2. It is worth noting that FMs are considered to possess one piece of atomic knowledge only if they both support the factual claim and refute the non-factual claim.
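To make the construction procedure concrete, the following is a minimal sketch of how factual/non-factual claim pairs could be generated from extracted (disease, symptom) tuples. The template strings, function name, and example values are illustrative assumptions, not the benchmark's released code (which operates on Chinese text).

```python
# Minimal sketch of building factual / non-factual claim pairs for the
# 'Disease-Symptom' atomic type.  Templates and names are illustrative only.
from typing import Dict, List, Tuple

def build_claim_pairs(pairs: List[Tuple[str, str]], limit: int = 1000) -> List[Dict[str, str]]:
    """Turn (disease, symptom) tuples into implication / non-implication claim pairs."""
    claims = []
    for disease, symptom in pairs[:limit]:  # at most `limit` claim pairs per atomic type
        claims.append({
            "atomic_type": "Disease-Symptom",
            "factual": f"{disease} can cause the symptom of {symptom}.",         # implication
            "non_factual": f"{disease} cannot cause the symptom of {symptom}.",  # non-implication
        })
    return claims

if __name__ == "__main__":
    for claim in build_claim_pairs([("Pancreatic tail cancer", "abdominal pain")]):
        print(claim["factual"])
        print(claim["non_factual"])
```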
### Human Evaluation of Atomic Claim
To verify the reliability of the atomic claims, we conducted a manual evaluation based on evidence retrieved through a search engine. We first randomly selected 50 factual claims for each atomic type. Then, following previous work (Chern et al., 2023), we retrieved evidence by feeding the factual claims into a search engine. Finally, we kept the top 10 retrieved items as evidence and manually judged whether the evidence supports the factual claims.
| Intent | Atomic Type(s) | Percentage(s) |
| --- | --- | --- |
| Diagnosis | Disease-Symptom; Disease-Examination | 81% |
| Cause | Disease-Cause; Disease-Symptom | 64% |
| Method | Disease-Medicine; Disease-Method | 34% |
| Advice | Disease-Hospital; Disease-Department; Disease-Examination | 8% |
| Metric_explain | Examination-Range; Metric-Effect | 63%; 37% |
| Disease_express | Disease-Symptom; Disease-Infectivity; Disease-Complication | 62%; 15%; 15% |
| Result | Disease-Symptom; Western Medicine-SideEffect; Chinese Medicine-SideEffect; Food-Effect | 36% |
| Attention | Disease-Food; Disease-Prevention | 59% |
| Effect | Western Medicine-Effect; Chinese Medicine-Effect; Food-Effect | 20% |
| Price | Treatment-Price | 97% |

Table 1: Types and percentages of atomic knowledge contained in each intent of self-diagnostic queries.
Table 3 shows the results of the human evaluation, where _Support_, _Neutral_, and _Refute_ indicate that the evidence supports the claim, the evidence is insufficient, and the evidence refutes the claim, respectively. We can observe that 88% of claims are fully supported by the evidence and only 4% are refuted, which shows the reliability of the atomic claims we constructed. In addition, about 8% of factual claims cannot be verified due to insufficient evidence. We attribute this to the fact that these pieces of atomic knowledge are relatively low-frequency, so that search engines fail to retrieve related evidence. Overall, the atomic claims we constructed are reliable for quantifying the extent of self-diagnostic atomic knowledge in the memory of open-source Chinese medical FMs.
## 4 Experiments
In this section, we first introduce the foundation models for evaluation, and then provide the details of experimental settings, including the prompt, hyper-parameters, and metrics. Finally, we present the experimental results.
### Foundation Models for Evaluation
There are two types of FMs used for evaluation: generic FMs and open-source Chinese medical FMs. The generic FMs include closed-source models, i.e., ChatGPT and GPT-4 (OpenAI, 2023), and open-source Chinese FMs, i.e., ChatGLM-2 (Du et al., 2022), Baichuan2-7/13b-Chat (Baichuan, 2023), and Qwen-7/14b-Chat (Bai et al., 2023). The open-source Chinese medical FMs are categorized by the type of data used for fine-tuning: **Real-world**: BenTsao (Wang et al., 2023), ChatGLM-Med (Wang et al., 2023), DoctorGLM (Xiong et al., 2023), MedicalGPT (Xu, 2023), and Bianque (Chen et al., 2023); **Semi-distilled**: Chatmed-Consult (Zhu and Wang, 2023); **Semi-distilled & Real-world**: Zhongjing (Yang et al., 2023); and **Distilled & Real-world**: HuatuoGPT (Zhang et al., 2023).
Table 2: Examples of factual and non-factual atomic claims for each atomic type, together with the number of claim pairs per type (e.g., 840 claim pairs for the Metric-Effect type, of the form "X tests can / cannot be used for condition Y").
### Experimental Settings
#### 4.2.1 Evaluation Prompt
To evaluate the performance of FMs on the self-diagnostic atomic knowledge benchmark, we designed an appropriate prompt to instruct FMs to answer in the specified format. Since our goal is to explore the extent of self-diagnostic atomic knowledge stored in FMs' memory, we do not explore prompt engineering in depth but designed a prompt that is easy for FMs to understand. The prompt is as follows: If the following claim is correct, please reply "correct" first, and then give the reason. If not, please reply "incorrect" first, and then give the reason.
The prompt specifies two parts of the output: the **answer** and the **reason**. The answer directly gives whether the claim is supported or not and the reason provides the evidence of answers given by FMs. We concatenated the prompt and atomic claims and fed them into FMs for experiments. In our preliminary study, we explored the performance of ChatGPT with different prompts and the results showed no significant difference. The details are shown in Appendix B. Hence, we used the prompt we designed above for all experiments.
#### 4.2.2 Evaluation Metrics
To evaluate the performance of FMs on self-diagnostic atomic knowledge, we adopted three cascaded metrics: the Following Rate (FR), Accuracy (Acc), and Answer Reliability (AnsR). The evaluation process is shown in Figure 2.
**Following Rate (FR)** evaluates the abilities of FMs to follow instructions. For a piece of atomic knowledge, FMs are considered to follow the instruction if FMs can give the answers (correct or incorrect) to factual and non-factual atomic claims at the beginning of the response. **Accuracy (Acc)** measures the abilities of FMs on self-diagnostic atomic knowledge. FMs are considered to have a piece of atomic knowledge if it gives the answer _'correct'_ to the factual claim and _'incorrect'_ to the non-factual claim. **Answer Reliability (AnsR)** assesses the reliability of FMs' answers. Specifically, we randomly selected 100 pieces of atomic knowledge possessed by FMs according to their answers for manual analysis. If the reason given by FMs can support the answer '_correct_' to a factual claim, and the answer '_incorrect_' to a non-factual claim, we believe that the answers given by FMs are reliable.
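As a rough illustration of how the first two cascaded metrics can be computed from model outputs, the sketch below assumes each piece of atomic knowledge is stored as a record holding the model's replies to its factual and non-factual claims; the field names and the English answer strings are assumptions for readability (the actual prompts and replies are in Chinese).

```python
# Sketch of computing the Following Rate (FR) and Accuracy (Acc).
# Each record holds the model's replies to one factual / non-factual claim pair.
# Field names and answer strings are illustrative assumptions.
from typing import Optional

def leading_answer(reply: str) -> Optional[str]:
    """Return 'correct'/'incorrect' if the reply starts with it, else None."""
    text = reply.strip().lower()
    for answer in ("incorrect", "correct"):  # check 'incorrect' first, it contains 'correct'
        if text.startswith(answer):
            return answer
    return None

def evaluate(records: list) -> tuple:
    followed, known = 0, 0
    for record in records:
        ans_fact = leading_answer(record["reply_to_factual"])
        ans_nonfact = leading_answer(record["reply_to_non_factual"])
        if ans_fact is not None and ans_nonfact is not None:
            followed += 1
            # a piece of atomic knowledge counts as memorized only if the factual
            # claim is supported AND the non-factual claim is refuted
            if ans_fact == "correct" and ans_nonfact == "incorrect":
                known += 1
    fr = followed / len(records)
    acc = known / len(records)
    return fr, acc
```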
#### 4.2.3 Hyper-parameters
For ChatGPT and GPT-4, we adopted the _GPT-3.5-turbo-0301_ and _GPT-4-0314_ versions, respectively, with default generation settings. For the other open-source generic FMs and medical FMs, we adopted the same generation settings as Baichuan2 (Baichuan, 2023) for a fair comparison. The temperature, top_k, top_p, and repetition_penalty are set to 0.3, 5, 0.85, and 1.05, respectively, and other parameters are set to their defaults. All experiments for each FM are conducted twice, and we report the mean and standard deviation values.
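For concreteness, a typical way to apply these shared sampling settings to an open-source checkpoint is sketched below, assuming a Hugging Face `transformers`-style interface; the model name, prompt text, and output length are placeholders rather than the exact setup used here.

```python
# Sketch of querying an open-source FM with the shared generation settings.
# Assumes a Hugging Face transformers-style API; the model name is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/some-chinese-medical-fm"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

prompt = ('If the following claim is correct, please reply "correct" first, and then give the reason. '
          'If not, please reply "incorrect" first, and then give the reason. <atomic claim>')
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.3,          # shared settings reported above
    top_k=5,
    top_p=0.85,
    repetition_penalty=1.05,
    max_new_tokens=256,       # assumed output budget
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```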
### Experimental Results
Table 4 shows the performance of various FMs on the SDAK benchmark, from which we draw several findings.
Firstly, generic FMs perform better than specialized FMs in terms of instruction following. We can observe that the _following rate_ of generic FMs is almost always close to 100%, while the performance of specialized FMs varies; some FMs, e.g., Bianque and DoctorGLM, can hardly follow the instructions.
Figure 2: Process of evaluation.
This reveals that open-source Chinese medical FMs still have a lot of room for improvement in terms of instruction following.
Secondly, almost all generic FMs outperform specialized FMs in terms of atomic knowledge. Among them, GPT-4 unsurprisingly achieved the best _accuracy_ of 65.42%. Among the open-source Chinese generic FMs, Qwen-14b-Chat performs best with 57.29%, even surpassing ChatGPT by 5.57%. Besides, as the model size increases, more knowledge can be memorized by foundation models: the accuracy of Qwen on the SDAK benchmark increases by \(57.29-43.68=13.61\%\) when the model size grows from 7 billion to 14 billion parameters, and the same trend can be found for Baichuan. This suggests that increasing the model size is still a viable option to empower the medical capability of FMs. However, the best accuracy of open-source Chinese medical FMs only reaches about 25%, which suggests that they do not memorize self-diagnostic atomic knowledge well and that more effort still needs to be invested.
Thirdly, specialized FMs fine-tuned on distilled or semi-distilled data generally achieve better accuracy on the SDAK benchmark than those fine-tuned on real-world data. This indicates that open-source FMs can learn medical capabilities from advanced generic FMs, e.g., GPT-4 and ChatGPT, through distilled or semi-distilled data, which is an option worth considering to enhance the medical capabilities of open-source Chinese FMs.
Finally, the answers given by most FMs to factual and non-factual claims are reliable. Given the very small standard deviation between the two runs, we conducted a manual analysis of the first run's results. We can observe that almost all FMs achieve more than 95% in the _answer reliability_ metric, demonstrating the reliability of the answers given by FMs; this in turn supports the reliability of the _accuracy_ metric.
## 5 Analysis
In this section, we first conduct the error analysis of FMs on atomic knowledge. Then, we explore the performance of FMs on various types of atomic knowledge. Finally, we study the impact of different types of data for supervised fine-tuning on FMs.
### Error Analysis of FMs on Atomic Knowledge
To analyze the error types of FMs on SDAK, we first randomly selected 100 pieces of atomic knowledge not memorized by FMs from the first run's results. Then, we conducted a thematic analysis of the responses to the factual and non-factual claims. The errors can be divided into four types. **NotFollow**: FMs do not directly provide the answer ('correct' or 'incorrect') to the claims.
| Domain | Data Type | FMs | FR (%) | Acc (%) | AnsR (%) |
| --- | --- | --- | --- | --- | --- |
| Generic | - | GPT-4 | 99.96 (0.00) | **65.42 (0.60)** | 100 |
| Generic | - | Qwen-14b-Chat | 100 (0.00) | 57.29 (0.03) | 98 |
| Generic | - | ChatGPT | 99.97 (0.00) | 51.72 (0.40) | 97 |
| Generic | - | Qwen-7b-Chat | 100 (0.00) | 43.68 (0.10) | 98 |
| Generic | - | Baichuan2-13b-Chat | 99.71 (0.00) | 42.01 (0.05) | 96 |
| Generic | - | ChatGLM2 | 99.84 (0.01) | 37.90 (0.04) | 97 |
| Generic | - | Baichuan2-7b-Chat | 99.89 (0.01) | 16.14 (0.09) | 95 |
| Specialized | Semi-distilled & Real | Zhongjing | 90.22 (0.17) | 24.78 (0.10) | 97 |
| Specialized | Semi-distilled | Chatmed-Consult | 85.10 (0.14) | 24.50 (0.34) | 98 |
| Specialized | Distilled & Real | HuatuoGPT | 99.73 (0.00) | 16.15 (0.01) | 98 |
| Specialized | Real-world | MedicalGPT | 76.04 (0.50) | 7.86 (0.50) | 100 |
| Specialized | Real-world | ChatGLM-Med | 94.91 (0.07) | 7.46 (0.15) | 75 |
| Specialized | Real-world | BenTsao | 84.43 (0.06) | 3.35 (0.07) | 70 |
| Specialized | Real-world | Bianque | 0 | - | - |
| Specialized | Real-world | DoctorGLM | 0 | - | - |

Table 4: Performance of generic and specialized FMs on self-diagnostic atomic knowledge. No further statistics were computed for Bianque and DoctorGLM because they can hardly follow instructions (FR = 0).
**Sycophancy**: FMs support both the factual and the non-factual claims. **Safety**: FMs argue that the claims are not strictly expressed and provide a more rigorous explanation. **Misinterpretation**: FMs misinterpret a non-factual claim as a factual claim. Appendix C shows examples of each type.
Table 5 shows the results of the manual analysis. We can see that generic FMs do not make _NotFollow_ errors thanks to their strong instruction-following ability, while most specialized FMs make _NotFollow_ errors because of the degradation of this ability. In addition, the most common error type is Sycophancy for both generic and specialized FMs. This suggests that FMs tend to support users' claims when faced with unknown self-diagnostic atomic knowledge, reflecting their characteristic friendliness. Furthermore, generic FMs always exhibit stronger safety than specialized FMs by providing a more rigorous explanation of atomic claims. Notably, the FMs adopting semi-distilled or distilled data, i.e., Chatmed-Consult, Zhongjing, and HuatuoGPT, also show better safety than the FMs using real-world doctor-patient conversations, i.e., MedicalGPT, ChatGLM-Med, and BenTsao, indicating that the safety of advanced FMs can be learned through distilled data. Finally, specialized FMs make more _Misinterpretation_ errors than generic FMs, indicating that the understanding ability of specialized FMs still needs to be further improved.
### Performance of FMs on Various Types of Atomic Knowledge
To explore the performance of different FMs on various types of atomic knowledge, we plotted the radar map, as shown in Figure 3. We can observe that GPT-4 performs well for some atomic types involved in medical common sense (right half part in Figure 3), achieving about 90% in the Accuracy metric. However, the performance of some more professional atomic types (upper left part in Figure 3), such as treatment of disease, traditional Chinese medicine diet, etc., is poor, which indicates that GPT-4 is still insufficient in medical professionalism. In addition, specialized FMs perform variably and are poorer than GPT-4 in each atomic type. It is worth noting that the specialized FMs in the figure are fine-tuned using distilled data from advanced FMs, e.g., ChatGPT or GPT-4, which makes it difficult for specialized FMs to surpass the advanced FMs in each atomic type.
### Effect of Different Types of Data on Specialized FMs
To further explore the effect of different types of data on the atomic knowledge of specialized FMs, we fine-tune the same backbone, Baichuan-7b-base, on distilled, semi-distilled, and real-world data. Specifically, the real-world and distilled data are from HuatuoGPT (Zhang et al., 2023), with 69768 and 61400 single-turn conversations, respectively. The semi-distilled data are from Chatmed-Consult (Zhu and Wang, 2023), with 549326 single-turn conversations.
| Domain | FMs | NotFollow | Sycophancy | Safety | Misinterpretation |
| --- | --- | --- | --- | --- | --- |
| Generic | GPT4 | 0 | 68 | 26 | 6 |
| Generic | Qwen-14b-Chat | 0 | 68 | 24 | 8 |
| Generic | ChatGPT | 0 | 79 | 17 | 4 |
| Generic | Qwen-7b-Chat | 0 | 74 | 20 | 6 |
| Generic | Baichuan2-13b-Chat | 0 | 72 | 24 | 4 |
| Generic | ChatGLM2-6b | 0 | 70 | 25 | 5 |
| Generic | Baichuan2-7b-Chat | 0 | 74 | 21 | 5 |
| Specialized | Chatmed-Consult | 20 | 48 | 18 | 14 |
| Specialized | Zhongjing | 8 | 62 | 18 | 12 |
| Specialized | HuatuoGPT | 0 | 64 | 22 | 14 |
| Specialized | MedicalGPT | 23 | 62 | 2 | 13 |
| Specialized | ChatGLM-med | 5 | 54 | 1 | 40 |
| Specialized | BenTsao | 5 | 90 | 5 | 0 |

Table 5: Error analysis of FMs on atomic knowledge.
Following previous work (Zhang et al., 2023), we add 48818 general single-turn conversations for each experiment to prevent knowledge forgetting. We adopt the ZeRO strategy to distribute the model across 4 A100 GPUs for training. The epoch, learning rate, batch_size, and maximum context length are set to 2, \(5e-5\), 128, 64, and 2048, respectively. The model saved after the second epoch is used for inference.
Figure 4 shows the performance of the FMs with different types and amounts of data. We can observe that without the introduction of medical data, the instruction-following performance of the FM is 98.94%, indicating that the generic data alone can endow FMs with good instruction-following abilities.
As the amount of data increases, all three types of data degrade the instruction-following ability of FMs, and the real-world data has the most negative effect. This may be because real-world dialogue is a conversation between equals, in which it is rare for one party to give instructions to the other; the instruction-following ability the model learns from this type of data is therefore weaker than that learned from human-machine dialogue.
Besides, real-world data contributes the least to the atomic knowledge of FMs. This may be because there is less medical knowledge in real-world doctor-patient conversations, as doctors are more concerned with diagnosis than with introducing medical knowledge to patients. Thus, how to leverage real-world data is a problem worth pondering.
In addition, semi-distilled data with 20k conversations contributes the most at that scale, yielding a performance of 39.29% and even surpassing Chatmed-Consult with 549k conversations (Table 4) by 39.29-24.50=14.79%. However, more semi-distilled data can lead to performance degradation, which may be due to the introduction of more low-quality queries in the semi-distilled data.
Furthermore, distilled data contributes the most to the self-diagnostic atomic knowledge of FMs. As the amount of distilled data increases, the performance of FMs on atomic knowledge continues to improve. This is because both the doctor and the patient in distilled data are played by advanced FMs, which are able to hold conversations involving rich medical knowledge and thus better stimulate the atomic knowledge of FMs. Therefore, further improvements in the atomic knowledge of FMs can be expected from introducing more distilled data.
## 6 Conclusion
In this paper, we designed a Self-Diagnostic Atomic Knowledge (SDAK) benchmark to quantitatively explore the extent of atomic knowledge stored in the memory of open-source Chinese medical foundation models. The benchmark contains 17 atomic types and a total of 14048 pieces of atomic knowledge. We evaluated both generic and specialized FMs and observed that generic FMs perform better than specialized FMs in terms of both atomic knowledge and instruction-following ability. In-depth analysis revealed that both generic and specialized FMs are sycophantic, and that generic FMs show stronger safety, which specialized FMs can learn through distilled data. Finally, we explored the different types of data commonly adopted
Figure 4: Performance of the LLM with different types of data.
Figure 3: Performance of FMs on various types of atomic knowledge.
by specialized FMs, e.g., distilled, semi-distilled, and real-world data. The experimental results show that distilled data benefits specialized FMs the most and real-world data contributes the least. We hope that the benchmark we constructed and our findings will contribute to the development of Chinese medical FMs.
## Acknowledgements
|
2306.02906
|
Magnetic exchange interactions at the proximity of a superconductor
|
Interfacing magnetism with superconductivity gives rise to a wonderful
playground for intertwining key degrees of freedom: Cooper pairs, spin, charge,
and spin-orbit interaction, from which emerge a wealth of exciting phenomena,
fundamental in the nascent field of superconducting spinorbitronics and
topological quantum technologies. Magnetic exchange interactions (MEI), being
isotropic or chiral such as the Dzyaloshinskii-Moriya interactions (DMI), are
vital in establishing the magnetic behavior at these interfaces as well as in
dictating not only complex transport phenomena, but also the manifestation of
topologically trivial or non-trivial objects as skyrmions, spirals,
Yu-Shiba-Rusinov states and Majorana modes. Here, we propose a methodology
enabling the extraction of the tensor of MEI from electronic structure
simulations accounting for superconductivity. We apply our scheme to the case
of a Mn layer deposited on Nb(110) surface and explore proximity-induced impact
on the MEI. Tuning the superconducting order parameter, we unveil potential
change of the magnetic order accompanied with chirality switching. Owing to its
simple formulation, our methodology can be readily implemented in
state-of-the-art frameworks capable of tackling superconductivity and
magnetism. Our findings opens intriguing exploration paths, where chirality and
magnetism can be engineered depending on the conducting nature of
magneto-superconducting interfaces. We thus foresee implications in the
simulations and prediction of topological superconducting bits as well as in
cryogenic superconducting hybrid devices involving magnetic units.
|
Uriel Allan Aceves Rodríguez, Filipe Souza Mendes Guimarães, Sascha Brinker, Samir Lounis
|
2023-06-05T14:12:40Z
|
http://arxiv.org/abs/2306.02906v2
|
# Magnetic exchange interactions at the proximity of a superconductor
###### Abstract
Interfacing magnetism with superconductivity gives rise to a wonderful playground for intertwining key degrees of freedom: Cooper pairs, spin, charge, and spin-orbit interaction, from which emerge a wealth of exciting phenomena, fundamental in the nascent field of superconducting spinorbitronics and topological quantum technologies. Magnetic exchange interactions (MEI), being isotropic or chiral such as the Dzyaloshinskii-Moriya interactions (DMI), are vital in establishing the magnetic behavior at these interfaces as well as in dictating not only complex transport phenomena, but also the manifestation of topologically trivial or non-trivial objects such as skyrmions, spirals, Yu-Shiba-Rusinov states and Majorana modes. Here, we propose a methodology enabling the extraction of the tensor of MEI from electronic structure simulations accounting for superconductivity. We apply our scheme to the case of a Mn layer deposited on a Nb(110) surface and explore the proximity-induced impact on the MEI. Tuning the superconducting order parameter, we unveil a potential change of the magnetic order accompanied by chirality switching. Owing to its simple formulation, our methodology can be readily implemented in state-of-the-art frameworks capable of tackling superconductivity and magnetism. Our findings open intriguing exploration paths, where chirality and magnetism can be engineered depending on the conducting nature of magneto-superconducting interfaces. We thus foresee implications in the simulation and prediction of topological superconducting bits as well as in cryogenic superconducting hybrid devices involving magnetic units.
## I Introduction
Despite the hostility between the superconducting and magnetic orders, together they are known to bring to life an abundance of interesting physics, such as the in-gap Majorana[1; 2], Andreev, and Yu-Shiba-Rusinov (YSR) states[3; 4; 5; 6]. These phenomena are currently in the spotlight given their potential applications in the field of topological quantum computing[7; 8; 9]. In the context of superconducting spinorbitronics, the interplay of the underlying Cooper pairs with the three electronic degrees of freedom--spin, charge and spin-orbit interaction--can trigger tantalizing opportunities for cryogenic quantum technologies.
Since Majorana zero modes are essential for topological quantum computing, numerous platforms have been proposed for their physical realization: magnetic islands[10], skyrmions[11] and spin chains[12], among others. In the latter two examples, the non-collinearity of the magnetic moments is a crucial ingredient for the emergence of the coveted in-gap states. For non-collinearity to occur, there must be competition between the magnetic interactions in the system. We thus need methods to obtain and analyze these interactions within a realistic description of the electronic structure of the given systems. Moreover, we need to understand how these interactions are affected by superconductivity and, in turn, how superconductors are influenced by the magnetic structures in their proximity.
The microscopic theory to describe conventional superconductivity goes back more than 60 years, beginning with John Bardeen, Leon Cooper, and Robert Schrieffer (BCS)[13] in 1957. Later that year Nikolai Bogoliubov provided a rigorous mathematical foundation to the BCS theory[14; 15] and shortly after, Pierre-Gilles de Gennes extended Bogoliubov's formalism to handle superconductors with surfaces, field-induced vortices and other imperfections[16]. This new approach in real space is now known as the Bogoliubov-de Gennes (BdG) method[17; 18], and it is extensively used to investigate superconducting systems with impurities, superconductor/non-superconductor heterostructures, Josephson junctions, and topological superconductors, to name some notable examples[19; 20; 21; 22]. The BdG method is a mean-field approximation that relies upon Bogoliubov-Valatin transformations that take the Hamiltonian from a particle space into a particle-hole one, and it has been used in a variety of situations from tight-binding[23; 24] to density functional theory (DFT)[25; 26]. Especially in the latter, there have been efforts to computationally analyze superconductor/non-superconductor heterostructures based on a realistic description
of the electronic structure[27; 28; 29; 19]. On the experimental front, the activity regarding magnetic/superconductor interfaces has been intense as well, with much focus on Majorana modes and other end-states on atomic chains[30; 31; 32; 33; 34; 12]. In this paper, we provide a simple and detailed demonstration of how to quantify the magnetic exchange interactions (MEI) from electronic structure simulations of realistic materials accounting for the electron-hole coupling channels originating from the superconducting order in the BdG method, spin-orbit coupling and multi-orbital hybridization phenomena. We thus go beyond the basic models suggested in the past[35; 36]. Furthermore, we apply the proposed methodology to a magnetic interface with a superconducting substrate: a Mn monolayer deposited on a Nb(110) surface.
This paper is organized as follows: In Subsection II.1, we introduce the multi-orbital tight-binding theory necessary for the incorporation of the BdG equations. Subsection II.2 contains a discussion and description of the theoretical formalism enabling the quantification of the bilinear tensor of magnetic exchanges in the context of the BdG method. We present a simple formula to calculate the bilinear magnetic exchanges within the Green function formalism in the particle-hole space, and establish how this formula reduces to the one for the metallic case[37; 38] in the absence of superconductivity. In Section III, a prototypical system composed of one monolayer of Mn (110) on top of a 5-atom-thick slab of Nb (110) is used as a proof of concept to which we apply our theory. We perform self-consistent calculations to investigate the effect that the superconductivity of Nb has on the magnetic properties of the Mn atoms and vice versa. We find the magnetic ground state of the Mn monolayer to be row-wise antiferromagnetic, agreeing with recent experimental and theoretical results[39]. We proceed to test a wide range of electron-phonon coupling strengths, which directly influence the size of the superconducting gap, evaluating the resulting self-consistent gap parameters and the corresponding magnetic moments in Mn. The effect of superconductivity on the Heisenberg exchange interactions for different gap sizes is scrutinized in Subsection III.1. We find that for a realistic value of the superconducting gap, the change induced by superconductivity is minimal and therefore does not have important repercussions. However, for gap values of the order of the Heisenberg exchange, we observe a change in the magnetic ground state, from row-wise antiferromagnetic to ferromagnetic. Finally, in Subsection III.2, we focus on the Dzyaloshinskii-Moriya interaction (DMI) and observe that the corrections due to superconductivity are also relatively small in this case.
Nevertheless, we notice that in our case the chirality of the corrective term is opposite to that of the non-superconducting case. Utilizing a multiple-scattering expansion, we identify how the intertwining of the superconducting parameter, intra-atomic spin-orbit coupling and exchange interactions impacts the sign of the Heisenberg exchange as well as of the DMI.
## II Theoretical description
To investigate the characteristics and effects of superconducting magnetic structures, we explored a system consisting of a slab of Nb (110) with a thickness of 5 layers, with a monolayer of Mn on top, as shown in Figs. 1(a) and (b). We chose Nb as a substrate given its large superconducting gap[40] (\(2\Delta=3.8\,\)meV), critical temperature of \(T_{C}=9.3\,\)K[40], and most importantly, given the development of recent experimental techniques to fabricate clean surfaces of Nb (110)[41]. The latter work led to a breakthrough, and since its publication the Nb(110) surface became a standard playground for the exploration of potential Majorana and YSR states hosted by adatoms[42; 43; 26; 44], nanowires[45; 12; 34] and thin films[46; 47; 39].
### Tight-binding and the Bogoliubov-de Gennes method
The magnetic and superconducting system may be described by the Hamiltonian
\[H_{S}=\frac{1}{N}\left\{\sum_{ij,\sigma\eta,\mu\nu,\mathbf{k}}H^{\mu\nu}_{ij,\sigma \eta}(\mathbf{k})c^{\dagger}_{i\mu\sigma}(\mathbf{k})c_{j\nu\eta}(\mathbf{k})-\sum_{i,\mu \nu,\mathbf{k}\mathbf{k}^{\prime}}\lambda_{i\mu}c^{\dagger}_{i\mu\uparrow}(\mathbf{k})c^ {\dagger}_{i\mu\downarrow}(-\mathbf{k})c_{i\mu\downarrow}(-\mathbf{k}^{\prime})c_{i \mu\uparrow}(\mathbf{k}^{\prime})\right\}, \tag{1}\]
with \(c^{\dagger}_{i\mu\sigma}(\mathbf{k})\) and \(c_{j\nu\eta}(\mathbf{k})\) being the creation and annihilation operators of electrons with wave vector \(\mathbf{k}\) and spin \(\sigma\) in orbital \(\mu\) of layer \(i\), and spin \(\eta\) in orbital \(\nu\) of layer \(j\), respectively. \(N\) is the number of wave vectors in the Brillouin zone. The second term on the right-hand side corresponds to the BCS term, allowing electrons to form Cooper pairs and, therefore, to give rise to superconductivity. Its strength, given by \(\lambda_{i\mu}\in\mathbb{R}\), originates from the electron-phonon coupling and may depend on the orbital \(\mu\) of layer \(i\). \(H^{\mu\nu}_{ij,\sigma\eta}(\mathbf{k})\) is the non-superconducting Hamiltonian, which can be further separated into
\[H^{\mu\nu}_{ij,\sigma\eta}(\mathbf{k})=H^{0\mu\nu}_{ij}(\mathbf{k})\sigma^{0}+\mathbf{ \sigma}\cdot\hat{\mathbf{e}}_{i}B^{[\mathrm{xc}]\mu\nu}_{i}(\mathbf{k})\delta_{ij}+ \mathbf{\sigma}\cdot\mathbf{B}^{[\mathrm{soc}]\mu\nu}_{i}(\mathbf{k})\delta_{ij}, \tag{2}\]
where \(H_{ij}^{0\mu\nu}\) is the spin-independent tight-binding term, the second term comprises the intra-atomic exchange interactions (originating from a Hubbard-like contribution[48; 49]), and the last term describes the spin-orbit interaction. The hopping parameters for Nb and Mn were obtained from first-principles calculations from Ref. [50].
In the mean-field approximation, Eq. (1) simplifies to
\[\begin{split} H_{S}^{\text{MF}}=&\frac{1}{N}\sum_{ \mathbf{k}}\left\{\sum_{ij,\sigma\eta,\mu\nu}H_{ij,\sigma\eta}^{\mu\nu}(\mathbf{k})c_{ i\mu\sigma}^{\dagger}(\mathbf{k})c_{j\nu\eta}(\mathbf{k})\right.\\ &-\left.\sum_{i,\mu}\left(\Delta_{i\mu}^{*}c_{i\mu\downarrow}(- \mathbf{k})c_{i\mu\uparrow}(\mathbf{k})+\Delta_{i\mu}c_{i\mu\uparrow}^{\dagger}(\mathbf{ k})c_{i\mu\downarrow}^{\dagger}(-\mathbf{k})\right)\right\},\end{split} \tag{3}\]
with
\[\Delta_{i\mu}=\lambda_{i\mu}\frac{1}{N}\sum_{\mathbf{k}}\langle c_{i\mu\downarrow}(-\mathbf{k})c_{i\mu\uparrow}(\mathbf{k})\rangle,\quad\Delta_{i\mu}^{*}=\lambda_{i\mu}\frac{1}{N}\sum_{\mathbf{k}}\langle c_{i\mu\uparrow}^{\dagger}(\mathbf{k})c_{i\mu\downarrow}^{\dagger}(-\mathbf{k})\rangle, \tag{4}\]
Figure 1: (a) Side view of a slab of five Nb (110) layers with a Mn monolayer on top; blue spheres represent Nb atoms, lilac spheres Mn. (b) Top view. Each layer has the structure of a centered rectangular lattice. The arrows show the magnetic ground state, which for this case is row-wise antiferromagnetic. (c) Schematics of different Green functions after perturbative expansion with respect to the superconducting order parameter. (d) Superconducting gap parameter as a function of \(\lambda\); the vertical dotted line is at \(3.264\,\)eV and indicates the lowest value of \(\lambda\) for which TITAN converged to a state with a finite \(\Delta\). All calculations were done at \(4.2\,\)K with 15,000 k-points. (e) Magnetic moments of the Mn layer as a function of \(\lambda\).
\(\Delta_{i\mu}\) is known as the superconducting gap parameter. For clean superconductors it is half of the superconducting gap, as it defines the necessary energy to scatter Cooper pairs (which live at the Fermi level)[51]. It is important to note that for each choice of \(\lambda_{i\mu}\) and \(B_{i}^{[\text{xc}]\mu\nu}\) the final values for \(\Delta_{i\mu}\) and the magnetic moments \(\mathbf{m}^{\mu}\) in the ground state are obtained self-consistently. This means that even though it seems to be linearly proportional to \(\lambda_{i\mu}\), this is not the case in practice as seen in Fig. 1 (d). This leads to a more realistic characterization of materials as no determined state is enforced to the system, and they can then evolve into their own ground state. In this work, we restrict ourselves to only two different values for \(\lambda_{i\mu}\), namely, a constant \(\lambda_{i\mu}=\lambda\) for all the orbitals of the Nb layers, and \(\lambda_{i\mu}=0\) for all \(\mu\) in the Mn one.
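To illustrate the self-consistency implied by Eq. (4), the following toy sketch iterates the zero-temperature gap equation for a single-band tight-binding chain; it is a schematic illustration under simplifying assumptions (one orbital, no spin-orbit coupling, no exchange field), not the multi-orbital implementation used here.

```python
# Toy self-consistency loop for the superconducting gap parameter, Eq. (4),
# for a single-band tight-binding chain at zero temperature.  Schematic only.
import numpy as np

def self_consistent_gap(lam=1.2, t=1.0, mu=0.0, nk=2000, tol=1e-10, max_iter=500):
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    xi = -2.0 * t * np.cos(k) - mu            # normal-state dispersion measured from E_F
    delta = 0.1                                # initial guess for the gap parameter
    for _ in range(max_iter):
        E = np.sqrt(xi**2 + delta**2)          # BdG quasiparticle energies
        # zero-temperature anomalous average <c_down(-k) c_up(k)> = delta / (2 E_k)
        new_delta = lam * np.mean(delta / (2.0 * E))
        if abs(new_delta - delta) < tol:
            break
        delta = new_delta
    return delta

print(self_consistent_gap())                   # converged gap for this toy chain
```

As in the realistic calculation, the converged gap is not simply proportional to the coupling strength, since the quasiparticle energies entering the average themselves depend on the gap.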
To diagonalize the Hamiltonian in Eq. (3) we use a Bogoliubov-Valatin transformation[18], thus transferring the original Hamiltonian from an electron representation to an electron-hole one. The transformation is given by
\[c_{i\mu\sigma}(\mathbf{k})=\sum_{n}{}^{\prime}\,u_{i\sigma}^{n}(\mathbf{k})\gamma_{n}+v_{i\sigma}^{n*}(\mathbf{k})\gamma_{n}^{\dagger},\quad c_{i\mu\sigma}^{\dagger}(\mathbf{k})=\sum_{n}{}^{\prime}\,u_{i\sigma}^{n*}(\mathbf{k})\gamma_{n}^{\dagger}+v_{i\sigma}^{n}(\mathbf{k})\gamma_{n}, \tag{5}\]
where the prime indicates that the sums run only over the states with positive energy[18; 52]. This restriction is imposed to counteract the doubling of the degrees of freedom originating from the change of basis. After the transformation, we arrive at a system where the new particles (sometimes called _bogolons_[53]) are mixtures of electron and hole operators. The transformation in Eq. (5) is canonical; this means that the new operators \(\gamma_{n}\) and \(\gamma_{n}^{\dagger}\) fulfill the same anticommutation relations as \(c_{n}\) and \(c_{n}^{\dagger}\), namely
\[\{\gamma_{n},\gamma_{m}\}=\{\gamma_{n}^{\dagger},\gamma_{m}^{\dagger}\}=0, \quad\{\gamma_{n}^{\dagger},\gamma_{m}\}=\delta_{nm}. \tag{6}\]
After the transformation, we arrive at a set of equations of the form[18]
\[\sum_{j\mu}H_{\text{BdG}}^{ij,\mu\nu}(\mathbf{k})\phi_{j\mu}(\mathbf{k})=E_{n}(\mathbf{k} )\phi_{i\nu}(\mathbf{k}), \tag{7}\]
where the Bogoliubov-de Gennes Hamiltonian is given by
\[H_{\text{BdG}}^{ij,\mu\nu}(\mathbf{k})=\begin{pmatrix}H_{ij,\uparrow\uparrow}^{ \mu\nu}(\mathbf{k})-E_{F}&H_{ij,\uparrow\downarrow}^{\mu\nu}(\mathbf{k})&0&-\Delta_{i \mu}\mathbb{I}\\ H_{ij,\downarrow\uparrow}^{\mu\nu}(\mathbf{k})&H_{ij,\downarrow\downarrow}^{\mu\nu }(\mathbf{k})-E_{F}&\Delta_{i\mu}\mathbb{I}&0\\ 0&\Delta_{i\mu}^{*}\mathbb{I}&-H_{ij,\uparrow\uparrow}^{\mu\nu*}(-\mathbf{k})+E_{ F}&-H_{ij,\uparrow\downarrow}^{\mu\nu*}(-\mathbf{k})\\ -\Delta_{i\mu}^{*}\mathbb{I}&0&-H_{ij,\downarrow\uparrow}^{\mu\nu*}(-\mathbf{k})&- H_{ij,\downarrow\downarrow}^{\mu\nu*}(-\mathbf{k})+E_{F}\end{pmatrix}. \tag{8}\]
It is important to notice that, due to the transformation given in Eq. (5), the hole-space blocks have their wave vector arguments changed from \(\mathbf{k}\) to \(-\mathbf{k}\). The eigenvector of Eq. (7) is
\[\phi_{i\nu}(\mathbf{k})=\begin{pmatrix}u_{i\nu\uparrow}(\mathbf{k})\\ u_{i\nu\downarrow}(\mathbf{k})\\ v_{i\nu\uparrow}(\mathbf{k})\\ v_{i\nu\downarrow}(\mathbf{k})\end{pmatrix}. \tag{9}\]
\(E_{F}\) in Eq. (8) is the Fermi energy and it is placed in the diagonal such that the Fermi level is at zero for the BdG system.
Structurally, we can consider \(H^{ij,\mu\nu}_{\text{BdG}}\) as subdivided into four parts; namely, blocks of electron-electron, electron-hole, hole-electron, and hole-hole interactions. The main diagonal consists of non-hybrid interactions, while the antidiagonal blocks contain only the superconducting gap parameter, which hybridizes electrons and holes. For simplicity in the discussions, we break down the BdG Hamiltonian as follows
\[H^{\text{ee}}_{ij,\mu\nu}(\mathbf{k}) =\begin{pmatrix}H^{\mu\nu}_{ij,\uparrow\uparrow}(\mathbf{k})-E_{F}&H^ {\mu\nu}_{ij,\uparrow\downarrow}(\mathbf{k})\\ H^{\mu\nu}_{ij,\downarrow\uparrow}(\mathbf{k})&H^{\mu\nu}_{ij,\downarrow\downarrow} (\mathbf{k})-E_{F}\end{pmatrix}, H^{\text{eh}}_{ij,\mu\nu} =\begin{pmatrix}0&-\Delta_{i\mu}\mathbb{I}\\ \Delta_{i\mu}\mathbb{I}&0\end{pmatrix},\] \[H^{\text{hh}}_{ij,\mu\nu}(\mathbf{k}) =\begin{pmatrix}-H^{\mu\nu*}_{ij,\uparrow\uparrow}(-\mathbf{k})+E_{F }&-H^{\mu\nu*}_{ij,\uparrow\downarrow}(-\mathbf{k})\\ -H^{\mu\nu*}_{ij,\downarrow\uparrow}(-\mathbf{k})&-H^{\mu\nu*}_{ij,\downarrow \downarrow}(-\mathbf{k})+E_{F}\end{pmatrix}, H^{\text{he}}_{ij,\mu\nu} =\begin{pmatrix}0&\Delta^{*}_{i\mu}\mathbb{I}\\ -\Delta^{*}_{i\mu}\mathbb{I}&0\end{pmatrix}.\]
To avoid confusion from handling too many indices we will drop them for these submatrices whenever the context allows it. Thus, we write the BdG Hamiltonian in the following form:
\[H_{\text{BdG}}=\begin{pmatrix}H^{\text{ee}}&H^{\text{eh}}\\ H^{\text{he}}&H^{\text{hh}}\end{pmatrix}. \tag{10}\]
From Eq. (10), we can obtain the corresponding retarded Green function via
\[G_{\text{BdG}}(\mathbf{k},E+i\eta)=(E-H_{\text{BdG}}(\mathbf{k})+i\eta)^{-1}. \tag{11}\]
\(G_{\text{BdG}}(E+i\eta)\) in turn is a matrix that for the sake of simplicity we also consider as subdivided into four blocks as in Eq. (10)
\[G_{\text{BdG}}=\begin{pmatrix}G^{\text{ee}}&G^{\text{eh}}\\ G^{\text{he}}&G^{\text{hh}}\end{pmatrix}. \tag{12}\]
When the system is not superconducting (\(\Delta_{i\mu}=0\)), the Hamiltonian given in Eq. (10) becomes diagonal in particle-hole space (i.e., \(H^{\text{eh}}\) and \(H^{\text{he}}\) vanish). Consequently, \(G_{\text{BdG}}\) is also block diagonal (\(G^{\text{eh}}=G^{\text{he}}=0\)), and the non-superconducting system is described by \(G^{\text{ee}}\).
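As a schematic illustration of Eqs. (8)-(12), the snippet below assembles a toy BdG Hamiltonian from a random Hermitian electron block and a scalar gap, inverts it to obtain the retarded Green function, and extracts the four particle-hole blocks. The dimensions, numerical values, and the single-k-point simplification are assumptions for illustration, not the realistic multi-orbital parametrization used in this work.

```python
# Toy assembly of the BdG Hamiltonian (Eq. (10)) and its retarded Green
# function (Eqs. (11)-(12)) at a single k point.  All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_orb = 3                                   # sites/orbitals; spin handled explicitly
n = 2 * n_orb                               # electron-space dimension (orbital x spin)

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H_ee = (A + A.conj().T) / 2                 # toy Hermitian electron block (E_F subtracted)
H_hh = -H_ee.conj()                         # hole block, -H^ee*(-k) (k-dependence dropped)

delta = 0.05
i_sigma_y = np.array([[0.0, -1.0], [1.0, 0.0]])      # spin structure of the pairing in Eq. (8)
H_eh = delta * np.kron(np.eye(n_orb), i_sigma_y)     # pairs (k, up) with (-k, down)
H_he = H_eh.conj().T                                 # keeps the BdG Hamiltonian Hermitian

H_bdg = np.block([[H_ee, H_eh],
                  [H_he, H_hh]])

E, eta = 0.0, 1e-3                          # energy and small broadening
G_bdg = np.linalg.inv((E + 1j * eta) * np.eye(2 * n) - H_bdg)

G_ee, G_eh = G_bdg[:n, :n], G_bdg[:n, n:]   # particle-hole blocks of Eq. (12)
G_he, G_hh = G_bdg[n:, :n], G_bdg[n:, n:]
print(np.linalg.norm(G_eh))                 # vanishes when delta = 0
```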
### Bilinear magnetic exchange tensor and the BdG method
Mapping the magnetic interactions of a realistic system into model Hamiltonians gives us the possibility of isolating different phenomena and analysing each of them separately. In our case, we use the Heisenberg model, represented by
\[H_{\rm Hb}=-\frac{1}{2}\sum_{ij}\hat{\mathbf{e}}_{i}\cdot\mathcal{J}_{ij}\cdot\hat{ \mathbf{e}}_{j}, \tag{13}\]
where \(\hat{\mathbf{e}}_{i}\) is the direction of the magnetization at location \(i\) and \(\mathcal{J}_{ij}\) is the bilinear tensor of magnetic exchanges. This Hamiltonian can be divided into three terms with different symmetries[54]
\[J_{ij}=\frac{\mathrm{Tr}(\mathcal{J}_{ij})\mathbb{I}_{3}}{3},\quad J_{ij}^{ \rm s}=\frac{\mathcal{J}_{ij}+\mathcal{J}_{ji}}{2}-\frac{\mathrm{Tr}(\mathcal{ J}_{ij})\mathbb{I}_{3}}{3},\quad D_{ij}=\frac{\mathcal{J}_{ij}-\mathcal{J}_{ji}}{2}. \tag{14}\]
Using these definitions, Eq. (13) can also be represented as

\[H_{\rm Hb}=-\frac{1}{2}\sum_{ij}J_{ij}\hat{\mathbf{e}}_{i}\cdot\hat{\mathbf{e}}_{j}-\frac{1}{2}\sum_{ij}\hat{\mathbf{e}}_{i}\cdot J_{ij}^{\rm s}\cdot\hat{\mathbf{e}}_{j}-\frac{1}{2}\sum_{ij}\mathbf{D}_{ij}\cdot(\hat{\mathbf{e}}_{i}\times\hat{\mathbf{e}}_{j}). \tag{15}\]
The first term on the right-hand side (Heisenberg exchange) favours ferro- or antiferromagnetic alignments depending on the sign of J. The second one is the traceless anisotropic part of \(\mathcal{J}_{ij}\) induced by spin-orbit coupling. Finally, the last item in Eq. (15) corresponds to the DMI, which is finite when inversion symmetry is broken and requires spin-orbit coupling. It may induce a relative rotation in the magnetic moments of neighbouring atoms and is vital for the stabilization of magnetic textures such as skyrmions[55; 56; 57]. In this work we focus only on the Heisenberg exchange and the DMI, since \(J_{ij}^{\rm s}\) is negligible.
The components of the DMI are extracted as follows[54]
\[D_{ij}=\left(\begin{array}{ccc}0&D_{z}&-D_{y}\\ -D_{z}&0&D_{x}\\ D_{y}&-D_{x}&0\end{array}\right). \tag{16}\]
Having a realistic representation of the magnetic interactions of the system through these terms is instrumental for the development of upcoming technologies that rely on magnetism, such as spintronic devices[58] with magnetic domain walls[59], spintronic diodes[60; 61], and superconductor/ferromagnet systems for quantum computing[62]. Although the purely magnetic scenario has been intensely investigated, that is no longer the case when the
structure contains a superconductor. Here, Cooper pairs enter the picture through the coupling between the electronic and hole states, which is not taken into account by the basic theory, developed exclusively for metals. To analyze those cases we need to account for the potential mutual impact of superconductivity and magnetism.
There are several techniques to obtain the tensor \(\mathcal{J}_{ij}\) from electronic structure simulations, one of the most common being the infinitesimal rotations method[37], which presents a way to map energies from the electronic structure into energies on an extended Heisenberg model. The basic idea behind this approach is to perturb the magnetic moments at two different locations \(i\) and \(j\), and quantify the resultant change in energy. To get the bilinear tensor of magnetic exchanges \(\mathcal{J}_{ij}\) we take the second order term of the energy change. For a non-superconducting system, such change (given a perturbation potential \(\delta V\)) is represented by[37; 38; 63; 64]
\[\delta E=-\frac{1}{\pi}\mathrm{Im}\int_{-\infty}^{\epsilon_{F}}d\epsilon\sum _{p}\frac{1}{p}\mathrm{Tr}[G(\epsilon)\delta V]^{p}. \tag{17}\]
where \(p\) describes the order of the expansion and \(\mathrm{Tr}\) is the trace over the site, spin and orbital spaces. The extension of the matrix space by the Bogoliubov-de Gennes equations imposes changes on the Green functions as well as on the perturbation potential. Within this formalism Eq. (17) becomes
\[\delta E=\frac{1}{\pi}\mathrm{Im}\int_{-\infty}^{\epsilon_{F}}d\epsilon\sum _{p}\frac{1}{p}\;\mathrm{tr}\left\{\Theta[G_{\mathrm{BdG}}(\epsilon)\delta V]^ {p}\right\}. \tag{18}\]
where the new trace (tr) runs over electron-hole space in addition to the site, spin and orbital ones. \(\Theta\) is a matrix in electron-hole space, whose function in Eq. (18) is to isolate the purely electronic terms, and it is given by
\[\Theta=\begin{pmatrix}\mathbb{I}&0\\ 0&0\end{pmatrix}. \tag{19}\]
The tensor of bilinear magnetic exchanges is obtained by taking the second-order term of the expansion of Eq. (18), that is
\[\mathcal{J}_{ij}=-\frac{\partial^{2}E_{ij}}{\partial\hat{\mathbf{e}}_{i}\partial \hat{\mathbf{e}}_{j}} \tag{20}\]
as expected from Eq. 13, with
\[\delta E_{ij}=\frac{1}{2\pi}\mathrm{Im}\,\mathrm{tr}\int_{-\infty}^{\epsilon_{F}}d\epsilon\,\Theta\,G_{\mathrm{BdG}}(\epsilon)\delta V\,G_{\mathrm{BdG}}(\epsilon)\delta V. \tag{21}\]
Here, the shape of the perturbation potential \(\delta V\) must also be generalized by varying the magnetization orientation vector \(\hat{\mathbf{e}}_{i}\) in the BdG Hamiltonian in Eq. (8). This results in
\[\delta V=\begin{pmatrix}\delta V^{e}&0\\ 0&\delta V^{h}\end{pmatrix}, \tag{22}\]
where
\[\delta V^{h}_{i}=-\delta V^{e*}_{i}=-(B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_ {i})^{*}. \tag{23}\]
Collecting these results into Eq. (21), we obtain
\[\begin{split}\mathcal{J}_{ij}=\delta E_{ij}=-\frac{1}{2\pi}{\rm Im Tr _{Ls}}\int_{-\infty}^{\epsilon_{F}}& d\epsilon\,\left[B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_{i}G^{\rm ee}_{ ij}(\epsilon)B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_{j}G^{\rm ee}_{ji}( \epsilon)\right.\\ &\left.-B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_{i}G^{\rm eh}_{ij}( \epsilon)B^{\rm[xc]*}\mathbf{\sigma}^{*}\cdot\delta\mathbf{e}_{j}G^{\rm he}_{ji}( \epsilon)\right].\end{split} \tag{24}\]
For non-superconducting systems, \(G^{\rm he}_{ji}(\epsilon)\) and \(G^{\rm eh}_{ji}(\epsilon)\) vanish and so does the second term on the right-hand side of Eq. (24), and we recover the usual expression for the metallic case. Nevertheless, it is important to notice that when \(\Delta>0\), not only is the second term finite, but the first term also gets renormalized by the presence of the superconducting gap. When analyzing the resulting MEI it is often useful to separate the two terms on the right-hand side of Eq. (24); here we do so in the following form
\[\begin{split}\mathcal{J}_{ij}&=\mathcal{J}^{\rm ee}_{ij}+\mathcal{J}^{\rm eh}_{ij},\\ \mathcal{J}^{\rm ee}_{ij}&=-\frac{1}{2\pi}{\rm Im\,Tr_{Ls}}\int_{-\infty}^{\epsilon_{F}}d\epsilon\,B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_{i}\,G^{\rm ee}_{ij}(\epsilon)\,B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_{j}\,G^{\rm ee}_{ji}(\epsilon),\\ \mathcal{J}^{\rm eh}_{ij}&=\frac{1}{2\pi}{\rm Im\,Tr_{Ls}}\int_{-\infty}^{\epsilon_{F}}d\epsilon\,B^{\rm[xc]}\mathbf{\sigma}\cdot\delta\mathbf{e}_{i}\,G^{\rm eh}_{ij}(\epsilon)\,B^{\rm[xc]*}\mathbf{\sigma}^{*}\cdot\delta\mathbf{e}_{j}\,G^{\rm he}_{ji}(\epsilon).\end{split} \tag{25}\]
Utilizing perturbation theory, we expect a second order correction to \(G^{\rm ee}\) and \(G^{\rm hh}\) due to superconductivity, while the electron-hole parts of the Green function \(G^{\rm eh}\) and \(G^{\rm he}\) would at least experience a first-order correction involving a possible sign change (see Fig. 1 (c)), which could counteract the electron-electron contribution to the magnetic exchange interaction.
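To make Eq. (25) concrete, the sketch below numerically integrates the electron-electron and electron-hole contributions on a real-energy grid below the Fermi level for a two-site, single-orbital toy BdG model. The model parameters, the restriction to a spin-only trace, and the simple grid integration are assumptions for illustration, not the scheme applied to Mn/Nb(110).

```python
# Schematic evaluation of the two contributions to the exchange in Eq. (25)
# for a two-site, single-orbital toy BdG model (spin trace only).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_dot(e):
    return e[0] * sx + e[1] * sy + e[2] * sz

# toy two-site BdG model: hopping t, exchange field B along z, s-wave gap delta
t, B_xc, delta, eta = 1.0, 0.5, 0.2, 1e-2
H_ee = np.kron(np.array([[0.0, t], [t, 0.0]]), np.eye(2)) + np.kron(np.eye(2), B_xc * sz)
i_sigma_y = np.array([[0.0, -1.0], [1.0, 0.0]])
H_eh = delta * np.kron(np.eye(2), i_sigma_y)
H = np.block([[H_ee, H_eh], [H_eh.conj().T, -H_ee.conj()]])

def G_blocks(eps):
    """Return the (i=0, j=1) spin blocks of G^ee_ij, G^ee_ji, G^eh_ij, G^he_ji."""
    G = np.linalg.inv((eps + 1j * eta) * np.eye(8) - H)
    Gee, Geh, Ghe = G[:4, :4], G[:4, 4:], G[4:, :4]
    return Gee[0:2, 2:4], Gee[2:4, 0:2], Geh[0:2, 2:4], Ghe[2:4, 0:2]

# energy integration of Eq. (25) up to E_F = 0 for transverse deviations along x
e_i = e_j = np.array([1.0, 0.0, 0.0])
V_i, V_j = B_xc * pauli_dot(e_i), B_xc * pauli_dot(e_j)
V_j_star = B_xc * pauli_dot(e_j).conj()

energies = np.linspace(-6.0, 0.0, 600)
d_eps = energies[1] - energies[0]
ee_sum, eh_sum = 0.0 + 0.0j, 0.0 + 0.0j
for eps in energies:
    G_ee_ij, G_ee_ji, G_eh_ij, G_he_ji = G_blocks(eps)
    ee_sum += np.trace(V_i @ G_ee_ij @ V_j @ G_ee_ji) * d_eps
    eh_sum += np.trace(V_i @ G_eh_ij @ V_j_star @ G_he_ji) * d_eps

J_ee = -np.imag(ee_sum) / (2.0 * np.pi)   # electron-electron contribution
J_eh = +np.imag(eh_sum) / (2.0 * np.pi)   # electron-hole (anomalous) contribution
print(J_ee, J_eh, J_ee + J_eh)
```

Setting delta to zero in this toy reproduces the purely metallic result, since the anomalous blocks of the Green function then vanish and only J_ee survives.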
## III Results
To ascertain how the superconducting gap on the material influences the magnetic states, we selected a large range of values for \(\lambda\) on the Nb slab, ranging from \(2.45\,\)eV to \(8.16\,\)eV. The calculations were performed with a broadening of the energy levels of \(0.113\,\)meV, which
should mimic a temperature of \(4.2\,\mathrm{K}\). For bulk Nb at this temperature and \(\lambda=2.45\,\mathrm{eV}\), we found a superconducting gap parameter of \(\approx 1.8\,\mathrm{meV}\), which is close to the realistic values found by experiments (from \(\approx 1.41\,\mathrm{meV}\) to \(\approx 1.57\,\mathrm{meV}\)[65]).
In Fig. 1 (d), we display the resulting self-consistent gap parameters \(\Delta\) at the Nb layer adjacent to the Mn layer, attained for each value of \(\lambda\). For this system, we observe that the coupling strength needed to open the superconducting gap is larger than in bulk Nb, for which \(\lambda=2.45\,\mathrm{eV}\) already opens a gap. Here, the lowest electron-phonon coupling strength that produces a superconducting state is \(3.26\,\mathrm{eV}\), resulting in a gap parameter of \(\Delta=6.25\,\mathrm{meV}\). The resulting growth of the self-consistent \(\Delta\) with respect to \(\lambda\) is not linear, and although the coupling strength is on the order of eV, the resulting gaps are of the same order of magnitude as the ones reported experimentally[39]. We note that this is enabled by the self-consistency of our simulations; a one-shot calculation leads to a gap of the same order as \(\lambda\).
We show the immediate impact superconductivity has on the magnetic moments of the Mn atoms in Fig. 1 (e). The magnetic moment experiences a total change of about \(0.25\)\(\mu_{B}\) starting from \(3.80\)\(\mu_{B}\) in the metallic regime and intriguingly increases to reach a maximum of \(4.07\)\(\mu_{B}\) when the gap is about \(5.30\,\mathrm{meV}\) before experiencing a decrease for larger gaps. This observed behavior is induced by the non-trivial impact of \(\lambda\), simulating here the electron-phonon coupling, on the electronic structure.
### Symmetric magnetic exchange
According to our convention in Eq. (15), a positive \(J_{ij}\) favours a parallel alignment as it leads to lower (negative) energies, while a negative \(J_{ij}\) favours anti-parallel alignment. The red points in Fig. 2 (a) represent the MEI for the normal (non-superconducting) case. While the strongest interaction coming from the nearest neighbour interaction is antiferromagnetic, the next-nearest neighbour interactions are ferromagnetic. Since the crystal lattice for Mn(110) is centered rectangular, these MEI lead to a magnetic ground state which is row-wise antiferromagnetic as shown in Fig. 1 (b). In Fig. 2 (a) we also display the Heisenberg exchange for the system when superconductivity is present (blue points). In this case, \(\lambda=3.26\,\mathrm{eV}\), the smallest value in the investigated grid that leads to a finite gap parameter (\(\Delta=6.25\,\mathrm{meV}\); as a reference, the measured value for bulk Nb is in a range from \(\approx 1.41\,\mathrm{meV}\) to \(1.57\,\mathrm{meV}\)[65]).
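As a minimal illustration of this sign convention (our own sketch, not part of the calculations above), the snippet below evaluates a pairwise energy of the assumed form \(E_{ij}=-J_{ij}\,\mathbf{e}_{i}\cdot\mathbf{e}_{j}\); the precise prefactors of Eq. (15) are not reproduced here.

```python
import numpy as np

# Minimal illustration of the sign convention, assuming a pairwise energy of the
# hypothetical form E_ij = -J_ij * (e_i . e_j); prefactors of Eq. (15) may differ.
def pair_energy(J_ij, e_i, e_j):
    return -J_ij * np.dot(e_i, e_j)

e_up = np.array([0.0, 0.0, 1.0])
e_down = np.array([0.0, 0.0, -1.0])

for J in (+1.0, -1.0):   # illustrative values in meV
    E_par, E_anti = pair_energy(J, e_up, e_up), pair_energy(J, e_up, e_down)
    favoured = "parallel" if E_par < E_anti else "antiparallel"
    print(f"J_ij = {J:+.1f} meV:  E_parallel = {E_par:+.1f},  E_antiparallel = {E_anti:+.1f}  ->  favours {favoured}")
```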
The difference between the interactions in the normal and superconducting systems is presented in Fig. 2 (b). The maximum change in the symmetric part of the MEI is on the order of meV, which is small compared to the values of \(J_{ij}\) themselves. Interestingly, enabling superconductivity at the interface does not have a uniform impact on the magnetic exchange interactions as a function of distance. At short distances, a small decrease in the absolute value of the magnetic interaction is identified. For the particular interface investigated here, however, the magnetic ground state remains unperturbed and the row-wise antiferromagnetic ordering in the Mn monolayer prevails, even after including effects derived from superconductivity. This is consistent with the work of lo Conte et al.[39], who theoretically obtained a row-wise antiferromagnetic ground state for the non-superconducting case and experimentally observed that this ground state persists even when the Nb(110) slab is superconducting.
To uncover the possible effects that superconductivity may cause in the magnetic states, we artificially increase \(\lambda\) to \(8.16\,\mathrm{eV}\) such that \(\Delta=295\,\mathrm{meV}\). Although the gap is large, the magnetic moments on the Mn layer (\(3.75\mu_{\mathrm{B}}\)) are still on the order of the ones in the non-superconducting case (\(3.80\mu_{\mathrm{B}}\). See Fig. 1 (e)). Despite the conditions for the systems being artificial, the observed effects might be relevant for situations in which the exchange energy is of the same order of magnitude as the gap parameter or smaller. The resulting Heisenberg interaction is displayed in Fig. 2 (c), both for the normal (red) and superconducting (blue) states. Their differences can be seen in Fig. 2 (d), in which we recognize crucial changes--the largest one being the nearest neighbour interaction that changes sign and switches from antiferromagnetic to ferromagnetic. This leads to a dramatic impact on the magnetic ground state, which switches from the row-wise antiferromagnetic to a ferromagnetic one.
While the previous figures show the extreme cases of low and high superconductivity in relation to the magnetic interactions, Figs. 3 (a)-(c) present a complete depiction of how the Heisenberg exchange as a function of the distance is affected by the value of \(\lambda\). The electron-hole contribution to the Heisenberg exchange for \(\Delta=0\) vanishes, as expected, and hence the total Heisenberg exchange is given only by the electron-electron one. Additionally, we notice that the electron-hole part is mostly negative, favouring antiferromagnetism. When focusing on the first two nearest neighbors (given by the first two columns of each plot), we detect a change in the magnetic ground state, from antiferromagnetic (blue) alignment to
ferromagnetic (red). It is interesting to note that this does not originate from the electron-hole contribution, but rather from the changes experienced by the electron-electron term. These changes are caused by the shift of the bands as \(\Delta\) increases.
Performing a multiple-scattering expansion of the Green functions involved in defining the magnetic exchange interactions, we end up with the systematic corrections illustrated in Fig. 4 (a)-(d). For the symmetric part, we expect a second order correction due to the superconducting order parameter. Therefore, the electron-electron part of the Heisenberg exchange is modified by interference effects arising from the electron-hole and hole-electron propagation when mediating the magnetic interaction between two sites \(i\) and \(j\). The electron-hole part, however, inherently involves the scattering of a hole at the intra-atomic exchange of a given site \(j\). Such a hole-scattering is automatically accompanied by a minus sign, which could then explain the tendency of the electron-hole part to favor antiferromagnetism.
Figure 2: (a) Symmetric exchange of the normal (red) vs superconducting (\(\Delta=6.25\,\mathrm{meV}\), blue) states. (b) Difference between the symmetric magnetic exchanges in the normal (\(J_{N}\)) vs the superconducting regime (\(J_{S}\)), when \(\Delta=6.25\,\mathrm{meV}\). (c) Symmetric exchange of the normal vs superconducting (\(\Delta=295\,\mathrm{meV}\)) states. (d) Difference between \(J_{N}\) and \(J_{S}\) when \(\Delta=295\,\mathrm{meV}\). All calculations were done at 4.2 K, with 15,000 k-points.
### Dzyaloshinskii-Moriya interaction
Even though the possible effects of superconductivity on the Heisenberg exchange interaction may lead to interesting outcomes, one soon realizes that in magnetic systems the correction is usually not sufficient to heavily impact the magnetic state, since the original interaction tends to be large. The DMI, in turn, is typically smaller than the Heisenberg exchange, so the impact of the superconducting state might be larger there. Although the DMI usually takes low values compared to the Heisenberg exchange, this interaction is important in the realm of non-collinear magnetism[66]: it is responsible for stabilizing magnetic structures[67], influences skyrmion chirality[68], and is instrumental in the study of spin waves[63]. For the case of our system when \(\Delta=6.25\) meV
Figure 3: (a) Total exchange interaction as given by Eq. (24), as well as each of its contributions separately, as defined in Eq. (25): (b) the electron-electron contribution \(\mathcal{J}^{ee}\) and (c) the electron-hole contribution \(\mathcal{J}^{eh}\). (d) Polar plot (\(\theta\) [Deg] vs \(|\mathbf{D}|\) [meV]) showing the evolution of the total DMI vector coming from Eq. (24) as a function of \(\lambda\) for one of the four nearest neighbours (the remaining cases are similar). (e) Evolution of the DMI vector derived from the electron-electron term \(\mathcal{J}^{ee}_{ij}\). (f) Evolution of the DMI vector derived from the electron-hole term \(\mathcal{J}^{eh}_{ij}\). We can see that \(D^{ee}\) and \(D^{eh}\) tend to have opposite orientations. All calculations were done at 4.2 K, with 15,000 k-points.
the DMI vector is small, \(|\mathbf{D}|=0.37\,\mathrm{meV}\), with an even smaller electron-hole component \(|\mathbf{D}^{eh}|=25.7\,\mathrm{\SIUnitSymbolMicro eV}\). Therefore, to enhance the impact of superconductivity we also analyzed the cases with larger superconducting gaps. Analyzing the DMI in Mn/Nb(110), we observe that the out-of-plane component (\(D_{z}\)) is negligible, resulting in an in-plane \(\mathbf{D}\).
In Figs. 3 (d)-(f), we examine the in-plane vector \(\mathbf{D}\) for one of the four nearest neighbours in the Mn monolayer. Here we detect that the electron-hole term tends to go in the opposite
Figure 4: Duality of magnetic exchange interactions impacted by the superconducting order parameter. There are several scattering events that influence the exchange interactions. Intra-atomic exchange interactions (\(B^{[\mathrm{xc}]}\), and \(-B^{[\mathrm{xc}]}\) for electrons and holes respectively). Scattering from electron to hole (\(\Delta\)) and vice versa (\(\Delta^{*}\)). And finally, for the DMI there is the spin-orbit interaction (\(B^{[soc]}\), and \(-B^{[soc]}\) for electrons and holes respectively). (a) Electron-electron symmetric exchange. This term gets a second order correction with respect to \(\Delta\). (b) Electron-hole symmetric exchange. This term has a sign change given the intra-site interaction with the hole. (c) Electron-electron DMI. This term has two corrections, one of them catches a minus sign coming from the spin-orbit interaction with a hole. (d) Electron-hole DMI. The first term gets a sign flip twice, while the second does it just once. (e) Total \(\mathbf{D}\) (red), \(\mathbf{D}^{ee}\) (golden), and \(\mathbf{D}^{eh}\)(blue) for the case with \(\Delta=\)295 meV.
direction to the electron-electron DMI. Both terms thus have opposite chiralities, favouring different kinds of magnetic textures. This effect opens options such as switching between magnetic states by strengthening or weakening the superconducting order of the system at hand. In Fig. 4 (e) we show \(\mathbf{D}\) for the system with \(\Delta=295\,\mathrm{meV}\), as well as its electron-electron and electron-hole components. We notice that the chiralities are opposite, and the electron-hole term slightly dominates.
Similarly to the symmetric exchange interaction, we scrutinized, on the basis of perturbation theory, how the superconducting order parameter, intra-atomic exchange and spin-orbit coupling affect the DMI, as illustrated schematically in Fig. 4 (c) and (d). We notice that besides possible interference effects induced by the different electron and hole propagators, hole-scattering at an intra-atomic exchange interaction can provide a sign change to the original chirality pertaining to the electron-electron DMI, while this can happen for the electron-hole part due to hole-scattering at the intra-atomic exchange and spin-orbit coupling.
## IV Conclusions
We have introduced a method for extracting the bilinear tensor of magnetic exchange interactions within the Bogoliubov-de Gennes (BdG) formalism, utilizing infinitesimal rotation of magnetic moments. This novel approach has provided insights into the intricate interplay between superconductivity, magnetism, and spin-orbit interaction, unraveling remarkable potential effects on both the Heisenberg exchange and the Dzyaloshinskii-Moriya interactions.
Through rigorous self-consistent simulations based on parameters derived from ab initio calculations, our investigation has captured the intricacies of the electronic structure in the Mn monolayer on Nb(110) surface system. By tuning the electron-phonon coupling and thereby the superconducting state, we have demonstrated the pivotal influence of the superconducting state on the magnetic ground state, specifically through the Heisenberg exchange. Furthermore, intriguing modifications in the chirality of the Dzyaloshinskii-Moriya interactions have been unveiled. Within the confines of our system and for experimentally consistent gap sizes, we have concluded that the impact of superconductivity on the magnetic ground state remains minimal, leaving it largely unperturbed. However, the implications of
our findings extend far beyond the boundaries of our specific system.
The versatility of our method enables its application in various schemes based on Green function techniques, thereby facilitating exploration of complex magneto-superconducting interfaces. These endeavors hold immense potential for uncovering novel and non-trivial effects of superconductivity in topological magnetic and superconducting materials, where intricate magnetic interactions may play a decisive role. The exploration of complex magnetic structures, such as skyrmions or spin chains, in conjunction with superconductivity, opens up possibilities for the discovery of novel states and quasiparticles with implications for topological quantum computing, driving the progress of cutting-edge technological applications.
###### Acknowledgements.
The authors acknowledge funding provided by the Priority Programmes SPP 2137 "Skyrmionics" (Projects LO 1659/8-1) of the Deutsche Forschungsgemeinschaft (DFG). We acknowledge the computing time provided through JARA on the supercomputers JURECA[69]. Simulations were also performed with computing resources granted by RWTH Aachen University under project jara0189.
|
2308.14410
|
Some notes on moment inequalities for heavy-tailed distributions
|
We investigate the relation between moments and tails of heavy-tailed (in
particular, Pareto-type) distributions. We also discuss the sharpness of our
results in a number of examples under certain regularity conditions like
log-convexity. Moreover, we derive concentration bounds for polynomial chaos of
any order $d$.
|
Paul Buterus, Holger Sambale
|
2023-08-28T08:47:54Z
|
http://arxiv.org/abs/2308.14410v2
|
# Some notes on moment inequalities for heavy-tailed distributions
###### Abstract.
We investigate the relation between moments and tails of heavy-tailed (in particular, Pareto-type) distributions. We also discuss the sharpness of our results in a number of examples under certain regularity conditions like log-convexity. Moreover, we derive concentration bounds for polynomial chaos of any order \(d\).
Key words and phrases:concentration of measure, heavy tails, Pareto distribution, moment inequality, polynomial chaos, Hanson-Wright inequality 2020 Mathematics Subject Classification: Primary 60E15, 60F10, Secondary 46E30, 46N30
## 1. Introduction
In classical situations, the family of \(L^{p}\) norms of a random variable \(X\) contains a wealth of information about the tails of \(X\). For sub-Gaussian random variables, for instance, it is standard knowledge that the growth of the \(L^{p}\) norms characterizes the tail behavior, i. e., the property \(\mathbb{P}(|X|\geq t)\leq 2\mathrm{e}^{-ct^{2}}\) for some \(c>0\) and any \(t\geq 0\) is equivalent to
\[\|X\|_{L^{p}}:=(\mathbb{E}[|X|^{p}])^{1/p}\leq Cp^{1/2}\]
for any \(p\geq 2\), where \(C>0\) is some constant only depending on \(c\), cf. e.g. [21, Proposition 2.5.2]. Analogous results remain valid if the random variable under consideration has slightly heavier tails which are no longer sub-Gaussian but still decay exponentially, i. e., \(\mathbb{P}(|X|\geq t)\leq 2\mathrm{e}^{-ct^{\alpha}}\) for some \(\alpha\in(0,2)\) and any \(t\geq 0\), cf. [1, Proposition 5.2].
Against this background, one may ask for similar properties of heavy-tailed random variables, i. e., random variables with tails of the form
\[\mathbb{P}(|X|\geq t)\leq Ct^{-\alpha}\]
for some \(\alpha>0\), any \(t\geq c\) and suitable \(C,c>0\). A classical example are Pareto-type random variables. Recall that a random variable \(X\) has Pareto distribution \(\mathrm{Par}(\alpha,b)\), where \(\alpha,b>0\), if \(\mathbb{P}(X\geq t)=(b/t)^{\alpha}\) for any \(t\geq b\). Equivalently, its density function \(f(x)\) is given by \(f(x)=\alpha b^{\alpha}/x^{\alpha+1}\) for \(x\geq b\). We also write \(X\sim\mathrm{Par}(\alpha,b)\) and call \(X\) a Pareto random variable. Sometimes it is convenient to study symmetrized Pareto random variables which we denote by \(\mathrm{Par}_{\mathrm{s}}(\alpha,b)\), and it is obvious how to modify the definitions.
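The following minimal Python sketch (purely illustrative, not part of the original text) draws from \(\mathrm{Par}(\alpha,b)\) by inverse-transform sampling based on the stated tail \(\mathbb{P}(X\geq t)=(b/t)^{\alpha}\) and compares empirical tails and low-order moments with the closed forms; the moment formula used for comparison is the one given in (1) below.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, b = 3.0, 1.0

# Inverse-transform sampling from Par(alpha, b): since P(X >= t) = (b/t)^alpha for t >= b,
# X = b * U**(-1/alpha) with U uniform on (0, 1) has exactly this tail.
n = 10**6
X = b * rng.random(n) ** (-1.0 / alpha)

# Empirical vs. exact tail probabilities.
for t in (2.0, 5.0, 10.0):
    print(f"t = {t:4.1f}   empirical tail {np.mean(X >= t):.5f}   exact {(b / t) ** alpha:.5f}")

# Empirical vs. exact p-th moments for p < alpha (cf. Eq. (1) below); the closer p is
# to alpha, the slower the empirical average converges.
for p in (1.0, 2.0, 2.5):
    print(f"p = {p:3.1f}   empirical moment {np.mean(X**p):.4f}   exact {b**p * alpha / (alpha - p):.4f}")
```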
If \(X\sim\mathrm{Par}(\alpha,b)\), then \(\|X\|_{L^{p}}<\infty\) iff \(p<\alpha\). More precisely,
\[\mathbb{E}[X^{p}]=b^{p}\frac{\alpha}{\alpha-p} \tag{1}\]
for any \(p<\alpha\). In particular, applying Chebyshev's inequality for any \(p<\alpha\) and optimizing in \(p\) leads to the bound
\[\mathbb{P}(X\geq t)\leq\mathrm{e}\alpha\log(t/b)(b/t)^{\alpha} \tag{2}\]
for any \(t\geq b\mathrm{e}^{1/\alpha}\), which is attained at \(p=\alpha-1/\log(t/b)\). Consequently, this approach results in a logarithmic error factor. Moreover, we cannot recover the correct tail behaviour in this way.
Explicit relations between the constants \(C_{i}\) can be found in the proofs. Note that in parts (b) and (c), one may replace the condition \(r\geq\mathrm{e}\) by \(r\geq r^{*}\) for any fixed \(r^{*}>1\) at the cost of modifying the constants. Similarly, (d) can be extended to \(s\in(0,s^{*}]\) for any fixed \(s^{*}<1\).
To prove Proposition 1, we start with a technical lemma, specialized for \(\alpha=2\), which yields characterizations of Pareto-type growth of the moments of a random variable \(X\) in terms of its Laplace transform and its characteristic function, respectively.
**Lemma 2**.: Let \(X\) be a non-negative random variable and \(C>0\) a fixed constant which may depend on \(X\). Then the following statements are equivalent.
1. For all \(p\in(1,2)\) the moments of \(X\) satisfy \[\mathbb{E}[X^{p}]\leq\frac{C}{2-p}.\]
2. \(X\) is integrable and the Laplace transform \(L(t):=L_{X}(t):=\mathbb{E}[\mathrm{e}^{-tX}]\) of \(X\) satisfies \[\int_{0}^{\infty}\frac{\mathbb{E}[X]+L^{\prime}(u)}{u^{1+s}}\,\mathrm{d}u\leq C \frac{\Gamma(1-s)}{s(1-s)}\] for all \(s\in(0,1)\).
3. The characteristic function \(\varphi(t):=\varphi_{X}(t):=\mathbb{E}[\mathrm{e}^{itX}]\) of \(X\) satisfies \[\int_{0}^{\infty}\frac{1-\mathrm{Re}(\varphi(u))}{u^{2+s}}\,\mathrm{d}u\leq C \frac{\sin(\pi s/2)\Gamma(1-s)}{s(s+1)(1-s)}\] for all \(s\in(0,1)\).
We shall need the equivalence of (a) and (c) in the sequel. In view of (3) and the discussion of polynomial chaos in Section 3, note that similar arguments as used in Lemma 2 can also be applied to singularities of the form \((\alpha-s)^{-h}\), where \(h>0\). Lemma 2 is essentially an application of identities for fractional absolute moments of heavy-tailed distributions from [11]. For the sake of completeness, we provide some details in the proof.
Proof.: First note that \(L\) is continuously differentiable with \(L^{\prime}(t)=-\mathbb{E}[X\exp(-tX)]\) and \(L^{\prime}(0)=-\mathbb{E}[X]\) as long as \(X\) is integrable. Therefore, as shown in [11, Lemma 1.1], the identity
\[\frac{\Gamma(1-s)}{s}x^{s}=\int_{0}^{\infty}\frac{1-\exp(-xu)}{u^{s+1}}\, \mathrm{d}u,\]
valid for \(s\in(0,1)\) and \(x>0\), implies that
\[\mathbb{E}[X^{1+s}]=\frac{s}{\Gamma(1-s)}\int_{0}^{\infty}\frac{L^{\prime}(u) -L^{\prime}(0)}{u^{s+1}}\,\mathrm{d}u\]
for any \(s\in(0,1)\), so that the equivalence of (a) and (b) directly follows.
Similarly, by formula (7) in [10, Theorem 11.4.3],
\[\int_{0}^{\infty}\frac{1-\cos(xu)}{u^{1+\beta}}\,\mathrm{d}u=\frac{\Gamma(2- \beta)}{\beta}\frac{\sin(\pi(1-\beta)/2)}{1-\beta}|x|^{\beta}\]
for \(\beta\in(0,2)\) and \(x\in\mathbb{R}\). Choosing \(\beta=s+1\) and applying Fubini's theorem, it was shown in [11, Lemma 1.3 (2)] that
\[\mathbb{E}[X^{1+s}]=\frac{s(s+1)}{\sin(\pi s/2)\Gamma(1-s)}\int_{0}^{\infty} \frac{1-\mathrm{Re}(\varphi(u))}{u^{2+s}}\,\mathrm{d}u\]
for any \(s\in(0,1)\), which establishes the equivalence of (a) and (c).
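As a numerical sanity check of the last identity (our own illustration), one may specialize to \(X\sim\mathrm{Exp}(1)\), where \(\mathbb{E}[X^{1+s}]=\Gamma(2+s)\) and \(\mathrm{Re}(\varphi(u))=1/(1+u^{2})\) are available in closed form; the sketch below assumes numpy and scipy.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Illustrative check (X ~ Exp(1), an assumption made only for this sketch) of
#   E[X^{1+s}] = s(s+1)/(sin(pi s/2) Gamma(1-s)) * int_0^inf (1 - Re phi(u))/u^{2+s} du,
# using E[X^{1+s}] = Gamma(2+s) and Re phi(u) = 1/(1+u^2).
for s in (0.2, 0.5, 0.8):
    integrand = lambda u: (1.0 - 1.0 / (1.0 + u**2)) / u ** (2.0 + s)
    integral = quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]
    rhs = s * (s + 1.0) / (np.sin(np.pi * s / 2.0) * gamma(1.0 - s)) * integral
    print(f"s = {s:.1f}:  Gamma(2+s) = {gamma(2.0 + s):.6f}   identity RHS = {rhs:.6f}")
```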
Proof of Proposition 1.: To prove that (a) implies (b), we temporarily assume that \(\alpha=2\) by replacing \(X\) by the random variable \(\tilde{X}:=X^{\alpha/2}\) and \(C_{1}\) by \(\tilde{C}_{1}:=2C_{1}/\alpha\). Using part (c) of Lemma 2, we then obtain
\[\tilde{C}_{1}\frac{\sin(\pi s/2)\Gamma(1-s)}{s(s+1)(1-s)}\geq\int_{0}^{\infty} \frac{1-\operatorname{Re}(\varphi_{\tilde{X}}(u))}{u^{2+s}}\,\mathrm{d}u\geq \int_{0}^{\pi/\tilde{r}}\frac{1-\operatorname{Re}(\varphi_{\tilde{X}}(u))}{u^{2 +s}}\,\mathrm{d}u\]
for any \(\tilde{r}>0\). The last integral can be estimated further by
\[\int_{0}^{\pi/\tilde{r}}\frac{1-\operatorname{Re}(\varphi_{ \tilde{X}}(u))}{u^{2+s}}\,\mathrm{d}u \geq 2\mathbb{E}\Big{[}\mathbb{1}_{\{\tilde{X}\leq\tilde{r}\}} \int_{0}^{\pi/\tilde{r}}\frac{\sin(\tilde{X}u/2)^{2}}{u^{2+s}}\,\mathrm{d}u \Big{]}\] \[\geq \frac{2}{\pi^{2}}\mathbb{E}[\mathbb{1}_{\{\tilde{X}\leq\tilde{r }\}}\tilde{X}^{2}]\int_{0}^{\pi/\tilde{r}}u^{-s}\,\mathrm{d}u\] \[\geq \frac{2}{\pi^{1+s}}\frac{\tilde{r}^{s-1}}{1-s}\mathbb{E}[ \mathbb{1}_{\{\tilde{X}\leq\tilde{r}\}}\tilde{X}^{2}],\]
since \(1-\cos(x)=2\sin(x/2)^{2}\) for any \(x\) and \(\sin(x)\geq 2x/\pi\) on \(x\in[0,\pi/2]\) (use e. g. that \(\sin(x)\) is concave on \([0,\pi/2]\)). Combining both inequalities yields
\[\frac{\pi^{1+s}}{2}\tilde{C}_{1}\frac{\sin(\pi s/2)\Gamma(1-s)}{s(s+1)}\tilde{ r}^{1-s}\geq\mathbb{E}[\mathbb{1}_{\{\tilde{X}\leq\tilde{r}\}}\tilde{X}^{2}].\]
Next, we optimize this expression in \(s\) by taking \(s=1-\log(\tilde{r})^{-1}\in[0,1)\) if \(\tilde{r}\geq\mathrm{e}\). Using that \(\sin(x)\leq x\) for any \(x\geq 0\) as well as \(\Gamma(x)=x^{-1}\Gamma(1+x)\leq x^{-1}\) on \(x\in(0,1]\), we find that
\[\mathbb{E}[\mathbb{1}_{\{\tilde{X}\leq\tilde{r}\}}\tilde{X}^{2}]\leq\frac{ \mathrm{e}\,\pi^{3}}{4}\,\tilde{C}_{1}\log(\tilde{r}).\]
Now setting \(\tilde{r}:=r^{\alpha/2}\), it follows that
\[\mathbb{E}[\mathbb{1}_{\{X\leq r\}}X^{\alpha}]=\mathbb{E}[\mathbb{1}_{\{ \tilde{X}\leq\tilde{r}\}}\tilde{X}^{2}]\leq\frac{\mathrm{e}\,\pi^{3}}{4}\, \tilde{C}_{1}\log(\tilde{r})=\frac{\mathrm{e}\,\pi^{3}}{4}\,C_{1}\log(r)\]
provided that \(r\geq\mathrm{e}^{2/\alpha}\). If \(\alpha\geq 2\), this yields (b), while if \(\alpha<2\), it remains to note that
\[\mathbb{E}[\mathbb{1}_{\{X\leq r\}}X^{\alpha}]\leq\mathbb{E}[\mathbb{1}_{\{X \leq\mathrm{e}^{2/\alpha}\}}X^{\alpha}]\leq\frac{\mathrm{e}\,\pi^{3}}{2\alpha }\,C_{1}\leq\frac{\mathrm{e}\,\pi^{3}}{2\alpha}\,C_{1}\log(r)\]
for any \(r\in[\mathrm{e},\mathrm{e}^{2/\alpha}]\).
To show that (b) implies (a), we replace \(X\) by \(\tilde{X}:=X\mathbb{1}_{\{X\geq\mathrm{e}\}}\). We may then write

\[\mathbb{E}[\tilde{X}^{p}]=(\alpha-p)\int_{1}^{\infty}u^{-(\alpha-p)-1}\mathbb{E}[\tilde{X}^{\alpha}\mathbb{1}_{\{\tilde{X}\leq u\}}]\,\mathrm{d}u\]

and thus conclude that

\[\mathbb{E}[\tilde{X}^{p}]\leq C_{2}(\alpha-p)\int_{1}^{\infty}u^{-(\alpha-p)-1}\log u\,\mathrm{d}u=\frac{C_{2}}{\alpha-p}\]
and hence
\[\mathbb{E}[X^{p}]\leq\mathrm{e}^{\alpha}+\frac{C_{2}}{\alpha-p}\leq\frac{C_{ 2}+\alpha\mathrm{e}^{\alpha}}{\alpha-p}.\]
Moreover, noting that
\[\mathbb{E}[X^{\alpha}\mathbb{1}_{\{X\leq r\}}]=\alpha\int_{0}^{r}y^{\alpha-1} \mathbb{P}(y\leq X\leq r)\,\mathrm{d}y\leq\alpha\int_{0}^{r}y^{\alpha-1} \mathbb{P}(X\geq y)\,\mathrm{d}y,\]
(c) trivially implies (b) with \(C_{2}:=\alpha C_{3}\). On the other hand, using that
\[\int_{0}^{r}y^{\alpha-1}\mathbb{P}(X\geq y)\,\mathrm{d}y=\int_{0}^{r}y^{\alpha- 1}\mathbb{P}(y\leq X\leq r)\,\mathrm{d}y+\frac{r^{\alpha}}{\alpha}\mathbb{P}(X \geq r),\]
the first term is bounded by \(C_{2}\alpha^{-1}\log(r)\) due to (b), and by (a) and Chebyshev's inequality similarly as in (2), the second term is bounded by \(C_{1}\mathrm{e}\alpha^{-1}\log(r)\). In particular, (c) follows with \(C_{3}:=(C_{1}\mathrm{e}+C_{2})/\alpha\).
To prove the equivalence of (b) and (d), we have
\[\mathbb{E}[X^{\alpha}\exp(-sX)] =s\mathbb{E}[X^{\alpha}\int_{X}^{\infty}\exp(-st)\,\mathrm{d}t]=s \int_{0}^{\infty}\mathrm{e}^{-st}\mathbb{E}[X^{\alpha}\mathbbm{1}_{\{X\leq t \}}]\,\mathrm{d}t\] \[\leq\mathrm{e}^{\alpha}+C_{2}s\int_{1}^{\infty}\mathrm{e}^{-st} \log(t)\,\mathrm{d}t=\mathrm{e}^{\alpha}+C_{2}\int_{s}^{\infty}\frac{\exp(-u) }{u}\mathrm{d}u\]
by splitting the integral into the parts \(t\leq\mathrm{e}\) and \(t\geq\mathrm{e}\geq 1\). Splitting the integral on the right-hand side at \(u=1\) then yields
\[\mathbb{E}[X^{\alpha}\exp(-sX)]\leq\mathrm{e}^{\alpha}+C_{2}(\mathrm{e}^{-1}+ |\log s|)\leq(\mathrm{e}^{\alpha}+C_{2}(\mathrm{e}^{-1}+1))|\log s|\]
for any \(s\leq\mathrm{e}^{-1}\). On the other hand, we have
\[\mathbb{E}[X^{\alpha}\exp(-sX)]\geq\mathrm{e}^{-1}\mathbb{E}[X^{\alpha} \mathbbm{1}_{\{X\leq 1/s\}}],\]
which implies (b) with \(C_{2}:=\mathrm{e}C_{4}\).
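As a quick illustration of condition (b) in the Pareto case (not part of the proof), note that for \(X\sim\mathrm{Par}(\alpha,1)\) one has \(\mathbb{E}[X^{\alpha}\mathbb{1}_{\{X\leq r\}}]=\alpha\log(r)\) exactly, i.e. purely logarithmic growth; the following sketch verifies this numerically.

```python
import numpy as np
from scipy.integrate import quad

# For X ~ Par(alpha, 1) (density alpha * x^{-alpha-1} on [1, inf)) the truncated moment
# of condition (b) equals alpha * log(r) exactly; numerical check for a few values of r.
alpha = 2.5
density = lambda x: alpha * x ** (-(alpha + 1.0))

for r in (np.e, 10.0, 100.0, 1000.0):
    truncated, _ = quad(lambda x: x**alpha * density(x), 1.0, r)
    print(f"r = {r:8.2f}   E[X^alpha 1_(X<=r)] = {truncated:8.4f}   alpha*log(r) = {alpha * np.log(r):8.4f}")
```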
In view of the discussion in the introduction, and as we will confirm in the course of this note, the properties listed in Proposition 1 cannot be complemented by a statement about the tails in general. However, it turns out that they are equivalent to a Pareto-type tail behaviour if we impose additional "structural" assumptions on the tails.
To this end, recall that a function \(f\colon[b,\infty)\to\mathbb{R}\) for some \(b\geq 0\) is called log-convex if \(\log(f)\) is convex on \([b,\infty)\). A non-negative random variable \(X\) is called log-convex if its tails \(\mathbb{P}(X\geq t)\) are a log-convex function of \(t\in[b,\infty)\) with \(b:=\operatorname{ess\,inf}X\). More generally, a real-valued random variable \(X\) is called log-convex if \(|X|\) is log-convex. In the same way, one may also define log-concave functions and random variables.
Clearly, Pareto random variables are log-convex. Note that sometimes, log-convexity of a (say, non-negative) random variable \(X\) is defined by requiring the tails of \(X\) to be log-convex on the full non-negative half-axis \([0,\infty)\), i. e. not only restricted to \(t\geq b=\operatorname{ess\,inf}X\). To see that the two definitions essentially lead to the same concept of log-convexity, one may replace \(X\) by \(X-b\) if necessary. For instance, if \(X\sim\mathrm{Par}(\alpha,b)\), it holds that \(2^{-\alpha}\mathbb{P}(X\geq t)\leq\mathbb{P}(X-b\geq t)\leq\mathbb{P}(X\geq t)\) for all \(t\geq 0\). In particular, similar relations also hold for the \(L^{p}\) norms of \(X\) and \(X-b\), noting that \(2^{-\alpha}\mathbb{E}[X^{p}]\leq\mathbb{E}[(X-b)^{p}]\leq\mathbb{E}[X^{p}]\) for any \(p>0\).
**Proposition 3**.: If \(t^{\alpha}\mathbb{P}(X\geq t)\) is log-convex on \(t\in[b,\infty)\) with \(b:=\operatorname{ess\,inf}X\), then the characterizations of Proposition 1 are also equivalent to the tail bound
\[\mathbb{P}(X\geq t)\leq\frac{C_{5}}{t^{\alpha}}\]
for \(t\geq\mathrm{e}\), where the constant \(C_{5}>0\) only differs from the constants \(C_{1}\) to \(C_{4}\) by \(\alpha\)-dependent factors. The same statement holds if \(t^{\alpha}\mathbb{P}(X\geq t)\) is log-concave and bounded from below by some \(\delta>0\).
In particular, the conditions of Proposition 3 hold for \(X\sim\mathrm{Par}(\alpha,b)\) (which even happens to be the only non-trivial example for the log-convex case as the proof demonstrates), so that we get back the correct tail behaviour this way. Assuming that \(t^{\alpha}\mathbb{P}(X\geq t)\geq\delta\) in the case of log-concavity has technical reasons, but one can easily see that it does not exclude any situations of interest. Indeed, otherwise we must have \(t^{\alpha}\mathbb{P}(X\geq t)\to 0\) as \(t\to\infty\), and by log-concavity the rate of decay must be so fast that \(X\) cannot be a heavy-tailed random variable.
Proof.: First recall that as already mentioned in the introduction, \(\mathbb{P}(X\geq t)\leq C_{5}/t^{\alpha}\) always implies part (a) of Proposition 1 with \(C_{1}=\alpha C_{5}\) without any further assumptions. (To be precise, here we assume that the tail bound holds for any \(t\geq 1\), but even if we can only access it for \(t\geq\mathrm{e}\) replacing \(C_{5}\) by \(\max(C_{5},\mathrm{e}^{\alpha})\) will lead to the same result.)
Conversely, set \(h(t)=t^{\alpha}\mathbb{P}(X\geq t)\) and assume part (a) of Proposition 1. As in (2), we have \(h(t)\leq C_{1}\mathrm{e}\log(t)\) by Chebyshev's inequality and optimization. In particular, \(\log(h(t))\) is upper-bounded by the concave function \(1+\log(C_{1})+\log(\log(t))\). Since \(\log(h(t))\) is convex by assumption, this implies that \(h(t)\) must be non-increasing, so that for any \(t\geq\mathrm{e}\), we have \(h(t)\leq C_{1}\mathrm{e}\) (in fact, \(h\) must even be constant).
If \(h(t)\) is log-concave, assume part (c) of Proposition 1. Fixing any \(r\geq\mathrm{e}\), we first consider the case where \(h\) is unbounded, i.e., there exists a monotone sequence \((t_{n})_{n\in\mathbb{N}}\) with \(t_{n}\to\infty\) and \(h(t_{n})\to\infty\). Additionally, we may assume that \(h(r)\leq h(t_{n})\). By log-concavity of \(h\), we have
\[h(u)=h(r(1-s)+st_{n})\geq h(r)^{1-s}h(t_{n})^{s}\geq\min\{h(r),h(t_{n})\}=h(r)\]
for all \(u=r(1-s)+st_{n}\) with some \(s\in[0,1]\), i. e. \(h\) is quasiconcave. Thus, it follows that
\[C_{3}\log(t_{n})\geq\int_{r}^{t_{n}}\frac{h(u)}{u}\,\mathrm{d}u\geq h(r)\log(t _{n}/r). \tag{4}\]
Dividing by \(\log(t_{n})\) and letting \(n\to\infty\) yields \(h(r)\leq C_{3}\).
Now assume \(h\) to be bounded by some constant \(K>1\). Using the log-concavity again gives
\[C_{3}\log(t)\geq\int_{r}^{t}\frac{h(u)}{u}\,\mathrm{d}u\geq h(r)\Big{(}\frac{h(r)}{h(t)}\Big{)}^{\frac{r}{t-r}}\int_{r}^{t}\Big{(}\frac{h(t)}{h(r)}\Big{)}^{\frac{u}{t-r}}\,\frac{\mathrm{d}u}{u}.\]
Since \(\delta\leq h(t)\leq K\), we find that
\[\lim_{t\to\infty}\Big{(}\frac{h(r)}{h(t)}\Big{)}^{\frac{r}{t-r}}=1. \tag{5}\]
Dividing by \(\log(t)\) and writing \(\beta(t)=\log(h(t)/h(r))\), we have
\[\frac{1}{\log(t)}\int_{r}^{t}\Big{(}\frac{h(t)}{h(r)}\Big{)}^{\frac{u}{t-r}}\,\frac{\mathrm{d}u}{u}=\frac{1}{\log(t)}\int_{\frac{r}{t-r}}^{\frac{t}{t-r}}\exp(\beta(t)u)\,\frac{\mathrm{d}u}{u},\]
so that using the boundedness of \(\beta(t)\) again, we find by partial integration that
\[\begin{split}\lim_{t\to\infty}\frac{1}{\log(t)}\int_{\frac{r}{t-r}}^{\frac{t}{t-r}}\exp(\beta(t)u)\,\frac{\mathrm{d}u}{u}&=\lim_{t\to\infty}\Big{(}\frac{\log(u)\exp(\beta(t)u)}{\log(t)}\Big{|}^{\frac{t}{t-r}}_{\frac{r}{t-r}}\\ &\qquad-\frac{\beta(t)}{\log(t)}\int_{\frac{r}{t-r}}^{\frac{t}{t-r}}\log(u)\exp(\beta(t)u)\,\mathrm{d}u\Big{)}=1,\end{split}\]
where we have employed (5). Altogether, it follows that \(h(r)\leq C_{3}\).
### Examples
As we have seen in Proposition 3, in certain situations one may avoid the logarithmic error (2) which appears when combining Pareto-type moment bounds with the Chebyshev inequality. Next, we will present a number of distributions on the real line or, equivalently, random variables, which demonstrate that in general, this is not possible. Here we also consider situations where the tails satisfy certain regularity properties.
We first recall a number of basic definitions and results. A function \(L\colon(0,\infty)\to(0,\infty)\) is called slowly varying if for any \(a>0\),
\[\lim_{t\to\infty}\frac{L(at)}{L(t)}=1.\]
It is called regularly varying if for any \(a>0\),
\[h_{L}(a):=\lim_{t\to\infty}\frac{L(at)}{L(t)}\in(0,\infty).\]
By Karamata's characterization theorem, if \(g\colon(0,\infty)\to(0,\infty)\) is regularly varying, it has to be of the form \(g(t)=t^{\beta}L(t)\) for some \(\beta\in\mathbb{R}\) and some slowly varying function \(L\). In particular, \(h_{g}(a)=a^{\beta}\), and \(\beta\) is called the index of \(g\). For a reference, see [1, Ch. 1.3 & 1.4].
**Proposition 4**.: Fix any \(\alpha>0\), and let \(h\colon[0,\infty)\to[0,\infty)\) be any monotonously increasing function such that \(h(t)=\mathcal{O}(\log(t))\). Then, there exists a random variable \(X\) such that
\[t^{\alpha}\mathbb{P}(X\geq t)=\Theta(h(t)).\]
If \(h(t)=o(\log(t))\), then \(X\) can be chosen such that its tails are a regularly varying function of \(t\) with index \(-\alpha\).
Proof.: Consider functions of the form
\[g(t):=t^{-\alpha}L(t) \tag{6}\]
with \(L(t):=\exp(\int_{a}^{t}\varepsilon(s)s^{-1}\,\mathrm{d}s)\) for some \(a>0\) (for simplicity, we will set \(a=1\) in the sequel) and a bounded measurable function \(\varepsilon\). If \(g^{\prime}(t)\leq 0\) and \(g(t)\to 0\) as \(t\to\infty\), the function \(g\) gives rise to the tails of a random variable on, say, \([1,\infty)\), by setting \(\mathbb{P}(X\geq t):=g(t)\), noting that \(g(1)=1\). Assuming \(\varepsilon\) to be piecewise continuous, the condition \(g^{\prime}(t)\leq 0\) holds if and only if \(\varepsilon(t)\leq\alpha\), since we have
\[g^{\prime}(t)=(-\alpha+\varepsilon(t))t^{-1}g(t).\]
In particular, if \(\varepsilon\) is bounded away from \(\alpha\), say, \(\varepsilon(t)\leq\alpha/2\), we also have \(g(t)\to 0\) (indeed, note that in this case, \(0\leq g(t)\leq t^{-\alpha/2}\)).
To specify a choice of \(\varepsilon\), now define sequences \(a_{n}=\exp(n\exp(n))\) and \(b_{n}:=\exp(n\exp(n)+n)\). Moreover, let \(\gamma(n)\) be any function on the natural numbers such that \(\mathrm{e}^{-n}\leq\gamma(n)\leq 1\) which is bounded away from \(\alpha\) and which satisfies \(\exp(n\gamma(n))=\Theta(h(b_{n}))\). Here, the technical condition \(\gamma(n)\geq\mathrm{e}^{-n}\) can be assumed without loss of generality since already for \(\gamma(n)=\mathrm{e}^{-n}\), \(\exp(n\gamma(n))\) is decreasing, while \(h\) was assumed to be increasing. For instance, one may take \(\gamma(n)=\min\{1,\alpha/2,\max\{\log(h(b_{n}))/n,\mathrm{e}^{-n}\}\}\). Set
\[\varepsilon(t)=\begin{cases}\gamma(n)&\text{if }t\in[a_{n},b_{n}]\\ -\gamma(n)&\text{if }t\in[b_{n},b_{n}\exp(n)]\\ 0&\text{if }t\in(b_{n}\exp(n),a_{n+1}]\end{cases}. \tag{7}\]
Clearly, \(\varepsilon(t)\to 0\) if \(h(t)=o(\log(t))\), so that by Karamata's representation theorem, \(L\) is slowly varying and hence, \(g\) is regularly varying with index \(-\alpha\) in this case.
It is now easy to see that
\[\int_{1}^{t}\frac{\varepsilon(s)}{s}\,\mathrm{d}s=\begin{cases}\gamma(n)( \log(t)-\log(a_{n}))&\text{if }t\in[a_{n},b_{n}]\\ \gamma(n)(n+\log(b_{n})-\log(t))&\text{if }t\in(b_{n},b_{n}\exp(n))\\ 0&\text{if }t\in(b_{n}\exp(n),a_{n+1}]\end{cases},\]
and hence,
\[L(t)=\begin{cases}\frac{t^{\gamma(n)}}{a_{n}^{\gamma(n)}}&\text{if }t\in[a_{n},b_{n}] \\ \exp(n\gamma(n))\frac{b_{n}^{\gamma(n)}}{t^{\gamma(n)}}&\text{if }t\in(b_{n},b_{n}\exp(n)) \\ 1&\text{if }t\in(b_{n}\exp(n),a_{n+1}]\end{cases}.\]
In particular, \(b_{n}^{\alpha}g(b_{n})=L(b_{n})=\exp(n\gamma(n))=\Theta(h(b_{n}))\) by definition.
It remains to verify that the random variable \(X\) induced by the function \(g\) satisfies condition (c) of Proposition 1. To this end, we first consider \(r\in[a_{n},b_{n}]\) and note that \(\log(r)\geq\log(a_{n})=n\exp(n)\geq\log(b_{n})/2\). Thus, we may assume that \(r=b_{n}\). For any \(k\leq n\), we get
\[\int_{a_{k}}^{b_{k}}t^{\alpha-1}\mathbb{P}(X\geq t)\,\mathrm{d}t =\int_{a_{k}}^{b_{k}}\frac{L(t)}{t}\,\mathrm{d}t =a_{k}^{-\gamma(k)}\int_{a_{k}}^{b_{k}}t^{-1+\gamma(k)}\,\mathrm{ d}t\] \[=\gamma(k)^{-1}\Big{(}\Big{(}\frac{b_{k}}{a_{k}}\Big{)}^{\gamma( k)}-1\Big{)}\leq\gamma(k)^{-1}\exp(k\gamma(k)).\]
Summing this up to \(n\) gives a contribution of at most \(n\gamma(n)^{-1}\exp(n\gamma(n))\leq C\log(b_{n})\) for a suitable constant \(C>0\) (to see this, note that the slightly stronger inequality \(n\gamma(n)^{-1}\exp(n\gamma(n))\leq\log(a_{n})\) can be rewritten as \(n\gamma(n)-\log(\gamma(n))\leq n\) and use that \(\mathrm{e}^{-n}\leq\gamma(n)\leq 1\)). If \(r\in[b_{k},\exp(k)b_{k}]\), then again \(\log(r)\geq\log(b_{k})\geq\log(\exp(k)b_{k})/2\) and thus we may consider \(r=\exp(k)b_{k}\). Here we have
\[\int_{b_{k}}^{b_{k}\exp(k)}\frac{L(t)}{t}\,\mathrm{d}t=\exp(k\gamma(k))b_{k}^ {\gamma(k)}\int_{b_{k}}^{b_{k}\exp(k)}\frac{1}{t^{1+\gamma(k)}}\,\mathrm{d}t \leq\gamma(k)^{-1}\exp(k\gamma(k)),\]
so that we may argue similarly as above. Finally, if \(r\in[b_{k}\exp(k),a_{k+1}]\),
\[\int_{b_{k}\exp(k)}^{a_{k+1}}\frac{L(t)}{t}\,\mathrm{d}t=\int_{b_{k}\exp(k)}^ {a_{k+1}}\frac{1}{t}\,\mathrm{d}t\leq\log(a_{k+1})-\log(b_{k}).\]
Because of \(a_{k}\leq b_{k}\), summing up and telescoping yields a bound of \(\log(r)\) again.
As for the auxiliary function \(\gamma(n)\) from the proof of Proposition 4, simple examples like \(\gamma(n):=n^{\delta-1}\) for any \(\delta<1\) yield \(b_{n}^{\alpha}\mathbb{P}(X\geq b_{n})=\exp(n^{\delta})\approx\exp((\log\log(b_{n}))^{\delta})\).
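The construction can also be checked numerically. The sketch below (our own; it works with \(x=\log t\) to avoid overflow, since \(a_{n}=\exp(n\exp(n))\)) uses the illustrative choice \(\gamma(n)=n^{\delta-1}\) with \(\delta=1/2\) and \(\alpha=2\), and verifies that \(\log\big{(}b_{n}^{\alpha}g(b_{n})\big{)}=\log L(b_{n})=n\gamma(n)=\sqrt{n}\).

```python
import numpy as np

# Sketch of the construction from the proof of Proposition 4, in log-space (x = log t),
# with the illustrative choice gamma(n) = n**(delta-1), delta = 1/2, and alpha = 2.
alpha, delta = 2.0, 0.5
gamma_n = lambda n: min(1.0, alpha / 2.0, max(n ** (delta - 1.0), np.exp(-n)))

def log_L(x, n_max=30):
    """log L(e^x) = int_0^x eps(e^y) dy for the piecewise-constant eps of (7)."""
    total = 0.0
    for n in range(1, n_max + 1):
        la = n * np.exp(n)          # log a_n
        lb = la + n                 # log b_n
        lc = lb + n                 # log(b_n e^n)
        g = gamma_n(n)
        total += g * (min(max(x, la), lb) - la)   # +gamma(n) on [log a_n, log b_n]
        total -= g * (min(max(x, lb), lc) - lb)   # -gamma(n) on [log b_n, log b_n + n]
    return total

# Check: log(b_n^alpha * g(b_n)) = log L(b_n) = n * gamma(n) = sqrt(n).
for n in (1, 2, 3, 4):
    print(f"n = {n}:  log L(b_n) = {log_L(n * np.exp(n) + n):.6f}   n*gamma(n) = {n * gamma_n(n):.6f}")
```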
In view of Proposition 3, one may wonder whether in order to avoid the logarithmic error, it might already suffice to require \(X\) to be log-convex. It turns out that this is not true as we demonstrate in the following proposition, whose proof is based on the proof of Proposition 4 together with a smoothening argument.
**Proposition 5**.: In the situation of Proposition 4, the random variable \(X\) can be chosen to be log-convex.
Proof.: Consider a function \(g\) as in (6) and set \(h(t):=\log(g(t))\). Assuming \(\varepsilon(s)\) to be continuously differentiable, one can check in the same way as in the proof of Proposition 4 that \(h^{\prime\prime}(t)\geq 0\) is equivalent to \(g^{\prime\prime}(t)g(t)-(g^{\prime}(t))^{2}\geq 0\) and thus to
\[\varepsilon^{\prime}(t)+\frac{\alpha}{t}\geq\frac{\varepsilon(t)}{t}. \tag{8}\]
Therefore, once (8) holds, the corresponding random variable has log-convex tails. By continuity of \(h^{\prime}\), the same arguments remain valid if \(\varepsilon(t)\) is only piecewise continuously differentiable (with (8) for all continuity points of \(\varepsilon^{\prime}\)).
To construct such a function, we write \(\Lambda(s):=\min\{2s,2-2s\}\) for \(s\in[0,1]\) and replace (7) by the smoothed version
\[\varepsilon(t)=\begin{cases}\gamma(n)c_{n}^{-1}\Lambda(\frac{t-a_{n}}{b_{n}-a_{n}})&\text{if }t\in[a_{n},b_{n}]\\ -\gamma(n)c_{n}^{-1}\Lambda(\frac{t-b_{n}}{b_{n}(\mathrm{e}^{n}-1)})&\text{if }t\in[b_{n},b_{n}\exp(n)]\\ 0&\text{if }t\in(b_{n}\exp(n),a_{n+1}]\end{cases}\]
with correction factor
\[c_{n}:=\int_{0}^{1}\frac{\Lambda(t)}{t+(\mathrm{e}^{n}-1)^{-1}}\,\mathrm{d}t= \int_{a_{n}}^{b_{n}}\Lambda(\tfrac{t-a_{n}}{b_{n}-a_{n}})t^{-1}\,\mathrm{d}t= \int_{b_{n}}^{b_{n}\exp(n)}\Lambda(\tfrac{t-b_{n}}{b_{n}(\mathrm{e}^{n}-1)})t^{- 1}\,\mathrm{d}t.\]
Note that \(c_{n}\) is monotonely increasing with
\[c_{1}\approx 0.480156,\qquad\lim_{n\to\infty}c_{n}=\int_{0}^{1}\frac{\Lambda(t )}{t}\,\mathrm{d}t=\log(4).\]
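These values are easy to confirm numerically; the following sketch (illustrative) evaluates \(c_{n}\) by quadrature.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of c_n = int_0^1 Lambda(t) / (t + (e^n - 1)^{-1}) dt and of its limit log(4).
Lam = lambda t: min(2.0 * t, 2.0 - 2.0 * t)

for n in (1, 2, 5, 20):
    c_n = quad(lambda t: Lam(t) / (t + 1.0 / (np.exp(n) - 1.0)), 0.0, 1.0)[0]
    print(f"c_{n} = {c_n:.6f}")
print(f"log(4) = {np.log(4.0):.6f}")
```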
In particular, assuming without loss of generality that \(\gamma(n)\leq\alpha c_{1}/2\), as in the proof of Proposition 4 we have that \(g\) is monotonely decreasing with \(g(t)\to 0\), and \(\varepsilon(t)\to 0\) if \(h(t)=o(\log(t))\). Moreover, it also follows that
\[\int_{1}^{b_{n}}\varepsilon(s)s^{-1}\,\mathrm{d}s=\gamma(n)n,\qquad\int_{1}^{ b_{n}\exp(n)}\varepsilon(s)s^{-1}\,\mathrm{d}s=0,\]
so that the relation \(b_{n}^{\alpha}g(b_{n})=\exp(n\gamma(n))=\Theta(h(b_{n}))\) continues to hold. Furthermore, part (c) of Proposition 1 can be verified as in the proof of Proposition 4 by invoking the trivial bound \(\Lambda(t)\leq 1\).
It remains to check (8), which is trivial if \(\varepsilon^{\prime}(t)\geq 0\). For \(t\in((a_{n}+b_{n})/2,b_{n})\), (8) reads
\[\varepsilon(t)=\frac{\gamma(n)}{c_{n}}\Big{(}2-2\frac{t-a_{n}}{b_{n}-a_{n}} \Big{)}\leq\alpha-2\frac{\gamma(n)}{c_{n}}\frac{t}{b_{n}-a_{n}}=\alpha-t \varepsilon^{\prime}(t),\]
and the inequality in the middle is equivalent to
\[2\frac{\gamma(n)}{c_{n}}\frac{1}{\mathrm{e}^{n}-1}\leq\alpha-2\frac{\gamma(n) }{c_{n}}.\]
Recalling that \(\gamma(n)\leq\alpha c_{1}/2\) and that \(c_{n}\) is strictly increasing and bounded, this inequality is true for all large enough \(n\) (formally, one may replace \(\varepsilon(t)\) by \(0\) for small values of \(n\)). Similar arguments also hold in the remaining case of \(t\in(b_{n},b_{n}(1+\mathrm{e}^{n})/2)\), which is even simpler since \(\varepsilon(t)\leq 0\) in this range.
## 3. Tail inequalities for polynomial chaos
### Results
Typical applications of \(L^{p}\) norm inequalities include concentration results of higher order, in particular bounds for polynomials (or polynomial chaos) of some order \(d\) in independent random variables \(X_{1},\ldots,X_{n}\). Concentration results of this type have been shown in [10] (polynomials in Gaussian random variables), [14] (functions of sub-Gaussian random variables) and [17] (polynomials in \(\alpha\)-sub-exponential random variables), for example. All these papers in fact prove bounds on the respective \(L^{p}\) norms, from which concentration bounds are subsequently derived.
We may proceed similarly, using the following generalization of (2): given a random variable \(Z\) which satisfies the moment condition
\[\|Z\|_{L^{p}}\leq\gamma\Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{\frac{\beta}{p}}\]
for any \(p\in(0,\alpha)\) and some \(\gamma,\beta>0\), a standard Chebyshev-type argument shows that
\[\mathbb{P}(|Z|\geq t)\leq(\alpha/\beta)^{\beta}\mathrm{e}^{\beta}\log(t/ \gamma)^{\beta}(\gamma/t)^{\alpha} \tag{9}\]
for any \(t>\gamma\mathrm{e}^{\beta/\alpha}\) (here one argues similarly as in (2), choosing \(p=\alpha-\beta/\log(t/\gamma)\)).
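The optimization behind (9) can be illustrated numerically: the sketch below (ours) compares the Markov bound \((\gamma/t)^{p}(\alpha/(\alpha-p))^{\beta}\) minimized over a grid of \(p\in(0,\alpha)\) with the closed-form expression in (9), which the choice \(p=\alpha-\beta/\log(t/\gamma)\) reproduces exactly.

```python
import numpy as np

# Illustration of the Chebyshev-type optimization behind (9): under the moment bound
# ||Z||_{L^p} <= gamma * (alpha/(alpha-p))^{beta/p}, Markov's inequality gives
# P(|Z| >= t) <= (gamma/t)^p (alpha/(alpha-p))^beta, which the closed form in (9)
# matches at p = alpha - beta/log(t/gamma).
alpha, beta, gam = 3.0, 2.0, 1.0

def markov_bound(p, t):
    return (gam / t) ** p * (alpha / (alpha - p)) ** beta

for t in (10.0, 100.0, 1000.0):
    ps = np.linspace(1e-3, alpha - 1e-6, 200_000)
    grid_min = markov_bound(ps, t).min()
    p_star = alpha - beta / np.log(t / gam)
    closed_form = (alpha / beta) ** beta * np.e**beta * np.log(t / gam) ** beta * (gam / t) ** alpha
    print(f"t = {t:7.1f}   grid minimum {grid_min:.3e}   value at p* {markov_bound(p_star, t):.3e}"
          f"   closed form (9) {closed_form:.3e}")
```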
In the sequel, for a random vector \(X=(X_{1},\ldots,X_{n})\) with independent centered components (i. e., \(\mathbb{E}[X_{i}]=0\) for all \(i\)), we study functions of type
\[f_{d}(X):=f_{d,A}(X):=\sum_{i_{1},\ldots,i_{d}=1}^{n}a_{i_{1}\ldots i_{d}}X_{i _{1}}\cdots X_{i_{d}}, \tag{10}\]
where \(A=(a_{i_{1}\ldots i_{d}})\in\mathbb{R}^{n^{d}}\) is a tensor of real coefficients such that \(a_{i_{1}\ldots i_{d}}=0\) whenever \(i_{j}=i_{j^{\prime}}\) for some \(j\neq j^{\prime}\) (we also say that \(A\) has _generalized diagonal_\(0\)). These polynomials are linear in every component, hence they are also referred to as multilinear (or tetrahedral) polynomials.
For multilinear polynomials in heavy-tailed random variables, we may show the following elementary \(L^{p}\) (and concentration) bound, which shares some similarities with [1, Theorem 1.5] (whose proof it partially mimics). Here and all over the rest of this section, absolute constants with values in \((0,\infty)\) are typically denoted by \(C\), constants which depend on some quantities \(Q_{1},\ldots,Q_{m}\) only are denoted by \(C_{Q_{1},\ldots,Q_{m}}\), and the precise values of the constants may vary from line to line (especially in the course of the proofs).
**Proposition 6**.: Let \(X=(X_{1},\ldots,X_{n})\) be a random vector with independent centered components such that there exist \(\alpha>2\) and \(b>0\) with \(\mathbb{P}(|X_{i}|\geq t)\leq(b/t)^{\alpha}\) for any \(t\geq b\) and any \(i=1,\ldots,n\). Let \(A=(a_{i_{1}\ldots i_{d}})\) be a tensor with generalized diagonal \(0\). Then,
\[\|f_{d,A}\|_{p}\leq C_{d,\alpha}\|A\|_{\mathrm{HS}}b^{d}\Big{(}\frac{\alpha}{ \alpha-p}\Big{)}^{d/p}\]
for any \(p\in[2,\alpha)\). In particular,
\[\mathbb{P}(|f_{d,A}(X)|\geq t)\leq C_{d,\alpha}\log^{d}\Big{(}\frac{t}{\|A\|_{ \mathrm{HS}}b^{d}}\Big{)}\Big{(}\frac{\|A\|_{\mathrm{HS}}b^{d}}{t}\Big{)}^{\alpha}\]
for all \(t\geq C_{d,\alpha}\|A\|_{\mathrm{HS}}b^{d}\).
Proof.: In view of (9), it suffices to prove the \(L^{p}\) bound, and moreover, we may clearly assume \(b=1\). Let \(X^{(1)},\ldots,X^{(d)}\) be independent copies of the vector \(X\) and \((\varepsilon_{i}^{(j)})\), \(i\leq n\), \(j\leq d\), a set of i.i.d. Rademacher variables which are independent of the \((X^{(j)})_{j}\). First, by standard decoupling (cf. [1, Theorem 3.1.1]) and symmetrization inequalities ([1, Lemma 1.2.6] applied iteratively), we have
\[\Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}}X_{i_{1}} \cdots X_{i_{d}}\Big{\|}_{p} \leq C_{d}\Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}}X _{i_{1}}^{(1)}\cdots X_{i_{d}}^{(d)}\Big{\|}_{p}\] \[\leq C_{d}\Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}} \varepsilon_{i_{1}}^{(1)}X_{i_{1}}^{(1)}\cdots\varepsilon_{i_{d}}^{(d)}X_{i_{ d}}^{(d)}\Big{\|}_{p}.\]
By Kwapien's contraction principle [1, Theorem 1] (more precisely, its decoupled analogue, which follows easily by iteration of Kwapien's result for linear forms conditionally on the other random variables), we furthermore obtain that
\[\Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}}\varepsilon_{i_{1}}^{(1)} X_{i_{1}}^{(1)}\cdots\varepsilon_{i_{d}}^{(d)}X_{i_{d}}^{(d)}\Big{\|}_{p}\leq \Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}}Y_{i_{1}}^{(1)}\cdots Y_ {i_{d}}^{(d)}\Big{\|}_{p}\]
for any \(p\geq 1\), where \(Y_{1},\ldots,Y_{n}\) is a set of i.i.d. random variables with symmetrized Pareto distribution \(Y_{i}\sim\mathrm{Par}_{\mathrm{s}}(\alpha,1)\).
Let us now first assume \(d=1\). By [1, Theorem 1.1] applied to the symmetric log-convex random variables \(a_{1}Y_{1},\ldots,a_{n}Y_{n}\), we obtain
\[\Big{\|}\sum_{i=1}^{n}a_{i}Y_{i}\Big{\|}_{p} \leq C\Big{(}\Big{(}\sum_{i=1}^{n}|a_{i}|^{p}\|Y_{i}\|_{p}^{p} \Big{)}^{1/p}+\sqrt{p}\Big{(}\sum_{i=1}^{n}a_{i}^{2}\|Y_{i}\|_{2}^{2}\Big{)}^{1 /2}\Big{)}\] \[=C\Big{(}\Big{(}\sum_{i=1}^{n}|a_{i}|^{p}\Big{)}^{1/p}\Big{(} \frac{\alpha}{\alpha-p}\Big{)}^{1/p}+\sqrt{p}\Big{(}\sum_{i=1}^{n}a_{i}^{2} \Big{)}^{1/2}\Big{(}\frac{\alpha}{\alpha-2}\Big{)}^{1/2}\Big{)} \tag{11}\] \[\leq C_{\alpha}\Big{(}\sum_{i=1}^{n}a_{i}^{2}\Big{)}^{1/2}\Big{(} \frac{\alpha}{\alpha-p}\Big{)}^{1/p}=C_{\alpha}\|A\|_{\mathrm{HS}}\Big{(}\frac{ \alpha}{\alpha-p}\Big{)}^{1/p}\]
for any \(p\in[2,\alpha)\). This may be iterated to arrive at
\[\Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}}Y_{i_{1}}^{(1)}\cdots Y_{i _{d}}^{(d)}\Big{\|}_{p}\leq C_{\alpha}^{d}\|A\|_{\mathrm{HS}}\Big{(}\frac{ \alpha}{\alpha-p}\Big{)}^{d/p} \tag{12}\]
To see this, assume that (12) holds up to order \(d-1\). It follows that
\[\Big{\|}\sum_{i_{1},\ldots,i_{d}}a_{i_{1}\ldots i_{d}}Y_{i_{1}}^{( 1)}\cdots Y_{i_{d}}^{(d)}\Big{\|}_{p}^{2}\] \[\leq C_{\alpha}^{2(d-1)}\Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{2 (d-1)/p}\Big{\|}(\sum_{i_{1},\ldots,i_{d-1}}(\sum_{i_{d}=1}^{n}a_{i_{1}\ldots i _{d}}Y_{i_{d}}^{(d)})^{2})^{1/2}\Big{\|}_{p}^{2}\] \[=C_{\alpha}^{2(d-1)}\Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{2(d-1 )/p}\Big{\|}\sum_{i_{1},\ldots,i_{d-1}}(\sum_{i_{d}=1}^{n}a_{i_{1}\ldots i_{d }}Y_{i_{d}}^{(d)})^{2}\Big{\|}_{p/2}\] \[\leq C_{\alpha}^{2(d-1)}\Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{2 (d-1)/p}\sum_{i_{1},\ldots,i_{d-1}}\Big{\|}(\sum_{i_{d}=1}^{n}a_{i_{1}\ldots i _{d}}Y_{i_{d}}^{(d)})^{2}\Big{\|}_{p/2}\] \[=C_{\alpha}^{2(d-1)}\Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{2(d-1 )/p}\sum_{i_{1},\ldots,i_{d-1}}\Big{\|}\sum_{i_{d}=1}^{n}a_{i_{1}\ldots i_{d}} Y_{i_{d}}^{(d)}\Big{\|}_{p}^{2}\] \[\leq C_{\alpha}^{2d}\Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{2d/p} \sum_{i_{1},\ldots,i_{d-1}}\|a_{i_{1}\ldots i_{d-1}}\|_{2}^{2}=C_{\alpha}^{2d} \Big{(}\frac{\alpha}{\alpha-p}\Big{)}^{2d/p}\|A\|_{\mathrm{HS}}^{2},\]
using the induction hypothesis in the first and the case \(d=1\) in the last inequality, and denoting by \(a_{i_{1}\ldots i_{d-1}}\) the vector in \(\mathbb{R}^{n}\) defined by fixing all the coordinates of the tensor \(A\) but the last one.
Note that the \(L^{p}\) estimates from [1] we used in the proof hold for \(p\geq 2\) only, so that we must assume \(\alpha>2\) in Proposition 6. Similarly, one may think of replacing the assumption \(\mathbb{P}(|X_{i}|\geq t)\leq(b/t)^{\alpha}\) by a condition on the \(L^{p}\) norms. However, in the proof we need stochastic dominance by a log-convex distribution. In particular, it is possible to reformulate Proposition 6 with the Pareto-type tails replaced by the tails of the random variables introduced in Proposition 5.
Proposition 6 yields concentration bounds which, up to logarithmic error and rescaling, do not essentially change as \(d\) increases. While this is not too surprising in view of (3), it remarkably differs from the behaviour of polynomial chaos in sub-Gaussian (and subexponential) random variables, where the tails get significantly heavier as \(d\) increases. For instance, the tails of a \(d\)-th order chaos in sub-Gaussian random variables will be \(2/d\)-subexponential (i. e. of order \(\exp(-t^{2/d})\)) for large \(t\).
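This behaviour can be illustrated by a small Monte Carlo experiment (ours, purely illustrative and not part of the results above): for multilinear chaos in symmetrized Pareto variables, a standard Hill estimator applied to simulated values of the linear form (\(d=1\)) and of a second-order chaos (\(d=2\)) should return tail indices close to \(\alpha\) in both cases, in line with the polynomial decay rate being essentially independent of \(d\) up to logarithmic factors; the coefficient choices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, N, k = 3.0, 20, 200_000, 500   # tail index, dimension, sample size, Hill order

def sym_pareto(size):
    # Symmetrized Par(alpha, 1): random sign times U**(-1/alpha), U uniform on (0, 1).
    return rng.choice([-1.0, 1.0], size=size) * rng.random(size) ** (-1.0 / alpha)

def hill(sample, k):
    # Standard Hill estimator of the tail index from the k largest observations.
    top = np.sort(sample)[-(k + 1):]
    return k / np.sum(np.log(top[1:] / top[0]))

X = sym_pareto((N, n))
a = rng.normal(size=n)                 # arbitrary coefficient vector
A = rng.normal(size=(n, n))            # arbitrary coefficient matrix ...
np.fill_diagonal(A, 0.0)               # ... with generalized diagonal 0

f1 = np.abs(X @ a)                               # |linear form|, d = 1
f2 = np.abs(np.einsum('ki,ij,kj->k', X, A, X))   # |multilinear chaos|, d = 2

print(f"Hill estimate, d = 1: {hill(f1, k):.2f}")
print(f"Hill estimate, d = 2: {hill(f2, k):.2f}   (alpha = {alpha})")
```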
Next, we show a version of Proposition 6 which also includes diagonal terms. Note that if \(X\sim\mathrm{Par}(\alpha,b)\) and \(k\in\mathbb{N}\), then \(X^{k}\sim\mathrm{Par}(\alpha/k,b^{k})\). As a consequence, unlike in the sub-Gaussian or sub-exponential case again, the tail behaviour depends on the order of the largest power to which a single random variable \(X_{i}\) is raised. More precisely, we may write \(A=A_{1}+\ldots+A_{d}\) with \(d\)-tensors \(A_{1},\ldots,A_{d}\) such that
\[(A_{k})_{i_{1}\ldots i_{d}}=\begin{cases}a_{i_{1}\ldots i_{d}}&\text{if }k= \max\{\nu\colon\exists\,i_{j_{1}}=\ldots=i_{j_{\nu}},j_{1},\ldots,j_{\nu}\text{ pw. different}\}\\ 0&\text{else.}\end{cases}\]
For instance, if \(A\) has generalized diagonal \(0\) as in Proposition 6, then \(A=A_{1}\).
Furthermore, if diagonal terms appear in a \(d\)-th order chaos, it is necessary to "recenter" it. This is already obvious for \(d=2\), where the functional under
consideration may be rewritten as
\[\sum_{i\neq j}a_{ij}(X_{i}-\mathbb{E}[X_{i}])(X_{j}-\mathbb{E}[X_{j}])+\sum_{i=1} ^{n}a_{ii}(X_{i}^{2}-\mathbb{E}[X_{i}^{2}]).\]
Similarly, in higher orders, we need to replace each of the monomials \(X_{i_{1}}^{k_{1}}\cdots X_{i_{m}}^{k_{m}}\) by \((X_{i_{1}}^{k_{1}}-\mathbb{E}[X_{i_{1}}^{k_{1}}])\cdots(X_{i_{m}}^{k_{m}}- \mathbb{E}[X_{i_{m}}^{k_{m}}])\). As a consequence, given any \(d\)-tensor \(A\in\mathbb{R}^{n^{d}}\), the natural generalization of (10) are functionals of the form
\[f_{d,A}(X):=\sum_{\nu=1}^{d}\sum_{\begin{subarray}{c}k_{1}\geq\ldots\geq k_{\nu}\geq 1\\ k_{1}+\ldots+k_{\nu}=d\end{subarray}}\sum_{i_{1}\neq\ldots\neq i_{\nu}}\tilde{a}_{i_{1}\ldots i_{\nu}}^{k_{1}\ldots k_{\nu}}(X_{i_{1}}^{k_{1}}-\mathbb{E}[X_{i_{1}}^{k_{1}}])\cdots(X_{i_{\nu}}^{k_{\nu}}-\mathbb{E}[X_{i_{\nu}}^{k_{\nu}}]). \tag{13}\]
Here, in terms of the tensor \(A\), we have
\[\tilde{a}_{i_{1}\ldots i_{\nu}}^{k_{1}\ldots k_{\nu}}=\sum a_{j_{1}\ldots j_{ d}},\]
where the sum extends over all \(d\)-tuples \(j_{1},\ldots,j_{d}\) in which every index \(i_{1},\ldots,i_{\nu}\) appears exactly \(k_{1},\ldots,k_{\nu}\) times. In particular, if \(a_{j_{1}\ldots j_{d}}\neq 0\), then it is a non-zero entry of \(A_{k_{1}}\) (recall that \(k_{1}\geq\ldots\geq k_{\nu}\)). In this sense, (13) centers and "regroups" the indexes according to how many _different_ indexes appear in \(i_{1},\ldots,i_{d}\) and to which powers they are raised.
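For \(d=2\) the splitting and the recentering in (13) are particularly transparent: \(A_{1}\) is the off-diagonal part, \(A_{2}\) the diagonal part, and the functional reduces to the expression displayed above. The following sketch (illustrative; the Pareto parameters are arbitrary) constructs it explicitly for centered Pareto entries.

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 5, 4.0
A = rng.normal(size=(n, n))            # arbitrary coefficient matrix

A1 = A - np.diag(np.diag(A))           # off-diagonal part (generalized diagonal 0)
A2d = np.diag(A)                       # diagonal entries

# Centered Par(alpha, 1) sample and the second moment of the centered variable.
X = rng.random(n) ** (-1.0 / alpha) - alpha / (alpha - 1.0)
EX2 = alpha / (alpha - 2.0) - (alpha / (alpha - 1.0)) ** 2

# Recentered d = 2 functional: sum_{i != j} a_ij X_i X_j + sum_i a_ii (X_i^2 - E[X_i^2]).
f_2A = X @ A1 @ X + np.sum(A2d * (X**2 - EX2))
print("f_{2,A}(X) =", f_2A)
```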
**Proposition 7**.: Let \(X=(X_{1},\ldots,X_{n})\) be a random vector with independent centered components such that there exist \(\alpha>2\) and \(b>0\) with \(\mathbb{P}(|X_{i}|\geq t)\leq(b/t)^{\alpha}\) for any \(t\geq b\) and any \(i=1,\ldots,n\). Let \(A=(a_{i_{1}\ldots i_{d}})\in\mathbb{R}^{n^{d}}\) be a \(d\)-tensor and \(f_{d,A}\) the functional defined in (13). Then,
\[\|f_{d,A}(X)\|_{p} \leq C_{d,\alpha}\sum_{k=1}^{k^{*}}\lVert A_{k}\rVert_{\mathrm{ HS}}b^{d}\Big{(}\frac{\alpha/k}{\alpha/k-p}\Big{)}^{(d-k+1)/p} \tag{14}\] \[\leq C_{d,\alpha}\lVert A\rVert_{\mathrm{HS}}b^{d}\Big{(}\frac{ \alpha/k^{*}}{\alpha/k^{*}-p}\Big{)}^{(d-k^{*}+1)/p}\]
for any \(p\in[2,\alpha/k^{*})\), where \(k^{*}:=\max\{k\colon A_{k}\neq 0\}\). In particular,
\[\mathbb{P}(|f_{d,A}(X)|\geq t)\leq C_{d,\alpha}\max_{k=1,\ldots,k^{*}}\Big{(} \log^{d-k+1}\Big{(}\frac{t}{\lVert A_{k}\rVert_{\mathrm{HS}}b^{d}}\Big{)} \Big{(}\frac{\lVert A_{k}\rVert_{\mathrm{HS}}b^{d}}{t}\Big{)}^{\alpha/k}\Big{)}\]
for all \(t\geq C_{d,\alpha}b^{d}\max_{k=1,\ldots,k^{*}}\lVert A_{k}\rVert_{\mathrm{ HS}}\).
Note that Proposition 7 implicitly requires that \(\alpha/k^{*}>2\). To illustrate the result, let us consider the case of \(d=2\). Here, \(A=A_{1}+A_{2}=A^{\mathrm{od}}+A^{\mathrm{d}}\) for the off-diagonal and diagonal parts of \(A\), so that the tail bound given in Proposition 7 reads
\[\mathbb{P}(|f_{2,A}(X)|\geq t)\leq C_{\alpha}\max\Big{(} \log^{2}\Big{(}\frac{t}{\lVert A^{\mathrm{od}}\rVert_{\mathrm{HS}}b^{2}}\Big{)}\Big{(}\frac{\lVert A^{\mathrm{od}}\rVert_{\mathrm{HS}}b^{2}}{t}\Big{)}^{\alpha}, \tag{15}\] \[\log\Big{(}\frac{t}{\lVert A^{\mathrm{d}}\rVert_{\mathrm{HS}}b^{2}}\Big{)}\Big{(}\frac{\lVert A^{\mathrm{d}}\rVert_{\mathrm{HS}}b^{2}}{t}\Big{)}^{\alpha/2}\Big{)}.\]
As we will discuss in more detail in the following subsection, this can be regarded as a crude or preliminary Hanson-Wright type inequality for heavy-tailed random variables.
Proof.: We again assume \(b=1\). Note that \(\mathbb{P}(|X_{i_{j}}^{k_{j}}|\geq t)\leq t^{-\alpha/k_{j}}\) and therefore also \(\mathbb{P}(|X_{i_{j}}^{k_{j}}-\mathbb{E}[X_{i_{j}}^{k_{j}}]|\geq t)\leq C_{ \alpha,k_{j}}t^{-\alpha/k_{j}}\) for \(t\) sufficiently large (depending on \(\alpha\) and
\(k_{j}\)). Therefore, we may apply Proposition 6 with \(\nu\) instead of \(d\) and \(\alpha/k_{1}\) instead of \(\alpha\) to each of the summands
\[f_{\nu,\tilde{A}}^{k_{1},\ldots,k_{\nu}}(X):=\sum_{i_{1}\neq\ldots\neq i_{\nu}} \tilde{a}_{i_{1}\ldots i_{\nu}}^{k_{1}\ldots k_{\nu}}(X_{i_{1}}^{k_{1}}-\mathbb{ E}[X_{i_{1}}^{k_{1}}])\cdots(X_{i_{\nu}}^{k_{\nu}}-\mathbb{E}[X_{i_{\nu}}^{k_{\nu}}])\]
from the representation (13), where we suppress the dependency of \(\tilde{A}\) on \(k_{1},\ldots,k_{\nu}\) in our notation. Clearly, \(\|\tilde{A}\|_{\mathrm{HS}}\leq C_{d}\|A_{k_{1}}\|_{\mathrm{HS}}\) (recall that \(k_{1}=\max_{j}k_{j}\)), and hence,
\[\|f_{\nu,\tilde{A}}^{k_{1},\ldots,k_{\nu}}(X)\|_{p} \leq C_{d,\alpha}\|\tilde{A}\|_{\mathrm{HS}}\Big{(}\frac{\alpha/k _{1}}{\alpha/k_{1}-p}\Big{)}^{\nu/p}\] \[\leq C_{d,\alpha}\|A_{k_{1}}\|_{\mathrm{HS}}\Big{(}\frac{\alpha/ k_{1}}{\alpha/k_{1}-p}\Big{)}^{(d-k_{1}+1)/p}.\]
Summing up according to \(k_{1}=k\in\{1,\ldots,k^{*}\}\) we arrive at (14). To derive the tail bound, use that \(f_{d,A}=\sum_{k=1}^{k^{*}}f_{k,A_{k}}\), so that by union bound,
\[\mathbb{P}(|f_{d,A}(X)|\geq t)\leq\sum_{k=1}^{k^{*}}\mathbb{P}(|f_{k,A_{k}}(X) |\geq t/k^{*}).\]
As in Proposition 6, the claim now follows from the \(L^{p}\) estimates we derived above (for each \(k\)) in combination with (9).
### Discussion and open questions
For general \(d\geq 2\), the results from Propositions 6 and 7 seem new. However, alternate bounds for linear forms, i. e., the case of \(d=1\), can be deduced from existing literature. In particular, we shall consider the Fuk-Nagaev inequality which is due to [11], cf. also [10] and [14] (for a more recent partial refinement, see [1]). Given a sequence \(X_{1},\ldots,X_{n}\) of independent random variables with \(\mathbb{E}[X_{i}]=0\) and finite second moments, let \(S_{n}:=X_{1}+\ldots+X_{n}\), \(\sigma^{2}:=\mathbb{E}[X_{1}^{2}]+\ldots+\mathbb{E}[X_{n}^{2}]\), and
\[C_{p}(X):=\Big{(}\sum_{i=1}^{n}\mathbb{E}[\max(0,X_{i})^{p}]\Big{)}^{1/p}\]
for any \(p\geq 1\). Then, by [14, Corollary 1.8],
\[\mathbb{P}(S_{n}>t)\leq\Big{(}\frac{(p+2)C_{p}(X)}{pt}\Big{)}^{p}+\exp\Big{(} -\frac{2t^{2}}{(p+2)^{2}\mathrm{e}^{p}\sigma^{2}}\Big{)}\]
for any \(t>0\).
Now assume that \(X_{1}^{\prime},\ldots,X_{n}^{\prime}\) have \(L^{p}\) norms at most of Pareto type (1), say, with \(b=1\), and let \(X_{1}:=a_{1}X_{1}^{\prime},\ldots,X_{n}:=a_{n}X_{n}^{\prime}\) for real-valued coefficients \(a_{1},\ldots,a_{n}\). Writing \(a=(a_{1},\ldots,a_{n})\) and \(\|a\|_{\ell^{p}}\) for its \(\ell^{p}\) norm, we clearly have
\[C_{p}(X)^{p}\leq\|a\|_{\ell^{p}}^{p}\frac{\alpha}{\alpha-p},\qquad\sigma^{2} \leq\|a\|_{\ell^{2}}^{2}\frac{\alpha}{\alpha-2}.\]
It follows that in this case,
\[\mathbb{P}(S_{n}>t)\leq\frac{(p+2)^{p}}{p^{p}}\frac{\alpha}{\alpha-p}\frac{\| a\|_{\ell^{p}}^{p}}{t^{p}}+\exp\Big{(}-\frac{2(\alpha-2)t^{2}}{(p+2)^{2} \mathrm{e}^{p}\|a\|_{\ell^{2}}^{2}\alpha}\Big{)}\]
for any \(p\in(2,\alpha)\) and any \(t>0\). Plugging in \(p=\alpha-1/\log(t/\|a\|_{\ell^{\alpha}})\) together with some elementary estimates thus leads to the bound
\[\mathbb{P}(S_{n}\geq t)\leq\frac{(\alpha+2)^{\alpha}}{\alpha^{\alpha}}\mathrm{e }\alpha\log\Big{(}\frac{t}{\|a\|_{\ell^{\alpha}}}\Big{)}\frac{\|a\|_{\ell^{ \alpha}}^{\alpha}}{t^{\alpha}}+\exp\Big{(}-\frac{2(\alpha-2)}{(\alpha+2)^{2} \mathrm{e}^{\alpha}\alpha}\cdot\frac{t^{2}}{\|a\|_{\ell^{2}}^{2}}\Big{)}.\]
The same holds for the lower tails \(\mathbb{P}(S_{n}\leq-t)\). The resulting (two-sided) inequality is obviously sharper than the bound given by Proposition 6 for \(d=1\) and \(b=1\), which reads
\[\mathbb{P}\Big{(}\Big{|}\sum_{i=1}^{n}a_{i}X_{i}\Big{|}\geq t\Big{)}\leq C_{ \alpha}\log(t/\|a\|_{\ell^{2}})\Big{(}\frac{\|a\|_{\ell^{2}}}{t}\Big{)}^{\alpha}\]
for all \(t\geq C_{\alpha}\|a\|_{\ell^{2}}\).
In fact, this result corresponds to what we heuristically get from (11) if we naively optimize each of the two summands of (11) on its own, recalling (1) and standard results about sub-Gaussian tail behavior like [25, Proposition 2.5.2] (in particular pretending we may consider the second summand for any \(p\in[2,\infty)\)). However, this is an informal argument, and in practice we can only address the "sub-Gaussian" part for \(p\in[2,\alpha)\), where it is almost meaningless (one needs to access \(p\to\infty\) to obtain sub-Gaussian tail bounds). Yet, at a heuristic level the \(L^{p}\) bounds do seem to imply the Fuk-Nagaev results, and it is an interesting question whether this observation can be made rigorous.
Similar remarks hold for \(d\geq 2\). Here, the aforementioned results for Gaussian chaos or polynomials in sub-Gaussian or sub-exponential random variables identify a wealth of different tail levels which scale with different tensor-type norms of the coefficient tensor \(A\) (or more generally of the derivatives of the polynomial \(f\) under consideration). By contrast, Propositions 6 and 7 only involve much fewer levels of tail decay and always the Hilbert-Schmidt norm, which is the largest norm among the family of norms which appears in those results.
For instance, in the case of \(d=2\) the by now classical Hanson-Wright inequality (cf. e. g. [13, 14, 15]) states that if the \(X_{1},\ldots,X_{n}\) are independent sub-Gaussian random variables with mean \(0\) and variance \(1\),
\[\mathbb{P}(|f_{2,A}(X)-\mathbb{E}[f_{2,A}(X)]|\geq t)\leq 2\exp\Big{(}-\frac{ 1}{C}\min\Big{(}\Big{(}\frac{t}{\|A\|_{\mathrm{HS}}}\Big{)}^{2},\frac{t}{\|A \|_{\mathrm{op}}}\Big{)}\Big{)},\]
where \(\|A\|_{\mathrm{op}}\) is the operator norm of \(A\) and \(C>0\) is some constant which only depends on the sub-Gaussian norms of the \(X_{i}\). Analogues of the Hanson-Wright inequality for sub-exponential random vectors can be found in [1, 1]. While identifying two regimes of tail decay, the "heavy-tailed" Hanson-Wright type inequality (15) does not yet seem to be optimal in the sense of involving different norms (of the matrix \(A\)) and scaling regimes. One may invoke a sub-Gaussian part by treating the diagonal part with the Fuk-Nagaev result instead of Proposition 6, which yields the bound
\[\begin{split}&\mathbb{P}(|f_{2,A}(X)|\geq t)\leq C_{\alpha}\max \Big{(}\exp\Big{(}-C_{\alpha}^{\prime}\Big{(}\frac{t}{\|a^{\mathrm{d}}\|_{\ell ^{2}}b^{2}}\Big{)}^{2}\Big{)},\\ &\quad\log\Big{(}\frac{t}{\|a^{\mathrm{d}}\|_{\ell^{\alpha/2}}b^{ 2}}\Big{)}\Big{(}\frac{\|a^{\mathrm{d}}\|_{\ell^{\alpha/2}}b^{2}}{t}\Big{)}^{ \alpha/2},\log^{2}\Big{(}\frac{t}{\|A^{\mathrm{od}}\|_{\mathrm{HS}}b^{2}} \Big{)}\Big{(}\frac{\|A^{\mathrm{od}}\|_{\mathrm{HS}}b^{2}}{t}\Big{)}^{\alpha }\Big{)},\end{split} \tag{16}\]
where \(a^{\mathrm{d}}\) is the vector consisting of the diagonal entries of \(A\). However, even if (16) improves upon (15) it should still be non-optimal.
In principle, an obvious strategy would be to sharpen the proof of Proposition [13] by the results derived in [10] (similar to the methods used in [11]). However, even if we possibly arrive at \(L^{p}\) bounds of the form
\[\|f_{d,A}(X)\|_{L^{p}}\leq\sum_{i}C(i,\alpha,A)\varphi_{i,\alpha}(p)\]
for suitable coefficients \(C(i,\alpha,A)\) and \(p\)-dependent functions \(\varphi_{i,\alpha}(p)\), it is again hard to derive multilevel tail inequalities from them for similar reasons as sketched in the case of \(d=1\). Indeed, in the case of exponential-type tails, \(\varphi(p)\approx p^{\kappa}\) for suitable exponents \(\kappa>0\), while in the heavy-tailed situation \(\varphi_{i,\alpha}(p)\) blows up as \(p\) approaches a critical value \(\kappa\approx\alpha/i\). In particular, these inequalities will only hold for \(p\in[2,\alpha/d)\), so that we can only access \(p\) in a region where most of the \(L^{p}\) bounds are basically meaningless.
Note in passing that for this reason, the proof of the tail bound in Proposition 7 involves a union bound (thus splitting the functional into several parts), which is not necessary in the case of sub-exponential random variables. On the other hand, slightly generalizing the case of \(d=2\) in Proposition 7, observe that if \(X\) is a positive random variable such that
\[\mathbb{P}(X\geq x)=\max\Big{(}\frac{b^{2\alpha}}{x^{2\alpha}},\frac{a^{\alpha}}{x^{\alpha}}\Big{)} \tag{17}\]
for all \(x\geq b\), where \(0<a<b<\infty\) and \(\alpha>0\), we have
\[\mathbb{E}[X^{p}]\leq b^{p}\frac{2\alpha}{2\alpha-p}+a^{p}\frac{\alpha}{ \alpha-p}. \tag{18}\]
Indeed, an easy calculation yields
\[\mathbb{E}[X^{p}] =p\int_{0}^{b}x^{p-1}dx+p\int_{b}^{b^{2}/a}x^{p-1}\frac{b^{2\alpha}}{x^{2\alpha}}dx+p\int_{b^{2}/a}^{\infty}x^{p-1}\frac{a^{\alpha}}{x^{\alpha}}dx\] \[=b^{p}+\frac{pb^{2\alpha}}{p-2\alpha}\Big{(}\frac{b^{2p-4\alpha}}{a^{p-2\alpha}}-b^{p-2\alpha}\Big{)}-\frac{pa^{\alpha}}{p-\alpha}\frac{b^{2p-2\alpha}}{a^{p-\alpha}}\] \[=b^{p}\frac{2\alpha}{2\alpha-p}+a^{p}\frac{b^{2p-2\alpha}}{a^{2p-2\alpha}}\frac{p}{2\alpha-p}\frac{\alpha}{\alpha-p}\leq b^{p}\frac{2\alpha}{2\alpha-p}+a^{p}\frac{\alpha}{\alpha-p}.\]
However, it is not clear whether (18) also implies (17) (maybe up to a logarithmic error), which could serve as a starting point for more precise tail bounds for chaos in heavy-tailed random variables.
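As a quick numerical illustration of (18) (not part of the argument above), the following Python sketch samples from a distribution with the tail function (17) by inverse transform sampling and compares the empirical \(p\)-th moment with the bound; the concrete values of \(a\), \(b\), \(\alpha\) and \(p\) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, alpha, p = 0.5, 2.0, 6.0, 2.5  # arbitrary choices with 0 < a < b and 2 < p < alpha

def sample(n):
    # Inverse transform sampling for P(X >= x) = max(b^{2a}/x^{2a}, a^a/x^a), x >= b, as in (17).
    u = rng.uniform(size=n)               # u plays the role of the tail value P(X >= x)
    thresh = (a / b) ** (2 * alpha)       # tail value at the breakpoint x = b^2 / a
    x_near = b * u ** (-1 / (2 * alpha))  # branch b <= x <= b^2 / a
    x_far = a * u ** (-1 / alpha)         # branch x >= b^2 / a
    return np.where(u >= thresh, x_near, x_far)

X = sample(2_000_000)
empirical = (X ** p).mean()
bound = b ** p * 2 * alpha / (2 * alpha - p) + a ** p * alpha / (alpha - p)
print(f"empirical E[X^p] = {empirical:.3f}, bound from (18) = {bound:.3f}")
```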
|
2309.03245
|
Testing properties of distributions in the streaming model
|
We study distribution testing in the standard access model and the
conditional access model when the memory available to the testing algorithm is
bounded. In both scenarios, the samples appear in an online fashion and the
goal is to test the properties of distribution using an optimal number of
samples subject to a memory constraint on how many samples can be stored at a
given time. First, we provide a trade-off between the sample complexity and the
space complexity for testing identity when the samples are drawn according to
the conditional access oracle. We then show that we can learn a succinct
representation of a monotone distribution efficiently with a memory constraint
on the number of samples that are stored that is almost optimal. We also show
that the algorithm for monotone distributions can be extended to a larger class
of decomposable distributions.
|
Sampriti Roy, Yadu Vasudev
|
2023-09-06T10:53:29Z
|
http://arxiv.org/abs/2309.03245v1
|
# Testing properties of distributions in the streaming model
###### Abstract
We study distribution testing in the standard access model and the conditional access model when the memory available to the testing algorithm is bounded. In both scenarios, the samples appear in an online fashion and the goal is to test the properties of distribution using an optimal number of samples subject to a memory constraint on how many samples can be stored at a given time. First, we provide a trade-off between the sample complexity and the space complexity for testing identity when the samples are drawn according to the conditional access oracle. We then show that we can learn a succinct representation of a monotone distribution efficiently with a memory constraint on the number of samples that are stored that is almost optimal. We also show that the algorithm for monotone distributions can be extended to a larger class of decomposable distributions.
## 1 Introduction
Sublinear algorithms that analyze massive amounts of data are crucial in many applications currently. Understanding the underlying probability distribution that generates the data is important in this regard. In the field of distribution testing, a sub-field of property testing, the goal is to test whether a given unknown distribution has a property \(\mathcal{P}\) or is far from having the property \(\mathcal{P}\) (where the farness is defined with respect to total variation distance). Starting from the work of Goldreich and Ron ([1]), a vast literature of work has studied the problem of testing probability distributions for important properties like identity, closeness, support size as well as properties relating to the structure of the distribution like monotonicity, k-modality, and histograms among many others; see Canonne's survey ([1]) for an overview of the problems and results.
In the works of Canonne et al ([1]) and Chakraborty et al ([1]), distribution testing with conditional samples was studied. In this model, the algorithm can choose a subset of the support, and the samples of the distribution conditioned on this subset are generated. This allows adaptive sampling from the distribution and can give better sample complexity for a number of problems. In particular, ([1]) and ([1]) give testers for uniformity and other problems that use only a constant number of samples.
The natural complexity measure of interest is the number of samples of the underlying distribution that is necessary to test the property. In many cases, when data is large, it might be infeasible to store all the samples that are generated. A recent line of work has been to study the trade-off between the sample complexity and the space complexity of algorithms for learning and testing properties of distributions. This model can be equivalently thought of as a data stream of i.i.d
samples from an unknown distribution, with the constraint that you are allowed to store only a small subset of these samples at any point in time.
In this work, we study distribution testing problems in the standard model, and in the model where the algorithm is allowed to condition on sets, in order to better understand the trade-off between the sample complexity and the space complexity. In particular, we study identity testing and testing whether the unknown distribution is monotone. Our work borrows ideas from the recent work of Diakonikolas et al ([1]) and extends them to these problems.
### Related work
Testing and estimating the properties of discrete distributions is well-studied in property testing; see ([1]) for a nice survey of recent results. In our work, we study property testing of discrete distributions under additional memory constraints wherein the algorithm does not have the resources to store all the samples that it obtains.
This line of work has received a lot of attention in recent times. Chien et al ([1]) propose a sample-space trade-off for testing any \((\epsilon,\delta)\)-weakly continuous property, as defined by Valiant ([11]). Another work by Diakonikolas et al ([1]) studies the uniformity, identity, and closeness testing problems and presents trade-offs between the sample complexity and the space complexity of the tester. They use the idea of a _bipartite collision tester_ where, instead of storing all the samples in memory, the testing can be done by storing a subset of the samples and counting the collisions between the stored set and the samples that come later. Another line of work ([1, 1]) focuses on the task of estimating the entropy of distributions from samples in the streaming model, where space is limited. In particular, ([1]) estimate the entropy of an unknown distribution \(D\) up to \(\pm\epsilon\) using constant space. Berg et al ([1]) study the uniformity testing problem in a slightly different model where the testing algorithm is modeled as a finite-state machine.
Property testing with memory constraints has also been studied in the setting of streaming algorithms. Streaming algorithms were first studied in a unified way starting from the seminal work of Alon et al ([1]), where the authors studied the problem of estimating frequency moments. There is a vast amount of literature available on streaming algorithms (see [14, 15]). Bathie et al ([1]) have studied property testing in the streaming model for testing regular languages. Czumaj et al ([16]) show that every constant-query testable property on bounded-degree graphs can be tested on a random-order stream with constant space. Since this line of work is not directly relevant to our work in this paper, we will not delve deeper into it here.
### Our results
In this work, we study the trade-off between sample complexity and space complexity in both the standard access model and the conditional access model. In the standard access model, a set of samples can be drawn independently from an unknown distribution. In the case of the conditional access model, a subset of the domain is given and samples can be drawn from an unknown distribution conditioned on the given set. This is similar to a streaming algorithm where the samples are presented to the algorithm, and the algorithm has a memory constraint of \(m\) bits; i.e., only up to \(m\) bits of samples can be stored in memory.
In the standard access model, which we will refer to as \(\mathsf{SAMP}\), we have a distribution \(D\) over the support \(\{1,2,\ldots,n\}\) and the element \(i\) is sampled with probability \(D(i)\). In the conditional access model, which we will refer to as \(\mathsf{COND}\), the algorithm can choose a set \(S\subseteq\{1,2,\ldots,n\}\) and will obtain samples from the conditional distribution over the set. I.e. the sample \(i\in S\) is returned with probability \(D(i)/D(S)\). In this work, we will work with the case when the conditioning is done on sets of size at most two - we will refer to this conditional oracle as \(\mathsf{PCOND}\) ([10]).
Our results are stated below.
* We propose a memory-efficient identity testing algorithm in the \(\mathsf{PCOND}\) model when the algorithm is restricted by the memory available to store the samples. We adapt the algorithm of Canonne et al ([10]) and reduce the memory requirement by using the \(\mathsf{CountMin}\) sketch ([12]) for storing the frequencies of the samples. The identity testing algorithm uses \(O(\log^{2}n\log\log n/m\epsilon^{2})\) samples from the standard access model, where \(\frac{\log n\sqrt{\log\log n}}{\epsilon}\leq m\leq\frac{\log^{2}n}{\epsilon}\), and \(\tilde{O}(\log^{4}n/\epsilon^{4})\) samples from the conditional access model, and does the following: if \(D=D^{*}\), it returns \(\mathsf{Accept}\) with probability at least \(2/3\), and if \(d_{TV}(D,D^{*})\geq\epsilon\), it returns \(\mathsf{Reject}\) with probability at least \(2/3\). It uses only \(O(\frac{m}{\epsilon})\) bits of memory. We also observe that by applying the oblivious decomposition [11], performing identity and closeness testing on monotone distributions over \([n]\) can be reduced to performing the corresponding tasks on arbitrary distributions over \([O(\log\left(n\epsilon\right)/\epsilon)]\). We use the streaming-model identity tester from ([11]) and obtain an \(O(\log\left(n\epsilon\right)\log\log\left(n\epsilon\right)/m\epsilon^{5})\) standard access query identity tester for monotone distributions, where \(\log\log\left(n\epsilon\right)/\epsilon^{2}\leq m\leq(\log\left(n\epsilon\right)/\epsilon)^{9/10}\). Their closeness testing algorithm also implies a closeness tester for monotone distributions which uses \(O(\log\left(n\epsilon\right)\sqrt{\log\log\left(n\epsilon\right)}/\sqrt{m}\epsilon^{3})\) samples from the standard access model, where \(\log\log(n\epsilon)\leq m\leq\tilde{\Theta}(\min(\log\left(n\epsilon\right)/\epsilon,\log^{2/3}\left(n\epsilon\right)/\epsilon^{2}))\). Both testers require \(m\) bits of memory.
* We adapt the idea of the _bipartite collision tester_ ([11]) and give an algorithm that uses \(O(\frac{n\log n}{m\epsilon^{8}})\) samples from \(\mathsf{SAMP}\) and tests if the distribution is monotone or far from being monotone. This algorithm requires only \(O(m)\) bits of memory for \(\log^{2}n/\epsilon^{6}\leq m\leq\sqrt{n}/\epsilon^{3}\). This upper bound is nearly tight since we observe that the lower bound for uniformity testing proved by Diakonikolas et al ([11]) applies to our setting as well. In particular, we show that the "no" distribution used in [11] is actually far from monotone, and hence the lower bound directly applies in our setting.
* We extend the idea of the previous algorithm for learning and testing a more general class of distribution called \((\gamma,L)\)-decomposable distribution, which includes monotone and \(k\)-modal distributions. Our algorithm takes \(O(\frac{nL\log\left(1/\epsilon\right)}{m\epsilon^{9}})\) samples from \(D\) and needs \(O(m)\) bits of memory where \(\log n/\epsilon^{4}\leq m\leq O(\sqrt{n\log n}/\epsilon^{3})\).
## 2 Notation and Preliminaries
Throughout this paper, we study distributions \(D\) that are supported over the set \(\{1,2,\ldots,n\}=[n]\). The notion of distance between distributions will be the _total variation distance_ or _statistical distance_, which is defined as follows: for two distributions \(D_{1}\) and \(D_{2}\), the total variation distance is denoted by \(d_{TV}(D_{1},D_{2})=\frac{1}{2}|D_{1}-D_{2}|_{1}=\frac{1}{2}\sum_{x\in[n]}|D_{1}(x)-D_{2}(x)|=\max_{S\subseteq[n]}(D_{1}(S)-D_{2}(S))\). We will use \(\mathcal{U}\) to denote the uniform distribution over \([n]\). We use \(|.|_{1}\) for the \(\ell_{1}\) norm and \(||.||_{2}\) for the \(\ell_{2}\) norm.
Let \(D_{1}\) and \(D_{2}\) be two distributions over \([n]\); if \(d_{TV}(D_{1},D_{2})\leq\epsilon\) for some \(0\leq\epsilon\leq 1\), we say that \(D_{1}\) is \(\epsilon\)-close to \(D_{2}\). Let \(\mathcal{D}\) be the set of all probability distributions supported on \([n]\). A property \(\mathcal{P}\) is a subset of \(\mathcal{D}\). We say that a distribution \(D\) is \(\epsilon\)-far from \(\mathcal{P}\) if \(D\) is \(\epsilon\)-far from all the distributions having the property \(\mathcal{P}\), i.e., \(d_{TV}(D,D^{\prime})>\epsilon\) for every \(D^{\prime}\in\mathcal{P}\).
We define the self-collision probability of the distribution \(D\) to be \(||D||_{2}^{2}\). For a set \(S\) of samples drawn from \(D\), \(\text{coll}(S)\) denotes the pairwise collision count among them. For \(S_{1},S_{2}\subset S\), the _bipartite collision_ count of \(D\) with respect to \(S\) is defined as \(\text{coll}(S_{1},S_{2})\), the number of collisions between \(S_{1}\) and \(S_{2}\).
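For concreteness, the quantities above (the total variation distance, the pairwise collision count \(\text{coll}(S)\), and the bipartite collision count \(\text{coll}(S_{1},S_{2})\)) can be computed directly; the following Python sketch, with small made-up inputs, illustrates this.

```python
from collections import Counter
import numpy as np

def tv_distance(d1, d2):
    # d1, d2: arrays of probabilities over the same support [n]
    return 0.5 * np.abs(np.asarray(d1) - np.asarray(d2)).sum()

def coll(samples):
    # pairwise collision count: number of pairs (i < j) with samples[i] == samples[j]
    return sum(c * (c - 1) // 2 for c in Counter(samples).values())

def bipartite_coll(s1, s2):
    # number of pairs (x, y) in s1 x s2 with x == y
    c1, c2 = Counter(s1), Counter(s2)
    return sum(c1[v] * c2[v] for v in c1)

print(tv_distance([0.5, 0.25, 0.25], [1/3, 1/3, 1/3]))  # 1/6
print(coll([1, 1, 2, 3, 1]))                            # 3 colliding pairs among the three 1's
print(bipartite_coll([1, 2, 2], [2, 3]))                # 2
```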
We will be using the count of collisions among sample points to test closeness to uniformity. The following lemma connects the collision probability and the distance to uniformity.
**Lemma 2.1** ([3]).: _Let \(D\) be a distribution over \([n]\). If \(\max_{x}D(x)\leq(1+\epsilon)\cdot\min_{x}D(x)\), then \(||D||_{2}^{2}\leq(1+\epsilon^{2})/n\). If \(||D||_{2}^{2}\leq(1+\epsilon^{2})/n\), then \(d_{TV}(D,\mathcal{U})\leq\epsilon\)._
One way to test the properties of distributions is to first learn an explicit description of the distribution. We now define the notion of flattened and reduced distributions that will be useful towards this end.
**Definition 2.1** (Flattened and reduced distributions).: _Let \(D\) be a distribution over \([n]\), and let \(\mathcal{I}=\{I_{j}\}_{j=1}^{\ell}\) be a partition of the domain into \(\ell\) disjoint intervals. The flattened distribution \((D^{f})^{\mathcal{I}}\) corresponding to \(D\) and \(\mathcal{I}\) is a distribution over \([n]\) defined as follows: for \(j\in[\ell]\) and \(i\in I_{j}\), \((D^{f})^{\mathcal{I}}(i)=\frac{\sum_{t\in I_{j}}D(t)}{|I_{j}|}\). The reduced distribution \(D^{r}\) is defined over \([\ell]\) such that \(\forall i\in[\ell],D^{r}(i)=D(I_{i})\)._
If a distribution \(D\) is \(\epsilon\)-close to its flattened distribution according to some partition \(\{I_{j}\}_{j=1}^{\ell}\), we say that \(D\) is \((\epsilon,\ell)\)-flattened. We note that if a distribution is monotonically non-increasing, then its flattened distribution is also monotonically non-increasing, but its reduced distribution is not necessarily so.
The following folklore result shows that the empirical distribution is close to the actual distribution provided a sufficient number of samples is taken.
**Lemma 2.2** (Folklore).: _Given a distribution \(D\) supported over \([n]\) and an interval partition \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\), using \(S=O(\frac{\ell^{2}}{\epsilon^{2}}\log\ell)\) points from SAMP, we can obtain an empirical distribution \(\tilde{D}\) over \([\ell]\) in the following way: \(\forall I_{j}\in\mathcal{I},\tilde{D}(I_{j})=\frac{\text{occ}(S,I_{j})}{|S|}\), where \(\text{occ}(S,I_{j})\) is the number of samples from \(S\) that lie inside \(I_{j}\). Then, with probability at least \(2/3\), for every interval \(I_{j}\), \(|D(I_{j})-\tilde{D}(I_{j})|\leq\frac{\epsilon}{\ell}\). Moreover, if \((D^{f})^{\mathcal{I}}\) and \((\tilde{D}^{f})^{\mathcal{I}}\) denote the flattened distributions of \(D\) and \(\tilde{D}\) respectively, then \(d_{TV}((D^{f})^{\mathcal{I}},(\tilde{D}^{f})^{\mathcal{I}})<\epsilon\)._
While designing a tester for monotonicity, we use the following result due to Birgé ([10]).
**Lemma 2.3** (Oblivious partitioning [10]).: _Let \(D\) be a non-increasing distribution over \([n]\) and let \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be an interval partition of \([n]\) such that \(|I_{j}|=(1+\epsilon)^{j}\), for \(0<\epsilon<1\). Then \(\mathcal{I}\) has the following properties:_
* \(\ell=O(\frac{1}{\epsilon}\log\left(n\epsilon\right))\)
* _The flattened distribution corresponding to \(\mathcal{I}\), \((D^{f})^{\mathcal{I}}\) is \(\epsilon\)-close to \(D\), or \(D\) is \((\epsilon,\ell)\)-flattened._
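For illustration, an oblivious partition with interval lengths growing as \((1+\epsilon)^{j}\) could be constructed as in the following Python sketch; the exact rounding convention is our own assumption, and the last interval is truncated at \(n\).

```python
def oblivious_partition(n, eps):
    # Return a list of (start, end) intervals (1-indexed, inclusive) covering [n],
    # where the j-th interval has length roughly (1 + eps)^j.
    intervals, start, j = [], 1, 0
    while start <= n:
        length = max(1, int((1 + eps) ** j))
        end = min(n, start + length - 1)
        intervals.append((start, end))
        start, j = end + 1, j + 1
    return intervals

parts = oblivious_partition(10**6, 0.25)
print(len(parts), "intervals")  # number of intervals grows like O(log(n*eps)/eps)
```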
Next, we describe a data structure called the CountMin sketch, which is used to estimate the frequencies of elements in a one-pass stream. It was introduced by Cormode et al ([1]). As we are dealing with a one-pass streaming algorithm with a memory constraint, it is important to store the samples in small space. The CountMin sketch uses hash functions to store the frequencies of the stream elements in sublinear space and returns estimates of these frequencies.
**Definition 2.2** (CountMin sketch).: _A CountMin (CM) sketch with parameters \((\epsilon,\delta)\) is represented by a two-dimensional array of counts with width \(w\) and depth \(d\): \(count[1,1],...,count[d,w]\). Set \(w=\frac{e}{\epsilon}\) and \(d=\log 1/\delta\). Each entry of the array is initially zero. Additionally, \(d\) hash functions \(h_{1},...,h_{d}:\{1,...,n\}\rightarrow\{1,...,w\}\) are chosen uniformly at random from a pairwise-independent family. The space requirement for the CountMin sketch is \(wd\) words. The sketch can be queried for the frequency of any element of the universe, and will return an estimate of its frequency._
The lemma below captures the fact that the frequency of any element \(x_{i}\) can be estimated from a CountMin sketch.
**Lemma 2.4** ([1]).: _Let \(\{x_{1},...,x_{S}\}\) be a stream of length \(S\) and let \(f_{x_{i}}\) be the actual frequency of an element \(x_{i}\). Suppose \(\tilde{f}_{x_{i}}\) is the frequency estimate returned by the CountMin sketch. Then, with probability at least \((1-\delta)\), \(f_{x_{i}}\leq\tilde{f}_{x_{i}}\leq f_{x_{i}}+\epsilon S\)._
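A minimal Python sketch of a CountMin sketch in the spirit of Definition 2.2 is given below; we use Python's built-in hash seeded per row instead of an explicit pairwise-independent family, so this is only illustrative.

```python
import math
import random

class CountMin:
    def __init__(self, eps, delta, seed=0):
        self.w = math.ceil(math.e / eps)         # width
        self.d = math.ceil(math.log(1 / delta))  # depth
        rng = random.Random(seed)
        # one hash seed per row; hash((seed, x)) stands in for a pairwise-independent hash
        self.seeds = [rng.randrange(1 << 30) for _ in range(self.d)]
        self.table = [[0] * self.w for _ in range(self.d)]

    def _h(self, r, x):
        return hash((self.seeds[r], x)) % self.w

    def update(self, x, c=1):
        for r in range(self.d):
            self.table[r][self._h(r, x)] += c

    def query(self, x):
        # over-estimates the true frequency by at most eps * (stream length), w.h.p.
        return min(self.table[r][self._h(r, x)] for r in range(self.d))

cm = CountMin(eps=0.01, delta=0.01)
for x in [3, 1, 3, 2, 3]:
    cm.update(x)
print(cm.query(3))  # at least 3
```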
## 3 Identity testing in the streaming model
In this section, we discuss identity testing problems in the streaming setting. First, we consider testing monotone distributions, where the stream consists of samples drawn from SAMP. Later, we describe an identity tester that uses PCOND queries one at a time, in an online fashion.
### Testing monotone distributions
We start with testing monotone distributions for identity and closeness. Both algorithms work in the following way: partition the domain of the monotone distributions using the oblivious decomposition, obtain the reduced distribution over \(\ell=O(\log n/\epsilon)\) intervals, and test identity or closeness for the reduced distributions. The lemma below explains why it is sufficient to test the reduced distributions in the case of monotone distributions.
**Lemma 3.1** ([13]).: _Let \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be a partition of \([n]\). Suppose \(D_{1}\) and \(D_{2}\) are two distributions over \([n]\) such that both of them are \((\epsilon,\ell)\) flattened according to \(\mathcal{I}\) and \((D_{1}^{r})^{\mathcal{I}}\), \((D_{2}^{r})^{\mathcal{I}}\) are the reduced distributions respectively, then the following happens, \(|d_{TV}(D_{1},D_{2})-d_{TV}((D_{1}^{r})^{\mathcal{I}},(D_{2}^{r})^{\mathcal{ I}})|\leq 2\epsilon\). Moreover, if \(D_{1}=D_{2}\), \((D_{1}^{r})^{\mathcal{I}}=(D_{2}^{r})^{\mathcal{I}}\)._
As monotone distributions are \((\epsilon,\ell)\)-flattened according to the oblivious decomposition, we can reduce the problem of testing a monotone distribution to testing the corresponding reduced distribution. For testing identity and closeness with a monotone distribution, it suffices to run an identity tester and a closeness tester over the reduced distributions. We use the following theorem for the one-pass streaming identity tester.
**Theorem 3.2** ( Streaming identity tester [1]).: _Let \(D^{*}\) be an explicit distribution over \([\ell]\) and \(D\) be an unknown distribution over \([\ell]\). There exists a single pass streaming identity tester that takes \(O(\ell\log\ell/m\epsilon^{4})\) samples from \(\mathsf{SAMP}\) oracle for \(\log\ell/\epsilon^{2}\leq m\leq\ell^{9/10}\) and does the following: if \(D=D^{*}\), with probability at least \(2/3\), the algorithm outputs \(\mathsf{Accept}\), if \(d_{TV}(D,D^{*})>\epsilon\), it outputs \(\mathsf{Reject}\) with probability at least \(2/3\)._
Similarly, for closeness testing, we have the following tester in one-pass streaming settings,
**Theorem 3.3** (Streaming closeness tester [1]).: _Let \(D_{1}\) and \(D_{2}\) be two unknown distributions over \([\ell]\). There exists a single pass streaming closeness tester that takes \(O(\ell\sqrt{\log\ell}/\sqrt{m}\epsilon^{2})\) samples from \(\mathsf{SAMP}\) oracle for \(\log\ell\leq m\leq\tilde{\Theta}(min(\ell,\ell^{2/3}/\epsilon^{4/3}))\) and does the following: if \(D_{1}=D_{2}\), with probability at least \(2/3\), the algorithm outputs \(\mathsf{Accept}\) if \(d_{TV}(D_{1},D_{2})>\epsilon\), it outputs \(\mathsf{Reject}\) with probability at least \(2/3\)._
For the sake of being self-contained, we describe the identity tester for monotone distributions below.
```
Input : Sample access to an unknown monotone distribution \(D\), an explicitly given monotone distribution \(D^{*}\), \(0<\epsilon\leq 1\), memory requirement \(\log\log\left(n\epsilon\right)/\epsilon^{2}\leq m\leq(\log\left(n\epsilon\right)/\epsilon)^{9/10}\)
Output : Accept if \(D=D^{*}\), Reject if \(d_{TV}(D,D^{*})\geq 3\epsilon\)
1 Let \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be the oblivious partition for \(D\) and \(D^{*}\), and let \((D^{r})^{\mathcal{I}}\), \((D^{*r})^{\mathcal{I}}\) be the reduced distributions over \([\ell]\)
2 Sample \(O(\log\left(n\epsilon\right)\log\log\left(n\epsilon\right)/m\epsilon^{5})\) points from \((D^{r})^{\mathcal{I}}\)
3 Run the streaming identity tester of Theorem 3.2 on \((D^{r})^{\mathcal{I}}\) and \((D^{*r})^{\mathcal{I}}\)
4 if the streaming identity tester accepts then
5     Accept
6 else
7     Reject
```
**Algorithm 1** Testing identity monotone Streaming
We now establish the correctness of the algorithm. The algorithm needs sample access to \((D^{r})^{\mathcal{I}}\), which can be obtained by sampling a point \(x\) according to \(D\) and then returning the index \(j\) of the interval \(I_{j}\) with \(x\in I_{j}\). By substituting \(\ell=O(\frac{\log\left(n\epsilon\right)}{\epsilon})\) in Theorem 3.2, we get the sample complexity to be \(O(\log\left(n\epsilon\right)\log\log\left(n\epsilon\right)/m\epsilon^{5})\). If \(D=D^{*}\), the streaming identity tester outputs \(\mathsf{Accept}\). If \(d_{TV}(D,D^{*})\geq 3\epsilon\), by Lemma 3.1, \(d_{TV}((D^{r})^{\mathcal{I}},(D^{*r})^{\mathcal{I}})\geq\epsilon\) and the streaming identity tester outputs \(\mathsf{Reject}\). Hence, the algorithm is indeed correct.
The structure of the closeness testing algorithm in the streaming setting for monotone distributions is similar to that of the identity tester, except that in Line 3, instead of the streaming identity tester we run the streaming closeness tester of Theorem 3.3. The samples from the two unknown reduced distributions \((D_{1}^{r})^{\mathcal{I}}\) and \((D_{2}^{r})^{\mathcal{I}}\) can be obtained by drawing samples from \(D_{1}\) and \(D_{2}\) respectively and transforming them into samples of the reduced distributions. By substituting \(\ell=O(\frac{\log\left(n\epsilon\right)}{\epsilon})\) in Theorem 3.3, we get the sample complexity to be \(O(\log\left(n\epsilon\right)\sqrt{\log\log\left(n\epsilon\right)}/\sqrt{m}\epsilon^{3})\), where \(\log\log\left(n\epsilon\right)\leq m\leq\tilde{\Theta}(\min(\log\left(n\epsilon\right)/\epsilon,\log^{2/3}\left(n\epsilon\right)/\epsilon^{2}))\).
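In both testers, a sample of the reduced distribution is obtained by drawing \(x\sim D\) and reporting the index of the oblivious interval containing \(x\). A small Python sketch of this transformation is given below; it reuses the hypothetical oblivious_partition helper from Section 2 and assumes a sampler draw() for \(D\) is available.

```python
import bisect

def reduced_sampler(draw, intervals):
    # draw(): returns a single sample x in {1,...,n} from D (assumed oracle)
    # intervals: list of (start, end) pairs as produced by the oblivious_partition sketch
    right_ends = [end for (_, end) in intervals]
    def draw_reduced():
        x = draw()
        return bisect.bisect_left(right_ends, x)  # index j such that x lies in I_j (0-indexed)
    return draw_reduced

# usage sketch:
# intervals = oblivious_partition(n, eps)
# draw_reduced = reduced_sampler(sample_from_D, intervals)
```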
### Testing identity in the streaming model using PCOND
In this section, we revisit the identity testing problem using \(\mathsf{PCOND}\) queries: given sample access and \(\mathsf{PCOND}\) query access to an unknown distribution \(D\), we have to test whether \(D\) is identical to a fully specified distribution \(D^{*}\) or whether they are \(\epsilon\)-far from each other. Canonne et al ([10]) address the problem and propose a \(\mathsf{PCOND}\) query-based identity tester. In their algorithm, the domain of \(D^{*}\) is divided into a set of "buckets" in which the points have almost the same weight. The algorithm samples \(\tilde{O}(\log^{2}n/poly(\epsilon))\) points from \(D\) and estimates the weight of each bucket. They prove that if \(D\) and \(D^{*}\) are far, then there exists at least one bucket where the weight under \(D^{*}\) and the estimated weight under \(\tilde{D}\) differ. If not, the algorithm runs a procedure called _Compare_ to estimate the ratio of the weights of each pair of points \((y,z)\), where \(y\) is taken from a set of samples drawn from \(D^{*}\) and \(z\) is taken from a set of samples drawn according to \(D\). The following lemma is used to compare the weights of two points.
**Lemma 3.4** ([10]).: _Given as input two disjoint subsets of points \(X,Y\) together with parameters \(\eta\in(0,1],K\geq 1\) and \(\delta\in(0,\frac{1}{2}]\) as well as \(\mathsf{COND}\) query access to a distribution \(D\), there exists a procedure Compare which estimates the ratio of the weights of two sets and either outputs a value \(\rho>0\) or outputs High or Low and satisfies the following:_
* _If_ \(D(X)/K\leq D(Y)\leq K\cdot D(X)\) _then with probability at least_ \(1-\delta\) _the procedure outputs a value_ \(\rho\in[1-\eta,1+\eta]D(Y)/D(X)\)_;_
* _If_ \(D(Y)>K\cdot D(X)\) _then with probability at least_ \(1-\delta\) _the procedure outputs either High or a value_ \(\rho\in[1-\eta,1+\eta]D(Y)/D(X)\)_;_
* _If_ \(D(Y)<D(X)/K\) _then with probability at least_ \(1-\delta\) _the procedure outputs either Low or a value_ \(\rho\in[1-\eta,1+\eta]D(Y)/D(X)\)_._
_The procedure performs \(O(\frac{K\log 1/\delta}{\eta^{2}})\) conditional queries on the set \(X\cup Y\)._
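To make the role of Compare concrete, a rough Python sketch is shown below. The conditional oracle cond_sample, the constant \(C\) in the query budget, and the High/Low cut-offs are our own simplifications for illustration, not the exact choices of the procedure in the lemma.

```python
import math

def compare(cond_sample, X, Y, eta, K, delta, C=10):
    """Estimate D(Y)/D(X) from samples of D conditioned on X ∪ Y.

    cond_sample(S, t): assumed oracle returning t i.i.d. samples from D restricted to the set S.
    X, Y: disjoint sets of points. Returns a ratio estimate, or "High" / "Low" when the ratio
    looks much larger (resp. much smaller) than allowed by K.
    """
    t = math.ceil(C * K * math.log(1 / delta) / eta ** 2)
    samples = cond_sample(X | Y, t)
    in_y = sum(1 for s in samples if s in Y)
    in_x = t - in_y
    if in_x < t / (2 * (K + 1)):   # Y dominates the conditional mass
        return "High"
    if in_y < t / (2 * (K + 1)):   # X dominates the conditional mass
        return "Low"
    return in_y / in_x
```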
However, storing \(\tilde{O}(\log^{2}n/poly(\epsilon))\) samples for estimating the weights of the buckets requires \(\tilde{O}(\log^{3}n/poly(\epsilon))\) bits of space, since each sampled point takes \(\log n\) bits of memory. As we are dealing with a memory constraint of \(m\) bits, for \(m<O(\log^{3}n)\), implementing the algorithm directly is not memory efficient. We use the main idea of Canonne et al ([10]), but instead of storing all samples, we use the \(\mathsf{CountMin}\) sketch data structure for storing the frequencies of the elements of the stream. Later, the frequencies are used to estimate the weight of each bucket. By choosing the parameters of the \(\mathsf{CountMin}\) sketch suitably, the total space required for our algorithm is at most \(O(m/\epsilon)\) bits. Our algorithm builds on the theorem below.
**Theorem 3.5** (Testing Identity [10]).: _There exists an identity tester that uses an \(\tilde{O}(\log^{4}n/\epsilon^{4})\)\(\mathsf{PCOND}\) queries and does the following: for every pair of distributions \(D,D^{*}\) over \([n]\), where \(D^{*}\) is fully specified, the algorithm outputs \(\mathsf{Accept}\) with probability at least \(2/3\) if \(D=D^{*}\) and outputs \(\mathsf{Reject}\) with probability at least \(2/3\) if \(d_{TV}(D,D^{*})\geq\epsilon\)._
Before moving to the algorithm, we define the _bucketization_ technique according to ([10]). For an explicit distribution \(D^{*}\), the domain is divided into \(\ell\) buckets \(\mathcal{B}=\{B_{1},...,B_{\ell}\}\), where \(B_{j}=\{i\in[n]:2^{j-1}\eta/n\leq D^{*}(i)\leq 2^{j}\eta/n\}\) and
\(B_{0}=\{i\in[n]:D^{*}(i)<\eta/n\}\), where \(\eta=\epsilon/c\) for a constant \(c\). The number of buckets is \(\ell=O(\lceil\log n/\eta+1\rceil+1)\).
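As an illustration, the bucketization of an explicitly given \(D^{*}\) could be computed as follows (a sketch; \(D^{*}\) is passed as an array of probabilities and boundary points are assigned by a half-open convention of our choosing).

```python
import math

def bucketize(Dstar, eps, c=6):
    # B_0 holds points of mass < eta/n; B_j holds points with 2^{j-1}*eta/n <= D*(i) < 2^j*eta/n.
    n = len(Dstar)
    eta = eps / c
    buckets = {}
    for i, p in enumerate(Dstar):
        if p < eta / n:
            j = 0
        else:
            j = math.floor(math.log2(p * n / eta)) + 1
        buckets.setdefault(j, []).append(i)
    return buckets

print(bucketize([0.5, 0.3, 0.1, 0.05, 0.05], eps=0.3))
```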
We are now ready to present our \(\mathsf{PCOND}\) query-based one-pass streaming algorithm for identity testing. Our algorithm and its correctness proof borrow from ([1]), with the additional use of \(\mathsf{CountMin}\) sketches to improve the trade-off between the sample complexity and the space used.
**Theorem 3.6**.: _The algorithm \(\mathsf{PCOND}\) Identity Testing Streaming uses a stream of \(O(\log^{2}n\log\log n/m\epsilon^{2})\) standard access query points and a conditional stream of length \(\tilde{O}(\log^{4}n/\epsilon^{4})\), and does the following: if \(D=D^{*}\), it returns \(\mathsf{Accept}\) with probability at least \(2/3\), and if \(d_{TV}(D,D^{*})\geq\epsilon\), it returns \(\mathsf{Reject}\) with probability at least \(2/3\). The memory requirement for the algorithm is \(O(\frac{m}{\epsilon})\) bits, where \(\frac{\log n\sqrt{\log\log n}}{\epsilon}\leq m\leq\frac{\log^{2}n}{\epsilon}\)._
Proof.: **Completeness**: Suppose \(D=D^{*}\). We prove that the algorithm does not return \(\mathsf{Reject}\) in Line 6. Let \(\tilde{D}(B_{j})\) be the estimated weight of a bucket \(B_{j}\) where \(\tilde{D}(B_{j})=\frac{f_{B_{j}}}{S}\) for \(S=O(\log^{2}n\log\log n/m\epsilon^{2})\). An additive Chernoff bound [followed by a union bound over the buckets] shows that with high probability, \(\forall B_{j},|D(B_{j})-\tilde{D}(B_{j})|\leq\frac{\sqrt{m}\epsilon}{\log n}\). Using Lemma 2.4, with probability at least \(99/100\), for every element \(x_{i}\) in the stream, \(f_{x_{i}}\leq\tilde{f}_{x_{i}}\leq f_{x_{i}}+\frac{\epsilon S}{m}\). Summing over all the elements in a bucket \(B_{j}\), we get \(\tilde{f}_{B_{j}}-\frac{\epsilon}{m}S^{2}\leq f_{B_{j}}\leq\tilde{f}_{B_{j}}\). Substituting \(\tilde{D}(B_{j})=\frac{f_{B_{j}}}{S}\), we can see that \(\frac{f_{B_{j}}}{S}-\frac{\epsilon S}{m}\leq\tilde{D}(B_{j})\leq\frac{f_{B_{ j}}}{S}\). As \(D=D^{*}\), \(\tilde{D}(B_{j})\) is a good estimate of \(D^{*}(B_{j})\). Using \(|D^{*}(B_{j})-\tilde{D}(B_{j})|\leq\frac{\sqrt{m}\epsilon}{\log n}\), we get \(\frac{\tilde{f}_{B_{j}}}{S}-\frac{\epsilon S}{m}-\frac{\sqrt{m}\epsilon}{\log n }\leq D^{*}(B_{j})\leq\frac{\tilde{f}_{B_{j}}}{S}+\frac{\sqrt{m}\epsilon}{\log n}\). This can be written as \(D^{*}(B_{j})-\frac{\sqrt{m}\epsilon}{\log n}\leq\frac{\tilde{f}_{B_{j}}}{S} \leq D^{*}(B_{j})+\frac{\sqrt{m}\epsilon}{\log n}+\frac{\log^{2}n\log\log n}{ \epsilon m^{2}}\) by replacing \(S=O(\log^{2}n\log\log n/m\epsilon^{2})\). Hence, the algorithm will not output \(\mathsf{Reject}\) with high probability. As \(D=D^{*}\), for all pairs \((y_{k},z_{l})\) such that \(\frac{D^{*}(y_{k})}{D^{*}(z_{l})}\in[1/2,2]\), it follows from Lemma 3.4 that the estimated ratio of weights of each pair \((y_{k},z_{l})\) is less than \((1-\eta/2\ell)\frac{D^{*}(y_{k})}{D^{*}(z_{l})}\) [for \(\eta=\epsilon/6\)] with probability at most \(1/10s^{2}\). A union bound over all \(O(s^{2})\) pairs proves that with a probability of at least \(9/10\) the algorithm outputs \(\mathsf{Accept}\).
```
Input : SAMP and PCOND access to \(D\), an explicit distribution \(D^{*}\), parameters \(0<\epsilon\leq 1\), \(\eta=\epsilon/6\), \(\ell\) buckets of \(D^{*}\), space requirement \(O(m)\) bits, \(\frac{\log n\sqrt{\log\log n}}{\epsilon}\leq m\leq\frac{\log^{2}n}{\epsilon}\)
Output : Accept if \(D=D^{*}\), Reject if \(d_{TV}(D,D^{*})\geq\epsilon\)
1 Sample \(S=O(\frac{\log^{2}n\log\log n}{m\epsilon^{2}})\) points \(\{x_{1},...,x_{S}\}\) from SAMP
2 for \(i=1\) to \(S\) do
3     Estimate the frequency of \(x_{i}\) using a CountMin sketch with parameters \((\frac{\epsilon}{m},\frac{1}{100})\), so that \(f_{x_{i}}\leq\tilde{f}_{x_{i}}\leq f_{x_{i}}+\frac{\epsilon}{m}S\)
4 Let \(f_{B_{j}}=\sum_{x_{i}\in B_{j}}f_{x_{i}}\) be the frequency of each bucket \(B_{j}\) and \(\tilde{f}_{B_{j}}=\sum_{x_{i}\in B_{j}}\tilde{f}_{x_{i}}\) its estimate, so that \(f_{B_{j}}\leq\tilde{f}_{B_{j}}\leq f_{B_{j}}+\frac{\epsilon}{m}S^{2}\)
5 if \(\frac{\tilde{f}_{B_{j}}}{S}<D^{*}(B_{j})-\frac{\sqrt{m}\epsilon}{\log n}\) or \(\frac{\tilde{f}_{B_{j}}}{S}>D^{*}(B_{j})+\frac{\sqrt{m}\epsilon}{\log n}+\frac{\log^{2}n\log\log n}{\epsilon m^{2}}\) for some bucket \(B_{j}\) then
6     Reject and Exit
7 Select \(s=O(\ell/\epsilon)\) points \(\{y_{1},...,y_{s}\}\) from \(D^{*}\)
8 for each \(y_{k}\), \(k\in[s]\), do
9     Sample \(s\) points \(\{z_{1},...,z_{s}\}\) from \(D\) as a stream
10    for each pair of points \((y_{k},z_{l})\) such that \(\frac{D^{*}(y_{k})}{D^{*}(z_{l})}\in[1/2,2]\) do
11        Run Compare\((y_{k},z_{l},\eta/4\ell,2,1/10s^{2})\)
12        if Compare returns Low or a value smaller than \((1-\eta/2\ell)\frac{D^{*}(y_{k})}{D^{*}(z_{l})}\) then
13            Reject and Exit
14 Accept
```
**Algorithm 2** PCOND Identity Testing Streaming
**Soundness :** Let \(d_{TV}(D,D^{*})\geq\epsilon\). In this case, if one of the estimates \(\tilde{f}_{B_{j}}\) satisfies the condition in Line 5, the algorithm outputs Reject. So assume that the estimates are correct, which happens with high probability. The rest of the analysis follows from ([14]); we give a brief outline of the proof to keep the presentation self-contained. For \(\eta=\epsilon/6\), define the high-weight and low-weight points of each bucket as follows: \(H_{j}=\{x\in B_{j}:D(x)>D^{*}(x)+\eta/\ell|B_{j}|\}\), and \(L_{j}=\{x\in B_{j}:D(x)\leq D^{*}(x)-\eta/\ell|B_{j}|\}\). It can be shown that at least one point from a low-weight set is obtained while sampling \(s\) points in Line 7, and at least one point from a high-weight set is obtained while sampling \(s\) points in Line 9. Using the definition of high-weight and low-weight points, there exists a pair \((y_{k},z_{l})\) such that \(D(y_{k})\leq(1-\eta/2\ell)D^{*}(y_{k})\) and \(D(z_{l})>(1+\eta/2\ell)D^{*}(z_{l})\). By Lemma 3.4, with probability at least \(1-1/10s^{2}\), _Compare_ will return Low or a value at most \((1-\eta/2\ell)\frac{D^{*}(y_{k})}{D^{*}(z_{l})}\) in Line 12. Hence the algorithm outputs Reject with high probability.
We use a CountMin sketch with parameters \((\frac{\epsilon}{m},\frac{1}{100})\) in our algorithm. Comparing it with the \((\epsilon,\delta)\) CountMin sketch defined in ([12]), we set the width of the array to be \(w=em/\epsilon\) and the depth to be \(d=\log 100\). So the space required for the algorithm is \(w\cdot d\) words, which implies \(O(\frac{m}{\epsilon})\) bits. For running the _Compare_ procedure, we do not use any extra space for storing samples. This is because for every element in \(\{y_{1},...,y_{s}\}\) we sample an \(s\)-length stream \(\{z_{1},...,z_{s}\}\) and run _Compare_ for each pair of points taken from the two streams, which leads to running the _Compare_ process \(s^{2}\) times. A single run of _Compare_ works in the following way in the streaming setting: for a pair \((y_{k},z_{l})\), sample \(O(\log^{2}n/\epsilon^{2})\) points from \(D\) conditioned on \(\{y_{k},z_{l}\}\) and keep two counters tracking the number of times each of them appears in the stream. Each round of the _Compare_ process requires a stream of length \(O(\log^{2}n/\epsilon^{2})\). Hence, the total conditional stream length is \(\tilde{O}(\log^{4}n/\epsilon^{4})\).
## 4 Testing monotonicity
Monotonicity testing is a fundamental problem in distribution testing. Given sample access to an unknown distribution \(D\) over \([n]\), the task is to check whether \(D\) is monotone (non-increasing) or \(\epsilon\)-far from monotonicity. The problem was addressed by Batu et al ([3]), who use samples drawn according to the standard access model. Their algorithm divides the domain in half recursively until the number of collisions over an interval is small. A set of intervals is obtained this way, and it is used to construct an empirical distribution from a fresh set of samples. Finally, the algorithm returns Accept if the empirical distribution is close to monotone. The approach leads to an \(O(\sqrt{n}\log n/\epsilon^{4})\) SAMP query algorithm for this problem.
### Testing monotonicity using oblivious decomposition
In this section, we discuss monotonicity testing using the oblivious decomposition. Unlike Batu et al, we use the oblivious partition for the unknown distribution \(D\) and count the number of collisions over the intervals in which enough samples lie. Our algorithm uses standard access queries to \(D\) and proceeds by examining whether the total weight of such intervals is high or low. The insight of the algorithm is to apply the oblivious decomposition instead of constructing the partition recursively. An upper bound of \(O(\sqrt{n}\log n/\epsilon^{4})\) is obtained that matches the query complexity of Batu et al. The high-level idea of the algorithm is as follows: let \(D\) be a monotone distribution and let \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be the oblivious decomposition for \(\ell=O(\log n/\epsilon_{1})\), where \(\epsilon_{1}=O(\epsilon^{2})\). If \((D^{f})^{\mathcal{I}}\) is the flattened distribution, then by the oblivious decomposition, \(d_{TV}(D,(D^{f})^{\mathcal{I}})\leq\epsilon^{2}\), which simplifies to \(\sum_{j=1}^{\ell}D(I_{j})d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\leq\epsilon^{2}\). We divide the intervals into two types: those where the conditional distribution over \(I_{j}\) is close to uniform, and those where it is far from uniform. For a monotone distribution, if the conditional distributions for a set of intervals are far from uniformity, the total weight of those intervals cannot be too high. We also observe that the collision counts for such intervals are high when enough samples lie in them. So, the problem of testing monotonicity boils down to finding such high-collision intervals and estimating their total weight. For the weight estimation, we require an empirical distribution \(\tilde{D}\), which can be constructed from \(poly(\log n,1/\epsilon)\) samples. Finally, the tester checks whether \(\tilde{D}\) is close to monotone to make the final decision. When there are enough samples lying inside an interval, the pairwise collision count among them can be used to estimate the collision probability. The following lemma establishes this formally.
**Lemma 4.1** ([3]).: _Let \(I\subset[n]\) be an interval of a distribution \(D\) over \([n]\), \(D_{I}\) be the conditional distribution of \(D\) on \(I\) and \(S_{I}\) be the set of samples lying in interval \(I\). Then,_
\[||D_{I}||_{2}^{2}-\frac{\epsilon^{2}}{64|I|}\leq\frac{coll(S_{I})}{\binom{|S_{ I}|}{2}}\leq||D_{I}||_{2}^{2}+\frac{\epsilon^{2}}{64|I|}\]
_with probability at least \(1-O(\log^{-3}n)\) provided that \(|S_{I}|\geq O(\sqrt{|I|}\log\log n/\epsilon^{4})\)._
An interval partition of \([n]\) produces two types of intervals, where the conditional distributions are close and far from uniformity respectively. A pairwise collision count between samples is used to detect such intervals. Given enough samples lie in an interval, if the conditional distribution is far from uniformity, the collision count will be high. Similarly, the collision probability is low for the intervals where the conditional distributions are close to uniformity.
**Lemma 4.2**.: _Let \(D\) be an unknown distribution over \([n]\) and \(I\subset[n]\) be an interval. Let \(S_{I}\) be the set of samples lying inside \(I\) such that \(|S_{I}|\geq O(\sqrt{|I|}\log\log n/\epsilon^{4})\), then the following happens_
* _If_ \(d_{TV}(D_{I},\mathcal{U}_{I})>\frac{\epsilon}{4}\)_, then_ \(\frac{coll(S_{I})}{\binom{|S_{I}|}{2}}>\frac{1}{|I|}+\frac{\epsilon^{2}}{64|I|}\)__
* _If_ \(d_{TV}(D_{I},\mathcal{U}_{I})\leq\frac{\epsilon}{4}\)_, then,_ \(\frac{coll(S_{I})}{\binom{|S_{I}|}{2}}\leq\frac{1+\epsilon^{2}/64}{|I|}+\frac {\epsilon^{2}}{16}\)__
Proof.: Suppose \(d_{TV}(D_{I},\mathcal{U}_{I})>\frac{\epsilon}{4}\). Squaring both sides, we get \((d_{TV}(D_{I},\mathcal{U}_{I}))^{2}>\frac{\epsilon^{2}}{16}>\frac{\epsilon^{2}}{32}\). Using the fact that \(d_{TV}(D_{I},\mathcal{U}_{I})\leq\sqrt{|I|}\cdot||D_{I}-\mathcal{U}_{I}||_{2}\), we deduce \(|I|\cdot||D_{I}-\mathcal{U}_{I}||_{2}^{2}>\frac{\epsilon^{2}}{32}\). Simplifying the inequality, we get \(||D_{I}-\mathcal{U}_{I}||_{2}^{2}>\frac{\epsilon^{2}}{32|I|}\). Now, we obtain the following inequality by using \(||D_{I}-\mathcal{U}_{I}||_{2}^{2}=||D_{I}||_{2}^{2}-\frac{1}{|I|}\).
\[||D_{I}||_{2}^{2}-\frac{1}{|I|}>\frac{\epsilon^{2}}{32|I|}\] \[||D_{I}||_{2}^{2}>\frac{\epsilon^{2}}{32|I|}+\frac{1}{|I|}\]
Considering \(|S_{I}|\geq O(\sqrt{|I|}\log\log n/\epsilon^{4})\), by Lemma 4.1, \(||D_{I}||_{2}^{2}>\frac{\epsilon^{2}}{32|I|}+\frac{1}{|I|}\) implies the following,
\[\frac{coll(S_{I})}{\binom{|S_{I}|}{2}}+\frac{\epsilon^{2}}{64|I|} >\frac{\epsilon^{2}}{32|I|}+\frac{1}{|I|}\] \[\frac{coll(S_{I})}{\binom{|S_{I}|}{2}} >\frac{1}{|I|}+\frac{\epsilon^{2}}{64|I|}\]
Similarly, when \(d_{TV}(D_{I},\mathcal{U}_{I})\leq\frac{\epsilon}{4}\), we get \(||D_{I}||_{2}^{2}\leq\frac{\epsilon^{2}}{16}+\frac{1}{|I|}\). Given \(|S_{I}|\geq O(\sqrt{|I|}\log\log n/\epsilon^{4})\), by Lemma 4.1, \(\frac{coll(S_{I})}{\binom{|S_{I}|}{2}}\leq\frac{1+\epsilon^{2}/64}{|I|}+\frac {\epsilon^{2}}{16}\).
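The test implicit in Lemma 4.2 can be phrased as a small helper: given the samples that fall inside an interval \(I\), count their pairwise collisions and flag the interval when the collision rate exceeds \(\frac{1+\epsilon^{2}/64}{|I|}+\frac{\epsilon^{2}}{16}\). The following Python sketch shows this check; it re-defines the pairwise collision count locally so the snippet is self-contained, and it does not enforce the sample-size requirement of the lemma, which the caller is assumed to check.

```python
from collections import Counter

def coll(samples):
    # pairwise collision count among the samples
    return sum(c * (c - 1) // 2 for c in Counter(samples).values())

def interval_is_high_collision(samples_in_I, interval_len, eps):
    # samples_in_I: samples that landed in the interval I; interval_len = |I|
    s = len(samples_in_I)
    if s < 2:
        return False
    rate = coll(samples_in_I) / (s * (s - 1) // 2)  # estimate of ||D_I||_2^2
    return rate >= (1 + eps ** 2 / 64) / interval_len + eps ** 2 / 16
```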
We are now ready to present the collision monotonicity tester.
**Theorem 4.3**.: _The algorithm collision monotonicity uses \(O(\frac{\sqrt{n}\log n\log\log n}{\epsilon^{8}})\) SAMP queries and outputs Accept with probability at least \(2/3\) if \(D\) is a monotone distribution and outputs Reject with probability at least \(2/3\) when \(D\) is not \(7\epsilon\)-close to monotone._
Proof.: We start by defining an interval to be a high-weight interval if \(D(I_{j})\geq\epsilon^{2}/\log n\). As there are \(O(\frac{\log n}{\epsilon^{2}})\) intervals, there is at least one such interval with \(D(I_{j})\geq\frac{\epsilon^{2}}{\log n}\). Moreover, an additive Chernoff bound shows that all such high-weight intervals will contain \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) samples when \(O(\frac{\sqrt{n}\log n\log\log n}{\epsilon^{8}})\) points are sampled according to \(D\).
**Completeness :** Let \(D\) be monotone, then by oblivious partitioning, we have,
\(\sum_{j=1}^{\ell}\sum_{x\in I_{j}}|D(x)-\frac{D(I_{j})}{|I_{j}|}|\leq\epsilon_ {1}\). By simplifying, we get \(\sum_{j=1}^{\ell}D(I_{j})\sum_{x\in I_{j}}|\frac{D(x)}{D(I_{j})}-\frac{1}{|I_{j }|}|\leq\epsilon^{2}\) which implies \(\sum_{j=1}^{\ell}D(I_{j})d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\leq\epsilon^{2}\). Let \(J^{\prime}\) be the set of intervals where for all \(I_{j}\), \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})>\frac{\epsilon}{4}\), then we have, \(\sum_{I_{j}\in J^{\prime}}D(I_{j})\leq 4\epsilon\). Let \(\hat{J}\) be the set of intervals where \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) and \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})>\frac{\epsilon}{4}\). So, \(\hat{J}\subseteq J^{\prime}\). From Lemma 4.2, we know \(\hat{J}\) is the set of intervals where
```
Input : SAMP access to \(D\), \(\ell=O(\frac{1}{\epsilon_{1}}\log\left(n\epsilon_{1}+1\right))\) oblivious partitions \(\mathcal{I}=\{I_{1},..,I_{\ell}\}\), and error parameters \(\epsilon,\epsilon_{1}\in(0,1]\), where \(\epsilon_{1}=\epsilon^{2}\)
1 Sample \(T=O(\frac{1}{\epsilon^{6}}\log^{2}n\log\log n)\) points from SAMP
2 Get the empirical distribution \(\tilde{D}\) over \([\ell]\)
3 Obtain an additional sample \(S=O(\frac{\sqrt{n}\log n\log\log n}{\epsilon^{8}})\) from SAMP
4 Let \(J\) be the set of intervals where the number of samples (in each interval \(I_{j}\)) is \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) and \(coll(S_{I_{j}})\geq(\frac{1+\epsilon^{2}/64}{|I_{j}|}+\frac{\epsilon^{2}}{16})\binom{|S_{I_{j}}|}{2}\)
5 if \(\sum_{I_{j}\in J}\tilde{D}(I_{j})>5\epsilon\) then
6     Reject and Exit
7 Define a flat distribution \((\tilde{D}^{f})^{\mathcal{I}}\) over \([n]\) using \(\tilde{D}\)
8 Output Accept if \((\tilde{D}^{f})^{\mathcal{I}}\) is \(2\epsilon\)-close to a monotone distribution; otherwise output Reject
```
**Algorithm 3** Collision Monotonicity
\(\frac{coll(S_{I_{j}})}{\binom{|S_{I_{j}}|}{2}}>\frac{1}{|I_{j}|}+\frac{ \epsilon^{2}}{64|I_{j}|}\). Let \(J\) be the set of intervals where \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) and \(\frac{coll(S_{I_{j}})}{\binom{|S_{I_{j}}|}{2}}\geq\frac{1+\epsilon^{2}/64}{|I_ {j}|}+\frac{\epsilon^{2}}{16}\), then \(J\subseteq\hat{J}\subseteq J^{\prime}\). We have already proved \(\sum_{I_{j}\in J^{\prime}}D(I_{j})\leq 4\epsilon\). So, we can conclude that \(\sum_{I_{j}\in J}D(I_{j})\leq 4\epsilon\).
When \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\leq\frac{\epsilon}{4}\), the algorithm does not sum over such \(D(I_{j})\) even if the number of samples \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) as \(\frac{coll(S_{I_{j}})}{\binom{|S_{I_{j}}|}{2}}\leq\frac{1+\epsilon^{2}/64}{|I_ {j}|}+\frac{\epsilon^{2}}{16}\) by Lemma 4.2. As a result, we can say \(\sum_{I_{j}\in J}D(I_{j})\leq 4\epsilon\) when \(D\) is monotone. The rest of the proof follows from the fact that we have a distribution \(\tilde{D}\) over \(\ell\) such that by Lemma 2.2\(\forall I_{j}\), \(|D(I_{j})-\tilde{D}(I_{j})|\leq\frac{\epsilon}{\ell}\). Substituting the value of \(\ell=O(\frac{1}{\epsilon^{2}}\log n)\), we get, \(\forall I_{j}\)\(D(I_{j})-\frac{\epsilon^{3}}{\log n}\leq\tilde{D}(I_{j})\leq D(I_{j})+ \frac{\epsilon^{3}}{\log n}\). Summing over all intervals \(J\) such that \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) and \(\frac{coll(S_{I_{j}})}{\binom{|S_{I_{j}}|}{2}}\geq\frac{1+\epsilon^{2}/64}{|I_ {j}|}+\frac{\epsilon^{2}}{16}\), we deduce, \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq\sum_{I_{j}\in J}D(I_{j})+\sum_{I_{j}\in J }\frac{\epsilon^{3}}{\log n}\). We have proved that \(\sum_{I_{j}\in J}D(I_{j})\leq 4\epsilon\) when \(D\) is monotone. Using this fact, we obtain \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq\sum_{I_{j}\in J}D(I_{j})+\epsilon\) which implies \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq 5\epsilon\). Hence, the algorithm will NOT output Reject in Step 6.
By Birge's decomposition with parameter \(\epsilon_{1}=\epsilon^{2}\), we get \(d_{TV}(D,(D^{f})^{\mathcal{I}})\leq\epsilon^{2}\). By Lemma 2.2, we know \(d_{TV}((D^{f})^{\mathcal{I}},(\tilde{D}^{f})^{\mathcal{I}})<\epsilon\). Hence, by triangle inequality, we obtain \(d_{TV}(D,(\tilde{D}^{f})^{\mathcal{I}})<\epsilon+\epsilon_{1}<2\epsilon\). This implies that the flattened distribution \((\tilde{D}^{f})^{\mathcal{I}}\) is \(2\epsilon\) close to a monotone distribution as \(D\) is monotone. Hence, the algorithm will output Accept in Step 8.
**Soundness :** We will prove the contrapositive of the statement. Suppose the algorithm outputs Accept; then we need to prove that \(D\) is \(7\epsilon\)-close to monotone. As the algorithm outputs Accept, \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq 5\epsilon\), where \(J\) is the set of intervals with \(|S_{I_{j}}|\geq O(\sqrt{|I_{j}|}\log\log n/\epsilon^{4})\) and \(coll(S_{I_{j}})\geq(\frac{1+\epsilon^{2}/64}{|I_{j}|}+\frac{\epsilon^{2}}{16})\binom{|S_{I_{j}}|}{2}\). When \(\frac{coll(S_{I_{j}})}{\binom{|S_{I_{j}}|}{2}}\geq\frac{1+\epsilon^{2}/64}{|I_{j}|}+\frac{\epsilon^{2}}{16}\), by Lemma 4.1, we get \(||D_{I_{j}}||_{2}^{2}+\frac{\epsilon^{2}}{64|I_{j}|}\geq\frac{1+\epsilon^{2}/64}{|I_{j}|}+\frac{\epsilon^{2}}{16}\). Using \(||D_{I}-\mathcal{U}_{I}||_{2}^{2}=||D_{I}||_{2}^{2}-\frac{1}{|I|}\), we obtain \(\frac{1}{|I_{j}|}+||D_{I_{j}}-\mathcal{U}_{I_{j}}||_{2}^{2}\geq\frac{1}{|I_{j}|}+\frac{\epsilon^{2}}{16}\). Finally, we deduce \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\geq\frac{\epsilon}{4}\) by using the fact that \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\geq||D_{I_{j}}-\mathcal{U}_{I_{j}}||_{2}\).
Now, we calculate the distance between \(D\) and \((D^{f})^{\mathcal{I}}\) corresponding to \(\mathcal{I}\),
\[d_{TV}(D,(D^{f})^{\mathcal{I}}) =\frac{1}{2}\sum_{j=1}^{\ell}\sum_{x\in I_{j}}|D(x)-\frac{D(I_{j})}{ |I_{j}|}|\] \[=\frac{1}{2}\Big{[}\sum_{\frac{\epsilon}{4}\leq d_{TV}(D_{I_{j}} \mathcal{U}_{I_{j}})\leq 1}D(I_{j})d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})+\sum_{d_{ TV}(D_{I_{j}}\mathcal{U}_{I_{j}})<\frac{\epsilon}{4}}D(I_{j})d_{TV}(D_{I_{j}}, \mathcal{U}_{I_{j}})\Big{]}\] \[<\frac{1}{2}\Big{[}\sum_{\frac{\epsilon}{4}\leq d_{TV}(D_{I_{j}} \mathcal{U}_{I_{j}})\leq 1}\Big{(}\tilde{D}(I_{j})+\frac{\epsilon^{3}}{\log n }\Big{)}+\frac{\epsilon}{4}\Big{]}\] \[<\frac{1}{2}\Big{[}6\epsilon+\frac{\epsilon}{4}\Big{]}\] \[<4\epsilon\]
The second inequality uses the fact that \(\forall I_{j},D(I_{j})\leq\tilde{D}(I_{j})+\frac{\epsilon^{3}}{\log n}\). As the algorithm outputs Accept, we have \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq 5\epsilon\). Summing over at most \(\ell=O(\log n/\epsilon^{2})\) intervals, we get the above result.
For the rest of the proof, we use the fact that \(d_{TV}((D^{f})^{\mathcal{I}},(\tilde{D}^{f})^{\mathcal{I}})<\epsilon\) by Lemma 2.2. Using triangle inequality, we get \(d_{TV}(D,(\tilde{D}^{f})^{\mathcal{I}})<5\epsilon\). As the algorithm outputs Accept, there exists a monotone distribution \(M\), such that \(d_{TV}((\tilde{D}^{f})^{\mathcal{I}},M)\leq 2\epsilon\). Again by applying triangle inequality, we have \(d_{TV}(D,M)<7\epsilon\). This completes the proof.
### Testing Monotonicity using Bipartite Collisions
In this section, we perform the monotonicity testing in a slightly different fashion, which serves as the building block of a streaming-based monotonicity tester. Here, instead of counting pairwise collisions between the samples, we divide the samples into two sets and count the bipartite collisions between them. The idea of the bipartite collision tester is adapted from ([2]). The key Lemma 4.5 shows how the bipartite collision count is used to estimate the collision probability. Given sample access to an unknown distribution \(D\) over \([n]\), we first divide the domain according to the oblivious decomposition. We count the bipartite collisions inside the intervals where enough samples lie. If \(D\) is monotone, the total weight of high-collision intervals cannot be too high. We estimate the total weight of such intervals, and the rest of the algorithm works similarly to the collision monotonicity tester presented in Algorithm 3. Prior to describing the algorithm, the lemma below makes precise what "enough samples" means and which intervals contain them.
**Lemma 4.4**.: _Let \(D\) be a distribution over \([n]\), and let \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be an interval partition of \([n]\). Let \(\mathcal{J}\subset\mathcal{I}\) be the set of intervals with \(D(I_{j})\geq\epsilon_{1}/\log n\) for all \(I_{j}\in\mathcal{J}\), where \(\epsilon_{1}=\epsilon^{2}\). If \(S=O(\frac{n\log n}{\epsilon^{8}})\) samples are drawn according to \(D\), then all \(I_{j}\in\mathcal{J}\) contain \(|S_{I_{j}}|\geq O(|I_{j}|/\epsilon^{4})\) samples._
Proof.: Fix an \(I_{j}\) and define the random variable \(X_{i}=1\) if the \(i^{th}\) sample is in \(I_{j}\) and \(X_{i}=0\) otherwise. Let \(X=\sum_{i=1}^{|S|}X_{i}=|S_{I_{j}}|\). Then the expectation is \(\mathbb{E}[X]=|S|\cdot D(I_{j})\geq\frac{|S|\epsilon_{1}}{\log n}\).
By a Chernoff bound, we can see that \(Pr\Big{[}X<(1-\epsilon)\frac{|S|\epsilon_{1}}{\log n}\Big{]}\leq Pr\Big{[}X<(1-\epsilon)\mathbb{E}[X]\Big{]}\leq e^{-\epsilon^{2}\mathbb{E}[X]}\leq e^{-\epsilon^{2}\frac{|S|\epsilon^{2}}{\log n}}<\frac{\epsilon^{2}}{10\log n}\).
The last inequality is obtained from the fact that \(|S|=O(\frac{n\log n}{\epsilon^{8}})\) and \(\frac{n}{\epsilon^{4}}>\log(10\log n/\epsilon^{2})\). Applying a union bound over all \(\ell=O(\frac{\log n}{\epsilon_{1}})\) intervals (recall \(\epsilon_{1}=\epsilon^{2}\)), we can conclude that with probability at least \(9/10\), for every \(I_{j}\) with \(D(I_{j})\geq\frac{\epsilon_{1}}{\log n}\), we have \(S_{I_{j}}\geq(1-\epsilon)\frac{|S|\epsilon_{1}}{\log n}\geq(1-\epsilon)\frac{n}{\epsilon^{6}}\geq O(|I_{j}|/\epsilon^{4})\).
The main intuition behind our algorithm is counting the bipartite collisions between two sets of samples. The next lemma gives the conditions under which the collision probability can be estimated using the bipartite collision count.
**Lemma 4.5**.: _Let \(D\) be an unknown distribution over \([n]\) and let \(S\) be a set of samples drawn according to SAMP. Let \(I\subset[n]\) be an interval and let \(S_{I}\) be the set of sampled points lying in the interval \(I\). Let \(S_{I}\) be divided into two disjoint sets \(S_{1}\) and \(S_{2}\) with \(S_{1}\cup S_{2}=S_{I}\) such that \(|S_{1}||S_{2}|\geq O(|S_{I}|/\epsilon^{4})\). Then, with probability at least \(2/3\),_
\[||D_{I}||_{2}^{2}-\frac{\epsilon^{2}}{64|I|}\leq\frac{coll(S_{1},S_{2})}{|S_{1 }||S_{2}|}\leq||D_{I}||_{2}^{2}+\frac{\epsilon^{2}}{64|I|}.\]
Proof.: Define the random variable \(X_{ij}=1\) if the \(i^{th}\) sample in \(S_{1}\) is the same as the \(j^{th}\) sample in \(S_{2}\), and \(0\) otherwise.
\[X =\sum_{(i,j)\in S_{1}\times S_{2}}X_{ij}=coll(S_{1},S_{2})\] \[\mathbb{E}[X] =|S_{1}|\cdot|S_{2}|\cdot||D_{I}||_{2}^{2}\]
where \(||D_{I}||_{2}^{2}\) is the collision probability. Let \(Y_{ij}=X_{ij}-\mathbb{E}[X_{ij}]=X_{ij}-||D_{I}||_{2}^{2}\).
\[Var[\sum_{(i,j)\in S_{1}\times S_{2}}X_{ij}] =\mathbb{E}\Big{[}(\sum_{(i,j)\in S_{1}\times S_{2}}Y_{ij})^{2} \Big{]}\] \[=\mathbb{E}\Big{[}\sum_{(i,j)\in S_{1}\times S_{2}}Y_{ij}^{2}+ \sum_{(i,j)\neq(k,l);|\{i,j,k,l\}|=3}Y_{ij}Y_{kl}\Big{]}\]
We calculate the following,
\[\mathbb{E}[Y_{ij}^{2}] =\mathbb{E}[X_{ij}^{2}]-2(\mathbb{E}[X_{ij}])^{2}+(\mathbb{E}[X_ {ij}])^{2}\] \[=||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4}\] \[\mathbb{E}[Y_{ij}Y_{kl}] =\mathbb{E}\Big{[}(X_{ij}-||D_{I}||_{2}^{2})(X_{kl}-||D_{I}||_{2}^ {2})\Big{]}\] \[=\mathbb{E}\Big{[}X_{ij}X_{kl}\Big{]}-||D_{I}||_{2}^{2}(\mathbb{ E}[X_{ij}]+\mathbb{E}[X_{kl}])+||D_{I}||_{2}^{4}\] \[=\mathbb{E}\Big{[}X_{ij}X_{kl}\Big{]}-||D_{I}||_{2}^{4}\]
Now,
\[Var[\sum_{(i,j)\in S_{1}\times S_{2}}X_{ij}] =\sum_{(i,j)\in S_{1}\times S_{2}}(||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4 })+\sum_{(i,j)\neq(k,l);|\{i,j,k,l\}|=3}(\mathbb{E}\Big{[}X_{ij}X_{kl}\Big{]}-|| D_{I}||_{2}^{4})\] \[=|S_{1}|.|S_{2}|(||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4})+\sum_{(i,j);( k,j)\in S_{1}\times S_{2};i\neq k}\mathbb{E}\Big{[}X_{ij}X_{kj}\Big{]}\] \[+\sum_{(i,j);(i,l)\in S_{1}\times S_{2};j\neq l}\mathbb{E}\Big{[}X _{ij}X_{il}\Big{]}-\sum_{(i,j)\neq(k,l);|\{i,j,k,l\}|=3}||D_{I}||_{2}^{4}\] \[=|S_{1}|.|S_{2}|(||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4})+|S_{2}| \binom{|S_{1}|}{2}||D_{I}||_{3}^{3}\] \[+|S_{1}|\binom{|S_{2}|}{2}||D_{I}||_{3}^{3}-\Big{(}|S_{2}|\binom{ |S_{1}|}{2}+|S_{1}|\binom{|S_{2}|}{2}\Big{)}||D_{I}||_{2}^{4}\] \[\leq|S_{1}||S_{2}|\Big{[}(||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4})+(| S_{1}|+|S_{2}|)(||D_{I}||_{3}^{3}-||D_{I}||_{2}^{4})\Big{]}\]
Applying Chebyshev's inequality, we get,
\[Pr[|X-\mathbb{E}[X]|>\frac{\epsilon^{2}}{64|I|}|S_{1}||S_{2}|] \leq\frac{64^{2}Var[X]|I|^{2}}{\epsilon^{4}|S_{1}|^{2}|S_{2}|^{2}}\] \[\leq\frac{|S_{1}||S_{2}|\Big{[}(||D_{I}||_{2}^{2}-||D_{I}||_{2}^{ 4})+(|S_{1}|+|S_{2}|)(||D_{I}||_{3}^{3}-||D_{I}||_{2}^{4})\Big{]}64^{2}|I|^{2} }{\epsilon^{4}|S_{1}|^{2}|S_{2}|^{2}}\] \[\leq\frac{\Big{[}||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4}+(|S_{1}|+|S_ {2}|)(||D_{I}||_{2}^{3}-||D_{I}||_{2}^{4})\Big{]}64^{2}|I|^{2}}{\epsilon^{4}| S_{1}|\cdot|S_{2}|}\] \[\leq\frac{\Big{[}||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4}+(|S_{1}|+|S_ {2}|)(||D_{I}||_{2}^{2}-||D_{I}||_{2}^{4})\Big{]}64^{2}|I|^{2}}{\epsilon^{4}| S_{1}|\cdot|S_{2}|}\] \[\leq\frac{||D_{I}||_{2}^{2}\Big{[}1-||D_{I}||_{2}^{2}+(|S_{1}|+|S _{2}|)(1-||D_{I}||_{2}^{2})\Big{]}64^{2}|I|^{2}}{\epsilon^{4}|S_{1}|\cdot|S_{2}|}\] \[\leq\frac{||D_{I}||_{2}^{2}\Big{(}1-||D_{I}||_{2}^{2}\Big{)}\Big{(} 1+|S_{1}|+|S_{2}|\Big{)}64^{2}|I|^{2}}{\epsilon^{4}|S_{1}|\cdot|S_{2}|}\]
Where the third inequality uses the fact that \(||D_{I}||_{3}\leq||D_{I}||_{2}\) and the fourth inequality uses the fact that \(||D_{I}||_{2}^{3}\leq||D_{I}||_{2}^{2}\) as \(||D_{I}||_{2}\in(0,1]\). To make the probability \(<1/3\), we have,
\[|S_{1}|\cdot|S_{2}| \geq 3\times 64^{2}|I|^{2}\frac{1}{\epsilon^{4}}||D_{I}||_{2}^{2} \Big{(}1-||D_{I}||_{2}^{2}\Big{)}\Big{(}1+|S_{1}|+|S_{2}|\Big{)}\] \[\geq 3\times 64^{2}\frac{|I|^{2}}{\epsilon^{4}}||D_{I}||_{2}^{2} \frac{||D_{I}||_{2}^{2}}{100}\Big{(}|S_{1}|+|S_{2}|\Big{)}\] \[\geq 3\times 64^{2}\frac{1}{100\epsilon^{4}}\Big{(}|S_{1}|+|S_{2}| \Big{)}\] \[\geq O(\frac{S_{I}}{\epsilon^{4}})\]
In the second inequality we have used the fact that \((1-||D_{I}||_{2}^{2})\geq\frac{1}{100}||D_{I}||_{2}^{2}\), as \(||D_{I}||_{2}^{2}\leq\frac{100}{101}<1\). The third inequality is obtained from the fact that \(||D_{I}||_{2}^{2}\geq\frac{1}{|I|}\). The final inequality is obtained
from the fact that \(|S_{I}|=|S_{1}|+|S_{2}|\). Therefore, provided \(|S_{1}|\cdot|S_{2}|\geq O(\frac{|S_{I}|}{\epsilon^{4}})\), with probability at least \(2/3\), \(||D_{I}||_{2}^{2}-\frac{\epsilon^{2}}{64|I|}\leq\frac{\mathit{coll}(S_{1},S_{2} )}{|S_{1}||S_{2}|}\leq||D_{I}||_{2}^{2}+\frac{\epsilon^{2}}{64|I|}\).
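To make the estimator of Lemma 4.5 concrete, the following Python sketch splits the samples that fall in an interval into two disjoint halves and returns the normalized bipartite collision count as an estimate of \(||D_{I}||_{2}^{2}\). It is only an illustration of the quantity analyzed above; the equal split and the function names are our own choices, not part of the paper.

```python
from collections import Counter

def bipartite_collisions(s1, s2):
    """Number of pairs (x, y) with x taken from s1, y taken from s2 and x == y."""
    c1, c2 = Counter(s1), Counter(s2)
    return sum(c1[v] * c2[v] for v in c1 if v in c2)

def collision_probability_estimate(samples_in_interval):
    """Estimate ||D_I||_2^2 from the samples that landed in a single interval I.

    The samples are split into two disjoint halves S1 and S2, and the normalized
    bipartite collision count coll(S1, S2) / (|S1| * |S2|) is returned (Lemma 4.5).
    """
    half = len(samples_in_interval) // 2
    s1, s2 = samples_in_interval[:half], samples_in_interval[half:]
    if not s1 or not s2:
        return None  # too few samples to form both halves
    return bipartite_collisions(s1, s2) / (len(s1) * len(s2))
```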
The bipartite collision-based tester works by verifying the total weight of the intervals where the conditional distributions are far from uniformity. Let \(S_{I}\) be the set of samples inside an interval \(I\) and let it satisfy the condition of Lemma 4.5. The following lemma shows that bipartite collision count is used to detect such intervals.
**Lemma 4.6**.: _Let \(D\) be an unknown distribution over \([n]\) and \(I\subset[n]\) is an interval. Let \(S_{I}\) be the set of points lying in the interval \(I\) and \(S_{I}\) can be divided into two sets \(S_{1}\) and \(S_{2}\) such that \(|S_{1}||S_{2}|\geq O(|S_{I}|/\epsilon^{4})\), then the following happens with probability at least \(2/3\)_
* _If_ \(d_{TV}(D_{I},\mathcal{U}_{I})>\frac{\epsilon}{4}\)_, then_ \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}>\frac{1}{|I|}+\frac{\epsilon ^{2}}{64|I|}\)__
* _If_ \(d_{TV}(D_{I},\mathcal{U}_{I})\leq\frac{\epsilon}{4}\)_, then,_ \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}\leq\frac{1+\epsilon^{2}/64}{ |I|}+\frac{\epsilon^{2}}{16}\)__
Proof.: The proof is similar to the proof of Lemma 4.2. When \(d_{TV}(D_{I},\mathcal{U}_{I})>\frac{\epsilon}{4}\) we get \(||D_{I}||_{2}^{2}>\frac{\epsilon^{2}}{32|I|}+\frac{1}{|I|}\). Consider \(S_{I}\) is divided into two sets so that \(|S_{1}|\cdot|S_{2}|\geq O(|S_{I}|/\epsilon^{4})\), by Lemma 4.5 we obtain,
\[\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}+\frac{\epsilon ^{2}}{64|I|} >\frac{\epsilon^{2}}{32|I|}+\frac{1}{|I|}\] \[\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|} >\frac{1}{|I|}+\frac{\epsilon^{2}}{64|I|}\]
Similarly, when \(d_{TV}(D_{I},\mathcal{U}_{I})\leq\frac{\epsilon}{4}\), we get \(||D_{I}||_{2}^{2}\leq\frac{\epsilon^{2}}{16}+\frac{1}{|I|}\). Given \(S_{I}\) can be divided into two sets such that \(|S_{1}|\cdot|S_{2}|\geq O(|S_{I}|/\epsilon^{4})\), by Lemma 4.5, \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}\leq\frac{1+\epsilon^{2}/64}{ |I|}+\frac{\epsilon^{2}}{16}\).
Now, we present the bipartite collision-based monotonicity tester.
**Theorem 4.7**.: _The algorithm bipartite collision monotonicity uses \(O(\frac{n\log n}{\epsilon^{8}})\) SAMP queries and outputs Accept with probability at least \(2/3\) if \(D\) is a monotone distribution and outputs Reject with probability at least \(2/3\) when \(D\) is not \(7\epsilon\)-close to monotone._
Proof.: While sampling \(O(n\log n/\epsilon^{8})\) points according to \(D\), an application of Chernoff bound shows that the intervals with \(D(I_{j})\geq\epsilon^{2}/\log n\) will contain at least \(S_{I_{j}}=O(|I_{j}|/\epsilon^{4})\) points. There will be at least one such interval with \(D(I_{j})\geq\epsilon^{2}/\log n\) as there are \(O(\log n/\epsilon^{2})\) partitions.
**Completeness :** Let \(D\) be monotone. By oblivious partitioning with parameter \(\epsilon_{1}=\epsilon^{2}\), we have \(\sum_{j=1}^{\ell}\sum_{x\in I_{j}}|D(x)-\frac{D(I_{j})}{|I_{j}|}|\leq\epsilon_ {1}\) which implies \(\sum_{j=1}^{\ell}D(I_{j})d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\leq\epsilon^{2}\). Let \(J^{\prime}\) be the set of intervals where for all \(I_{j}\), \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})>\frac{\epsilon}{4}\), then \(\sum_{I_{j}\in J^{\prime}}D(I_{j})\leq 4\epsilon\).
Let \(\hat{J}\) be the set of intervals where \(|S_{1}||S_{2}|\geq O(|S_{I_{j}}|/\epsilon^{4})\) and \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})>\frac{\epsilon}{4}\). So, \(\hat{J}\subseteq J^{\prime}\). From Lemma 4.6, we know \(\hat{J}\) is the set of intervals where \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}>\frac{1}{|I_{j}|}+\frac{\epsilon^{2}}{64|I_{j}|}\). Let \(J\) be the set of intervals where \(|S_{1}||S_{2}|\geq O(|S_{I_{j}}|/\epsilon^{4})\) and \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}>\frac{1+\epsilon^{2}/64}{|I_{j}|}+\frac{\epsilon^{2}}{16}\), then \(J\subseteq\hat{J}\subseteq J^{\prime}\). We know \(\sum_{I_{j}\in J^{\prime}}D(I_{j})\leq 4\epsilon\). So, we can conclude that \(\sum_{I_{j}\in J}D(I_{j})\leq 4\epsilon\).
When \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\leq\frac{\epsilon}{4}\), the algorithm does not sum over such \(D(I_{j})\) even if \(|S_{1}||S_{2}|\geq O(|S_{I_{j}}|/\epsilon^{4})\). This is because by Lemma 4.6 we know \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}\leq\frac{1+\epsilon^{2}/64}{| I_{j}|}+\frac{\epsilon^{2}}{16}\). As a result, we can say that when \(D\) is monotone \(\sum_{I_{j}\in J}D(I_{j})\leq 4\epsilon\).
We use the empirical distribution \(\tilde{D}\) and deduce that \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq 5\epsilon\). Hence, the algorithm will not output Reject in Step 6. We also conclude that, as \(D\) is monotone, the flattened distribution \((\tilde{D}^{f})^{\mathcal{I}}\) is \(2\epsilon\)-close to monotone, and the algorithm will output Accept in Step 8.
**Soundness :** We will prove the contrapositive of the statement. Suppose the algorithm outputs Accept; then we need to prove that \(D\) is \(7\epsilon\)-close to monotone.
Since the algorithm accepts, \(\sum_{I_{j}\in J}\tilde{D}(I_{j})\leq 5\epsilon\) for the set of intervals \(J\) where \(|S_{1}||S_{2}|\geq O(|S_{I_{j}}|/\epsilon^{4})\) and \(\frac{\mathit{coll}(S_{1},S_{2})}{|S_{1}||S_{2}|}\geq(\frac{1+\epsilon^{2}/64}{|I_{j}|}+\frac{\epsilon^{2}}{16})\). For all such intervals \(I_{j}\in J\), by Lemma 4.6 we obtain \(d_{TV}(D_{I_{j}},\mathcal{U}_{I_{j}})\geq\frac{\epsilon}{4}\).
Now we calculate the distance between \(D\) and the flattened distribution, and we get \(d_{TV}(D,(D^{f})^{\mathcal{I}})<4\epsilon\).
We also know from Lemma 2.2 that \(d_{TV}((D^{f})^{\mathcal{I}},(\tilde{D}^{f})^{\mathcal{I}})<\epsilon\). By the triangle inequality, \(d_{TV}(D,(\tilde{D}^{f})^{\mathcal{I}})<5\epsilon\). As the algorithm outputs Accept, there exists a monotone distribution \(M\) such that \(d_{TV}((\tilde{D}^{f})^{\mathcal{I}},M)\leq 2\epsilon\). By the triangle inequality, we have \(d_{TV}(D,M)<7\epsilon\).
### Testing Monotonicity in the streaming model
In this section, we present the monotonicity tester in the streaming setting. A set of samples drawn according to the standard access model is revealed online, one at a time. The task is to test whether an unknown distribution is monotone or \(\epsilon\)-far from monotonicity, under a memory bound of \(m\) bits. We use the bipartite collision monotonicity tester (Algorithm 4) discussed in the previous section. To satisfy the memory bound, we store a limited number of samples for the relevant intervals and count bipartite collisions between the stored samples and the remaining ones. We present the algorithm below.
```
Input : SAMP access to \(D\), \(\ell=O(\frac{1}{\epsilon_{1}}\log{(n\epsilon_{1}+1)})\) oblivious partitions \(\mathcal{I}=\{I_{1},..,I_{\ell}\}\) and error parameter \(\epsilon,\epsilon_{1}\in(0,1]\), where \(\epsilon_{1}=\epsilon^{2}\), memory requirement \(\log^{2}{n}/\epsilon^{6}\leq m\leq\sqrt{n}/\epsilon^{3}\)
1 Sample \(T=\tilde{O}(\frac{1}{\epsilon^{6}}\log^{2}{n})\) points from SAMP
2 Get the empirical distribution \(\tilde{D}\) over \(\ell\)
3 Obtain an additional sample \(S=O(\frac{n\log{n}}{m\epsilon^{6}})\) from SAMP
4 For each interval store the first set of \(S_{1}=O(\frac{m\epsilon^{2}}{\log^{2}{n}})\) samples in memory
5 Let \(J\) be the set of intervals, where for the next set of \(S_{2}=O(\frac{n}{m\epsilon^{4}})\) points, the following condition is satisfied, \(\frac{coll(S_{1},S_{2})}{|S_{1}||S_{2}|}\geq(\frac{1+\epsilon^{2}/64}{|I_{j}| }+\frac{\epsilon^{2}}{16})\)
6 if \(\sum_{I_{j}\in J}\tilde{D}(I_{j})>5\epsilon\) then
7 Reject and Exit
8 Define a flat distribution \((\tilde{D}^{f})^{\mathcal{I}}\) over \([n]\)
9 Output Accept if \((\tilde{D}^{f})^{\mathcal{I}}\) is \(2\epsilon\)-close to a monotone distribution. Otherwise output Reject
```
**Algorithm 5** Streaming Monotonicity
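To illustrate the data flow of Algorithm 5, here is a single-pass Python sketch. It assumes the oblivious partition is given as a sorted list of half-open intervals, leaves the sizes \(|T|\) and the per-interval \(|S_{1}|\) to the caller, takes the per-interval \(S_{2}\) to be simply all later samples of that interval, and omits the final check of Step 9 (whether the flattening is \(2\epsilon\)-close to monotone). It is a sketch of the bookkeeping, not a faithful implementation of the paper's algorithm.

```python
import bisect
from collections import Counter, defaultdict

def streaming_monotonicity(sample_stream, intervals, eps, t_size, s1_size):
    """One-pass sketch of Algorithm 5 (bookkeeping only).

    intervals : sorted list of half-open (start, end) intervals covering the
                domain (the oblivious partition); t_size and s1_size stand in
                for |T| and the per-interval |S1| of the algorithm.
    """
    starts = [a for a, _ in intervals]
    locate = lambda x: bisect.bisect_right(starts, x) - 1
    stream = iter(sample_stream)

    # Steps 1-2: empirical distribution over the intervals from the first T samples.
    counts = Counter(locate(next(stream)) for _ in range(t_size))
    d_tilde = {j: counts[j] / t_size for j in counts}

    # Steps 3-5: keep the first S1 samples of each interval; count bipartite
    # collisions of every later sample of that interval against the stored ones.
    stored = defaultdict(Counter)   # j -> multiset of stored sample values (S1)
    n1 = defaultdict(int)           # j -> |S1| stored so far
    n2 = defaultdict(int)           # j -> number of later samples (plays the role of S2)
    coll = defaultdict(int)         # j -> coll(S1, S2)
    for x in stream:
        j = locate(x)
        if n1[j] < s1_size:
            stored[j][x] += 1
            n1[j] += 1
        else:
            n2[j] += 1
            coll[j] += stored[j][x]

    # Steps 5-7: total empirical weight of the high-collision intervals.
    heavy = 0.0
    for j, (a, b) in enumerate(intervals):
        if n1[j] and n2[j]:
            ratio = coll[j] / (n1[j] * n2[j])
            if ratio >= (1 + eps ** 2 / 64) / (b - a) + eps ** 2 / 16:
                heavy += d_tilde.get(j, 0.0)
    if heavy > 5 * eps:
        return "Reject"
    # Steps 8-9 (checking that the flattening of d_tilde is 2*eps-close to a
    # monotone distribution) are omitted in this sketch.
    return "Accept"
```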
**Theorem 4.8**.: _The algorithm streaming monotonicity uses \(O(\frac{n\log{n}}{m\epsilon^{6}})\) SAMP queries and outputs Accept with probability at least \(2/3\) if \(D\) is a monotone distribution and outputs Reject with probability at least \(2/3\) when \(D\) is not \(7\epsilon\) close to monotone. It uses \(O(m)\) bits of memory for \(\log^{2}{n}/\epsilon^{6}\leq m\leq\sqrt{n}/\epsilon^{3}\)._
Proof.: As there are \(O(\frac{\log{n}}{\epsilon^{2}})\) partitions, there will be at least one interval with \(D(I_{j})\geq\frac{\epsilon^{2}}{\log{n}}\). An application of Chernoff bound shows that with high probability all such intervals contain \(|S_{I_{j}}|=O(n/m\epsilon^{4})\) points. In the algorithm, we divide \(S_{I_{j}}\) into two sets \(S_{1}\) and \(S_{2}\) such that for \(\log^{2}{n}/\epsilon^{6}\leq m\leq\sqrt{n}/\epsilon^{3}\), \(|S_{1}|+|S_{2}|=O(m\epsilon^{2}/\log^{2}{n})+O(n/m\epsilon^{4})=O(n/m\epsilon ^{4})\) and \(|S_{1}|.|S_{2}|=O(n/\epsilon^{2}\log^{2}{n})\geq O(n/m\epsilon^{8})=(1/ \epsilon^{4})|S_{I_{j}}|\). (The inequality is obtained by the fact that \(m\geq\log^{2}{n}/\epsilon^{6}\)). This implies that the condition of Lemma 4.5 is satisfied by these intervals and they are eligible for estimating the collision probability using bipartite collision count. The rest of the analysis follows from Theorem 4.7.
The algorithm uses \(O(m)\) bits of memory for implementation in a single-pass streaming model. For obtaining the empirical distribution \(\tilde{D}\), we use one counter for each of the \(\ell\) intervals. When a sample \(x\) arrives, if \(x\in I_{j}\), the counter for \(I_{j}\) is incremented by \(1\). In the end, the counters give the number of samples that fall in each of the intervals, and using those values we can explicitly obtain the distribution \(\tilde{D}\). Each counter takes \(O(\log{n})\) bits of memory. There are \(\ell=O(\log{n}/\epsilon^{2})\) counters in total. So, the memory requirement for this step is \(O(\log^{2}{n}/\epsilon^{2})<m\) bits. Also, using the distribution \(\tilde{D}\) we can obtain the flattened distribution \((\tilde{D}^{f})^{\mathcal{I}}\) without storing it explicitly. Hence, Line 9 does not require any extra space for checking whether \((\tilde{D}^{f})^{\mathcal{I}}\) is \(2\epsilon\)-close to monotone. Storing the first set of \(S_{1}=O(m\epsilon^{2}/\log^{2}{n})\) samples for an interval takes \(O(m\epsilon^{2}/\log{n})\) bits of memory. Since we store \(S_{1}\) samples for each of the \(\ell=O(\log{n}/\epsilon^{2})\) intervals, this takes \(O(m)\) bits of memory in total.
**Remark** If the input to the algorithm is a monotone distribution, then the streaming algorithm computes a distribution over the intervals \(\mathcal{I}\) such that the flattening is close to a monotone distribution. Since the number of intervals in the partition is \(O(\log n/\epsilon)\), the explicit description of the distribution can be succinctly stored.
We would also like to point out that the final step in the algorithm requires testing if the learnt distribution is close to some monotone distribution, and we have not explicitly bounded the space required for that.
#### 4.3.1 Lower bound for testing monotonicity
In this section, we prove the lower bound for monotonicity testing problem in the streaming settings. We start with the discussion of the uniformity testing lower bound by ([1]) in the streaming model and later we show how the same lower bound is applicable in our case.
**Theorem 4.9** (Uniformity testing lower bound in streaming framework [1]).: _Let \(\mathcal{A}\) be an algorithm which tests if a distribution \(D\) is uniform versus \(\epsilon\)-far from uniform with error probability \(1/3\), can access the samples in a single-pass streaming fashion using \(m\) bits of memory and \(S\) samples, then \(S\cdot m=\Omega(n/\epsilon^{2})\). Furthermore, if \(S<n^{0.9}\) and \(m>S^{2}/n^{0.9}\) then \(S\cdot m=\Omega(n\log n/\epsilon^{4})\)._
The proof of the above theorem proceeds by choosing a random bit \(X\in\{0,1\}\), where \(X=0\) defines a _Yes_ instance (the uniform distribution) and \(X=1\) defines a _No_ instance (\(\epsilon\)-far from uniform), and calculating the mutual information between \(X\) and the bits stored in memory after seeing \(S\) samples. In their formulation, the _Yes_ instance is the uniform distribution over \([2n]\) and a _No_ instance is obtained by pairing the indices \((2i-1,2i)\) together and assigning values by tossing an \(\epsilon\)-biased coin. In particular, a _No_ distribution is obtained as follows: pair the indices as \(\{1,2\},\{3,4\},...,\{2n-1,2n\}\); for each bin \(\{2i-1,2i\}\), pick a random bit \(Y_{i}\in\{\pm 1\}\) and assign the probabilities as
\[(D(2i-1),D(2i))=\begin{cases}\frac{1+\epsilon}{2n},\frac{1-\epsilon}{2n}&\text {if }Y_{i}=1\\ \frac{1-\epsilon}{2n},\frac{1+\epsilon}{2n}&\text{if }Y_{i}=-1\end{cases}\]
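For concreteness, the short sketch below generates the Yes instance and a random No instance as just described (array slot \(2i-2\) plays the role of element \(2i-1\)); it is only meant to make the construction explicit and is not part of the lower-bound argument itself.

```python
import numpy as np

def yes_instance(n):
    """The Yes instance: uniform distribution over the 2n-element domain."""
    return np.full(2 * n, 1.0 / (2 * n))

def no_instance(n, eps, rng=None):
    """A random No instance: the pair (2i-1, 2i) receives (1+eps)/2n and
    (1-eps)/2n, in an order decided by the random sign Y_i."""
    if rng is None:
        rng = np.random.default_rng()
    signs = rng.choice([1.0, -1.0], size=n)
    d = np.empty(2 * n)
    d[0::2] = (1 + signs * eps) / (2 * n)   # elements 2i-1 (slots 0, 2, 4, ...)
    d[1::2] = (1 - signs * eps) / (2 * n)   # elements 2i
    return d
```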
It is straightforward that the _Yes_ distribution is a monotone distribution as well. We show that any distribution \(D\) from the _No_ instance set is \(O(\epsilon)\)-far from monotonicity. We start by choosing an \(\alpha\in(0,\epsilon/4)\) and defining a set of partitions \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) such that \(|I_{j}|=\lfloor(1+\alpha)^{j}\rfloor\) for \(1\leq j\leq\ell\). Let \((D^{f})^{\mathcal{I}}\) be the flattened distribution corresponding to \(\mathcal{I}\). We use the following lemma from ([1]), which reflects the fact that if \(D\) is far from \((D^{f})^{\mathcal{I}}\), then \(D\) is also far from being monotone. We state the lemma as follows.
**Lemma 4.10** ([1]).: _Let \(D\) be a distribution over domain \([n]\) and \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) are the set of partitions defined obliviously with respect to a parameter \(\alpha\in(0,1)\) where \(\ell=O(\frac{1}{\alpha}\log n\alpha)\) and \(|I_{j}|=\lfloor(1+\alpha)^{j}\rfloor\). If \(D\) is \(\epsilon\)-close to monotone non-increasing, then \(d_{TV}(D,(D^{f})^{\mathcal{I}})\leq 2\epsilon+\alpha\) where \((D^{f})^{\mathcal{I}}\) is the flattened distribution of \(D\) with respect to \(\mathcal{I}\)._
Let, \(D\) be a distribution chosen randomly from the _No_ instance set. We have the following observation,
**Lemma 4.11**.: _Let \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be the oblivious partitions of \(D\) with parameter \(\alpha\) such that \(|I_{j}|=\lfloor(1+\alpha)^{j}\rfloor\)._
* _If_ \(|I_{j}|\) _is odd, then_ \(\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|=\frac{\epsilon}{2n}(|I_{j}|- \frac{1}{|I_{j}|})\)_._
* _If_ \(|I_{j}|\) _is even, then_ \(\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|\geq\frac{\epsilon}{2n}(|I_{j}| -\frac{4}{|I_{j}|})\)_._
Proof.: If \(|I_{j}|\) is odd, it contains some number \(k\) of complete bins, each of the form \((2x-1,2x)\), together with one extra index \(i^{\prime}\) whose probability weight is either \(\frac{1+\epsilon}{2n}\) or \(\frac{1-\epsilon}{2n}\). Let \(D(i^{\prime})=\frac{1+\epsilon}{2n}\). In this case, \(D(I_{j})=\frac{|I_{j}|}{2n}+\frac{\epsilon}{2n}\).
\[\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}| =\sum_{i\in I_{j}}|D(i)-\frac{1}{2n}-\frac{\epsilon}{2n|I_{j}|}|\] \[=\frac{\epsilon}{2n}(1-\frac{1}{|I_{j}|})\frac{|I_{j}|-1}{2}+ \frac{\epsilon}{2n}(1+\frac{1}{|I_{j}|})\frac{|I_{j}|-1}{2}+\frac{\epsilon}{2n }(1-\frac{1}{|I_{j}|})\] \[=\frac{\epsilon}{2n}(|I_{j}|-\frac{1}{|I_{j}|})\]
When \(D(i^{\prime})=\frac{1-\epsilon}{2n}\), a similar calculation follows.
If \(|I_{j}|\) is even, there are two possibilities. \((i)\) \(I_{j}\) consists of \(k\) complete bins. Then there is an equal number of indices with weight \(\frac{1+\epsilon}{2n}\) and \(\frac{1-\epsilon}{2n}\) in \(I_{j}\), and \(D(I_{j})=\frac{|I_{j}|}{2n}\). In this case, it is straightforward to observe that \(\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|=\frac{\epsilon|I_{j}|}{2n}\). \((ii)\) \(I_{j}\) contains the bins \(b_{p},...,b_{p+k-1}\) completely, together with \(i^{\prime}\in b_{p-1}\) and \(i^{\prime\prime}\in b_{p+k}\) where \(D(i^{\prime})=D(i^{\prime\prime})\); the case \(D(i^{\prime})\neq D(i^{\prime\prime})\) is similar to \((i)\). Let \(D(i^{\prime})=D(i^{\prime\prime})=\frac{1+\epsilon}{2n}\). In this case, \(D(I_{j})=\frac{|I_{j}|}{2n}+\frac{\epsilon}{n}\).
\[\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}| =\sum_{i\in I_{j}}|D(i)-\frac{1}{2n}-\frac{\epsilon}{n|I_{j}|}|\] \[=\frac{\epsilon}{2n}(1-\frac{1}{|I_{j}|})\frac{|I_{j}|-2}{2}+ \frac{\epsilon}{2n}(1+\frac{1}{|I_{j}|})\frac{|I_{j}|-2}{2}+\frac{\epsilon}{ n}(1-\frac{2}{|I_{j}|})\] \[=\frac{\epsilon}{2n}(|I_{j}|-\frac{4}{|I_{j}|})\]
Combining \((i)\) and \((ii)\), we get \(\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|\geq\frac{\epsilon}{2n}(|I_{j}|-\frac{4}{|I_{j}|})\). A similar calculation follows when \(D(i^{\prime})=D(i^{\prime\prime})=\frac{1-\epsilon}{2n}\).
In our case, we apply oblivious partitions on \(D\) (chosen randomly from the _No_ set) with respect to the parameter \(\alpha\) and conclude the following,
**Lemma 4.12**.: _Let \(D\) be a randomly chosen distribution from the No instance set, then \(D\) is \(\epsilon/4\)-far from any monotone non-increasing distribution._
Proof.: We calculate \(d_{TV}(D,(D^{f})^{\mathcal{I}})=\sum_{j=1}^{\ell}\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|=\sum_{|I_{j}|\text{ is even}}\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|+\sum_{|I_{j}|\text{ is odd}}\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|\). Each odd-length interval contributes \(\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|=\frac{\epsilon}{2n}(|I_{j}|-\frac{1}{|I_{j}|})\) and each even-length interval contributes \(\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|\geq\frac{\epsilon}{2n}(|I_{j}|-\frac{4}{|I_{j}|})\) by Lemma 4.11.
Hence, simplifying the distance, we get \(d_{TV}(D,(D^{f})^{\mathcal{I}})\geq\sum_{|I_{j}|\text{ is even}}\frac{\epsilon}{2n}(|I_{j}|-\frac{4}{|I_{j}|})+\sum_{|I_{j}|\text{ is odd}}\frac{\epsilon}{2n}(|I_{j}|-\frac{1}{|I_{j}|})\geq\frac{\epsilon}{2n}\sum_{I_{j}\in\mathcal{I}}|I_{j}|-\frac{\epsilon}{2n}\big(\sum_{|I_{j}|\text{ is even}}\frac{4}{|I_{j}|}+\sum_{|I_{j}|\text{ is odd}}\frac{1}{|I_{j}|}\big)\geq\epsilon-\frac{\epsilon}{2n}\cdot 5\ell\geq\frac{3\epsilon}{4}>\frac{2\epsilon}{4}+\alpha\). The third inequality is obtained by using the fact that \(|I_{j}|\geq 1\), and the fourth inequality uses \(\ell<n/10\). Now, by the contrapositive of Lemma 4.10, \(D\) is \(\epsilon/4\)-far from any monotone non-increasing distribution.
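As a quick numerical sanity check of the bound just proved, one can evaluate \(\sum_{j}\sum_{i\in I_{j}}|D(i)-\frac{D(I_{j})}{|I_{j}|}|\) on a concrete No instance and an oblivious partition. The sketch below is our own reading of the partition \(|I_{j}|=\lfloor(1+\alpha)^{j}\rfloor\) (truncated at the end of the domain) and reuses the hypothetical `no_instance` helper from the earlier snippet.

```python
import numpy as np

def oblivious_partition(domain_size, alpha):
    """Half-open intervals of length floor((1+alpha)^j), j = 1, 2, ..., truncated at the end."""
    intervals, start, j = [], 0, 1
    while start < domain_size:
        length = max(1, int((1 + alpha) ** j))
        intervals.append((start, min(start + length, domain_size)))
        start += length
        j += 1
    return intervals

def l1_to_flattening(d, intervals):
    """sum_j sum_{i in I_j} |D(i) - D(I_j)/|I_j||, the quantity bounded in Lemma 4.12."""
    return float(sum(np.abs(d[a:b] - d[a:b].mean()).sum() for a, b in intervals))

# Example (uses no_instance from the previous snippet):
# d = no_instance(10_000, 0.2)
# print(l1_to_flattening(d, oblivious_partition(len(d), 0.04)))  # should come out close to eps = 0.2
```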
Therefore, the uniformity testing lower bound from [14] is applicable in our case for distinguishing monotone from \(\epsilon/4\)-far monotone. We formalize this in the theorem below.
**Theorem 4.13**.: _Let \(\mathcal{A}\) be an algorithm that tests if a distribution \(D\) is monotone versus \(\epsilon/4\)-far from monotonicity with error probability \(1/3\), can access the samples in a single-pass streaming fashion using \(m\) bits of memory and \(S\) samples, then \(S\cdot m=\Omega(n/\epsilon^{2})\). Furthermore, if \(n^{0.34}/\epsilon^{8/3}+n^{0.1}/\epsilon^{4}\leq m\leq\sqrt{n}/\epsilon^{3}\), then \(S\cdot m=\Omega(n\log n/\epsilon^{4})\)._
The above theorem is analogous to Theorem 4.9; we obtain it by showing that the lower bound for uniformity implies a lower bound for monotonicity in the streaming framework. In particular, the uniform distribution is monotone non-increasing, and we showed that a randomly chosen distribution from the _No_ instance set is \(\epsilon/4\)-far from monotone non-increasing. Hence, the correctness of the above theorem follows directly from Theorem 4.9.
## 5 Learning decomposable distributions in the streaming model
The algorithm and analysis for monotone distributions from the previous section extend to a more general class of structured distributions known as \((\gamma,L)\)-decomposable distributions. The class of \((\gamma,L)\)-decomposable distributions was first studied by Canonne et al. ([14]), who gave a unified algorithm for testing monotonicity, \(k\)-modality, histograms, and log-concavity, since the class of \((\gamma,L)\)-decomposable distributions contains these other classes. We first recall the definition.
**Definition 5.1** (\((\gamma,L)\)-decomposable distribution [14]).: _A class \(\mathcal{C}\) of distributions is said to be \((\gamma,L)\)-decomposable, if for every \(D\in\mathcal{C}\), there exists an \(\ell\leq L\) and a partition \(\mathcal{I}=\{I_{1},..,I_{\ell}\}\) of \([n]\) into intervals such that for every interval \(I_{j}\in\mathcal{I}\) one of the following conditions hold._
* \(D(I_{j})\leq\frac{\gamma}{L}\)__
* \(max_{i\in I_{j}}D(i)\leq(1+\gamma)min_{i\in I_{j}}D(i)\)__
The following lemma shows that monotone distributions, in particular, are decomposable.
**Lemma 5.1** ([14]).: _For all \(\gamma>0\), the class of monotone distributions \(\mathcal{M}\) over \([n]\) is \((\gamma,L)\)-decomposable, where \(L=O(\frac{\log^{2}n}{\gamma})\)._
To obtain an algorithm with trade-offs between sample complexity and space complexity, we will start with the algorithm of Fischer et al ([10]) that improves the sample complexity of
[1]. We will describe an algorithm that obtains an explicit description of an unknown \((\gamma,L)\)-decomposable distribution. To that end, we start with the definition of an \((\eta,\gamma)\)-fine partition as defined in [11].
**Definition 5.2** (\((\eta,\gamma)\)-fine Partition).: _Let \(D\) be distribution over \([n]\) and \(\mathcal{I}=\{I_{1},...,I_{\ell}\}\) be an interval partition of \(D\). \(\mathcal{I}\) is said to be \((\eta,\gamma)\) fine partition if there exists \(\eta>0\), \(\gamma>0\) and a set \(H\subset\mathcal{I}\), such that \(H=\{I_{j}\in\mathcal{I}:D(I_{j})>\eta,|I_{j}|>1\}\) and \(\sum_{I_{j}\in H}D(I_{j})\leq\gamma\)._
An \((\eta,\gamma)\)-fine partition can be obtained in the following way: sample \(k=O(\frac{1}{\eta}\log 1/\gamma^{\delta})\) points from \(D\), sort them in increasing order \(\{x_{1}<x_{2}<...<x_{k}\}\) without repetition, and set \(x_{0}=0\). For every point \(x_{j}\), \(1\leq j\leq k\), a singleton interval is added, and whenever \(x_{j}>x_{j-1}+1\), the interval \([x_{j-1}+1,x_{j}-1]\) is also added. Finally, for \(x_{k}<n\), an interval \([x_{k},n]\) is added. Precisely, this is summarized in the following theorem:
**Theorem 5.2** ([11]).: _Let \(D\) be a distribution over \([n]\). For the parameters \(\eta>0,\gamma>0,\delta>0\), there exists an algorithm that uses \(O(\frac{1}{\eta}\log 1/\gamma^{\delta})\)SAMP queries and with probability at least \((1-\delta)\), finds a set of \((\eta,\gamma)\) fine partitions \(\mathcal{I}=\{I_{1},...,I_{r}\}\) of \(D\) where \(r=|\mathcal{I}|=O(\frac{1}{\eta}\log 1/\gamma^{\delta})\)._
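The construction described just before Theorem 5.2 is simple enough to write out directly. The sketch below follows that description, with the trailing interval started at \(x_{k}+1\) so that the intervals do not overlap (the text writes \([x_{k},n]\)); the number of sampled points is left to the caller since we do not pin down the constant in Theorem 5.2.

```python
def fine_partition(sampled_points, n):
    """Interval partition described before Theorem 5.2.

    sampled_points : points drawn from D (repetitions allowed), values in {1, ..., n}
    Returns a list of closed intervals (lo, hi) covering {1, ..., n}.
    """
    xs = sorted(set(sampled_points))             # x_1 < x_2 < ... < x_k, no repetition
    intervals, prev = [], 0                      # x_0 = 0
    for x in xs:
        if x > prev + 1:
            intervals.append((prev + 1, x - 1))  # gap interval [x_{j-1}+1, x_j - 1]
        intervals.append((x, x))                 # singleton {x_j}
        prev = x
    if prev < n:
        intervals.append((prev + 1, n))          # trailing interval up to n
    return intervals
```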
After a set of \((\eta,\gamma)\) partitions is obtained, a _weakly tolerant interval uniformity tester_ is used to check how many of the intervals are far from uniformity. If a significant number of intervals are far from uniformity, then the obtained partition cannot be used for learning. Otherwise, the partition is used to construct a distribution according to 2.2. The following theorem reflects the task of a _weakly tolerant interval uniformity tester_:
**Theorem 5.3** ([14]).: _Let \(D\) be a distribution over \([n]\). There exists an algorithm \(\mathcal{A}\) which takes the following as inputs: SAMP access to a distribution \(D\), an interval \(I\subset[n]\), a parameter \(m\) defined as the maximum size of an interval, and error parameters \(0<\epsilon\leq 1\), \(0\leq\delta\leq 1\). The algorithm does the following:_
* _If_ \(|I|\leq m\)_,_ \(D(I)\geq\gamma\) _and_ \(bias(D\upharpoonright I)\leq\frac{\epsilon}{100}\)_, then the algorithm accepts with probability at least_ \(1-\delta\)_._
* _If_ \(|I|\leq m\)_,_ \(D(I)\geq\gamma\)_, and_ \(d_{TV}(D\upharpoonright I,U_{I})>\epsilon\)_, then the algorithm rejects with probability at least_ \(1-\delta\)_._
_In all other cases, the algorithm behaves arbitrarily. The algorithm requires \(O(\sqrt{m}\log{(1/\delta)}/\gamma\epsilon^{2})\) samples from \(D\)._
Now, we explain how we implement the above-mentioned ideas in the streaming setting. It is easy to observe that Theorem 5.2 can be used as-is in the streaming setting without storing any samples. Consider a set of \(O(\frac{1}{\eta}\log 1/\gamma^{\delta})\) points that appear online as a stream, one at a time. We can construct the singleton intervals online by looking at the sampled points. Later, we can add the intervals lying in between two singleton intervals. However, the weakly tolerant interval uniformity tester requires \(O(\sqrt{m}\log{(1/\delta)}/\gamma\epsilon^{2})\) samples ([14]) for each \(I_{j}\) where \(|I_{j}|\leq m\), which leads to excessive memory usage. We observe that the task of the weakly tolerant interval uniformity tester can be replaced by the bipartite collision count, and we present the following algorithm, which is a small modification of [11].
The following theorem shows how bipartite collision count does the same task as that of the weakly tolerant interval uniformity tester.
**Theorem 5.4**.: _Let \(D\) be a distribution over \([n]\), \(I\subset[n]\) be an interval such that \(D(I)\geq\epsilon/r\). Let \(S_{I}\) be the set of samples that falls inside \(I\) while sampling \(S=O(\frac{nr}{m\epsilon^{8}})\) points according to \(D\). Consider \(S_{I}\) can be divided into two sets \(S_{1}\) and \(S_{2}\) such that \(|S_{1}|\cdot|S_{2}|\geq|S_{I}|/\epsilon^{4}\). Then the following happens with high probability,_
* _If_ \(bias(D\upharpoonright I)\leq\frac{\epsilon}{100}\)_, then_ \(\frac{coll(S_{1},S_{2})}{|S_{1}||S_{2}|}\leq\frac{1+\epsilon^{\prime}+ \epsilon^{2}/64}{|I_{j}|}\)_; where_ \(\epsilon^{\prime}=\frac{\epsilon^{2}}{10^{4}}\)__
* _If_ \(d_{TV}(D\upharpoonright I,U_{I})>\epsilon\)_, then_ \(\frac{coll(S_{1},S_{2})}{|S_{1}||S_{2}|}>\frac{1+63\epsilon^{2}/64}{|I|}\)__
Proof.: As \(D(I)\geq\epsilon/r\), an additive Chernoff bound shows that \(I\) will contain \(|S_{I}|=O(\frac{n}{m\epsilon^{5}})\) points out of \(S=O(\frac{nr}{m\epsilon^{8}})\) samples with probability at least \(2/3\). Also considering \(S_{1}=O(\frac{m}{\log n})\) and \(S_{2}=O(\frac{n}{m\epsilon^{5}})\), we get \(|S_{1}|\cdot|S_{2}|=O(n/\epsilon^{5}\log n)\geq O(\frac{n}{m\epsilon^{9}})\) for \(m\geq\log n/\epsilon^{4}\). This implies that \(|S_{1}|\cdot|S_{2}|\geq\frac{|S_{I}|}{\epsilon^{4}}\). Hence, by Lemma 4.5 the following happens with probability at least \(2/3\), \(||D_{I}||_{2}^{2}-\frac{\epsilon^{2}}{64|I|}\leq\frac{coll(S_{1},S_{2})}{|S_{1 }||S_{2}|}\leq||D_{I}||_{2}^{2}+\frac{\epsilon^{2}}{64|I|}\). Let \(bias(D\upharpoonright I)\leq\frac{\epsilon}{100}\) which implies that for \(x\in I\), \(max_{x\in I}D(x)\leq(1+\epsilon/100)min_{x\in I}D(x)\). Using the Lemma 2.1 we get, \(||D_{I}||_{2}^{2}\leq\frac{1+\epsilon^{2}/10^{4}}{|I|}\). Hence, we obtain, \(\frac{coll(S_{1},S_{2})}{|S_{1}||S_{2}|}<\frac{1+\epsilon^{\prime}+\epsilon^{ 2}/64}{|I|}\). Let \(d_{TV}(D\upharpoonright I,U_{I})>\epsilon\), by the Lemma 2.1, \(||D_{I}||_{2}^{2}>\frac{1+\epsilon^{2}}{|I|}\). As \(D(I)\geq\epsilon/r\), we already proved \(|S_{1}|\cdot|S_{2}|\geq\frac{|S_{I}|}{\epsilon^{4}}\). Applying Lemma 4.5, we get \(\frac{coll(S_{1},S_{2})}{|S_{1}||S_{2}|}>\frac{1+63\epsilon^{2}/64}{|I|}\).
The above theorem shows that the acceptance and rejection conditions of the weakly tolerant interval uniformity tester can be substituted by the bipartite collision count, except that the condition \(|I_{j}|\leq n/c\) is not examined. We check this simply by adding an extra condition to our algorithm.
**Theorem 5.5**.: _The algorithm assessing a partition streaming takes a set of \((\eta,\gamma)\)-fine interval partitions \(\mathcal{I}=\{I_{1},...,I_{r}\}\) as input, uses \(S=O(\frac{nr}{m\epsilon^{8}})\) samples according to the standard access oracle and does the following when \(c\eta+\gamma\leq\epsilon\),_
* _Define_ \(\mathcal{G}_{\mathcal{I}}=\{I_{j}\in\mathcal{I}:bias(D\upharpoonright I_{j})\leq \frac{\epsilon}{100}\}\)_. If_ \(D(\cup_{I_{j}\in\mathcal{G}_{\mathcal{I}}}I_{j})\geq 1-\epsilon\)_, then the algorithm outputs Accept with probability at least_ \(2/3\)_._
* _Define_ \(\mathcal{F}_{\mathcal{I}}=\{I_{j}\in\mathcal{I}:d_{TV}(D\upharpoonright I_{j},U_{I_{j}})>\epsilon\}\)_. If_ \(D(\cup_{I_{j}\in\mathcal{F}_{\mathcal{I}}}I_{j})\geq 7\epsilon\)_, then the algorithm outputs Reject with probability at least_ \(2/3\)_._
_The memory requirement for the algorithm is \(O(m)\) bits, where \(\log n/\epsilon^{4}\leq m\leq O(\sqrt{n\log n}/\epsilon^{3})\)._
The correctness of the above theorem follows from [11]; we give a brief outline below.
Proof.: Let us define the set \(\mathcal{N}_{\mathcal{I}}=\{I_{j}:|I_{j}|>n/c\;\text{or}\;D(I_{j})<\epsilon/r\}\). It is observed that for a set of \((\eta,\gamma)\)-fine intervals where \(c\eta+\gamma\leq\epsilon\), \(D(\mathcal{N}_{\mathcal{I}})\leq 2\epsilon\). Hence, \(D(\mathcal{G}_{\mathcal{I}}\setminus\mathcal{N}_{\mathcal{I}})\geq(1-3\epsilon)\). As a result, out of the \(O(1/\epsilon)\) iterations, at most \(4\epsilon k\) of the drawn intervals fall outside the desired set where \(|I_{j}|\leq\frac{n}{c}\), \(D(I_{j})\geq\frac{\epsilon}{r}\) and \(bias(D\upharpoonright I_{j})\leq\frac{\epsilon}{100}\). The remaining intervals can be certified by counting the number of bipartite collisions, which is correct on them by Theorem 5.4. Hence, the algorithm outputs Accept.
By the definition of \(\mathcal{F}_{\mathcal{I}}\), it is easy to observe that \(D(\mathcal{F}_{\mathcal{I}}\setminus\mathcal{N}_{\mathcal{I}})\geq 5\epsilon\). As a result, out of \(O(1/\epsilon)\) iterations, more than \(4\epsilon k\) intervals are drawn from the desired set where \(|I_{j}|\leq\frac{n}{c}\) and \(D(I_{j})\geq\frac{\epsilon}{r}\) and \(d_{TV}(D\upharpoonright I_{j},U_{I_{j}})>\epsilon\). Furthermore, these intervals will be caught by Theorem 5.4 and the algorithm outputs Reject with high probability.
**Space complexity :** The first set of points \(T=O(\frac{1}{\epsilon^{4}}r^{2}\log r)\) is sampled for estimating the weight \(D(I_{j})\) of each interval. An additive Chernoff bound (followed by a union bound over \(r\)) shows that by using \(T\) samples, for all intervals \(I_{j}\), \(|D(I_{j})-\tilde{D}(I_{j})|\leq\epsilon^{2}/r\). Instead of storing all the samples, we use a CountMin sketch with parameters \((\epsilon,\delta)\) to save space. When the samples appear one at a time as a stream of \(T\) elements, we store the frequencies of each element in the CountMin table. If \(f_{x}\) is the frequency of an element \(x\in T\), by Lemma 2.4, with probability at least \((1-\delta)\), \(f_{x}\leq\tilde{f}_{x}\leq f_{x}+\epsilon|T|\). We can get the frequency of an interval \(I_{j}\) by adding the frequencies of all the elements lying in \(I_{j}\), i.e., \(f_{I_{j}}=\sum_{x\in I_{j}}f_{x}\). We observe that \(f_{I_{j}}\leq\tilde{f}_{I_{j}}\leq f_{I_{j}}+\epsilon|T|^{2}\). We also know that \(\tilde{D}(I_{j})=\frac{f_{I_{j}}}{|T|}\) and \(\tilde{D}(I_{j})\geq D(I_{j})-\epsilon^{2}/r\). Combining these, to check if \(D(I_{j})\geq\epsilon/r\), it is sufficient to check if \(\frac{f_{I_{j}}}{|T|}\geq\epsilon/r-\epsilon^{2}/r\). The space used for this procedure is \(O(\frac{1}{\epsilon}\log 1/\delta)<m\) by the use of CountMin \((\epsilon,\delta)\). For the rest of the algorithm, we store a set of \(|S_{1}|=O(\frac{m}{\log n})\) samples in memory. So, a total of \(|S_{1}|\cdot\log n=O(m)\) bits of storage is required for the implementation of the algorithm.
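The CountMin bookkeeping described above can be sketched as follows. This is a textbook CountMin sketch (width \(\lceil e/\epsilon\rceil\), depth \(\lceil\ln(1/\delta)\rceil\)), not necessarily the exact parametrization of Lemma 2.4, and Python's salted `hash` stands in for a pairwise-independent hash family; answering an interval query by summing point estimates, as the paragraph above does, may be slow for long intervals. It only illustrates the memory/accuracy trade-off being invoked.

```python
import math
import random

class CountMin:
    """Textbook CountMin sketch: with probability 1 - delta,
    f_x <= estimate(x) <= f_x + eps * (number of insertions)."""
    def __init__(self, eps, delta, seed=0):
        self.width = math.ceil(math.e / eps)
        self.depth = max(1, math.ceil(math.log(1.0 / delta)))
        rnd = random.Random(seed)
        # Random salts; hash((salt, x)) approximates independent hash functions.
        self.salts = [rnd.getrandbits(64) for _ in range(self.depth)]
        self.table = [[0] * self.width for _ in range(self.depth)]

    def add(self, x):
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, x)) % self.width] += 1

    def estimate(self, x):
        return min(self.table[row][hash((salt, x)) % self.width]
                   for row, salt in enumerate(self.salts))

def interval_frequency(sketch, lo, hi):
    """f_{I_j}: sum of point estimates over the closed interval [lo, hi]."""
    return sum(sketch.estimate(x) for x in range(lo, hi + 1))
```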
Now, we describe the final learning algorithm for \((\gamma,L)\)-decomposable distribution in the one-pass streaming settings.
```
Input : SAMP access to \(D\) supported over \([n]\), parameters \(c=20,r=10^{5}L\log{(1/\epsilon)}/\epsilon\), error parameters \(\epsilon,\delta\), memory requirement \(\log n/\epsilon^{4}\leq m\leq O(\sqrt{n\log n}/\epsilon^{3})\) Output : An explicit distribution \((\tilde{D}^{f})^{\mathcal{I}}\)
1 Use Theorem 5.2 to obtain a set of \((\epsilon/2000L,\epsilon/2000)\) fine partitions of \([n]\)
2 Run algorithm assessing partition streaming
3 if it Rejects then
4 Reject
5else
6 Return the flattened distribution of \(\tilde{D}\), i.e., \((\tilde{D}^{f})^{\mathcal{I}}\)
```
**Algorithm 7** Learning \(L\)-decomposable Distribution Streaming
The correctness of the algorithm follows from Lemma 7.1 of ([11]). Our adaptation of the lemma is as follows:
**Theorem 5.6**.: _If \(D\) is an \((\epsilon/2000,L)\)-decomposable distribution, then the algorithm learning \(L\)-decomposable distribution streaming outputs a distribution \((\tilde{D}^{f})^{\mathcal{I}}\) such that \(d_{TV}(D,(\tilde{D}^{f})^{\mathcal{I}})\leq\epsilon\) with probability at least \(1-\delta\). The algorithm requires \(O(\frac{nL\log{(1/\epsilon)}}{m\epsilon^{9}})\) samples from \(D\) and needs \(O(m)\) bits of memory where \(\log n/\epsilon^{4}\leq m\leq O(\sqrt{n\log n}/\epsilon^{3})\)._
The above algorithm can be used as a subroutine for testing \((\gamma,L)\)-decomposable properties. Given an unknown distribution \(D\), we use the learning algorithm above to obtain an explicit description of the distribution and then test whether it is \((\gamma,L)\)-decomposable. Once again, as in the monotone distribution case, we note that the final testing of the explicit description will require additional space that we have not accounted for in the earlier algorithm.
**Theorem 5.7**.: _Let \(\mathcal{C}\) be a \((\gamma,L)\)-decomposable property for \(L=L(\epsilon/4000,n)\). The algorithm testing \(L\)-decomposable properties streaming requires \(O(\frac{nL\log{(1/\epsilon)}}{m\epsilon^{9}})\) samples from \(D\) and does the following:_
* _If_ \(D\) _satisfies_ \(\mathcal{C}\)_, it outputs Accept with probability at least_ \((1-\delta)\)__
* _If_ \(D\) _is_ \(2\epsilon\)_-far from_ \(\mathcal{C}\)_, it outputs Reject with probability_ \((1-\delta)\)__
_The algorithm uses \(O(m)\) bits of memory where \(\log n/\epsilon^{4}\leq m\leq O(\sqrt{n\log n}/\epsilon^{3})\)._
Proof.: Suppose \(D\) satisfies \(\mathcal{C}\). The explicit distribution \((\tilde{D}^{f})^{\mathcal{I}}\) will be \(\epsilon\)-close to \(\mathcal{C}\), and hence the algorithm outputs Accept in this case. Similarly, when \(D\) is \(2\epsilon\)-far from \(\mathcal{C}\), \((\tilde{D}^{f})^{\mathcal{I}}\) will be \(\epsilon\)-far from \(\mathcal{C}\) and the algorithm outputs Reject.
The algorithm requires \(O(L/\epsilon)\) samples from \(D\) to find the first set of \((\epsilon/2000L,\epsilon/2000)\)-fine partitions by Theorem 5.2. However, \(O(\frac{nL\log{(1/\epsilon)}}{m\epsilon^{9}})\) samples are required for running the learning algorithm, which dominates the total sample complexity. The memory requirement for the algorithm is \(O(m)\) bits, the same as required for learning an \(L\)-decomposable distribution in the streaming setting.
## Conclusion
We give efficient algorithms for testing identity, monotonicity and \((\gamma,L)\)-decomposability in the streaming model. For a memory constraint \(m\), the number of samples required is a function of the support size \(n\) and the constraint \(m\). For monotonicity testing, our bounds are nearly optimal. We note that the trade-offs we achieve, and the lower bounds, hold for certain ranges of the memory parameter \(m\). Furthermore, we have not tried to tighten the dependence of the bounds on the parameter \(\epsilon\). A natural question is whether the dependence of the sample complexity on \(m\) can be improved, and whether it can be made to work for a larger range of values of \(m\).
|
2303.01409
|
The field of moduli of varieties with a structure
|
If $X$ is a variety with an additional structure $\xi$, such as a marked
point, a divisor, a polarization, a group structure and so forth, then it is
possible to study whether the pair $(X,\xi)$ is defined over the field of
moduli. There exists a precise definition of ``algebraic structures'' which
covers essentially all of the obvious concrete examples. We prove several
formal results about algebraic structures. There are immediate applications to
the study of fields of moduli of curves and finite sets in $\mathbb{P}^{2}$,
but the results are completely general.
Fix $G$ a finite group of automorphisms of $X$, a $G$-structure is an
algebraic structure with automorphism group equal to $G$. First, we prove that
$G$-structures on $X$ are in a $1:1$ correspondence with twisted forms of
$X/G\dashrightarrow\mathcal{B} G$. Secondly we show that, under some
assumptions, every algebraic structure on $X$ is equivalent to the structure
given by some $0$-cycle. Third, we give a cohomological criterion for checking
the existence of $G$-structures not defined over the field of moduli. Fourth,
we identify geometric conditions about the action of $G$ on $X$ which ensure
that every $G$-structure is defined over the field of moduli.
|
Giulio Bresciani
|
2023-03-02T16:58:15Z
|
http://arxiv.org/abs/2303.01409v2
|
# The field of moduli of varieties with a structure
###### Abstract.
If \(X\) is a variety with an algebraic structure \(\xi\), such as a marked point, a divisor, a group structure and so forth, then it is possible to study whether the pair \((X,\xi)\) is defined over the field of moduli. There exists a precise definition of "algebraic structures" which covers essentially all of the obvious concrete examples. We prove several formal results about algebraic structures. There are immediate applications to the study of fields of moduli of curves and finite sets in \(\mathbb{P}^{2}\), but the results are completely general.
First, we give a way of classifying algebraic structures on \(X\) up to equivalence. Secondly we show that, under some assumptions, every algebraic structure on \(X\) is equivalent to the structure given by some \(0\)-cycle. Fix \(G\) a finite group of automorphisms of \(X\). Third, we give a cohomological criterion for checking the existence of algebraic structures not defined over the field of moduli with group of automorphisms equal to \(G\). Fourth, we identify geometric conditions regarding the action of \(G\) on \(X\) which ensure that every algebraic structure with group of automorphisms equal to \(G\) is defined over the field of moduli.
## 1. Introduction
Let \(K/k\) be a possibly infinite Galois extension of fields. Consider a variety \(X\) over \(K\) with some additional structure \(\xi\): for instance, \(\xi\) might be a preferred point \(x\in X(K)\), or a line bundle over \(X\), or a group scheme structure (see [1, SS5] for a precise definition of "structures").
Given a Galois element \(\sigma\in\operatorname{Gal}(K/k)\), consider the twist \(\sigma^{*}(X,\xi)\): if \(X\) is defined by polynomials, this corresponds to applying \(\sigma\) to the coefficients of the polynomials. It can be shown that the subgroup \(H\subset\operatorname{Gal}(K/k)\) of elements \(\sigma\) such that \(\sigma^{*}(X,\xi)\simeq(X,\xi)\) is open, and the field of elements of \(K\) fixed by \(H\) is called _the field of moduli_ of \((X,\xi)\). If \((X,\xi)\) descends to some subextension \(K/k^{\prime}/k\), then \(k^{\prime}\) contains the field of moduli. The following is the basic question.
**Question**.: Is \((X,\xi)\) defined over its field of moduli?
One of the oldest known results regarding this question is the fact that an elliptic curve \(E\) over \(\bar{\mathbb{Q}}\) is defined over \(\mathbb{Q}(j_{E})\) where \(j_{E}\) is the \(j\)-invariant of \(E\); this result predates the concept of field of moduli by several decades. Fields of moduli were introduced by Matsusaka [14] in 1958 and later clarified by Shimura [15], who also proved that a generic, principally polarized abelian variety of odd dimension in characteristic \(0\) is defined over its field of moduli [15]. They have been studied intensively over the years, mainly for curves and abelian varieties, see for instance [16][17][18][19][20][21][22][23].
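To make the elliptic-curve example concrete: for \(j\neq 0,1728\) there is a classical Weierstrass model with \(j\)-invariant equal to \(j\) whose coefficients already lie in \(\mathbb{Q}(j)\) (we quote this standard formula only as an illustration; it is not taken from the present paper):

\[E_{j}\colon\ y^{2}+xy=x^{3}-\frac{36}{j-1728}\,x-\frac{1}{j-1728},\]

so an elliptic curve \(E\) over \(\bar{\mathbb{Q}}\) with \(j_{E}\neq 0,1728\) is isomorphic over \(\bar{\mathbb{Q}}\) to the base change of a curve defined over its field of moduli \(\mathbb{Q}(j_{E})\).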
One main reason why results are restricted to curves and abelian varieties is the lack of appropriate technology. In particular, a lot of results about curves rely on results by Debes, Douai and Emsalem contained in [1][1] which until
recently were only available in dimension 1. In our joint article with A. Vistoli [BVb], we have generalized and clarified their methods and results: there are now a general framework and general techniques for studying fields of moduli of varieties with a structure in arbitrary dimension. As an example of our new techniques, we reproved and generalized Shimura's result about abelian varieties in a much more theoretical fashion [BVb, Corollary 6.25].
The study of fields of moduli of higher dimensional varieties is thus a largely unexplored topic. As a first application of our joint work with Vistoli [BVb] to an open problem, in [Brec] [Bred] we study the field of moduli of curves and finite subsets of \(\mathbb{P}^{2}\). Among other things, we prove that every smooth plane curve of degree prime with 6 is defined over the field of moduli.
This brief note is an expansion of the general technology constructed in [BVb]. The results contained here are crucial for our work on \(\mathbb{P}^{2}\), but completely general in nature, and will be applied in more forthcoming works about fields of moduli of higher dimensional varieties.
## 2. Contents of the paper
In [BVb, SS5] we defined what is an _algebraic structure_ on a variety (more generally, an algebraic space): we needed a precise definition in order to formulate theorems about all sorts of structured spaces, such as spaces with a group structure, spaces with marked points, spaces with an effective divisor, spaces with a polarization and so forth. In our intentions it was nothing more than a unifying definition.
It turns out that studying algebraic structures as independent, abstract objects not necessarily tied to some geometric meaning leads downstream to insights about the original problem of fields of moduli of actual, geometric objects. Let us give an example.
In [Brec] we prove that, for the large majority of finite subgroups \(G\subset\operatorname{PGL}_{3}(\bar{\mathbb{Q}})\), _every_ algebraic structure on \(\mathbb{P}^{2}_{\mathbb{Q}}\) with automorphism group equal to \(G\) is defined over its field of moduli. This is completely independent of the geometric origin of the structure: the result holds for sets of points, for smooth curves, for cycles and so forth. The general techniques which we give here have nothing to do with \(\mathbb{P}^{2}\), though, so the same kind of analysis can be done for other varieties.
### Equivalent structures
For the rest of the paper, \(k\) is a field with separable closure \(K\), \(X\) is an integral algebraic space of finite type over \(k\), \(G\subset\operatorname{Aut}(X)\) is a finite group of automorphisms of \(X\) of order prime with \(\operatorname{char}k\). A \(G\)-structure is an algebraic structure \(\xi\) on \(X\) such that \(\underline{\operatorname{Aut}}(X,\xi)\subset\underline{\operatorname{Aut}}(X)\) is finite, etale and equal to \(G\).
There is a gerbe \(\mathscr{G}_{\xi}\) over \(k_{\xi}\) called the _residual gerbe_[BVb, SS3.1] which parametrizes twisted forms of \((X,\xi)\), and a universal family \(\mathscr{X}_{\xi}\to\mathscr{G}_{\xi}\) whose fibers are twists of \(X\)[BVb, SS5]. In particular, \(\xi\) is defined over \(k_{\xi}\) if and only if \(\mathscr{G}_{\xi}(k_{\xi})\neq\emptyset\). If \(\mathbf{X}_{\xi}\) is the coarse moduli space of \(\mathscr{X}_{\xi}\), then there is an induced rational map \(\mathbf{X}_{\xi}\dashrightarrow\mathscr{G}_{\xi}\).
We regard two \(G\)-structures as equivalent if they have the same field of moduli and isomorphic universal families over isomorphic residual gerbes. Loosely speaking, two \(G\)-structures are equivalent if they contain essentially the same data. For instance, the datum of two lines in \(\mathbb{P}^{2}\) is equivalent to the datum of a point plus two tangent directions.
### Twisted \(G\)-quotients
Our first result is a classification of algebraic structures up to equivalence.
**Definition 1**.: A _twisted \(G\)-quotient_ of \(X\) over \(k\) is a twisted form \(Y\dashrightarrow\mathscr{G}\) over \(k\) of \(X/G\dashrightarrow\mathscr{B}_{K}G\).
**Theorem 2**.: _The mapping \(\xi\mapsto(\mathbf{X}_{\xi}\dashrightarrow\mathscr{G}_{\xi})\) defines a one-to-one correspondence between \(G\)-structures on \(X\) and twisted \(G\)-quotients of \(X\) over a finite subextension of \(K/k\), up to equivalence._
Thanks to Theorem 2, one can study twisted \(G\)-quotient directly, forgetting about the original structure. If one can prove that, for a given \(G\) and for every twisted \(G\)-quotient \(Y\dashrightarrow\mathscr{G}\) the gerbe \(\mathscr{G}\) has a rational point, then we automatically get that every \(G\)-structure \(\xi\) is defined over its field of moduli, regardless of whether \(\xi\) was the datum of a point, a divisor, a group structure or anything else. In fact, this is the case for most finite subgroups of both \(\underline{\operatorname{Aut}}(\mathbb{P}^{1})=\operatorname{PGL}_{2}\)[Breb] and \(\underline{\operatorname{Aut}}(\mathbb{P}^{2})=\operatorname{PGL}_{3}\)[Brec].
### Interpretation as cycle-structures
If, on the other hand, we find a twisted \(G\)-quotient \(Y\dashrightarrow\mathscr{G}\) with \(\mathscr{G}(k)=\emptyset\), we can search for a meaningful structure whose associated twisted \(G\)-quotient is \(Y\dashrightarrow\mathscr{G}\). This is the technique we used to construct examples of finite subsets of \(\mathbb{P}^{1}\)[Breb] and of \(\mathbb{P}^{2}\)[Brec] not defined over their field of moduli. The next result says that, under some assumptions, it is always possible to give such an interpretation using \(0\)-cycles.
**Theorem 3**.: _Assume that \(\operatorname{char}k=0\), that \(\underline{\operatorname{Aut}}(X)\) is of finite type over \(k\), and that there exists some finite extension \(k^{\prime}/k\) with a model \(\mathfrak{X}^{\prime}\) over \(k^{\prime}\) such that \(\mathfrak{X}^{\prime}(k^{\prime})\) is dense._
_For every algebraic structure \(\xi\) on \(X\), there exists a \(0\)-cycle \(Z\) on \(X\) such that \((X,Z)\) is equivalent to \((X,\xi)\)._
### Cohomological criterion
We give a cohomological criterion to study the existence of \(G\)-structures not defined over their field of moduli. Suppose that \(K\) is separably closed, let \(\mathfrak{X}\) be an integral algebraic space of finite type over \(k\) with \(\mathfrak{X}_{K}=X\), and \(\mathfrak{G}\) a group scheme finite etale over \(k\) with a faithful action on \(\mathfrak{X}\) such that \(\mathfrak{G}_{K}=G\subset\underline{\operatorname{Aut}}(X)\). Let \(\mathfrak{N}\subset\underline{\operatorname{Aut}}(X)\) a subgroup sheaf which normalizes \(G\), and \(\mathfrak{Q}=\mathfrak{N}/\mathfrak{G}\) the quotient.
**Theorem 4**.: _If the natural map \(\operatorname{H}^{1}(k,\mathfrak{N})\to\operatorname{H}^{1}(k,\mathfrak{Q})\) is not surjective, then there exists a \(G\)-structure of \(X\) with field of moduli \(k\) which does not descend to \(k\). If \(\mathfrak{N}\) is the entire normalizer of \(\mathfrak{G}\), the converse holds._
### Rational points on twisted quotients
While Theorem 4 is mostly useful to construct counterexamples by exhibiting an actual element of \(\operatorname{H}^{1}(k,\mathfrak{Q})\) which does not lift to \(\mathfrak{N}\), it is harder to apply it in the other direction, namely showing that \(\operatorname{H}^{1}(k,\mathfrak{N})\to\operatorname{H}^{1}(k,\mathfrak{Q})\) is surjective when \(\mathfrak{N}\) is the entire normalizer of \(\mathfrak{G}\). Usually, there is a more fruitful strategy for trying to show that, for fixed \(G\), an arbitrary twisted \(G\)-quotient \(Y\dashrightarrow\mathscr{G}\) satisfies \(\mathscr{G}(k)\neq\emptyset\).
If we can find a smooth rational point \(y\in Y(k)\), then, since \(|G|\) is prime with \(\operatorname{char}k\), by the Lang-Nishimura theorem for tame stacks [BvA, Theorem 4.1] we have that \(\mathscr{G}(k)\neq\emptyset\). Furthermore, the smoothness assumption can be relaxed, see [BVb, SS6][Brea]. It is then important to clarify under which conditions a closed subset \(Z\subset X\) descends to a rational point (or more generally to a subspace) of \(Y\) for every twisted \(G\)-quotient \(Y\dashrightarrow\mathscr{G}\). We identify and study such conditions in SS6.
## 3. Proof of Theorem 2
As we have said above, in our joint article [3, SS5] with A. Vistoli we gave a precise definition of what an "algebraic structure" on a variety (or algebraic space) is. The following Lemma 5 provides a shortcut for the definition.
**Lemma 5**.: _If \(K/k\) is an algebraic extension and \(X\) is an integral algebraic space of finite type over \(K\), mapping an algebraic structure \(\xi\) to its universal family \(\mathscr{X}_{\xi}\to\mathscr{G}_{\xi}\) defines a bijection between_
* _algebraic structures on_ \(X\) _up to equivalence, and_
* _pair of morphisms_ \(p:\operatorname{Spec}K\to\mathscr{G}\)_,_ \(\mathscr{X}\to\mathscr{G}\) _with an isomorphism_ \(\mathscr{X}\times_{\mathscr{G}}\operatorname{Spec}K\stackrel{{ \sim}}{{\to}}X\) _where_ \(\mathscr{G}\) _is a finite gerbe over a finite subextension of_ \(K/k\) _such that the induced action of_ \(\operatorname{\underline{Aut}}_{\mathscr{G}}(p)\) _on_ \(X\) _is faithful, up to equivalence._
Let \(G\subset\operatorname{\underline{Aut}}(X)\) be a finite etale group scheme of degree prime with \(\operatorname{char}k\). Theorem 2 is a direct consequence of Lemma 5 plus the following Proposition 6.
**Proposition 6**.: _Assume that \(K/k\) is separable, and let \(Y\dasharrow\mathscr{G}\) be a twisted \(G\)-quotient of \(X\) over \(k\). There exists an algebraic stack \(\mathscr{X}\) over \(k\) with a commutative diagram_
_such that \(Y\) is the coarse moduli space of \(\mathscr{X}\) and the base change of \(\mathscr{X}\to\mathscr{G}\) to \(K\) is \([X/G]\to\mathscr{B}_{K}G\)._
Proof.: If \(S\) is a scheme and \(S\to U/G\times\mathscr{B}_{K}G\) is a morphism, let \(T_{1}\to S\) be the \(G\)-torsor given by \(P\to\mathscr{B}_{K}G\), and \(T_{2}\to S\) the torsor obtained by pulling back \(U\to U/G\) along \(S\to U/G\), then we have a \(2\)-cartesian diagram
It follows that \(U/G\to U/G\times\mathscr{B}_{K}G\) is representable, finite and etale. On the other hand, \([X/G]\to X/G\times\mathscr{B}_{K}G\) is representable and finite: it is clearly quasi-finite, it is proper since \([X/G]\to X/G\) is proper and it is representable since \([X/G]\to\mathscr{B}_{K}G\) is representable. Furthermore, \([X/G]\) is normal since \(X\) is normal. It follows that \([X/G]\) is the relative normalization [10, SS5.1] of \(X/G\times\mathscr{B}_{K}G\) with respect to \(U/G\to X/G\times\mathscr{B}_{K}G\). Everything is of finite type, so there exists a finite extension \(k^{\prime}/k\) where everything is defined, and by the same argument the relative normalization and the quotient stack coincide on \(k^{\prime}\).
The open subset \(U/G\subset X/G\) descends to an open subset \(V\subset Y\) which coincides with the largest open subset where \(Y\dasharrow\mathscr{G}\) is defined, see for instance [1, Corollary A.2]. Define \(\mathscr{X}\to Y\times\mathscr{G}\) as the relative normalization of \(Y\times\mathscr{G}\) with respect to \(V\to Y\times\mathscr{G}\). Since \(k^{\prime}/k\) is finite and separable and the relative normalization commutes with smooth base change, we have that \(\mathscr{X}_{k^{\prime}}\) is the quotient stack, and hence \(\mathscr{X}_{K}\simeq[X/G]\) is the quotient stack too.
Because of Proposition 6, we are going to use the term "twisted \(G\)-quotient" interchangeably for twisted forms of both \(X/G\dasharrow\mathscr{B}_{K}G\) and \([X/G]\to\mathscr{B}_{K}G\).
## 4. Proof of Theorem 3
Let \(k\) be a field with separable closure \(K\), \(X\) an integral algebraic space of finite type over \(K\), \(G\subset\operatorname{Aut}_{K}(X)\) a finite group of automorphisms of \(X\) over \(K\) of order prime with \(\operatorname{char}k\). Consider the groups \(\operatorname{Aut}_{k}(X),\operatorname{Aut}_{K}(X)\) of automorphisms of \(X\) over \(k\) and \(K\) respectively. A \(k\)-automorphism of \(X\) induces a \(k\)-automorphism of \(K\subset\operatorname{H}^{0}(X,\mathscr{O})\), hence there is a short exact sequence
\[1\to\operatorname{Aut}_{K}(X)\to\operatorname{Aut}_{k}(X)\to\operatorname{ Gal}(K/k)\]
where the image of the right arrow is the Galois group of the field of moduli of \(X\).
Let \(Y\dashrightarrow\mathscr{G}\) be a twisted \(G\)-quotient of \(X\) over a finite subextension \(k^{\prime}/k\), we have an identification \(Y_{K}=X/G\) and a projection \(q:X\to X/G=Y_{K}\to Y\).
**Definition 7**.: The _distinctive subgroup_\(\mathscr{N}_{Y}\subset\operatorname{Aut}_{k}(X)\) of \(Y\) is the subgroup of automorphisms \(\phi\) of \(X\) over \(k\) such that the diagram
commutes. If \(\xi\) is an algebraic structure on \(X\), the distinctive subgroup of \(\xi\) is the distinctive subgroup of its compression, and we simply write \(\mathscr{N}_{\xi}\).
The proof of the following is straightforward.
**Lemma 8**.: _A closed subscheme \(Z\subset X\) descends to \(Y\) if and only if \(\phi(Z)=Z\) for every \(\phi\in\mathscr{N}_{Y}\). _
Write \(\mathscr{N}_{X/k,G}\subset\operatorname{Aut}_{k}(X)\) for the normalizer of \(G\) in \(\operatorname{Aut}_{k}(X)\).
**Lemma 9**.: _With notation as above, we have that \(G=\mathscr{N}_{Y}\cap\operatorname{Aut}_{K}(X)\), \(\mathscr{N}_{Y}/G\simeq\operatorname{Gal}(K/k^{\prime})\) and there are inclusions_
\[G\subset\mathscr{N}_{Y}\subset\mathscr{N}_{X/k,G}\subset\operatorname{Aut}_{ k}(X).\]
Proof.: If \(\pi:X\to X/G\) is the projection, the elements of the intersection \(\mathscr{N}_{Y}\cap\operatorname{Aut}_{K}(X)\) are the \(K\)-automorphisms \(\phi\) of \(X\) such that \(\pi\circ\phi=\pi\), i.e. the elements of \(G\). The kernel of the natural morphism \(\mathscr{N}_{Y}\to\operatorname{Gal}(K/k^{\prime})\) is \(\mathscr{N}_{Y}\cap\operatorname{Aut}_{K}(X)=G\), hence we need to show that \(\mathscr{N}_{Y}\to\operatorname{Gal}(K/k^{\prime})\) is surjective.
Let \(f:\mathscr{X}\to\mathscr{G}\) be the morphism constructed in Proposition 6, and \(f_{K}:[X/G]=\mathscr{X}_{K}\to\mathscr{G}\) the composition of \(f\) with the base change to \(K\). If \(\sigma\in\operatorname{Gal}(K/k^{\prime})\) is a Galois element, this induces a \(\sigma\)-linear automorphism \(\sigma^{*}\) of \([X/G]=\mathscr{X}_{K}\). Since \(f_{K}\circ\sigma^{*}=f_{K}:[X/G]\to\mathscr{G}\), we get an isomorphism between the associated coverings, namely a \(\sigma\)-linear automorphism \(\phi\) of \(X\). Clearly, \(\phi\in\mathscr{N}_{Y}\) lifts \(\sigma\).
If \(\phi\in\mathscr{N}_{Y}\) and \(g\in G\), then clearly \(\phi^{-1}\circ g\circ\phi\in\mathscr{N}_{Y}\cap\operatorname{Aut}_{K}(X)=G\), so \(\phi\in\mathscr{N}_{X/k,G}\).
**Proposition 10**.: _Two algebraic structures \(\xi,\xi^{\prime}\) on \(X\) with etale automorphism groups are equivalent if and only if their distinctive subgroups are equal._
Proof.: The "only if" part is obvious, assume \(\mathscr{N}_{\xi}=\mathscr{N}_{\xi^{\prime}}\). The statement follows from the fact that \(\mathscr{X}_{\xi}\) is the quotient stack \([X/\mathscr{N}_{\xi}]\) with the obvious action, while \(\mathscr{G}_{\xi}\) is the quotient stack \([\operatorname{Spec}K/\mathscr{N}_{\xi}]\) with the action given by the natural homomorphism \(\mathscr{N}_{\xi}\to\operatorname{Gal}(K/k)\)
**Lemma 11**.: _With notation as above, let \(U\subset X\) be a \(\mathscr{N}_{Y}\)-invariant non-empty open subset, i.e. the inverse image of an open subset of \(Y\). Assume that there exists a finite subextension \(K/k^{\prime\prime}/k^{\prime}\) such that \(Y(k^{\prime\prime})\) is dense. For every \(\tau\in\operatorname{Aut}_{k}(X)\smallsetminus\mathscr{N}_{Y}\), there exists a \(\mathscr{N}_{Y}\)-invariant finite subset \(Z\subset U(K)\) such that \(\tau(Z)\neq Z\)._
Proof.: Let \(q:X\to X/G=Y_{K}\to Y\) be the composition; by hypothesis \(q\circ\tau\neq q\). Up to enlarging \(k^{\prime\prime}\), we may assume that \(k^{\prime\prime}/k\) is Galois. We may find a finite _subset_ \(H\subset\mathscr{N}_{Y}\) such that \(H\to\operatorname{Gal}(k^{\prime\prime}/k^{\prime})\) is surjective. Notice that we cannot impose that \(H\) is a subgroup, but fortunately this is not necessary.
If \(x:\operatorname{Spec}K\to U\) is a point such that \(q\circ x:\operatorname{Spec}K\to Y\) is \(k^{\prime\prime}\)-rational and \(\phi\in\mathscr{N}_{Y}\), \(h\in H\subset\mathscr{N}_{Y}\) are elements with equal images in \(\operatorname{Gal}(k^{\prime\prime}/k^{\prime})\), then clearly
\[q\circ\phi\circ x=q\circ h\circ x:\operatorname{Spec}K\to Y.\]
Since \(\tau\) is not in \(\mathscr{N}_{Y}\), I claim that there exists a point \(x:\operatorname{Spec}K\to U\) whose image in \(Y\) is \(k^{\prime\prime}\)-rational and such that \(\tau\circ x\neq\phi\circ x\) for every \(\phi\in\mathscr{N}_{Y}\). If by contradiction this is false, then for every \(x\in U(K)\) whose image in \(Y\) is \(k^{\prime\prime}\)-rational we may choose \(h_{x}\in H\) such that \(q\circ\tau\circ x=q\circ h_{x}\circ x\). Since \(H\) is finite and \(k^{\prime\prime}\)-rational points are dense, this implies that there exists an element \(h\in H\) such that \(q\circ\tau=q\circ h\). Since \(h\in H\subset\mathscr{N}_{Y}\), then \(q\circ h=q\) and thus \(q\circ\tau=q\), which is absurd.
Let \(x:\operatorname{Spec}K\to U\) be a point whose image in \(Y\) is \(k^{\prime\prime}\)-rational and such that \(\tau\circ x\neq\phi\circ x\) for every \(\phi\in\mathscr{N}_{Y}\), and denote by \(Z\) its \(\mathscr{N}_{Y}\)-orbit. Clearly, \(Z\) is \(\mathscr{N}_{Y}\)-invariant and \(\tau(Z)\neq Z\) since \(\tau\circ x\in\tau(Z)\) and \(\tau\circ x\not\in Z\).
Let us show how we may regard a cycle \(Z\) as an algebraic structure on \(X\). If \(C\subset X\) is a reduced, irreducible closed subscheme and \(S\) is a scheme over \(k\), a twisted form of \(C\) over \(S\) is a flat morphism \(\mathscr{X}\to S\) locally of finite type with a closed subscheme \(\mathscr{C}\subset\mathscr{X}\) such that, for some etale covering \(S^{\prime}\to S\), we have \((\mathscr{X}_{S^{\prime},K},\mathscr{C}_{S^{\prime},K})\simeq(X\times S^{ \prime}_{K},C\times S^{\prime}_{K})\). If \(Z=\sum_{i}n_{i}C_{i}\) is a cycle, a twisted form of \(Z\) over \(S\) is a flat morphism \(\mathscr{X}\to S\) locally of finite type and a formal sum \(\sum_{i}n_{i}\mathscr{C}_{i}\) where \(\mathscr{C}_{i}\subset\mathscr{X}\) is a twist of \(C_{i}\).
The residual gerbe \(\mathscr{G}_{Z}\) is the functor \(S\mapsto\){twisted forms of \(Z\)}; if \(\operatorname{Aut}(X,Z)\) is finite then \(\mathscr{G}_{Z}\) is a Deligne-Mumford stack which is a gerbe and \(\mathscr{X}_{Z}\to\mathscr{G}_{Z}\) is the corresponding universal family given by Yoneda's lemma. The coarse moduli space of \(\mathscr{G}_{Z}\) is the spectrum of the field of moduli \(k_{Z}\) of \(Z\).
Another way of constructing \(\mathscr{X}_{Z}\to\mathscr{G}_{Z}\) is the following. By construction, \(\mathscr{N}_{Z}\) is an extension of \(\operatorname{Gal}(K/k_{Z})\) by \(\mathscr{N}_{Z}\cap\operatorname{Aut}_{K}(X)=\operatorname{Aut}_{K}(X,Z)\). We have an induced action of \(\mathscr{N}_{Z}\) on \(\operatorname{Spec}K\) with the natural projection \(\mathscr{N}_{Z}\subset\operatorname{Aut}_{k}(X)\to\operatorname{Gal}(K/k)\), and the finite etale gerbe \(\mathscr{G}_{Z}\) is the quotient stack \([\operatorname{Spec}K/\mathscr{N}_{Z}]\): the natural map \(\operatorname{Spec}K\to\mathscr{G}_{Z}\) associated with the trivial twist of \(Z\) is a pro-etale, Galois covering with Galois group equal to \(\mathscr{N}_{Z}\). Similarly, we can view \(\mathscr{X}_{Z}\) as the quotient stack \([X/\mathscr{N}_{Z}]\).
**Corollary 12**.: _If \(Z\) is a cycle on \(X\), then \(\mathscr{N}_{Z}\subset\operatorname{Aut}_{k}(X)\) is the subgroup of \(k\)-linear automorphisms \(\phi\) of \(X\) such that \(\phi^{*}Z=Z\). _
Let us now prove Theorem 3. Thanks to Proposition 10 and since we are in characteristic \(0\), it is enough to find a \(0\)-cycle \(Z\) on \(X\) such that \(\mathscr{N}_{Z}=\mathscr{N}_{\xi}\).
Let \(Z_{0}\) be the empty \(0\)-cycle, and define a \(\mathscr{N}_{\xi}\)-invariant cycle \(Z_{i}\) by recursion as follows. If \(\mathscr{N}_{Z_{i}}=\mathscr{N}_{\xi}\), then \(Z_{i+1}=Z_{i}\). Otherwise, choose \(\tau\in\mathscr{N}_{Z_{i}}\smallsetminus\mathscr{N}_{\xi}\) and define
\[Z_{i+1}=Z_{i}+(i+1)Z_{\tau},\]
where \(Z_{\tau}\subset X(K)\) is a \(\mathscr{N}_{\xi}\)-invariant subset such that \(\tau(Z_{\tau})\neq Z_{\tau}\) and such that the support of \(Z_{\tau}\) is disjoint from the support of \(Z_{i}\), it exists by Lemma 11. By construction, the coefficients of \(Z_{i}\) are at most \(i\) while the coefficients of \(Z_{i+1}\) are equal to \(i+1\). This implies that
\[\mathscr{N}_{Z_{i+1}}=\mathscr{N}_{Z_{i}}\cap\mathscr{N}_{Z_{\tau}},\]
in particular \(\tau\not\in\mathscr{N}_{Z_{i+1}}\) and hence \(\mathscr{N}_{Z_{i+1}}\subsetneq\mathscr{N}_{Z_{i}}\) if \(\mathscr{N}_{Z_{i}}\neq\mathscr{N}_{\xi}\).
The group \(\operatorname{Aut}_{k}(X)\) is an extension of \(\operatorname{Gal}(K/k)\) by \(\operatorname{Aut}_{K}(X)=\operatorname{\underline{Aut}}(X)(K)\). The former is a compact group with respect to the pro-finite topology, while the latter is compact with respect to the Zariski topology since \(\operatorname{\underline{Aut}}(X)\) is noetherian. Notice that the subgroups \(\mathscr{N}_{Z_{i}}\cap\operatorname{Aut}_{K}(X)=\operatorname{Aut}_{K}(X,Z_{i})\) and \(\operatorname{im}(\mathscr{N}_{Z_{i}}\to\operatorname{Gal}(K/k))=\operatorname{Gal}(K/k_{Z_{i}})\) are both closed. This implies that the chain \(\mathscr{N}_{Z_{i}}\supseteq\mathscr{N}_{Z_{i+1}}\) is eventually stable, i.e. \(\mathscr{N}_{Z_{i}}=\mathscr{N}_{\xi}\) for some \(i\gg 0\). This concludes the proof of Theorem 3.
## 5. Proof of Theorem 4
Assume that \(K\) is separably closed. Let \(\mathfrak{G}\) be a finite, etale group scheme over \(k\) of degree prime with \(\operatorname{char}k\) which acts faithfully on an algebraic space \(\mathfrak{X}\) of finite type over \(k\). Write \(X=\mathfrak{X}_{K}\), \(G=\mathfrak{G}_{K}\). We are going to show that twisted \(G\)-quotients of \(X\) are obtained by twisting \(\mathfrak{X}/\mathfrak{G}\dashrightarrow\mathscr{B}_{k}\mathfrak{G}\) with some torsor.
Let \(\operatorname{\underline{Aut}}(\mathfrak{X})\) be the sheaf of automorphisms of \(\mathfrak{X}\). If \(\mathfrak{X}\) is projective, this is representable, but we don't need this assumption. Let \(\mathfrak{N}\subset\operatorname{\underline{Aut}}(\mathfrak{X})\) be a subgroup sheaf which normalizes \(\mathfrak{G}\), and write \(\mathfrak{Q}\) for the quotient \(\mathfrak{N}/\mathfrak{G}\).
**Lemma 13**.: _If \(\mathfrak{N}\) is the entire normalizer of \(\mathfrak{G}\), the sheaf of groups \(\mathfrak{Q}\) is isomorphic to the fppf sheaf \(\mathscr{F}\) of automorphisms of \(\mathfrak{X}/\mathfrak{G}\) for which, fppf locally, there exists an automorphism of \(\mathscr{B}_{k}\mathfrak{G}\) making the obvious diagram \(2\)-commutative._
Notice that the particular automorphism of \(\mathscr{B}_{k}\mathfrak{G}\) is _not_ part of the definition of \(\mathscr{F}\), we only require existence.
Proof.: There is an obvious injective map \(\mathfrak{Q}\to\mathscr{F}\), let us show that it is surjective. If \(S\) is a scheme over \(k\) and \(f\) is an automorphism of \(\mathfrak{X}_{S}/\mathfrak{G}_{S}\) in \(\mathscr{F}(S)\), the hypothesis implies that there exists an fppf covering \(S^{\prime}\to S\) with an automorphism \(h\) of \(\mathfrak{X}_{S^{\prime}}\) which lifts \(f_{S^{\prime}}\). The fact that \(h\) lifts an automorphism of \(\mathfrak{X}_{S^{\prime}}/\mathfrak{G}_{S^{\prime}}\) implies that \(h\in\mathfrak{N}(S^{\prime})\). While the class of \(h\) is not well defined and depends on a choice, its image \(q_{0}\in\mathfrak{Q}(S^{\prime})\) is unique. The uniqueness of \(q_{0}\) implies that it descends to an element \(q\in\mathfrak{Q}(S)\). It is straightforward to check that the image of \(q\) in \(\mathscr{F}\) is \(f\).
Let \(T\to\operatorname{Spec}k\) be a \(\mathfrak{Q}\)-torsor over \(k\). There are actions of \(\mathfrak{Q}\) on both \([\mathfrak{X}/\mathfrak{G}]\) and on \(\mathscr{B}_{k}\mathfrak{G}=[\operatorname{Spec}k/\mathfrak{G}]\), and using Romagny's theory of group actions on stacks [10] it is possible to define twists
\[[\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T=[([\mathfrak{X}/\mathfrak{G }]\times T)/\mathfrak{Q}],\ \mathscr{B}_{k}\mathfrak{G}\times^{\mathfrak{Q}}T=[(\mathscr{B}_{k} \mathfrak{G}\times T)/\mathfrak{Q}],\]
with an induced morphism
\[[\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T\to\mathscr{B}_{k} \mathfrak{G}\times^{\mathfrak{Q}}T\]
which is a twisted \(G\)-quotient. Our case is particularly simple, though: we may sidestep the general theory of group actions on stacks with the following direct definitions.
Given a scheme \(S\) over \(k\), \(\mathscr{B}_{k}\mathfrak{G}\times^{\mathfrak{Q}}T(S)\) is the groupoid of \(\mathfrak{N}\)-torsors \(P\to S\) with an \(\mathfrak{N}\)-equivariant morphism \(P\to T\), while \([\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T(S)\) is the groupoid of \(\mathfrak{N}\)-torsors \(P\to S\) with an \(\mathfrak{N}\)-equivariant morphism \(P\to T\) and a \(\mathfrak{G}\)-equivariant morphism \(P\to\mathfrak{X}\). There is an obvious forgetful functor \([\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T\to\mathscr{B}_{k}\mathfrak{ G}\times^{\mathfrak{Q}}T\).
By definition, \(\mathscr{B}_{k}\mathfrak{G}\times^{\mathfrak{Q}}T\) is the gerbe of liftings of \(T\) to \(\mathfrak{N}\). If \(T\) is trivial, it is immediate to check that \([\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T\simeq[X/\mathfrak{G}]\), hence in general \([\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T\) is a twisted form of \([X/\mathfrak{G}]\). Theorem 4 is a direct consequence of the following Proposition 14.
**Proposition 14**.: _Assume that \(\mathfrak{N}\) is the entire normalizer of \(\mathfrak{G}\). Every twisted \(G\)-quotient is the coarse moduli space of a twist_
\[[\mathfrak{X}/\mathfrak{G}]\times^{\mathfrak{Q}}T\to\mathscr{B}_{k}\mathfrak{ G}\times^{\mathfrak{Q}}T\]
_for some \(\mathfrak{Q}\)-torsor \(T\to\operatorname{Spec}k\)._
Proof.: Let \(Y\dashrightarrow\mathscr{G}\) be a twisted \(G\)-quotient over \(k\). Define a category fibered in sets \(T\) as follows. If \(S\) is a scheme over \(k\), \(T(S)\) is the set of isomorphisms
\[\mathfrak{X}_{S}/\mathfrak{G}_{S}\xrightarrow{\sim}Y_{S}\]
for which there exists an fppf covering \(S^{\prime}\to S\) and a \(2\)-commutative diagram
where the vertical arrows are isomorphisms. We stress that the fppf covering and the \(2\)-commutative diagram are _not_ part of the datum, we are only selecting the isomorphisms for which such a diagram exists. It is clear that \(T\) is a subsheaf of the sheaf of isomorphisms between \(\mathfrak{X}/\mathfrak{G}\) and \(Y\). The action of \(\mathfrak{Q}\) on \(\mathfrak{X}/\mathfrak{G}\) induces an action on \(T\), and Lemma 13 implies that \(T\) is a \(\mathfrak{Q}\)-torsor. It is straightforward to check that the given \(G\)-quotient is the twist of \(\mathfrak{X}/\mathfrak{G}\dashrightarrow\mathscr{B}_{k}\mathfrak{G}\) by \(T\).
Sometimes, it is interesting to study whether the structure actually descends to \(\mathfrak{X}\), as opposed to a twist of \(\mathfrak{X}\). For instance, if \(\mathfrak{X}=\mathbb{P}^{1}_{k}\) and the structure is a divisor \(D\subset\mathbb{P}^{1}_{K}\), one may want to study whether \(D\) descends to \(\mathbb{P}^{1}_{k}\)[10] [20], but Theorem 2 only tells us whether \(D\) descends to a Brauer-Severi variety of dimension \(1\). Similarly, one might want to study if the embedding of a curve in \(\mathbb{P}^{2}\) is defined over the field of moduli, and not only if the curve embeds in a Brauer-Severi surface over the field of moduli.
**Definition 15**.: Let \(X,G\) be as above, and \(\mathfrak{X}\) a model of \(X\) over \(k\). A twisted \(G\)-quotient \(\mathscr{X}\to\mathscr{G}\) of \(X\) over \(k\) is \(\mathfrak{X}\)_-neutral_ if there exists a \(2\)-cartesian diagram
such that the induced embedding \(\underline{\operatorname{Aut}}_{\mathscr{G}}(p)\subset\underline{ \operatorname{Aut}}(\mathfrak{X})\) identifies \(\underline{\operatorname{Aut}}_{\mathscr{G}}(p)(K)\) with \(G\).
Consider the quotient sheaf \(\underline{\operatorname{Aut}}(\mathfrak{X})/\mathfrak{N}\). Taking the fibers of \(\underline{\operatorname{Aut}}(\mathfrak{X})\to\underline{\operatorname{Aut}}( \mathfrak{X})/\mathfrak{N}\) defines a function \((\underline{\operatorname{Aut}}(\mathfrak{X})/\mathfrak{N})(k)\to\operatorname {H}^{1}(k,\mathfrak{N})\). Notice that these are sets, not groups. Still, they have a preferred object and it makes sense to consider the kernel
\[\operatorname{K}=\ker(\operatorname{H}^{1}(k,\mathfrak{N})\to\operatorname{H}^{1}(k,\underline{\operatorname{Aut}}(\mathfrak{X}))),\]
which is the image of \((\underline{\operatorname{Aut}}(\mathfrak{X})/\mathfrak{N})(k)\), i.e. we have a long exact sequence for non-abelian cohomology.
The following version of Theorem 4 is, again, a direct consequence of Proposition 14.
**Theorem 16**.: _If the composition_
\[(\underline{\operatorname{Aut}}(\mathfrak{X})/\mathfrak{N})(k)\twoheadrightarrow \operatorname{K}\hookrightarrow\operatorname{H}^{1}(k,\mathfrak{N})\to \operatorname{H}^{1}(k,\mathfrak{Q})\]
_is not surjective, there exists a non-\(\mathfrak{X}\)-neutral twisted \(G\)-quotient of \(X\) over \(k\). If \(\mathfrak{N}\) is the entire normalizer of \(\mathfrak{G}\) in \(\underline{\operatorname{Aut}}(\mathfrak{X})\) the converse holds._
## 6. Distinguished subsets
Let \(k\) be a field with separable closure \(K\), \(X\) an integral algebraic space of finite type over \(K\), \(G\subset\operatorname{Aut}_{K}(X)\) a finite group of automorphisms of \(X\) of order prime with \(\operatorname{char}k\). As we have seen above, \(G\)-structures with field of moduli \(k\) correspond to twisted \(G\)-quotients \(Y\dashrightarrow\mathscr{G}\) of \(X\) over \(k\), and the structure is defined over the field of moduli if and only if \(\mathscr{G}(k)\neq\emptyset\).
If \(y\in Y(k)\) is a smooth rational point, then by the Lang-Nishimura theorem for tame stacks [3, Theorem 4.1] we have that \(\mathscr{G}(k)\neq\emptyset\) and hence the \(G\)-structure is defined over \(k\). The smoothness assumption on \(y\) can be relaxed, see [3, §6], [1]. Because of this, it is important to have a framework to construct rational points (or, more generally, subspaces) of arbitrary twisted \(G\)-quotients. Recall that \(\mathscr{N}_{X/k,G}\subset\operatorname{Aut}_{k}(X)\) is the subgroup of \(k\)-linear automorphisms of \(X\) which normalize \(G\).
**Definition 17**.: A closed subset \(Z\subset X\) is a _distinguished subset_ if, for every \(\tau\in\mathscr{N}_{X/k,G}\), \(\tau(Z)=Z\).
A distinguished subset is \(G\)-invariant since \(G\subset\mathscr{N}_{X/k,G}\), but the converse is false. Write \(\pi\) for the projection \(X\to X/G\).
**Lemma 18**.: _Let \(Z\subset X\) be a distinguished subset and \(Y\dashrightarrow\mathscr{G}\) a twisted \(G\)-quotient over \(k\). Then \(\pi(Z)\subset X/G\) descends to a closed subset of \(Y\)._
Proof.: Follows directly from Lemma 8 and Lemma 9.
Conjugation defines a homomorphism \(\mathscr{N}_{X/k,G}\to\operatorname{Aut}(G)\); let \(\mathscr{A}_{X/k,G}\subset\operatorname{Aut}(G)\) be its image. Clearly, \(\mathscr{A}_{X/k,G}\) contains every inner automorphism of \(G\).
While distinguished subsets are naturally defined in terms of the group \(\mathscr{N}_{X/k,G}\subset\operatorname{Aut}_{k}(X)\), it is often sufficient to have knowledge about \(\mathscr{A}_{X/k,G}\subset\operatorname{Aut}(G)\). Let us give some examples of this.
**Example 19**.: If \(H\subset G\) is a subgroup and \(Z\subset X\) a subspace, we say that \(H\) stabilizes (resp. fixes) \(Z\) if the elements of \(H\) restrict to automorphisms (resp. to the identity) on \(Z\).
Let \(H\subset G\) be a subgroup, and let \(S_{H}\) be the set of subgroups of \(G\) of the form \(\phi(H)\) for some \(\phi\in\mathscr{A}_{X/k,G}\). The following are distinguished subsets.
* The union and the intersection of the fixed loci of elements of \(S_{H}\).
* Let \(n\) be an integer, and suppose that \(H\) stabilizes (resp. fixes) a finite number of irreducible closed subsets of dimension \(n\); write \(C_{n,H}\) for their union and similarly \(C_{n,H^{\prime}}\) for \(H^{\prime}\in S_{H}\). Then \(\bigcup_{H^{\prime}\in S_{H}}C_{n,H^{\prime}}\) and \(\bigcap_{H^{\prime}\in S_{H}}C_{n,H^{\prime}}\) are distinguished subsets.
* Suppose that \(X\) is smooth and proper and that it has a line bundle \(L\) whose class in the Neron-Severi group is \(\operatorname{Aut}_{k}(X)\)-invariant, e.g. \(X=\mathbb{P}^{n}\) and \(L=\mathscr{O}(1)\). Then we can repeat the previous point but restricted to irreducible closed subsets of a fixed degree \(d\).
Because of Example 19 and the many other similar examples that can be given, we want to understand the group \(\mathscr{A}_{X/k,G}\subset\operatorname{Aut}(G)\). We have inclusions
\[\operatorname{Inn}(G)\subset\mathscr{A}_{X/K,G}\subset\mathscr{A}_{X/k,G} \subset\operatorname{Aut}(G).\]
Usually, the subgroup \(\mathscr{A}_{X/K,G}\) is easy to understand, since it is defined in terms of "geometric" \(K\)-automorphisms of \(X\) and it is normal in \(\mathscr{A}_{X/k,G}\). It remains to study its cokernel.
Suppose that \(X,G\) descend to \(\mathfrak{X},\mathfrak{G}\) over \(k\) with \(\mathfrak{G}\) a group scheme, and that the action of \(G\) on \(X\) descends to an action of \(\mathfrak{G}\) on \(\mathfrak{X}\). In particular, we have an action of \(\operatorname{Gal}(K/k)\) on \(X\) and hence a section \(\operatorname{Gal}(K/k)\to\operatorname{Aut}_{k}(X)\) of \(\operatorname{Aut}_{k}(X)\to\operatorname{Gal}(K/k)\). Furthermore, we have an action \(\operatorname{Gal}(K/k)\to\operatorname{Aut}(G)\), \(\sigma\mapsto\phi_{\sigma}\) on \(G=\mathfrak{G}(K)\).
**Proposition 20**.: _The image of \(\operatorname{Gal}(K/k)\to\operatorname{Aut}_{k}(X)\) is contained in \(\mathscr{N}_{X/k,G}\), and \(\mathscr{N}_{X/k,G}\) is generated by \(\mathscr{N}_{X/K,G}\) and \(\operatorname{Gal}(K/k)\)._
Proof.: Choose an element \(\sigma\in\operatorname{Gal}(K/k)\), let \(\phi_{\sigma}\) be the induced automorphism of \(G=\mathfrak{G}(K)\). Since the action \(\rho:G\times X\to X\) descends to an action \(\mathfrak{G}\times\mathfrak{X}\to\mathfrak{X}\) over \(k\), we have a commutative diagram
where the vertical arrows are \(\sigma\)-equivariant. It follows that \(\sigma^{*}:X\to X\) is \(\phi_{\sigma}\)-equivariant, and hence \(\sigma^{*}\in\mathscr{N}_{X/k,G}\).
Now consider an element \(\tau\in\mathscr{N}_{X/k,G}\subset\operatorname{Aut}_{k}(X)\) normalizing \(G\), and let \(\sigma\) be its image in \(\operatorname{Gal}(K/k)\). Since \(\sigma^{*}\in\mathscr{N}_{X/k,G}\), then \(\tau\circ\sigma^{*-1}\) is an element of \(\mathscr{N}_{X/k,G}\) which maps to the identity in \(\operatorname{Gal}(K/k)\), i.e. \(\tau\circ\sigma^{*-1}\in\mathscr{N}_{X/K,G}\).
**Corollary 21**.: _The subgroup \(\mathscr{A}_{X/k,G}\subset\operatorname{Aut}(G)\) is generated by \(\mathscr{A}_{X/K,G}\) and by the image of \(\operatorname{Gal}(K/k)\to\operatorname{Aut}(G)\). In particular, if \(\mathfrak{G}\) is an inner form of \(G\) then \(\mathscr{A}_{X/k,G}=\mathscr{A}_{X/K,G}\)._
|
2302.04880
|
HI filaments as potential compass needles? Comparing the magnetic field
structure of the Small Magellanic Cloud to the orientation of GASKAP-HI
filaments
|
High-spatial-resolution HI observations have led to the realisation that the
nearby (within few hundreds of parsecs) Galactic atomic filamentary structures
are aligned with the ambient magnetic field. Enabled by the high quality data
from the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope
for the Galactic ASKAP HI (GASKAP-HI) survey, we investigate the potential
magnetic alignment of the $\gtrsim 10\,{\rm pc}$-scale HI filaments in the
Small Magellanic Cloud (SMC). Using the Rolling Hough Transform (RHT) technique
that automatically identifies filamentary structures, combined with our newly
devised ray-tracing algorithm that compares the HI and starlight polarisation
data, we find that the HI filaments in the northeastern end of the SMC main
body ("Bar" region) and the transition area between the main body and the tidal
feature ("Wing" region) appear preferentially aligned with the magnetic field
traced by starlight polarisation. Meanwhile, the remaining SMC volume lacks
starlight polarisation data of sufficient quality to draw any conclusions. This
suggests for the first time that filamentary HI structures can be magnetically
aligned across a large spatial volume ($\gtrsim\,{\rm kpc}$) outside of the
Milky Way. In addition, we generate maps of the preferred orientation of HI
filaments throughout the entire SMC, revealing the highly complex gaseous
structures of the galaxy likely shaped by a combination of the intrinsic
internal gas dynamics, tidal interactions, and star formation feedback
processes. These maps can further be compared with future measurements of the
magnetic structures in other regions of the SMC.
|
Y. K. Ma, N. M. McClure-Griffiths, S. E. Clark, S. J. Gibson, J. Th. van Loon, J. D. Soler, M. E. Putman, J. M. Dickey, M. -Y. Lee, K. E. Jameson, L. Uscanga, J. Dempsey, H. Dénes, C. Lynn, N. M. Pingel
|
2023-02-09T19:00:01Z
|
http://arxiv.org/abs/2302.04880v1
|
Hi filaments as potential compass needles? Comparing the magnetic field structure of the Small Magellanic Cloud to the orientation of GASKAP-Hi filaments
###### Abstract
High-spatial-resolution Hi observations have led to the realisation that the nearby (within few hundreds of parsecs) Galactic atomic filamentary structures are aligned with the ambient magnetic field. Enabled by the high quality data from the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope for the Galactic ASKAP Hi (GASKAP-Hi) survey, we investigate the potential magnetic alignment of the \(\gtrsim 10\) pc-scale Hi filaments in the Small Magellanic Cloud (SMC). Using the Rolling Hough Transform (RHT) technique that automatically identifies filamentary structures, combined with our newly devised ray-tracing algorithm that compares the Hi and starlight polarisation data, we find that the Hi filaments in the northeastern end of the SMC main body ("Bar" region) and the transition area between the main body and the tidal feature ("Wing" region) appear preferentially aligned with the magnetic field traced by starlight polarisation. Meanwhile, the remaining SMC volume lacks starlight polarisation data of sufficient quality to draw any conclusions. This suggests for the first time that filamentary Hi structures can be magnetically aligned across a large spatial volume (\(\gtrsim\) kpc) outside of the Milky Way. In addition, we generate maps of the preferred orientation of Hi filaments throughout the entire SMC, revealing the highly complex gaseous structures of the galaxy likely shaped by a combination of the intrinsic internal gas dynamics, tidal interactions, and star formation feedback processes. These maps can further be compared with future measurements of the magnetic structures in other regions of the SMC.
keywords: ISM: magnetic fields - ISM: structure - galaxies: ISM - Magellanic Clouds - galaxies: magnetic fields - radio lines: ISM
## 1 Introduction
The \(\mu\)G-strength magnetic fields in galaxies affect nearly all aspects of galactic astrophysics (e.g., Beck & Wielebinski, 2013; Beck, 2016), including the propagation of cosmic rays (Aab et al., 2015; Seta et al., 2018), the rate at which stars form (Price and Bate, 2008; Federrath and Klessen, 2012; Birnboim et al., 2015; Krumholz and Federrath, 2019), the stellar initial mass function (Krumholz and Federrath, 2019; Sharda et al., 2020; Mathew and Federrath, 2021), the large-scale gas dynamics (Beck et al., 2005; Kim and Stone, 2012), and possibly even the rotation curves of galaxies (Chan and Del Popolo, 2022; Khademi et al., 2022, however see also Elstner et al., 2014). Detailed mapping of
the magnetic field strengths and structures in galaxies is challenging, but important for a full understanding of the astrophysical processes above. In addition, it has wide applicabilities such as tracing gas flows (e.g., Beck et al., 1999; Heald, 2012), disentangling the 3D structures of galaxies (e.g., Panopoulou et al., 2021), and furthering our fundamental understanding in the origin and evolution of the magnetic fields in galaxies (e.g., Beck, 2016; Federrath, 2016).
The linear polarisation of starlight is amongst the first phenomena utilised to measure the magnetic fields in galaxies (Hiltner, 1951). While starlight is generally intrinsically unpolarised, the intervening dust in the interstellar medium (ISM) can induce linear polarisation in the observed starlight (Hall, 1949; Hiltner, 1949). The magnetic moment vector of an asymmetric dust grain is aligned to the ambient magnetic field via the radiative torque alignment effect (Hoang and Lazarian, 2014), forcing the long axes of the dust particles to be perpendicular to the magnetic field direction. From this, the preferential extinction along the long axis of the dust grains leads to the linear polarisation signal parallel to the plane-of-sky magnetic field orientation (e.g., Andersson et al., 2015; Hoang and Lazarian, 2016). Meanwhile, the same dust grains can re-emit in the infrared and sub-millimetre wavelengths, with the emission also linearly polarised but with the polarisation plane being perpendicular to the magnetic field instead (e.g., Hildebrand, 1988; Planck Collaboration, 2015; Lopez-Rodriguez et al., 2022). These two methods can be exploited to probe the plane-of-sky magnetic fields in the colder phases of the ISM. For the line-of-sight component of the magnetic field, one can utilise the rotation measure (RM) of background polarised radio continuum sources (e.g., Ma et al., 2020; Tahani et al., 2022), or the polarised Zeeman-splitting measurements (e.g., with Hi absorption, Heiles and Troland, 2005; or with OH masers, Ogbodo et al., 2020).
The linear polarisation state is commonly described by the Stokes \(Q\) and \(U\) parameters defined as
\[Q =\mathrm{PI}\cdot\cos(2\theta), \tag{1}\] \[U =\mathrm{PI}\cdot\sin(2\theta), \tag{2}\]
where \(\mathrm{PI}\) and \(\theta\) are the polarised intensity and the polarisation angle, respectively. We follow the convention of the International Astronomical Union (IAU) on the \(\theta\), which measures the polarisation \(E\)-vector from north through east (Contopoulos and Jappel, 1974). We further define, in line with the literature, fractional Stokes \(q\) and \(u\) parameters as
\[q=Q/I, \tag{3}\] \[u=U/I, \tag{4}\]
where \(I\) is the total intensity (or, Stokes \(I\)) of the emission.
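For concreteness, the short Python sketch below converts between \((\mathrm{PI},\theta)\) and the Stokes parameters following Equations (1)-(4); the function names are ours, and the factor of \(0.5\) in the angle recovery reflects the \(2\theta\) dependence above.

```python
import numpy as np

def to_stokes(pol_int, theta_rad):
    """Equations (1)-(2): polarised intensity and angle -> Stokes Q and U."""
    return pol_int * np.cos(2.0 * theta_rad), pol_int * np.sin(2.0 * theta_rad)

def from_stokes(stokes_q, stokes_u):
    """Invert Equations (1)-(2); np.arctan2 keeps the correct quadrant of 2*theta."""
    pol_int = np.hypot(stokes_q, stokes_u)
    theta_rad = 0.5 * np.arctan2(stokes_u, stokes_q)
    return pol_int, theta_rad

# The fractional parameters of Equations (3)-(4) simply divide by Stokes I:
# q, u = Q / I, U / I
```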
High spatial resolution observations have revealed that the Hi gas in the Milky Way is organised into highly filamentary structures (e.g., McClure-Griffiths et al., 2006; Clark et al., 2014; Martin et al., 2015; Kalberla et al., 2016; Blagrave et al., 2017; Soler et al., 2020; Skalidis et al., 2022; Campbell et al., 2022; Soler et al., 2022; Syed et al., 2022). Upon comparisons with starlight and dust polarisation data, it has been found that the elongation of these slender (with presumed widths of \(\lesssim 0.1\,\mathrm{pc}\)) Hi filaments is often aligned with their ambient magnetic field orientations (McClure-Griffiths et al., 2006; Clark et al., 2014, 2015; Martin et al., 2015; Kalberla et al., 2016; Clark and Hensley, 2019, see Skalidis et al., 2022 for a counter-example). However, it remains unclear whether such magnetic alignment is common within the entirety of the Milky Way as well as amongst galaxies with different astrophysical conditions, as the studies above focused on the neighbourhood around the Sun only (within a few hundreds of parsecs). The limitation is imposed by a combination of the paucity of starlight polarisation data throughout the Galactic volume, the angular resolution of the Hi as well as dust polarisation data, and the complexity of studying the Milky Way from within.
From simulations, it has been suggested that filamentary Hi structures can be formed by turbulence, shocks, or thermal instabilities, with the role of the magnetic field still under debate (e.g., Hennebelle, 2013; Federrath, 2016; Inoue and Inutsuka, 2016; Villagran and Gazol, 2018; Gazol and Villagran, 2021). In fact, various numerical studies have led to results ranging from no preferred orientation of the Hi filaments with respect to the magnetic field (Federrath, 2016), to the filaments preferentially oriented parallel (Inoue and Inutsuka, 2016; Villagran and Gazol, 2018) or perpendicular (Gazol and Villagran, 2021) to the magnetic field. Extending the observational study of the relative orientation between magnetic fields and Hi filamentary structures to nearby galaxies is therefore crucial, as the simpler external perspective will allow us to verify, despite the very different spatial scales probed, if the magnetically aligned Hi filaments are a general trend across a vast galactic volume. The main hurdle to achieving this is obtaining Hi data of sufficient quality, specifically the spatial resolution, velocity resolution, and sensitivity.
Apart from improving our understanding of the physical nature of Hi filaments as discussed above, the alignment of the filaments with the ambient magnetic fields, if established, will open up the possibility of using the Hi data as a tomographic probe of the magnetic field. This is because the plane-of-sky magnetic field orientation can then be dissected across pseudo-distance separated by the radial velocity (e.g., Clark and Hensley, 2019). It also allows the study of magnetic field tangling along the line of sight (Clark, 2018).
At a distance of about \(62\,\mathrm{kpc}\) (e.g., Scowcroft et al., 2016; Graczyk et al., 2020), the Small Magellanic Cloud (SMC) is one of the closest galaxies to us. Its proximity makes it among the best targets for the investigation of the relative orientation between magnetic fields and Hi filaments. The SMC is a low-mass (\(M_{\star}=3\times 10^{8}\,M_{\odot}\); Skibba et al., 2012), gas-rich (\(M_{\mathrm{HI}}=4\times 10^{8}\,M_{\odot}\); Brüns et al., 2005), low-metallicity (\(Z\approx 0.004\approx 0.3\,Z_{\odot}\); Choudhury et al., 2018) irregular galaxy undergoing an episode of enhanced star formation (\(\approx 0.26\,M_{\odot}\,\mathrm{yr}^{-1}\); see Massana et al., 2022). The galaxy consists of two major components (see, e.g., Gordon et al., 2011): the main body called "the Bar" which is unrelated to an actual galactic bar, and a peripheral feature called "the Wing" which is believed to have formed by tidal interactions with the Large Magellanic Cloud (LMC; Besla et al., 2012). The tidal forces are believed to have also created the gaseous bridge connecting the two Magellanic Clouds (Besla et al., 2012), aptly named the Magellanic Bridge. The overall 3D structures of both the gaseous and stellar components of the SMC are highly complex, and remain poorly understood (see, e.g., Di Teodoro et al., 2019; Murray et al., 2019; Tatton et al., 2021, and references therein).
The SMC has previously been observed and studied using the Australia Telescope Compact Array (ATCA) in Hi emission (Staveley-Smith et al., 1997). The angular resolution of these data (\(1.6\arcmin\approx 30\,\mathrm{pc}\)) is a drastic improvement over those of single-dish observations, leading to the distinct identification of numerous shell structures throughout the galaxy (Staveley-Smith et al., 1997; Stanimirovic et al., 1999). With the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope (Hotan et al., 2021), the SMC was observed in Hi during the commissioning phase with 16 antennas (McClure-Griffiths et al., 2018), and recently with the full 36-antenna array as part of the pilot observations for the GASKAP-Hi survey (Pingel et al., 2022, see Dickey et al., 2013 for a description of the GASKAP survey). The latter pilot survey data have clearly revealed the highly filamentary structures of the SMC (see Section 2.1), enabling our study here
regarding the links between these Hi structures and the associated ambient magnetic field.
Apart from the early studies observing the polarised synchrotron emission from within the SMC (Haynes et al., 1986; Loiseau et al., 1987), the magnetic field of the SMC was first explored in great detail by Mao et al. (2008), using both RM of polarised background extragalactic radio sources (EGSs) and polarised stars within the SMC. An extensive starlight polarisation catalogue of the SMC was made available by Lobo Gomes et al. (2015), leading to their study of the plane-of-sky magnetic field in the northeastern end of the SMC Bar, the SMC Wing, and the start of the Magellanic Bridge (see Section 2.2). Recently, the line-of-sight magnetic field has been revisited with RM values from new ATCA observations of EGSs (Livingston et al., 2022). The current picture of the galactic-scale magnetic field in the SMC consists of:
* A coherent magnetic field along the line-of-sight (\(\approx 0.2\)-\(0.3\,\mu\)G) directed away from the observer across the entire galaxy;
* Two trends in the plane-of-sky magnetic field orientation (\(\approx 0.9\)-\(1.6\,\mu\)G), one aligned with the elongation of the SMC Bar, and the other along the direction towards the SMC Wing and the Magellanic Bridge; and
* A turbulent magnetic field component that dominates in strength (\(\approx 1.5\)-\(5.0\,\mu\)G) over the ordered / coherent counterparts, by a factor of \(\approx 1.5\) in the plane-of-sky and \(\approx 10\) along the line of sight.
However, the current spatial coverage of the data (both EGS RM and starlight polarisation) remains too coarse to construct a detailed map of the magnetic structure of the SMC.
Are the Hi filaments in the SMC preferentially aligned to the ambient magnetic field similar to the case in the solar neighbourhood, despite the vastly different astrophysical characteristics (e.g., metallicity, mass, star formation rate, and tidal influences) and spatial scales probed (\(\approx 0.1\) pc in the Milky Way; \(\approx 9\) pc in the SMC)? How are the 3D Hi structures linked to the different astrophysical processes occurring in the SMC, including its overall magnetic field structure? Motivated by these questions, we investigate in this work the relative orientation between Hi structures in the SMC as traced by the new GASKAP-Hi data and the magnetic fields traced by starlight polarisation reported by Lobo Gomes et al. (2015).
This paper is organised as follows. We describe the data and the associated processing required for our study in Section 2, and devise a new ray-tracing algorithm that enables our careful comparison between the Hi and starlight polarisation data as outlined in Section 3. In Section 4, we (1) evaluate whether the SMC Hi filaments are magnetically aligned, (2) test whether the GASKAP-Hi data can trace the small-scale turbulent magnetic field, and (3) present the plane-of-sky magnetic field structure of the SMC as traced by Hi filaments. We discuss the implications of our work in Section 5, and conclude our study in Section 6.
## 2 Data and Data Processing
### 2.1 Hi filaments from GASKAP
We use new GASKAP-Hi data of the SMC for this study (Pingel et al., 2022). The 20.9-hour ASKAP data were taken in December 2019 during Phase I of the Pilot Survey, and were combined with single-dish data from the Parkes Galactic All-Sky Survey (GASS; McClure-Griffiths et al., 2009). The resulting data cube presents an unprecedented view of the Hi emission of the SMC (see Figure 1), with the highest combination of angular resolution (synthesised beam of 30''), velocity resolution (\(0.98\,\mathrm{km\,s^{-1}}\)), and sensitivity (\(1.1\,\mathrm{K}\) per channel).
It is immediately apparent that the SMC exhibits a vast network of filamentary structures throughout the entire galaxy. We proceed to apply the Rolling Hough Transform1 (RHT; Clark et al., 2014) algorithm to the GASKAP-Hi cube to automatically locate these filaments. Other algorithms that have been used in the literature for the study of elongated structures include the Hessian analysis (e.g., Polychroni et al., 2013; Kalberla et al., 2016) and the anisotropic wavelet analysis (e.g., Patrikeev et al., 2006; Frick et al., 2016). While the former has been shown to lead to comparable results to the RHT (Soler et al., 2020), the differences of the latter with the RHT have not been explored in detail, and are beyond the scope of this work.
Footnote 1: Available on [https://github.com/soclark/RHT](https://github.com/soclark/RHT).
In particular, we apply the convolutional RHT algorithm (see BICEP/Keck Collaboration et al., 2022, for details) which is a significant improvement in computational efficiency. For each 2D image, the RHT first performs an unsharp mask procedure, subtracting from the image a smoothed version of itself. The smoothing is done by convolving the image with a circular top-hat function with radius \(R_{\mathrm{sm}}\). Next, a bitmask is created by checking the value of the resulting difference map - True if the pixel value is greater than zero, and False otherwise. This bitmask can be regarded as a map of small-scale structures, including potential filaments, edges of structures, etc. Finally, the algorithm "rolls" through each pixel in the bitmask image and quantifies the distribution of surrounding linear structure. This is done by extracting a circular window with diameter \(D_{W}\) around each pixel, and applying a Hough transform (Hough, 1962) to the bitmasked data in the window, with the sampling done through the centre of the circular window only (i.e., \(\rho=0\) in the formulation of Duda and Hart, 1972). A simplified explanation of the operation here is that we sample straight lines passing through the centre pixel of the circular window, each with different \(\theta\) ranging from \(0^{\circ}\) to \(180^{\circ}\). For each of the straight lines, the fraction of True-valued pixels is evaluated and compared with the threshold parameter. If the computed fraction exceeds threshold, the fraction value minus the threshold is written to the final output 4D-hypercube (with the axes being the two spatial coordinates, velocity, and \(\theta\)). Otherwise, zero is written to the hypercube instead. In other words, a non-zero pixel in the RHT 4D-hypercube means a filament with orientation \(\theta\) passes through the 3D location (position-position-velocity) of the corresponding pixel.
The original RHT algorithm outlined above performs well for Galactic sky regions and velocity ranges where the emission is ubiquitous (e.g., Clark et al., 2014; Jelic et al., 2018; Campbell et al., 2022). However, this is far from the case for the SMC in Hi, for which the presence of emission is highly dependent on the location and the radial velocity (Stanimirovic et al., 1999; McClure-Griffiths et al., 2018; Di Teodoro et al., 2019; Pingel et al., 2022). Upon application of this original RHT to the new GASKAP-Hi cube of the SMC, we find that it can sometimes erroneously identify filamentary structures in very low signal-to-noise sky areas. This prompts us to implement an intensity cutoff procedure in the RHT algorithm - the bitmask formation step above would additionally compare the Hi intensity of the input image with a determined cutoff value, and will set the bitmask pixel value as False if the intensity is lower than the cutoff. For our application here, we adopt a cutoff value of (\(5.7\,\mathrm{K}/P_{\mathrm{PB}}\)), where the \(5.7\,\mathrm{K}\) corresponds to five times the rms noise near the centre of the images, and \(P_{\mathrm{PB}}\) is the primary beam attenuation level.
We apply the modified RHT algorithm2 independently to each of the 223 velocity channels3 from 40.91 to \(257.85\,\rm{km\,s^{-1}}\), with the three RHT parameters set as \(R_{\rm{sm}}=12\,\rm{px}=25\,\rm{pc}\), \(D_{W}=83\,\rm{px}=175\,\rm{pc}\), and \(\tt{threshold}=0.7\). The conversions to physical scales above assume a distance of \(62\,\rm{kpc}\) to the SMC (e.g., Scowcroft et al., 2016; Graczyk et al., 2020) with a pixel scale of \(7\arcsec\) for the GASKAP-Hi data (Pingel et al., 2022). A sample of the RHT output is illustrated in Figure 2. Our choice of \(R_{\rm{sm}}\), in units of the synthesised beam, is similar to that of Clark et al. (2014) with Galactic Arecibo L-Band Feed Array Hi (GALFA-Hi; Peek et al., 2011) data (2.8 for us here compared to their 2.5). Meanwhile, our chosen \(D_{W}\) in units of \(R_{\rm{sm}}\), which determines the aspect ratios of the identified filamentary structures, is about 7, again similar to the choice of Clark et al. (2014) of 10. Finally, our choice of threshold is identical to Clark et al. (2014). To ensure that our results are not critically dependent on the RHT parameter choice, we repeat our analysis using different sets of parameters, reported in Appendix A.
Footnote 2: This can be toggled on by using the cutoff_mask parameter in the convolutional RHT algorithm.
Footnote 3: The radial velocities presented throughout this work are with respect to the local standard of rest (LSR).
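To make the procedure described above concrete, the Python sketch below reproduces the main steps for a single velocity channel: unsharp masking with a circular top-hat of radius \(R_{\rm sm}\), bitmask formation with the optional intensity cutoff, and sampling of chords through a circular window of diameter \(D_{W}\). It is a simplified, slow illustration written for this description, not the convolutional RHT implementation actually used in this work; all names are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def circular_kernel(radius_px):
    """Circular top-hat kernel with the given radius in pixels."""
    y, x = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    return (x**2 + y**2 <= radius_px**2).astype(float)

def rht_channel(image, r_sm=12, d_w=83, threshold=0.7, cutoff=None, n_theta=60):
    """Toy RHT for one 2D velocity channel; returns an (ny, nx, n_theta) array."""
    # Step 1: unsharp mask -- subtract a top-hat-smoothed copy of the image.
    kernel = circular_kernel(r_sm)
    smoothed = fftconvolve(image, kernel / kernel.sum(), mode="same")
    bitmask = (image - smoothed) > 0.0
    # Step 2: optional intensity cutoff (the modification described in the text).
    if cutoff is not None:
        bitmask &= image > cutoff
    # Step 3: roll a circular window of diameter d_w over each pixel; for every
    # orientation theta, measure the fraction of True pixels along the chord
    # through the window centre and keep (fraction - threshold) if positive.
    radius = d_w // 2
    steps = np.arange(-radius, radius + 1)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    ny, nx = image.shape
    out = np.zeros((ny, nx, n_theta))
    for ti, theta in enumerate(thetas):
        dy = np.rint(steps * np.sin(theta)).astype(int)
        dx = np.rint(steps * np.cos(theta)).astype(int)
        for yy in range(radius, ny - radius):
            for xx in range(radius, nx - radius):
                frac = bitmask[yy + dy, xx + dx].mean()
                if frac > threshold:
                    out[yy, xx, ti] = frac - threshold
    return out
```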
In this study, we do not count the number of Hi filaments identified, since the RHT algorithm only reports whether a pixel is part of a filamentary structure, but does not group the many pixels together into individual filaments. The quantification of the number of filaments in the SMC will require additional algorithms that take into account the spatial and radial-velocity coherence of the RHT output, which is beyond the scope of this work.
Finally, we note that the GASKAP-Hi SMC maps are in orthographic projection, and given the large angular extent of the maps, sky curvature is apparent (see Figure 1). This means that the vertical axis of the map is in general not parallel to the sky north-south axis. As RHT operates on the maps' cartesian grid, there can be angle offsets between the output hypercube's \(\theta\)-axis and the sky \(\theta\). This has been corrected for in our analysis throughout this paper.
Figure 1: The Hi peak intensity image of the SMC from the GASKAP-Hi Pilot Survey I observations (Pingel et al., 2022), highlighting the vast network of filamentary structures in this galaxy. The locations of the 20 Lobo Gomes et al. (2015) starlight polarisation fields considered in this study are each shown as a square with the size reflecting the true field of view of \(8\times 8\) sq. arcmin. The approximate spatial division between the Bar and the Wing regions of the SMC is outlined by the grey dotted line, and the Magellanic Bridge is situated outside of the covered sky area in the direction indicated by the arrow to the lower left.
### 2.2 Starlight polarisation data
To trace the plane-of-sky magnetic field orientation in the SMC, we use the Lobo Gomes et al. (2015) starlight polarisation catalogue derived from a \(V\)-band optical survey towards the SMC using the Cerro Tololo Inter-American Observatory (CTIO). The survey has covered a total of 28 fields in the northeastern Bar and the Wing of the SMC, as well as part of the Magellanic Bridge, with a field of view of \(8\times 8\) sq. arcmin each. The polarisation properties of 7,207 stars have been reported, with the foreground polarisation contribution of the Milky Way determined and subtracted in Stokes \(qu\) space by making use of the polarised starlight from Galactic stars in the same sky area. To compare with our GASKAP-Hi data of the SMC, we focus on the 20 starlight fields in the SMC only (Figure 1), encompassing a total of 5,999 stars with detected linear polarisation.
In Lobo Gomes et al. (2015), the preferred orientation(s) of the starlight polarisation angle (\(\theta_{\star}\)) of each of their fields was obtained by fitting a single- or double-component Gaussian function to the histogram of \(\theta_{\star}\). In other words, they have only used the angle information of the starlight polarisation vector (in Stokes \(qu\) plane), without taking the polarisation fraction (\(p_{\star}\)) into account. Here, we re-analyse the starlight polarisation data with a full vector approach as outlined below.
Consider that the SMC is permeated by a magnetic field composed of two components in superposition - a large-scale magnetic field with a coherence length \(\gg 100\) pc, and a small-scale isotropic magnetic field with a coherence length \(\lesssim 100\) pc (e.g., Beck, 2016, see also Livingston et al., 2022). As each of the Lobo Gomes et al. (2015) fields spans \(\approx 150\) pc across in the plane of sky, the two magnetic field components will leave different imprints on the observed starlight polarisation when we consider each starlight field as a whole. On the Stokes \(qu\) plane, all stars start at the origin (\(q=u=0\)) since they are intrinsically unpolarised. The large-scale magnetic field in the intervening volume shifts all stars coherently in a single direction in the Stokes \(qu\) plane as determined by its magnetic field orientation, while the small-scale magnetic field scatters the stars isotropically in the Stokes \(qu\) plane.
Figure 2: Illustration of the automatically identified filaments using RHT. The left panel shows a zoomed-in image of the central area of the GASKAP-Hi SMC map (Pingel et al., 2022) at \(v_{\rm LSR}=133.74\) km s\({}^{-1}\), and the right panel shows in the cubehelix colour scheme (Green, 2011) the corresponding RHT back-projection map, where any non-zero pixels are regarded as a filament in our study.
Figure 3: Results from our re-analysis of the Lobo Gomes et al. (2015) starlight polarisation data. Green line segments are directed along the \(\overline{\theta}_{\star}\) that traces the plane-of-sky magnetic field orientation for fields in which we find a coherent starlight polarisation angle, while red crosses mark fields that do not exhibit a coherent starlight polarisation angle. The background image shows the same GASKAP-Hi peak intensity map as in Figure 1, but zoomed in.
In light of the expected effects of the two magnetic field components on the observed starlight polarisation, we re-analyse the Lobo Gomes et al. (2015) data accordingly. The large-scale magnetic field contribution is evaluated by the vector mean in Stokes \(qu\) space:
\[\overline{q}_{\star} =\frac{1}{N_{\star}}\sum_{i}^{N_{\star}}q_{i},\text{ and} \tag{5}\] \[\overline{u}_{\star} =\frac{1}{N_{\star}}\sum_{i}^{N_{\star}}u_{i}, \tag{6}\]
where \(i\) is the index for the \(N_{\star}\) stars within each of the starlight fields. This can be further converted to \(\overline{p}_{\star}\) and \(\overline{\theta}_{\star}\) by
\[\overline{p}_{\star} =\sqrt{\overline{q}_{\star}^{2}+\overline{u}_{\star}^{2}},\text{ and} \tag{7}\] \[\overline{\theta}_{\star} =0.5\tan^{-1}(\overline{u}_{\star}/\overline{q}_{\star}). \tag{8}\]
Meanwhile, the effect of the small-scale magnetic field is captured through the 2D standard deviation (\(\sigma_{p\star}=\sqrt{\sigma_{q\star}\cdot\sigma_{u\star}}\)) of the star sample in Stokes \(qu\) plane, with \(\sigma_{q\star}\) and \(\sigma_{u\star}\) being the 1D standard deviation of Stokes \(q\) and \(u\), respectively. The uncertainties in \(\overline{p}_{\star}\), \(\overline{\theta}_{\star}\), and \(\sigma_{p\star}\) are estimated by bootstrapping - for each starlight field we correspondingly draw with replacement \(N_{\star}\) stars, and obtain the values of the three parameters as above. This process is repeated \(10^{6}\) times, and the standard deviations out of the \(10^{6}\) values are taken as the uncertainty values of the three parameters4. The values of \(\overline{\theta}_{\star}\), \(\overline{p}_{\star}\), and \(\sigma_{p\star}\) of each field, as well as the number of SMC stars per field (\(N_{\star}\)) are all listed in Table 1, with the corresponding 2D histograms shown in Figure 1 under Appendix C. Finally, the sky distribution of \(\overline{\theta}_{\star}\) is shown in Figure 3.
Footnote 4: We repeat this bootstrapping ten times, each time resampling \(10^{6}\) times as stated, and find that the resulting uncertainties are always almost identical, meaning that the obtained uncertainty values have converged.
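A compact sketch of this per-field re-analysis, assuming arrays of fractional Stokes \(q_{i}\) and \(u_{i}\) for the stars of one field, could look as follows (variable names are ours, and the angle-wrapping subtleties of bootstrapping \(\overline{\theta}_{\star}\) are ignored for brevity):

```python
import numpy as np

def field_statistics(q, u, n_boot=10_000, seed=0):
    """Vector-mean polarisation (Eqs. 5-8), 2D scatter, and bootstrap errors."""
    q, u = np.asarray(q, float), np.asarray(u, float)
    rng = np.random.default_rng(seed)

    def stats(qs, us):
        q_bar, u_bar = qs.mean(), us.mean()            # Equations (5) and (6)
        p_bar = np.hypot(q_bar, u_bar)                 # Equation (7)
        theta_bar = 0.5 * np.arctan2(u_bar, q_bar)     # Equation (8), in radians
        sigma_p = np.sqrt(qs.std() * us.std())         # 2D standard deviation
        return p_bar, theta_bar, sigma_p

    central = stats(q, u)
    boot = np.empty((n_boot, 3))
    for b in range(n_boot):
        idx = rng.integers(0, q.size, q.size)          # resample stars with replacement
        boot[b] = stats(q[idx], u[idx])
    return central, boot.std(axis=0)                   # values and bootstrap uncertainties
```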
We deem the resulting \(\overline{\theta}_{\star}\) of seven out of the total of 20 fields as uncertain, since their signal-to-noise ratios of \(\overline{p}_{\star}\) are low (\(<3\)). All these uncertain values are placed in parentheses in Table 1.
We compare our newly obtained \(\overline{\theta}_{\star}\) of each field with the corresponding results from Lobo Gomes et al. (2015). Since their approach can yield up to two polarisation angles for each starlight field, we identify the primary polarisation component for such cases, defined as the listed Gaussian component with the highest peak in their \(\theta_{\star}\) histogram. The resulting angles are labelled as \(\theta_{\text{LG15}}\) and listed in Table 1. In almost all fields, the values of our \(\overline{\theta}_{\star}\) show good agreement with \(\theta_{\text{LG15}}\) (to within \(10^{\circ}\)), with the only exceptions being field 7 (angle difference of \(26^{\circ}\pm 6^{\circ}\)) and field 12 (angle difference of \(11^{\circ}\pm 7^{\circ}\)).
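When quoting such angle differences, the \(180^{\circ}\) ambiguity of polarisation orientations has to be respected; a minimal helper (ours, for illustration only) is:

```python
import numpy as np

def polarisation_angle_difference(theta1_deg, theta2_deg):
    """Acute difference between two polarisation angles, in [0, 90] degrees.

    Polarisation orientations are only defined modulo 180 degrees, so the raw
    difference is wrapped into [-90, 90) degrees before taking its magnitude.
    """
    diff = (np.asarray(theta1_deg) - np.asarray(theta2_deg) + 90.0) % 180.0 - 90.0
    return np.abs(diff)
```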
### 2.3 3D dust extinction data
To model the extinction effect experienced by starlight through the SMC (Section 3), we require 3D information of SMC dust extinction. Yanchulova Merica-Jones et al. (2021) derived a relation between the dust extinction (\(A_{V}\)) and the hydrogen column density (\(N_{\text{H}}\)) for the southwestern end of the SMC Bar region as
\[\frac{A_{V}}{N_{\text{H}}}=\frac{A_{V}}{N_{\text{HI}}+2N_{\text{H}_{2}}}=(3.2\text{--}4.2)\times 10^{-23}\text{ mag}\,\text{cm}^{2}\,\text{H}^{-1}, \tag{9}\]
where \(N_{\text{HI}}\) and \(N_{\text{H}_{2}}\) are the column densities for atomic and molecular hydrogen, respectively. While we have the full 3D (position-position-velocity) information for Hi from our new GASKAP-Hi observations covering the entire SMC, the same for H\({}_{2}\) is not available.
We therefore attempt to convert the 2D \(N_{\text{H}_{2}}\) map from Jameson et al. (2016), obtained through _Herschel_ observations of dust emission, to an approximate 3D distribution of H\({}_{2}\) throughout the SMC. To achieve this, we first obtain a 2D \(N_{\text{HI}}\) map from the GASKAP-Hi data. From this, we compute a molecular-to-atomic hydrogen column density ratio map (\(N_{\text{H}_{2}}/N_{\text{HI}}\)), and subsequently apply it to each velocity slice of the GASKAP-Hi cube to obtain the 3D H\({}_{2}\) cube. In other words, we assume that the Hi and H\({}_{2}\) number densities are correlated, which is generally not the case (e.g., Wannier et al., 1983; Lee et al., 2012). However, we point out that the exact details of the implementation of the H\({}_{2}\) data likely will not significantly affect our results here, as we find that the SMC is dominated by Hi, with the median H\({}_{2}\)-to-Hi column density ratio being a mere 0.06.
Finally, we apply Equation 9 to the Hi and H\({}_{2}\) cubes to obtain the 3D dust extinction cube of the SMC, with the middle of the quoted range (i.e., \(3.7\times 10^{-23}\) mag cm\({}^{2}\) H\({}^{-1}\)) adopted as the applied value. Each velocity slice of this cube is a map of extinction (in units of mag) that \(V\)-band starlight is subjected to while traversing through the corresponding volume.
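A sketch of this conversion is given below, assuming an Hi brightness-temperature cube (in K), a fixed channel width, and the 2D \(N_{\text{H}_{2}}\) map described above; the optically-thin Hi column density factor of \(1.823\times 10^{18}\,\mathrm{cm^{-2}\,(K\,km\,s^{-1})^{-1}}\) is standard, but the code itself is our simplified illustration of the procedure rather than the actual pipeline.

```python
import numpy as np

XHI = 1.823e18        # cm^-2 per (K km/s); optically thin Hi column density factor
AV_PER_NH = 3.7e-23   # mag cm^2 per H atom; middle of the Equation (9) range

def extinction_cube(tb_cube, dv_kms, nh2_map):
    """Approximate per-channel V-band extinction cube (mag), following Section 2.3.

    tb_cube : Hi brightness temperature, shape (n_chan, ny, nx), in K
    dv_kms  : channel width in km/s
    nh2_map : 2D molecular hydrogen column density map (cm^-2), e.g. from dust
    """
    nhi_cube = XHI * tb_cube * dv_kms                  # Hi column density per channel
    nhi_map = nhi_cube.sum(axis=0)                     # total 2D Hi column density
    # Distribute H2 over velocity in proportion to the Hi profile, i.e. assume
    # correlated Hi and H2 number densities as stated in the text.
    safe_nhi = np.where(nhi_map > 0.0, nhi_map, 1.0)
    ratio_map = np.where(nhi_map > 0.0, nh2_map / safe_nhi, 0.0)
    nh2_cube = nhi_cube * ratio_map[np.newaxis, :, :]
    # Equation (9): A_V scales with the total hydrogen column N_HI + 2 N_H2.
    return AV_PER_NH * (nhi_cube + 2.0 * nh2_cube)
```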
## 3 Ray tracing of starlight polarisation
We proceed to perform a careful comparison between the orientation of Hi filaments (Section 2.1) and the magnetic field traced by starlight polarisation (Section 2.2). For this, we devise a ray-tracing analysis of starlight polarisation, with the effect of diminishing starlight intensity due to dust extinction (Section 2.3) taken into account. Our goal here is to obtain the expected linear polarisation signature of each of the Lobo Gomes et al. (2015) stars, assuming that the Hi filaments in the SMC are indeed aligned with the ambient magnetic fields that are also experienced by the dust grains imprinting linear polarisation signals in the observed starlight. This assumption will be confirmed if we find a match between the expected (from ray tracing) and the observed starlight polarisation. In essence, we use the locations of polarised SMC stars reported in Lobo Gomes et al. (2015), and send the starlight through the GASKAP-Hi cube. When the starlight is intercepted by Hi filaments, linear polarisation signal along the filament orientation is added to it accordingly5. Note that the results from the ray-tracing analysis here are a representation of the Hi data, and the ray tracing is done (instead of averaging all spatial pixels in the Hi data) to completely remove the possibility of sampling bias imposed by the positions where polarised stars were found in Lobo Gomes et al. (2015). Furthermore, we adopt this ray-tracing approach instead of directly comparing the orientation angles of the filaments and starlight polarisation (e.g., McClure-Griffiths et al., 2006; Clark et al., 2014) since, for our case here studying the SMC, the observed starlight often traverses through multiple Hi filaments along the sightline. The contributions by these filaments are correctly combined by our ray-tracing analysis. The details of the ray tracing are described below.
Footnote 5: The preferential extinction of starlight along the polarisation plane perpendicular to the magnetic field will lead to a net polarisation signal added along the magnetic field orientation.
First, we need to determine the 3D positions where we place the stars within the GASKAP-Hi cube. While the plane-of-sky locations (i.e., in right ascension and declination) of the stars can be directly adopted from the Lobo Gomes et al. (2015) catalogue, the choice along the velocity axis is less straightforward. Putting the stars on the far side of the cube may not be a good choice, since this would be assuming that all of the polarised SMC stars are physically behind all the gas in the SMC. Instead, we calculate, for each of the 20 starlight polarisation fields, the Hi intensity weighted mean velocity (\(v_{\text{mean}}\))
and place the corresponding polarised SMC stars there. The values of \(v_{\rm mean}\) are listed in Table 1.
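For concreteness, the intensity-weighted mean velocity of a field can be computed as in the short sketch below, where `hi_cube` and `vel_axis` follow the illustrative array conventions of the earlier sketches and `field_mask` is an assumed boolean footprint of the \(8^{\prime}\times 8^{\prime}\) starlight field.

```python
import numpy as np

def field_mean_velocity(hi_cube, vel_axis, field_mask):
    """HI intensity-weighted mean velocity (first moment) over one starlight field.

    hi_cube    : (n_vel, n_y, n_x) brightness temperature cube [K]
    vel_axis   : (n_vel,) channel velocities [km/s]
    field_mask : (n_y, n_x) boolean mask of the field footprint
    """
    spectrum = hi_cube[:, field_mask].sum(axis=1)  # field-integrated HI spectrum
    return float(np.sum(vel_axis * spectrum) / np.sum(spectrum))
```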
The above choice of 3D stellar positions involves two key assumptions. The first one being that for any given line of sight, the Hi velocity has a monotonic trend with the macroscopic physical distance. While effects such as the gas dynamics of small Hi clouds and turbulence can break this monotonic trend within a velocity range of \(\sim 1\)-\(10\,{\rm km\,s^{-1}}\), we require the Hi velocity to follow the macroscopic physical distance monotonically for the ray-tracing experiment to be a good analogy with attenuation along the line of sight. The second assumption is that the Hi velocity profile traces stellar density across the physical distances corresponding to the associated Hi velocities. This ensures that using \(v_{\rm mean}\) as the ray-tracing starting point for a large (\(\gtrsim 100\)) number of stars will give statistically meaningful results. We further attempt using the radial velocity measurements from the _Gaia_ DR3 (Gaia Collaboration et al., 2022) instead of \(v_{\rm mean}\) as the stars' positions along the line of sight for the 57 cross-matched stars, as reported in Appendix B.
Our next step is to direct the starlight from \(v_{\rm mean}\) through the higher (\(v\geq v_{\rm mean}\)) and lower (\(v<v_{\rm mean}\)) velocity portions of the Hi cube independently. The former (latter) case would imply that the higher (lower) velocity Hi gas is physically closer to us, since the gas as well as the associated dust causing the stellar extinction needs to intercept the traversing starlight to cause the observed linear polarisation. While many optical and ultraviolet absorption line studies have suggested that the lower velocity gas component of the SMC is physically closer to us (e.g., Mathewson et al., 1986; Danforth et al., 2002; Welty et al., 2012), we do not make this assumption a priori. In addition, despite being astrophysically unrealistic (see above), we also perform the ray-tracing analysis through the entire SMC Hi cube (\(40.91\)-\(256.85\,{\rm km\,s^{-1}}\)) for both cases of starting from the lower and higher velocity ends for completeness6.
Footnote 6: Since our ray-tracing algorithm takes into account the extinction of starlight during the traversal along the line of sight (see below), the results do not only depend on the velocity range considered, but also the direction along the velocity axis that the starlight propagates through.
For each step through the radial velocity axis, we check individually for each of the stars if the corresponding starlight is being intercepted by any Hi filaments. If so, we add starlight polarisation signal accordingly as follows, taking into account the possibility of overlapping filaments with different orientations (\(\theta_{i}\)) at a single velocity step. The added linear polarisation at velocity \(v\) is expressed in Stokes \(QU\) space as
\[Q(v)=\frac{F(v)\cdot I(v)}{n}\cdot\sum_{i}^{n}\cos 2\theta_{i},\,{\rm and} \tag{10}\]
\[U(v)=\frac{F(v)\cdot I(v)}{n}\cdot\sum_{i}^{n}\sin 2\theta_{i}, \tag{11}\]
where \(F(v)\) is the (unitless) attenuated fractional starlight flux density due to dust extinction (see next paragraph), \(I(v)\) is the Hi intensity, and the summation index \(i\) goes through the list of the \(n\) intercepting filaments, all evaluated at the sky position of the star at velocity \(v\). This operation not only gives the correct orientation of the polarisation signal to be added, but also accounts for the depolarisation effect among multiple filaments. For example, consider the extreme case of two orthogonal intervening filaments, which are expected to cancel out one another and add no linear polarisation signal to the traversing starlight. Our scheme above would correctly yield \(Q(v)=U(v)=0\). Finally, the added polarisation signal (barring the depolarisation effect above) is proportional to the Hi intensity, since we expect the amount of extinction leading to the observed polarisation to be proportional to the dust and gas column densities.
The attenuated fractional starlight flux density \(F(v)\) introduced in the above paragraph incorporates the amount of dust extinction sustained by the starlight over its journey up till \(v\). This term is
Table 1: Observables of each of the Lobo Gomes et al. (2015) starlight polarisation fields covered by the new GASKAP-Hi SMC field. Columns: field number, \(\theta_{LG15}\) (deg), \(\overline{\theta}_{\star}\) (deg), \(\overline{p}_{\star}\) (%), \(\sigma_{p\star}\) (%), \(N_{\star}\), and \(v_{\rm mean}\) (km s\({}^{-1}\)). Parameters that are deemed uncertain, as described in the text, are placed in parentheses. [Table body not recoverable from the source text.]
necessary since the polarised intensity added at each velocity step should be proportional to the starlight flux density as it traverses through the same velocity step. The value of \(F(v)\) can be obtained by first summing the \(A_{V}\) velocity cube (Section 2.3) from the starting velocity of the starlight (\(v_{\rm mean}\), 40.91, or 256.85 km s\({}^{-1}\), depending on where the stars are placed along the velocity axis) to the velocity channel right before \(v\). The summed \(A_{V}\) is then converted from magnitude to flux density, with the intrinsic starlight flux density defined to be unity (since only the proportionality matters here). These all are captured by the following equation:
\[\log_{10}F(v_{i})=-\frac{2}{5}\sum_{j=0}^{i-1}A_{V}(v_{j}), \tag{12}\]
where the summation index \(j\) goes through each of the relevant velocity channels, with \(v_{0}\) corresponding to the starting velocity of the starlight, and \(v_{i}\) being the velocity step where the starlight flux density is being evaluated.
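Putting Equations 10-12 together, the per-star accumulation can be sketched as follows. The per-channel Hi intensities, extinctions, and intercepted-filament orientation angles at the star's sky position are assumed to have been extracted already (from the GASKAP-Hi cube, the \(A_{V}\) cube, and the RHT output, respectively); this is an illustrative sketch rather than the exact code used.

```python
import numpy as np

def ray_trace_star(I_v, A_V_v, filament_angles_v):
    """Accumulate Stokes Q and U along one sight line (Equations 10-12).

    I_v               : (n_step,) HI intensity at each velocity step [K]
    A_V_v             : (n_step,) extinction added at each velocity step [mag]
    filament_angles_v : length-n_step list of arrays with the orientation
                        angles [rad] of the filaments intercepted at each step
    Steps must be ordered along the assumed propagation direction of the starlight.
    """
    Q_tot, U_tot, A_V_sum = 0.0, 0.0, 0.0
    for I, A_V, angles in zip(I_v, A_V_v, filament_angles_v):
        F = 10.0 ** (-0.4 * A_V_sum)  # Eq. 12: flux surviving the extinction accumulated so far
        n = len(angles)
        if n > 0:
            Q_tot += F * I / n * np.sum(np.cos(2.0 * angles))  # Eq. 10
            U_tot += F * I / n * np.sum(np.sin(2.0 * angles))  # Eq. 11
        A_V_sum += A_V  # extinction of this step affects later steps only
    return Q_tot, U_tot
```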
From the above four runs of our ray-tracing experiment with different starting velocities and velocity ranges considered, we correspondingly obtain four sets of the expected linear polarisation signal from the 5,999 SMC stars. We note that all stars in all cases are intercepted by at least one Hi filament, and in most cases by multiple. We extract the per-field polarisation behaviour from these four cases of ray tracing by following procedures identical to those applied to the Lobo Gomes et al. (2015) data in Section 2.2. At this stage, the Stokes \(Q\) and \(U\) values are in units of K km s\({}^{-1}\) since they are brightness temperatures summed across velocity channels. We convert them to Stokes \(q\) and \(u\) in units of % by applying a conversion factor \(C\) (in units of % K\({}^{-1}\) km\({}^{-1}\) s), such that the obtained ray-traced \(\overline{p}_{\rm H\textsc{i}}\) values here exactly match the observed \(\overline{p}_{\star}\) values on a per-field basis (see next paragraph for a more detailed discussion). The resulting \(\overline{\theta}_{\rm H\textsc{i}}\), \(\overline{p}_{\rm H\textsc{i}}\), \(\sigma_{p{\rm H\textsc{i}}}\), and \(C\) values are listed in Table 2, with the corresponding 2D histograms shown in Figures 23-54 under Appendix C. The subscript "Hi" is chosen here to stress again that the ray-traced starlight polarisation results are a representation of the Hi data.
Our application of the conversion factor \(C\) to each of the combinations of the four ray-tracing cases and 20 starlight fields forces the ray-traced \(\overline{p}_{\rm H\textsc{i}}\) values to match the observed \(\overline{p}_{\star}\) obtained from a re-analysis of the Lobo Gomes et al. (2015) data (Section 2.2). The
Table 2: Per-field results of the ray-tracing experiment, for the low velocity range (\(v<v_{\rm mean}\)) and the high velocity range (\(v\geq v_{\rm mean}\)). For each range the columns list \(\overline{\theta}_{\rm H\textsc{i}}\) (deg), \(\overline{p}_{\rm H\textsc{i}}\) (%), \(\sigma_{p{\rm H\textsc{i}}}\) (%), \(\Delta\overline{\theta}\) (deg), and \(C\) (\(10^{-3}\) % K\({}^{-1}\) km\({}^{-1}\) s). Parameters that are deemed uncertain are placed in parentheses. [Per-field values not recoverable from the source text.]
values of \(C\) encapsulate information such as the gas-to-dust ratio in number density and the intrinsic properties of the dust (specifically, the efficacy in producing the observed starlight polarisation). While we obviously cannot then draw meaningful conclusions from comparing the ray-traced \(\overline{p}_{\rm H\textsc{i}}\) and the observed \(\overline{p}_{\star}\), we can still compare the \(\sigma_{p}/\overline{p}\) values to assess the ability of a ray-tracing experiment through the GASKAP-Hi cube to uncover the small-scale magnetic field in the SMC (Section 4.3). Furthermore, the scaling does not affect the study of the large-scale magnetic field orientation with \(\overline{\theta}_{\rm H\textsc{i}}\) (Section 4.1).
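A per-field version of this scaling might look like the sketch below. The per-field statistics in the actual analysis follow Section 2.2; here, purely for illustration, they are approximated by vector-averaging the per-star Stokes parameters, and the input and output names are assumptions.

```python
import numpy as np

def scale_field_to_observed(Q_stars, U_stars, p_obs_mean):
    """Scale ray-traced Stokes values (K km/s) so the field-mean p matches observations.

    Q_stars, U_stars : (n_star,) ray-traced Stokes parameters of the field's stars
    p_obs_mean       : observed field-mean polarisation fraction [%]
    Returns (q, u) in %, the conversion factor C [% K^-1 km^-1 s], and simple
    field statistics (mean angle [deg], mean p [%], scatter of p [%]).
    """
    p_field_raw = np.hypot(Q_stars.mean(), U_stars.mean())
    C = p_obs_mean / p_field_raw            # forces the ray-traced mean p to match
    q, u = C * Q_stars, C * U_stars
    theta_mean = 0.5 * np.degrees(np.arctan2(u.mean(), q.mean())) % 180.0
    p_mean = np.hypot(q.mean(), u.mean())   # equals p_obs_mean by construction
    sigma_p = np.std(np.hypot(q, u))
    return q, u, C, theta_mean, p_mean, sigma_p
```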
Finally, we remark that the differences between our formulation and that of Clark & Hensley (2019) are our implementation of extinction along the line of sight, as well as their incorporation of the RHT amplitude. For the former, the inclusion of the extinction term is appropriate for our comparison with starlight polarisation data, while their approach of excluding the extinction term is suitable for their comparison of the HI4PI (HI4PI Collaboration et al., 2016) and GALFA-Hi (Peek et al., 2018) cubes with the polarised dust emission from _Planck_ at 353 GHz (Planck Collaboration XIX, 2015). Meanwhile for the latter, our exclusion of the RHT amplitude represents a different view of the RHT outputs compared to that of Clark & Hensley (2019), with the RHT 4D-hypercube seen as a deterministic depiction of the filament locations (i.e., any non-zero values delineate filamentary structures) rather than a probabilistic one (i.e., the RHT amplitude describes the probability of being part of an Hi filament). We repeat our analysis with the RHT amplitude incorporated into Equations 10 and 11 similarly to Clark & Hensley (2019), and find that the results are almost identical, with the resulting \(\overline{\theta}_{\rm H\textsc{i}}\) differing by \(5^{\circ}\) in the worst case and by less than \(1^{\circ}\) on average.
## 4 Results
### Magnetic alignment of Hi filaments
To test whether magnetic alignment of Hi filaments exists in the SMC, we compute the polarisation angle difference (\(\Delta\bar{\theta}=|\overline{\theta}_{\star}-\overline{\theta}_{\rm H{\textsc {i}}}|\)) between the Lobo Gomes et al. (2015) observations (see Section 2.2) and each of our four cases of ray-tracing experiment (see Section 3). The results are listed in Table 2 and plotted in Figure 4.
We recognise a notable trend in \(\Delta\bar{\theta}\) for the case of ray tracing through the low velocity range of the Hi cube (top panel of Figure 4) - the values of \(\Delta\bar{\theta}\) are close to \(0^{\circ}\) for most of the fields in the SMC Bar and the start of the SMC Wing (approximately fields 1 to 11). Meanwhile, no obvious trends can be seen for the other velocity ranges. Below, we will first statistically quantify this apparent alignment in the low velocity range, followed by exhaustively investigating the potential trends of \(\Delta\bar{\theta}\) with diagnostics from Hi, H\(\alpha\), and starlight polarisation data.
#### 4.1.1 Statistical significance of the preferential magnetic alignment
We first compute the average \(\Delta\bar{\theta}\) from ray tracing through the low velocity portion of the Hi. Considering all fields (1-20, less the uncertain fields), the mean, median, and inverse-variance weighted mean of \(\Delta\bar{\theta}\) are \(32^{\circ}\pm 2^{\circ}\), \(20^{\circ}\pm 3^{\circ}\), and \(29^{\circ}\pm 1^{\circ}\), respectively. The listed uncertainties are the corresponding standard errors. Meanwhile, considering fields 1-11 only (again excluding the uncertain fields) these three average values decrease to \(25^{\circ}\pm 2^{\circ}\), \(13^{\circ}\pm 3^{\circ}\), and \(24^{\circ}\pm 2^{\circ}\), respectively. These are all lower than the \(45^{\circ}\) expected if the Hi filament orientation is independent of the magnetic field orientation. To
Figure 4: Starlight polarisation angle difference (\(\Delta\overline{\theta}\)) between the Lobo Gomes et al. (2015) observations and our ray-tracing experiment through the GASKAP-Hi cube, with the four panels showing different velocity ranges adopted for the ray tracing. The translucent data points represent fields without a coherent starlight polarisation angle from either ray tracing or the actual starlight observations, while the two horizontal dashed lines at \(20^{\circ}\) and \(70^{\circ}\) represent the cutoff values adopted for alignment and anti-alignment, respectively (Section 4.1.1).
evaluate the statistical significance, we perform two statistical tests as described below.
First, we apply the one-sample Kolmogorov-Smirnov (KS) test to our results, comparing the \(\Delta\overline{\theta}\) distributions against a uniform distribution within [0\({}^{\circ}\), 90\({}^{\circ}\)). Our null hypothesis is that the cumulative distribution function (CDF) of the data is less than or equal to the CDF of a uniform distribution for all \(\Delta\overline{\theta}\) values, while the alternative hypothesis is that the data CDF is greater than that of a uniform distribution for at least some \(\Delta\overline{\theta}\). The resulting \(p\)-values considering all fields and fields 1-11 (both excluding the uncertain fields) are 0.061 and 0.008, respectively. These indicate a preference for the alternative hypothesis and, combined with the low average \(\Delta\overline{\theta}\) determined above, suggest a preferred alignment of the Hi filaments with magnetic fields in the concerned SMC volume (namely, the northeastern end of the SMC Bar and the Bar-Wing transition region).
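For reference, this test can be reproduced with SciPy as in the sketch below, where `delta_theta_deg` is an assumed array of the per-field \(\Delta\overline{\theta}\) values in degrees; `alternative='greater'` encodes the null and alternative hypotheses as stated above.

```python
import numpy as np
from scipy import stats

def ks_against_uniform(delta_theta_deg):
    """One-sample KS test of angle differences against Uniform(0, 90) degrees."""
    result = stats.kstest(delta_theta_deg, 'uniform', args=(0.0, 90.0),
                          alternative='greater')
    return result.statistic, result.pvalue
```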
For the second statistical test, we first draw a cutoff level of \(\Delta\overline{\theta}\) at 20\({}^{\circ}\), below which the Hi filaments and magnetic fields are defined as aligned. This adopted value is in line with the typical degree of alignment of Galactic Hi filaments with magnetic fields (Clark et al., 2014) and structures such as the Galactic plane (Soler et al., 2020). Similarly, a cutoff level at 70\({}^{\circ}\) can be defined, above which the two are classed as perpendicular to each other. Of all 12 starlight fields (again excluding the uncertain fields), six exhibit apparent magnetic alignment of Hi filaments. We then evaluate the likelihood that this alignment fraction is purely by chance drawn from a uniform distribution within [0\({}^{\circ}\), 90\({}^{\circ}\)). This is done by drawing 10\({}^{8}\) sets of 12 \(\Delta\overline{\theta}\) values from such a uniform distribution, and counting how many sets have at least six \(\Delta\overline{\theta}\) values of less than 20\({}^{\circ}\). We find that the likelihood of such chance alignment occurring is only 3.2% (i.e., \(p\)-value of 0.032). The case of fields 1-11, which corresponds to the northeastern Bar and the Bar-Wing transition region, is similarly investigated to evaluate the likelihood for six out of nine fields to show an apparent magnetic alignment. We find that this arises in only 0.5% of the cases (i.e., \(p\)-value of 0.005). These all again suggest that the agreement in orientation between Hi filaments and magnetic fields is astrophysical rather than occurring by chance.
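A batched Monte Carlo estimate of these chance probabilities, together with the equivalent closed-form binomial check, might look as follows; the batch size, random seed, and function names are illustrative.

```python
import numpy as np
from scipy.stats import binom

def chance_alignment_prob(n_fields=12, n_aligned=6, cutoff=20.0,
                          n_sets=10**8, batch=10**6, seed=0):
    """Probability that >= n_aligned of n_fields uniform angles in [0, 90) fall below cutoff."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sets // batch):
        draws = rng.uniform(0.0, 90.0, size=(batch, n_fields))
        hits += np.count_nonzero((draws < cutoff).sum(axis=1) >= n_aligned)
    return hits / n_sets

# Closed-form cross-checks: P(X >= 6) for X ~ Binomial(n, 20/90)
p_all_fields = binom.sf(5, 12, 20.0 / 90.0)   # ~0.032, matching the quoted 3.2%
p_fields_1_11 = binom.sf(5, 9, 20.0 / 90.0)   # ~0.005, matching the quoted 0.5%
```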
#### 4.1.2 Coherence of \(\Delta\overline{\theta}\) between starlight fields
Next, we maintain our focus on the starlight fields where we find magnetic alignment of Hi filaments, looking into their spatial distribution and relationship with the magnetic field orientation. Such information on the spatial coherence can reflect the underlying astrophysics shaping both the magnetic fields and Hi structures, as well as affect the statistical significance above (since the analyses in Section 4.1.1 have implicitly assumed that there are no spatial correlations between different fields).
We plot the \(\Delta\overline{\theta}\) against the observed starlight \(\overline{\theta}_{\star}\) in Figure 5, with the six fields demonstrating magnetic alignment of Hi filaments (\(\Delta\overline{\theta}<20^{\circ}\)) being fields 2, 3, 8, 9, 10, and 11. These fields cover a broad range in the observed \(\overline{\theta}_{\star}\) that traces the magnetic field orientation, from 52.6\({}^{\circ}\) to 156.0\({}^{\circ}\). In particular, we note rapid angle changes between nearby fields - from 52.6\({}^{\circ}\pm 4.1^{\circ}\) to 106.0\({}^{\circ}\pm 4.5^{\circ}\) from field 2 to 3, and from 69.0\({}^{\circ}\pm 2.3^{\circ}\) to 142.3\({}^{\circ}\pm 4.7^{\circ}\) from field 9 to 10, both across 15\({}^{\prime}\) = 270 pc. Overall, despite the strongly fluctuating magnetic field orientation amongst these fields, the Hi filament orientations seem to remain preferentially aligned with their respective local magnetic field.
#### 4.1.3 Correlation with other diagnostics
Finally, we wrap up our investigation in \(\Delta\overline{\theta}\) by looking into the potential physical properties of the fields that may have led to the alignment or misalignment of the Hi filaments with the magnetic field. Diagnostics are derived from the new GASKAP-Hi data (Pingel et al., 2022), the starlight polarisation data (Lobo Gomes et al., 2015), and the data from the Wisconsin H\(\alpha\) Mapper (WHAM) survey (Smart et al., 2019). For Hi and H\(\alpha\), we compare the \(\Delta\overline{\theta}\) values against the moment 0 (velocity-integrated intensity), moment 1 (mean velocity), and moment 2 (velocity dispersion), as well as visually inspecting the
Figure 5: Starlight polarisation angle difference (\(\Delta\overline{\theta}\)) between the Lobo Gomes et al. (2015) observations and our ray-tracing experiment through the GASKAP-Hi cube, plotted against the magnetic field orientation as traced by the observed starlight polarisation. Only starlight fields 1–11 with reliably determined \(\overline{\theta}_{\star}\) and \(\overline{\theta}_{\rm Hi}\) (see Tables 1 and 2) are shown here. The two horizontal dashed lines at 20\({}^{\circ}\) and 70\({}^{\circ}\) represent the cutoff values adopted for alignment and anti-alignment, respectively (Section 4.1.1).
Figure 6: The preferred orientation of Hi filaments, obtained by applying the ray-tracing algorithm through the full velocity range of the Hi cube (Section 3; but with the extinction term turned off), shown as the flow-line pattern generated by the LIC algorithm (Cabral & Leedom, 1993). The colour map shows the corresponding “polarised intensity” from ray tracing. This map can be compared with future polarised dust emission observations of the SMC, with the flow-line pattern here representing the predicted polarisation \(B\)-vector (= \(E\)-vector +90\({}^{\circ}\)) if Hi filaments are indeed tracing the plane-of-sky magnetic field orientation.
velocity profiles for each field. Meanwhile for starlight polarisation, we compare \(\Delta\overline{\theta}\) against \(\overline{p}_{\bullet}\) and \(\sigma_{p\star}\) (Section 2.2). We do not find any notable trends with any of these parameters.
### The preferred orientation of Hi filaments in the SMC
We use the results from applying the RHT algorithm to the GASKAP-Hi data to further obtain the preferred orientation of Hi filaments across the SMC. This can allow us to identify the astrophysical processes shaping the Hi structures of this galaxy, and can also be used to compare against future observations of the SMC magnetic fields to further test the magnetic alignment of Hi filaments, as elaborated in Sections 5.3 and 5.5.
First, we produce the preferred orientation map of Hi filaments by combining the full radial velocity range of the SMC. Specifically, we follow the ray-tracing analysis procedures as described in Section 3, but performed for each pixel of the GASKAP-Hi map instead of on a per-star basis. Furthermore, we have turned off the dust extinction effect (i.e., \(A_{V}(v_{j})=0\) for all \(v_{j}\) in Equation 12). The resulting Stokes \(Q\) and \(U\) maps are smoothed to \(8^{\prime}\) to extract the underlying \(\approx 150\) pc-scale pattern. The resulting "polarised intensity" map is shown in Figure 6, with the corresponding position angle tracing the preferred orientation of Hi filamentary structures shown as the flow-line pattern produced by the Line Integral Convolution (LIC) algorithm (Cabral & Leedom, 1993). The operations here are similar to those of Clark & Hensley (2019), which studied the case of the Milky Way by comparing the Hi4PI cube (HI4PI Collaboration et al., 2016) with _Planck_ polarised dust emission (Planck Collaboration XIX, 2015) (see also Section 3).
Figure 8: Map of the angle difference of the preferred Hi filament orientations between the low- and high-velocity portions of the SMC (see Figure 7), shown as the colour dots. The background greyscale map shows the peak Hi intensity map.
Figure 7: The preferred orientation of Hi filaments in the low-velocity (presumably near-side; upper panel) and the high-velocity (presumably far-side; lower panel) portions of the SMC, outlined by the green tick marks. Only sightlines with SMC Hi column densities of higher than \(10^{21}\) cm\({}^{-2}\) are considered here. The background greyscale maps are the peak Hi intensity maps from GASKAP-Hi (Pingel et al., 2022).
Figure 9: Histogram of the angle difference of the preferred Hi filament orientations between the low- and high-velocity portions of the SMC (see Figures 7 and 8).
Next, we produce preferred orientation maps separately for the low-velocity (\(v<v_{\rm mean}\); presumably near-side) and the high-velocity (\(v\geq v_{\rm mean}\); presumably far-side) portions of the Hi cube. This is again done by a per-pixel ray tracing through the Hi cube without dust extinction (\(A_{V}(v_{j})=0\)), but restricted in velocity space to the range below or above \(v_{\rm mean}\) accordingly. The 2D spatial domain is then divided into independent boxes of \(70\times 70\,\rm{px}\approx 8^{\prime}\times 8^{\prime}\), within which the Stokes \(Q\) and \(U\) values are summed and subsequently converted to \(\theta\). This operation is only performed for boxes with a hydrogen column density above \(10^{21}\,\rm{cm}^{-2}\) in order to focus on the main gaseous body of the SMC. The \(\theta\) maps for the two portions of the SMC are plotted in Figure 7, the angle difference map is plotted in Figure 8, and the histogram of angle difference is shown in Figure 9.
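The per-box conversion from summed Stokes parameters to a preferred orientation can be sketched as follows; the \(Q\) and \(U\) maps (already summed over the chosen velocity range, with extinction switched off), the column density map used for masking, and the exact masking criterion are illustrative assumptions.

```python
import numpy as np

def box_orientation_map(Q_map, U_map, N_H_map, box=70, nh_min=1e21):
    """Preferred filament orientation per box of box x box pixels.

    Returns theta in degrees within [0, 180), NaN where the box-mean hydrogen
    column density falls below nh_min [cm^-2].
    """
    n_by, n_bx = Q_map.shape[0] // box, Q_map.shape[1] // box
    theta = np.full((n_by, n_bx), np.nan)
    for j in range(n_by):
        for i in range(n_bx):
            sl = np.s_[j * box:(j + 1) * box, i * box:(i + 1) * box]
            if np.nanmean(N_H_map[sl]) < nh_min:
                continue
            Q, U = np.nansum(Q_map[sl]), np.nansum(U_map[sl])
            theta[j, i] = 0.5 * np.degrees(np.arctan2(U, Q)) % 180.0
    return theta
```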
Finally, we generate a preferred orientation map of the Hi filaments for the low-velocity portion (presumably near-side) of the SMC only, with the attenuation of starlight flux density due to dust extinction taken into account (Equations 10-12). The resulting Stokes \(Q\) and \(U\) maps are again smoothed to \(\approx 8^{\prime}\), and the corresponding position angle map is shown in Figure 10 as the flow-line pattern from LIC over the Digitized Sky Surveys 2 (DSS2; Lasker et al., 1996) optical image of the SMC. This map can be compared to future starlight polarisation observations.
### Relationship between ray-traced and observed \(\sigma_{p}/\overline{p}\)
As pointed out in Section 2.2, the ratio between \(\sigma_{p\star}\) and \(\overline{p}_{\star}\) of the observed starlight can be indicative of the relative strength between the small- (\(\approx 100\,\rm{pc}\)) and large-scale (\(\gg 100\,\rm{pc}\)) magnetic field. It is therefore of interest to explore whether the Hi data have sufficient angular resolution to enable similar measurements. This can be evaluated by seeing whether the \(\sigma_{p{\rm H\textsc{i}}}/\overline{p}_{\rm H\textsc{i}}\) parameter from the ray-traced starlight polarisation corresponds well with the observed \(\sigma_{p\star}\) and \(\overline{p}_{\star}\). We plot the ray-traced against observed \(\sigma_{p}/\overline{p}\) of fields 1-11 through the low velocity portion of the GASKAP-Hi cube in Figure 11, and we do not find good agreement between the two sets of \(\sigma_{p}/\overline{p}\) values. We further note that for all of the fields except 2 and 11, the ray-traced \(\sigma_{p{\rm H\textsc{i}}}/\overline{p}_{\rm H\textsc{i}}\) values are lower than the observed counterparts. The mismatch between the two is likely due to the limited spatial resolution of the GASKAP-Hi data (see Section 5.2.2).
## 5 Discussion
### The physical nature of the Hi filaments
#### 5.1.1 Physical scales of the Hi filaments
We first consider the physical scales of the SMC Hi filaments that we study here, and attempt to identify analogues in the Milky Way. Given the spatial resolution of \(30^{\prime\prime}=9\,\rm{pc}\) of the GASKAP-Hi observations (Pingel et al., 2022) and our chosen RHT parameters (specifically, \(D_{W}=83\,\rm{px}=175\,\rm{pc}\) and \(\tt{threshold}=0.7\)), the Hi filamentary structures that we uncover have widths of \(\sim 9\,\rm{pc}\) and lengths of \(\gtrsim 120\,\rm{pc}\). This is clearly in a vastly different physical scale range compared to the individual Hi filaments on the local bubble wall with widths \(\lesssim 0.1\,\rm{pc}\) and lengths \(\sim 80\,\rm{pc}\) (Clark et al., 2014). However, as pointed out in their work, the Clark et al. (2014) filaments are highly spatially correlated in orientation, together forming coherent bundles of filaments across 10s of pc. Meanwhile, high spatial resolution studies of discrete elongated Hi clouds in the Milky Way have found that they can be further composed of fine strands of Hi filaments. For example, the Riegel-Crutcher cloud appears as an elongated Hi cloud with a width of \(\approx 5\,\rm{pc}\) and a length of \(\approx 20\,\rm{pc}\), and is constituted by countless Hi filaments with widths of \(\lesssim 0.1\,\rm{pc}\) (McClure-Griffiths et al., 2006). Similarly, the local velocity cloud towards the Ursa Major cirrus (Skalidis et al., 2022) has a width of \(\approx 5\,\rm{pc}\) and a length of \(\approx 15\,\rm{pc}\), and is formed by groups of \(\lesssim 1\,\rm{pc}\)-wide Hi filaments. Combining all the examples above, we hypothesise that the Hi filamentary structures in the SMC uncovered by our GASKAP-Hi observations may actually be bundles of fine Hi filaments with coherent orientations across \(\gtrsim 120\,\rm{pc}\). Alternatively, we may be tracing the general anisotropic features of the Hi gas as seen in the Galactic plane (e.g., Soler et al., 2022).
Finally, we note the existence of several gigantic filamentary Hi structures in the Milky Way. These can be seen in, for example, the Canadian Galactic Plane Survey (CGPS) data (\(\approx\) few degrees in length; see Gibson, 2010) as well as the GALFA-Hi data (\(\approx 10\,\rm{deg}\) in length; Peek et al., 2018). In addition, there is a recent discovery of an enormous Galactic Hi filament with width \(\approx 50\,\rm{pc}\) and length \(\approx 1,200\,\rm{pc}\) (the "Maggie" filament; Soler et al., 2020; Syed et al., 2022). Similar Hi filaments can account for a small fraction of our SMC Hi filaments, but given the apparent rarity of such a class of Hi filaments in the Milky Way we deem it improbable as the primary explanation for most of the SMC filaments.
#### 5.1.2 The alignment with magnetic fields
We report and statistically assess the alignment of Hi filaments in the SMC with magnetic fields in Section 4.1, with the trend seen in the northeastern Bar and the beginning of the Wing region (approximately fields 1-11). The rest of the SMC volume lacks starlight polarisation data of sufficient quality to draw conclusions. This is the first time that such a relation of magnetically aligned Hi filaments has been identified beyond the Milky Way, enabled by the unprecedented combination of the angular resolution, velocity resolution, and surface brightness sensitivity of the new GASKAP-Hi data. The results suggest that magnetically aligned Hi filaments may also be seen beyond the Milky Way, which is a key piece of information for future numerical studies of the astrophysics governing the formation of these filamentary Hi structures.
The ability of the GASKAP-Hi data to see magnetic alignment of filaments at all is, in fact, somewhat surprising. Clark et al. (2014) has explored the effects of the spatial resolution of the Hi data on the alignment of the subsequently identified filamentary structures with magnetic fields. Upon comparing the results using GALFA-Hi data at a resolution of \(4^{\prime}=0.1\,\rm{pc}\)(Peek et al., 2011) with those from GASS data at a resolution of \(16^{\prime}=0.5\,\rm{pc}\)(McClure-Griffiths et al., 2009), they have found that the degree of alignment can worsen from within \(\approx 16^{\circ}\) from the former to within \(\approx 36^{\circ}\) from the latter. Meanwhile, our spatial resolution of the SMC in Hi is \(30^{\prime\prime}=9\,\rm{pc}\), significantly worse than even the GASS data. We suspect that the key here is to have matching spatial scales traced by both the Hi and the starlight polarisation data. In particular, we opt for an analysis on a per-field basis, with both data sets tracing \(\approx 150\,\rm{pc}\) scales. Meanwhile, the Clark et al. (2014) analyses studied much smaller scales (\(\lesssim 0.1\,\rm{pc}\)) in both data sets.
Finally, as pointed out in Section 4.1.2, the starlight fields in the SMC Bar region exhibit a range of plane-of-sky magnetic field orientation across \(\sim 100\,\rm{pc}\) scale (see Section 5.2 for more discussions). Despite such a rapidly varying magnetic structure, the Hi filament orientation remains following the magnetic fields. This strongly suggests that we are not looking at a chance alignment of the two across multiple spatially correlated starlight fields, and the statistical evaluation of the magnetic alignment in Section 4.1.1 is valid.
### The magnetic field structure of the SMC
#### 5.2.1 Information from starlight polarisation
In the Bar region of the SMC (defined here as fields 1-9), we do not find any consistent trends in the observed starlight polarisation amongst the starlight fields. This shows that the large-scale magnetic fields in the Bar are not ordered on scales much larger than that corresponding to the field-of-view of the starlight observations (\(8^{\prime}\approx 150\) pc). Meanwhile, within each of the starlight fields we find consistent coherent starlight polarisation signals (\(\overline{p}_{\star}\) and \(\overline{\theta}_{\star}\)) indicative of an ordered magnetic field on \(\approx 150\) pc scale. Finally, the large scatter of the per-field starlight polarisation (\(\sigma_{p\star}\)) is a manifestation of the strong turbulent magnetic field on \(\ll 150\) pc scale. All these can be explained by the perturbation of the magnetic field by some \(\gtrsim 150\) pc structures (e.g., supershells, which are known to be ubiquitous in the SMC; Staveley-Smith et al. 1997) and by tidal forces, as well as the injection of turbulent energy at \(\lesssim 10\) pc scale by stellar feedback processes (e.g., MacLow 2004).
Figure 11: Ray-traced against observed values of \(\sigma_{p}/\overline{p}\). Only starlight fields 1–11 with reliably determined \(\overline{p}\) (see Tables 1 and 2) are shown here. The diagonal grey line marks where the ray-traced and observed values agree.
Figure 10: Result of ray tracing through the low velocity portion (presumably the near-side of the SMC) of the GASKAP-Hi data cube, with the effect of starlight attenuation due to dust extinction taken into account. The position angle is shown as the flow-line pattern generated by the LIC algorithm (Cabral & Leedom 1993). This can be compared with future starlight polarisation observations to test whether Hi filaments are aligned with the magnetic field throughout the entirety of the SMC. The colour optical map is from the DSS2 (Lasker et al. 1996).
The above interpretation is in _apparent_ conflict with the conclusion from studies of RM of extragalactic sources behind the SMC that found a coherent magnetic field pointed away from the observer consistently across the entire SMC Bar (Mao et al., 2008; Livingston et al., 2022), which would not have been observed if there were no coherent magnetic field on \(\gg 150\) pc scale. However, we point out that the lack of an ordered7 plane-of-sky magnetic field does not necessarily imply a lack of a coherent line-of-sight magnetic field. For example, imagine an initially perfectly coherent magnetic field along the line-of-sight only, with the magnetic field strength component in the plane-of-sky being zero. If this frozen-in magnetic field is then perturbed by turbulence, the resulting magnetic field configuration can have a significant unordered plane-of-sky component, while still preserving some degree of coherence along the line-of-sight.
Footnote 7: The distinction between a coherent and an ordered magnetic field is that the former has a constant magnetic field _direction_ (without any flips), while the latter only needs to have a constant _orientation_ (flips in direction are permitted; e.g., Jaffe et al., 2010; Beck, 2016).
We next move onto the SMC Wing region (defined here as fields 10-20), which is believed to have formed due to tidal interactions with the LMC. Four (namely, 10, 11, 12, and 15) of the five fields that individually show coherent starlight polarisation signals exhibit a consistent magnetic field orientation with \(\theta\approx 150^{\circ}\) across \(\approx 1\) kpc, with the remaining field (20) situated far from the SMC Bar (about \(3^{\circ}\approx 3.2\) kpc in projected distance) having a distinct \(\theta\approx 70^{\circ}\). The trend of \(\theta\approx 150^{\circ}\) has also been pointed out by Lobo Gomes et al. (2015) using their analysis methods (trend III in their section 5). The magnetic fields are oriented along the general elongation of the SMC Wing itself but with a slight offset of \(\approx 20^{\circ}\). Combined with the recent results of a general negative RM of extragalactic sources behind the SMC Wing (Livingston et al., 2022), we obtain a picture of a coherent magnetic field on scales \(\gtrsim 1\) kpc along the Wing's elongation. This resembles the \(\sim 20\) kpc tidal tail of the Antennae galaxies, which was found through studying its synchrotron emission to host a regular magnetic field along its entirety, believed to have come from tidal stretching of the original disk magnetic fields (Basu et al., 2017). Apart from the concerned physical scales, one key difference between the two cases is the strength ratio between the large-scale regular and the small-scale turbulent magnetic fields: the former is believed to be stronger for the case of the Antennae galaxies tidal tail (Basu et al., 2017), while the latter likely dominates in the SMC Wing on \(\ll 150\) pc scale as reflected by the consistently high \(\sigma_{p\star}\) compared to \(\overline{p}_{\star}\) for all of the starlight fields.
Finally, we point out again and further discuss a common characteristic in both the Bar and Wing regions of the SMC - the \(\ll 150\) pc turbulent magnetic field strength is much higher than the ordered magnetic field strength, as inferred from the consistently high \(\sigma_{p\star}/\overline{p}_{\star}\) ratio in all observed starlight fields. This is in agreement with the conclusions of numerous previous studies of the SMC magnetic fields (e.g., Mao et al., 2008; Lobo Gomes et al., 2015; Livingston et al., 2022), in line with our knowledge of the highly complex neutral and ionised gas dynamics in the SMC (e.g., Le Coarer et al., 1993; Staveley-Smith et al., 1997; Smart et al., 2019), and in contrast with most spiral galaxies that have comparable strengths between the turbulent and ordered components (see e.g., Beck & Wielebinski, 2013; Beck, 2016). Given that the \(\sigma_{p\star}\) values are larger than the corresponding \(\overline{p}_{\star}\) values, we argue that the traditional Davis-Chandrasekhar-Fermi method (Davis, 1951; Chandrasekhar & Fermi, 1953), which uses the spread of starlight polarisation angle as a measure of the turbulent-to-ordered magnetic field strength, cannot be directly applied to the SMC. This is because the starlight polarisation angles span the full \(180^{\circ}\), meaning that the angle spread loses physical meaning.
#### 5.2.2 Hi filaments as a tracer of the small-scale magnetic field
We explore using our ray-tracing analysis method on the GASKAP-Hi cube to obtain information on the small-scale magnetic field in Section 4.3, and do not find a good match between the ray-traced and observed \(\sigma_{p}/\overline{p}\) values, with the former underestimating the latter in most of the starlight fields. We discuss the possible reasons behind this mismatch below.
The most probable reason behind the low ray-traced \(\sigma_{p\rm{H}}/\overline{p}_{\rm{H}}\) values is the limited spatial resolution of the GASKAP-Hi data. The \(30^{\prime\prime}=9\) pc resolution of the data sets the absolute minimum scale that our study is sensitive to, while our RHT parameter choice of \(R_{\rm{sm}}=12\) px \(=25\) pc may further coarsen the effective resolution. If the spatial resolution of our data is comparable to or poorer than the outer scale of turbulence in the SMC, the map of filaments identified by the RHT algorithm and thus our ray-tracing analysis may not be able to capture the corresponding intricate features that actual starlight polarisation data can. Recent spatial power spectrum and structure function analyses of the SMC in Hi have concluded that the turbulence is being driven on a very large (galactic) scale (Szotkowski et al., 2019), suggesting that our Hi data at \(\approx 9\) pc resolution should well resolve the turbulent structures in the SMC spatially. Meanwhile, the RM structure function using extragalactic sources behind the SMC suggested an upper limit to the outer scale of turbulence in the SMC of 250 pc (Livingston et al., 2022), and similar studies through the Milky Way disk have indicated outer scales of turbulence of \(\sim 10\) pc in the spiral arms and \(\sim 100\) pc in the interarm regions (Haverkorn et al., 2008). All these could be reconciled if the _magnetic_ outer scale of turbulence that can be resolved by the starlight polarisation data but not our ray-tracing analysis are much smaller than that of the gas density. This would mean that the physical conditions portrayed by the Hi filaments may be less turbulent and more coherent than the actual reality traced by observed starlight polarisation, leading to the lower \(\sigma_{p\rm{H}}/\overline{p}_{\rm{H}}\) values from our ray-tracing analysis.
We further consider whether the polarised dust extinction in the Milky Way can be a reasonable explanation to the mismatch in \(\sigma_{p}/\overline{p}\). The procedure of Galactic foreground removal by the Lobo Gomes et al. (2015) starlight polarisation catalogue concerns the large-scale coherent component only, while the contributions by the turbulent magnetic fields in the Milky Way (if present) cannot be removed on a per-star basis due to the stochastic nature. The Galactic foreground can therefore introduce extra scatter in the observed starlight polarisation (i.e., higher \(\sigma_{p\star}\) and therefore \(\sigma_{p\star}/\overline{p}_{\star}\)), but not to the ray-traced starlight since we did not take the Galactic contributions into account. Along the line of sight towards the SMC (Galactic latitude: \(b=-44.3^{\circ}\)), the approximate path lengths through the Galactic Hi thin and thick disks (with half-widths of \(\sim 100\) and \(\sim 400\) pc, respectively; Dickey, 2013) are 140 and 570 pc, respectively. At these distances, the \(8^{\prime}\) field-of-view of the starlight polarisation observations convert to about 0.3 and 1.3 pc, respectively. These values are much smaller than the \(\sim 10\)-100 pc outer scale of turbulence in the Milky Way (Haverkorn et al., 2008). As the large-scale and small-scale magnetic fields are of comparable strengths in the Milky Way (e.g., Beck, 2016), the turbulent magnetic field must be significantly weaker than the large-scale counterpart at scales much smaller than the outer scale of turbulence. Therefore we deem this unlikely as the primary explanation of the mismatch in \(\sigma_{p}/\overline{p}\).
### The preferred orientation of Hi filaments
Moving along the Bar from the northeastern end (\(\rm RA=1^{h}\,10^{m}\); \(\rm Decl\approx-71.5^{\circ}\)) to the southwestern end (\(\rm RA\approx 0^{h}\,45^{m}\); \(\rm Decl\approx-73.5^{\circ}\)), we identify three distinct regions. First, the Hi filaments are preferentially oriented along the elongation of the Bar, seen in both the low- and high-velocity ranges (Figure 7). Noting that the Hi velocity gradient is also oriented northeast-southwest along the elongation of the SMC Bar (Di Teodoro et al., 2019), the orientation of the Hi filaments here can be controlled by the gas dynamics in the galaxy, similar to the case of the disk-parallel Hi filaments in the Milky Way (Soler et al., 2022). However, we point out that the internal gas dynamics of the SMC may be much more complex than that revealed by Hi data alone (see Murray et al., 2019). Second, at \(\rm RA\approx 0^{h}\,55^{m}\), Decl \(\approx-72.3^{\circ}\), the Hi filaments switch in the preferred orientation abruptly to be along northwest-southeast. This is seen in the low-velocity portion only, and is lined up with the SMC Wing to the southeast. These Hi structures here are likely shaped by the tidal stretching from interactions with the LMC that have also formed the SMC Wing and the Magellanic Bridge (Besla et al., 2012; Wang et al., 2022). We further note that in this same sky area within the SMC, the stellar proper motion (Niederhofer et al., 2021) that is believed to be tracing the effect of tidal stretching exhibits a consistent direction as our Hi filament orientation. Third and finally, starting from \(\rm RA\approx 0^{h}\,55^{m}\), Decl \(\approx-72.5^{\circ}\) to the southwest, the filaments are preferentially oriented east-west, with a significant perpendicular component to the Bar elongation, seen most clearly in the low-velocity and also in the high-velocity. This can be shaped by feedback processes from star formation that transport gas away from the galaxy into the circumgalactic medium. To summarise, the preferred orientation of Hi filaments across the SMC Bar shows highly complex geometries, possibly shaped by multiple astrophysical processes that the SMC is subjected to.
Meanwhile, we note that the preferred Hi filament orientation in the SMC Wing also exhibits highly complex structures. Overall, it appears as though the Hi filaments are wrapping around the elongated structure of the SMC Wing.
### The 3D structure of the SMC
In Section 4.1, we find moderate evidence for alignment between the orientation of starlight polarisation and that of the Hi filaments in the low velocity portion of the northeastern Bar and the start of the Wing regions. In comparison, the match with other Hi velocity ranges (high velocity portion, as well as the full velocity range in both ray-tracing directions) are considerably poorer. This information can be used to help decipher the complex 3D structure of the SMC (see, e.g., Panopoulou et al., 2021, and references therein for similar cases in the Milky Way). In particular, since starlight polarisation is induced by the foreground dusty ISM, our results suggest that the aforementioned SMC regions are physically closer to us than the higher velocity portion. This is in agreement with the result of Mathewson et al. (1986), which found that the radial velocities of the sample of 26 SMC stars are consistently higher than that of the associated Ca ii absorption from the SMC ISM. Assuming that their stellar radial velocities correspond to the ambient gas radial velocities, this would mean that the lower velocity gas component of the SMC is physically closer to us. The same conclusion has also been reached from the many newer optical and/or ultraviolet absorption line studies (e.g., Danforth et al., 2002; Welty et al., 2012).
We note that the remaining areas of the SMC, namely the southwestern end of the Bar and the majority of the Wing, remain relatively unexplored. Future, deep starlight polarisation surveys covering the entirety of the SMC will be key to unravelling the overall 3D structure of the gaseous component of this galaxy.
Finally, we identify from Figures 8 and 9 that filamentary Hi orientations in the low- and high-velocity portions of the SMC are similar across large areas in both the Bar and the Wing regions. The mean and median \(\theta\) differences are about \(35^{\circ}\) and \(30^{\circ}\), respectively, with \(35\,\%\) of the evaluated areas having \(\theta\) differences of less than \(20^{\circ}\). We further perform a one-sample KS test against a uniform distribution (similar to Section 4.1.1) and obtain a \(p\)-value of \(2\times 10^{-23}\). This suggests that the two velocity components of the SMC are physically linked.
### Future prospects
#### 5.5.1 Ray-tracing analysis
To enable a detailed comparison between the new GASKAP-Hi data of the SMC (Pingel et al., 2022) and starlight polarisation data (Lobo Gomes et al., 2015), we develop the new ray-tracing analysis method (Section 3), with which we establish the alignment of Hi filaments with the \(\approx 150\,\)pc-scale magnetic field in the SMC Bar region (Section 4.1). The same analysis method can be applied to similar future Galactic and Magellanic studies using recent and future data such as:
* Starlight polarisation: SOUTH-POL (Magalhaes et al., 2012), Polar-Areas Stellar-Imaging in Polarization High-Accuracy Experiment (PASIPHAE; Tassis et al., 2018), and the Galactic Plane Infrared Polarization Survey (GPIPS; Clemens et al., 2020);
* Hi emission: GALFA-Hi (Peek et al., 2011, 2018), GASKAP-Hi (Dickey et al., 2013), The Hi/OH/Recombination line survey of the Milky Way (THOR; Beuther et al., 2016), and the Dominion Radio Astrophysical Observatory (DRAO) Hi Intermediate Galactic Latitude Survey (DHIGLS; Blagrave et al., 2017);
* Diffuse synchrotron emission: the Global Magneto-Ionic Medium Survey (GMIMS; Wolleben et al., 2009), the Polarisation Sky Survey of the Universe's Magnetism survey (POSSUM; Gaensler et al., 2010), the LOFAR Two-metre Sky Survey (LoTSS; Shimwell et al., 2017), the C-Band All Sky Survey (C-BASS; Jones et al., 2018), and the S-band Polarization All Sky Survey (S-PASS; Carretti et al., 2019).
In particular, our work here concludes that the \(\sigma_{p{\rm H\textsc{i}}}/\overline{p}_{\rm H\textsc{i}}\) parameter from GASKAP-Hi observations cannot be used to trace the small-scale magnetic field of the SMC, because of insufficient spatial resolution. We plan to apply the same analysis to future GASKAP-Hi data of the Milky Way. The much higher (\(\lesssim 1\,\)pc) spatial resolution will allow us to test whether the Hi data can be used as a good tracer of the turbulent magnetic field in the ISM.
#### 5.5.2 Polarised starlight and dust emission of the SMC
We identify the preferential alignment of Hi filaments with the magnetic fields traced by starlight polarisation in the northeastern end of the Bar region and the Bar-Wing transition region of the SMC. Subsequently, we use the GASKAP-Hi data to produce maps of the preferred orientation of Hi filaments across the SMC (Figures 6-10). These maps can be compared with future starlight polarisation and polarised dust emission data for further direct confirmation of the alignment of these Hi structures with the magnetic field.
In particular, the starlight data can be compared with the Hi emission on the near side of the SMC (Figure 10). This is especially intriguing in the SMC Wing, as this can shed light on both its 3D
(Section 5.4) and magnetic (Section 5.3) structures that are still not fully explored.
New, high spatial resolution observations of the polarised dust emission that probes the entire line of sight through the SMC, using forthcoming instruments such as the Prime-cam (CCAT-Prime Collaboration et al., 2023) and the Simons Observatory (Hensley et al., 2022), similar to the few other nearby galaxies observed with the Stratospheric Observatory for Infrared Astronomy (SOFIA) High-resolution Airborne Wideband Camera Plus (HAWC+; Jones et al., 2020; Lopez-Rodriguez et al., 2022), can be compared with our Hi filament orientation map (Figure 6). This will test whether the magnetic alignment of Hi filaments persists through the full SMC volume. If this is confirmed, we will have higher confidence in using the GASKAP-Hi data for a tomographic view of the SMC's plane-of-sky magnetic field structure. Our Stokes \(Q(v)\) and \(U(v)\) cubes prior to the ray-tracing steps above retain the information of magnetic fields along the line of sight decomposed by radial velocity (see, e.g., Clark, 2018; Clark & Hensley, 2019). These can be compared with the POSSUM data (Gaensler et al., 2010) also from ASKAP that measures the polarised synchrotron emission. Applications of continuum polarimetric techniques such as RM-Synthesis (Brentjens & de Bruyn, 2005) to broadband spectro-polarimetric data can similarly decompose the polarised synchrotron emission by Faraday depth (see, e.g., Van Eck et al., 2019). These two ASKAP datasets can enable 3D-3D comparisons that can lead to crucial knowledge of how magnetic fields link the diffuse ISM probed by the polarised continuum emission and the neutral ISM probed by the Hi filaments. This will be a major step forward compared to the recent work on M 51 in the 2D-2D domain (Fletcher et al., 2011; Kierdorf et al., 2020; Borlaff et al., 2021).
## 6 Conclusions
We investigate whether the Hi filaments in the SMC are aligned with the magnetic fields, as is the case in the solar neighbourhood in the Milky Way (e.g., McClure-Griffiths et al., 2006; Clark et al., 2014; Clark & Hensley, 2019). Our work has been enabled by the new, sensitive, high resolution Hi observations using the ASKAP telescope (Hotan et al., 2021) by the GASKAP-Hi survey (Dickey et al., 2013; Pingel et al., 2022), in addition to the recently released starlight polarisation catalogue of the SMC (Lobo Gomes et al., 2015). The RHT algorithm (Clark et al., 2014; BICEP/Keck Collaboration et al., 2022) is applied to the GASKAP-Hi cube to automatically identify filamentary structures, and the Lobo Gomes et al. (2015) data are re-analysed with a vector approach to extract the large- and small-scale magnetic field information.
We devise a new ray-tracing analysis to perform a careful comparison between the Hi filament orientation and the starlight polarisation data, and find a preferential alignment of the low radial velocity Hi filaments with the large-scale magnetic fields traced by starlight polarisation in two regions of the SMC: the northeastern end of the Bar region, and the Bar-Wing transition region. The remainder of the Bar region, as well as the Wing region, do not yet have sufficient coverage by starlight polarisation observations for such detailed comparisons with Hi data. This is the first time that the alignment of Hi filaments with the ambient magnetic field is seen across large spatial volume (\(\gtrsim 1\) kpc) and outside of the Milky Way. The results further suggest that the lower velocity Hi component in the SMC Bar and Bar-Wing transition area is physically closer to us than the higher velocity component, consistent with previous findings (Mathewson et al., 1986; Danforth et al., 2002; Welty et al., 2012).
We produce maps tracing the preferred orientation of Hi filaments across the SMC, revealing the highly complex structures likely shaped by a combination of the intrinsic internal gas motion of the SMC, tidal forces from the LMC, and stellar feedback mechanisms. These maps can further be compared with future measurements of the magnetic field structure of the SMC from starlight and dust polarisation, as well as with the diffuse polarised synchrotron emission from POSSUM (Gaensler et al., 2010). We also find that the orientation of the Hi structures between the low- and high-velocity portions of the SMC are similar, suggesting that the two velocity components are physically linked.
## Acknowledgements
We thank the anonymous referee for the comments, especially on the discussions on the statistical robustness of the bootstrapping procedures. We thank Christoph Federrath, Isabella Gerard, Gilles Jones, Marc-Antoine Miville-Deschenes, Snezana Stanimirovic, and Josh Peek for the fruitful discussions on this work. We thank Rainer Beck for the careful reading of the manuscript and the thoughtful suggestions that have improved the presentation of this paper. YKM thanks Michael Kramer and Sui Ann Mao for their gracious extended host at the Max-Planck-Institut fur Radioastronomie in Bonn, Germany. This research was partially funded by the Australian Government through the Australian Research Council. LU acknowledges support from the University of Guanajuato (Mexico) grant ID CIIC 164/2022. This scientific work uses data obtained from Inyarimhanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wejayiri Yamaji People as the Traditional Owners and native title holders of the Observatory site. The Australian SKA Pathfinder is part of the Australia Telescope National Facility which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. The Parkes radio telescope is part of the Australia Telescope National Facility which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We acknowledge the Wiradjuri people as the traditional owners of the Observatory site.
## Data Availability
The GASKAP-Hi Pilot Survey data are available on the CSIRO ASKAP Science Data Archive8 (CASDA). The auxiliary data products from this article will be shared on reasonable request to the corresponding author.
Footnote 8: [https://research.csiro.au/casda/](https://research.csiro.au/casda/).
|
2304.00968
|
Synchronous replication initiation of multiple origins
|
Initiating replication synchronously at multiple origins of replication
allows the bacterium Escherichia coli to divide even faster than the time it
takes to replicate the entire chromosome in nutrient-rich environments. What
mechanisms give rise to synchronous replication initiation remains however
poorly understood. Via mathematical modelling, we identify four distinct
synchronization regimes depending on two quantities: the duration of the
so-called licensing period during which the initiation potential in the cell
remains high after the first origin has fired and the duration of the blocking
period during which already initiated origins remain blocked. For synchronous
replication initiation, the licensing period must be long enough such that all
origins can be initiated, but shorter than the blocking period to prevent
reinitiation of origins that have already fired. We find an analytical
expression for the degree of synchrony as a function of the duration of the
licensing period, which we confirm by simulations. Our model reveals that the
delay between the firing of the first and the last origin scales with the
coefficient of variation (CV) of the initiation volume. Matching these to the
values measured experimentally shows that the firing rate must rise with the
cell volume with an effective Hill coefficient that is at least 20; the
probability that all origins fire before the blocking period is over is then at
least 92%. Our analysis thus reveals that the low CV of the initiation volume
is a consequence of synchronous replication initiation. Finally, we show that
the previously presented molecular model for the regulation of replication
initiation in E. coli can give rise to synchronous replication initiation for
biologically realistic parameters.
|
Mareike Berger, Pieter Rein ten Wolde
|
2023-04-03T13:36:58Z
|
http://arxiv.org/abs/2304.00968v1
|
# Synchronous replication initiation of multiple origins
###### Abstract
Initiating replication synchronously at multiple origins of replication allows the bacterium _Escherichia coli_ to divide even faster than the time it takes to replicate the entire chromosome in nutrient-rich environments. What mechanisms give rise to synchronous replication initiation remains however poorly understood. Via mathematical modelling, we identify four distinct synchronization regimes depending on two quantities: the duration of the so-called licensing period during which the initiation potential in the cell remains high after the first origin has fired and the duration of the blocking period during which already initiated origins remain blocked. For synchronous replication initiation, the licensing period must be long enough such that all origins can be initiated, but shorter than the blocking period to prevent reinitiation of origins that have already fired. We find an analytical expression for the degree of synchrony as a function of the duration of the licensing period, which we confirm by simulations. Our model reveals that the delay between the firing of the first and the last origin scales with the coefficient of variation (CV) of the initiation volume. Matching these to the values measured experimentally shows that the firing rate must rise with the cell volume with an effective Hill coefficient that is at least 20; the probability that all origins fire before the blocking period is over is then at least 92%. Our analysis thus reveals that the low CV of the initiation volume is a consequence of synchronous replication initiation. Finally, we show that the previously presented molecular model for the regulation of replication initiation in _E. coli_ can give rise to synchronous replication initiation for biologically realistic parameters.
## I Introduction
Passing on the genetic information from one generation to the next with high fidelity is crucial for the survival of every organism. Many bacteria contain several copies of their chromosome [1, 2, 3, 4, 5, 6]. In nutrient-rich environments, the bacterium _Escherichia coli_ initiates DNA replication of several copies of the same chromosome synchronously with very high precision [2, 3, 4, 7]. Already in the 1960s, Cooper and Helmstetter suggested that initiating new rounds of replication synchronously at several origins enables _E. coli_ to divide even faster than the fixed time it takes to replicate its entire chromosome [8]: Rounds of replication that started in the mother cell continue through cell division and finish only in the following generations (Fig. 1a). To ensure that all daughter cells obtain a fully replicated copy of the chromosome at these short doubling times, replication must be initiated at all chromosomes synchronously. Later, Skarstad et al. confirmed the prediction of Cooper and Helmstetter by counting the numbers of origins in rapidly growing cultures: they found that most cells have \(2^{n}\) (\(n=1,2,3\)) copies of their chromosome and only a small fraction of cells (2-7%) contain 3, 5, 6 or 7 chromosomes [9]. Recent single-cell measurements indeed show that _E. coli_ initiates replication synchronously at up to eight origins with very high precision in the fast growth regime [2, 7]. It remains, however, an open question how _E. coli_ achieves such a high degree of synchrony.
Replication initiation in _E. coli_ is controlled by the initiator protein DnaA [10, 11, 12, 13]. This protein can switch between two nucleotide-binding states, an inactive state in which DnaA is bound to ADP and an active one in which it is bound to ATP [14, 15, 16, 17, 18]. Both the inactive and active form can bind to an origin of replication, but binding of the inactive state is not sufficient: replication initiation requires the binding of ATP-DnaA [19, 20, 21, 12]. The evidence is accumulating that the origin binding of DnaA and hence replication initiation is controlled via two distinct mechanisms, titration and protein activation [13, 18, 22]. Titration of DnaA via high-affinity DnaA binding sites on the chromosome generates a cycle in the concentration of free DnaA that is available for binding to the origin [13, 23], while an activation switch induces a cycle in the fraction of active DnaA [24, 25, 18]. These two cycles together conspire to generate robust oscillations in the concentration of free and active DnaA [26]. This concentration of free and active DnaA forms the initiation potential of the cell, which determines the propensity of origin firing.
Initiation synchrony entails that all origins are initiated during each cell cycle, yet also only once per cell cycle. This is a major challenge because the cell needs to meet two potentially conflicting constraints. The requirement that all origins must fire during each cell cycle means that when the first origin fires, the initiation potential cannot go down immediately: it must continue to rise so that also the other origins can fire. On the other hand, the origin that has fired, should not fire again, even though the initiation potential is still rising. It appears that _E. coli_ employs two distinct mechanisms to meet these two constraints. The oscillations in the initiation potential, the concentration of free and active DnaA, constitute a global mechanism that induces not only the first origin to fire, but also prompts, and allows, the remaining origins to fire (Fig. 1a). To prevent the immediate reinitiation of origins that have already fired, a local mechanism is used.
The local mechanism that prevents the immediate reinitiation of newly replicated origins is based on the so-called sequestration of these origins. In _E. coli_, after an origin has initiated replication, the protein SeqA transiently binds to this origin and thus prevents new rounds of replication from starting again immediately at the same origin [27, 28]. When either of the two proteins required for sequestration after replication initiation, SeqA or Dam, is deleted, synchrony is lost and replication is initiated throughout the entire cell cycle [27, 29]. Blocking of recently initiated origins during a so-called blocking period is therefore an essential mechanism to ensure synchronous replication initiation (Fig. 1a).
The combination of global oscillations in the initiation potential, which induce all origins to fire, and local origin
Figure 1: **Model of stochastic replication initiation at each origin.** (a) Scheme of the cell cycle of _E. coli_: The volume of the cell grows exponentially with a growth rate \(\lambda\). At doubling times \(\tau_{\rm d}=\ln(2)/\lambda\) that are shorter than the time to replicate the entire chromosome and divide (C+D period), cells are typically born with an ongoing round of chromosomal replication. Replication is initiated stochastically at each origin (yellow circles) at times \(t_{1}\) and \(t_{2}\), respectively, and the replication forks (blue triangles) advance towards the terminus (grey bar) with a constant replication speed. In _E. coli_, all origins fire within a very short time interval, thus giving rise to synchronous replication initiation. To fire replication synchronously a global and a local mechanism are required: the global mechanism keeps the initiation potential high for a licensing period \(\tau_{1}\) (red shaded area), while the local mechanism based on SeqA prevents already initiated origins from refiring for a blocking period \(\tau_{b}>\tau_{1}\) (grey shaded area). In our model, cell division is triggered a fixed cycling time \(\tau_{\rm cc}=T_{\rm C}+T_{\rm D}\) after replication has been initiated. (b) We model the initiation potential \(y\) in the cell as a function of the volume per origin \(v=V/n_{\rm ori}\) via a Hill function with the Hill coefficient \(n\). At the critical volume per origin \(v^{*}\) the initiation potential equals the critical initiation potential \(y^{*}=0.5\). (c) Stochastic model of replication initiation at the origin as a function of the initiation potential in the cell: The origin can be in an open or in a closed configuration and replication can be initiated with a constant rate \(k_{\rm f}^{0}\) if the origin is open. The probability to be in the open state \(p_{\rm o}(y)\) depends on the initiation potential in the cell and is modelled via a Hill function with the Hill coefficient \(m\) and the critical active fraction of DnaA \(f^{*}\).
sequestration, which prevents the newly replicated origins from being reinitiated, appears to be an elegant solution to the problem of initiation synchrony. Yet, many questions remain. Newly replicated origins are only sequestered for a finite amount of time: the blocking period is about 10 minutes long [12, 29, 30]. Hence, after the initial origin has fired, the initiation potential must continue to rise for long enough to allow all the remaining origins to fire, yet it must also come down before this blocking period is over, because otherwise the newly replicated origin(s) will fire again after all. The licensing period during which origins can fire must thus be long enough for all origins to fire, yet also shorter than the blocking period (Fig. 1a). Given that the blocking period is only 10 minutes, this constraint is likely to pose a major challenge.
The problem of replication synchrony is compounded by the fact that the oscillations in the initiation potential are directly shaped by replication initiation itself [27]. When a new origin fires, the newly generated replisomes will stimulate the deactivation mechanism called RIDA [12, 18, 31, 32]. Moreover, a few minutes after an origin has initiated DNA replication, the locus _datA_ is duplicated, which enhances deactivation by stimulating the hydrolysis of ATP bound to DnaA [33, 34, 35, 16, 36]. Furthermore, the newly duplicated DNA will harbor new titration sites [13, 23], which also tend to reduce the initiation potential by lowering the concentration of cytoplasmic DnaA. How these molecular mechanisms cause the initiation potential to first continue to rise during the licensing period and then fall before the blocking period is over is far from understood.
To study how replication can be initiated synchronously at several origins, we first propose a minimal coarse-grained model in which an initiation potential rises when the cell reaches a critical volume per origin. Each origin can initiate stochastically with a firing probability that depends on the initiation potential. The model contains a licensing period during which the initiation potential rises and origins can fire, and a blocking period during which newly fired origins cannot fire again. By varying the duration of the licensing and the blocking period we reveal four regimes. Only one of these gives rise to robust synchronous replication initiation. In particular, in order to initiate synchronously, the licensing period must be long enough for all origins to fire, yet shorter than the blocking period. However, given that the measured blocking period is only 10 minutes [12, 29, 30], the licensing period must be shorter than 10 minutes. To fire all origins within this short blocking period with a success rate of 92%, the firing rate must rise with the volume with a Hill coefficient of at least 20, such that the average time between the first and last initiation event is less than 4 minutes, as measured experimentally by Skarstad et al. [9]. Our modelling thus provides a rationale for the question of why DNA replication initiation in _E. coli_ is so tightly controlled.
We then investigate how these general synchronization requirements could be realized in the bacterium _E. coli_, by replacing the coarse-grained initiation potential with our previously proposed molecular model, in which the free ATP-DnaA concentration oscillates over the course of the cell cycle [26]; to this end, we have extended this model to include stochastic origin firing. We find that if replication initiation is controlled by the DnaA activation switch [13, 18, 24, 25], initiation synchrony is only achieved for a narrow range of parameters, which is hard to reconcile with the experimentally measured values. Adding titration [11, 22, 23] and bringing the system into a regime where the DnaA concentration in the cytoplasm is low during most of the cell cycle significantly improves the degree of synchrony by sharpening the rise of the initiation potential at a critical volume per origin. This suggests that combining a concentration cycle based on titration with a protein activation cycle is crucial for initiating replication synchronously at multiple origins in the bacterium _E. coli_.
## II The licensing period must be non-zero and shorter than the blocking period
Our coarse-grained model to study the effect of stochastic replication initiation on the cell cycle consists of two parts: Firstly, we model the available amount of initiator proteins in the cell as an initiation potential \(y\) that depends on the volume per origin \(v(t)=V(t)/n_{\text{ori}}(t)\) according to
\[y(v)=\frac{v^{n}}{v^{n}+v^{*n}} \tag{1}\]
with the Hill coefficient \(n\) and the critical volume per origin \(v^{*}\) (Fig. 1b). Secondly, each origin is modelled as a two-state system that can switch stochastically between an open and a closed configuration (see Appendix VIII.1 for details). Exploiting that the origin is more likely to be in the open state when the initiation potential \(y\) in the cell is high, we assume that the probability to be in the open state \(p_{\text{o}}\) increases with the activation potential \(y\) as
\[p_{\text{o}}(y)=\frac{y^{m}}{y^{m}+y^{*m}} \tag{2}\]
with the Hill coefficient \(m\) and the critical initiation potential \(y^{*}\) (Fig. 1c). Molecularly, this non-linear opening probability \(p_{\text{o}}(y)\) could arise via cooperative binding of the initiator to the origin or via a Monod-Wyman-Changeux model, where the open configuration becomes more energetically favorable the more initiators bind to the origin. Assuming rapid opening and closing dynamics of the origin, the origin firing rate is given by the probability to be in the open state \(p_{\text{o}}\) times the maximal firing rate \(k_{\text{f}}^{0}\):
\[k_{\text{f}}=k_{\text{f}}^{0}\,p_{\text{o}} \tag{3}\]
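To make Eqs. 1-3 concrete, the following minimal Python sketch evaluates the firing rate as a function of the volume per origin. The function names and the numerical values of \(v^{*}\), \(n\), \(m\) and \(k_{\rm f}^{0}\) are illustrative placeholders rather than the parameter values used in the paper (only \(y^{*}=0.5\) follows Fig. 1b).

```python
import numpy as np

def initiation_potential(v, v_star, n):
    """Initiation potential y(v) as a Hill function of the volume per origin (Eq. 1)."""
    return v**n / (v**n + v_star**n)

def open_probability(y, y_star, m):
    """Probability p_o(y) that the origin is in the open configuration (Eq. 2)."""
    return y**m / (y**m + y_star**m)

def firing_rate(v, v_star=1.0, n=8, m=8, y_star=0.5, k_f0=100.0):
    """Origin firing rate k_f = k_f0 * p_o(y(v)) (Eq. 3); parameter values are placeholders."""
    y = initiation_potential(v, v_star, n)
    return k_f0 * open_probability(y, y_star, m)

# The firing rate switches on sharply around the critical volume per origin v* = 1.
v = np.linspace(0.6, 1.4, 9)
print(np.round(firing_rate(v), 2))
```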
To investigate the effect of stochastic replication initiation on the cell cycle of _E. coli_, we model the volume \(V(t)\) of the cell as an exponential function, \(V(t)=V_{\rm b}\,e^{\lambda\,t}\), where the growth rate \(\lambda=\ln(2)/\tau_{\rm d}\), with cell-doubling time \(\tau_{\rm d}\), is a model parameter. We track the number of chromosomes together with their state of replication (e.g. fully replicated or replication ongoing), and whenever an origin fires a new round of replication at time \(t^{*}\), a new division time is scheduled a constant cycling time \(\tau_{\rm cc}\) later, at \(\tau_{\rm div}=t^{*}+\tau_{\rm cc}\). The constant cycling time \(\tau_{\rm cc}\) is given by the sum of the time to replicate the entire chromosome \(T_{\rm C}\) and the time from the end of replication until cell division \(T_{\rm D}\) (Fig. 1a). Every set division time \(\tau_{\rm div}\) is linked to the chromosome that just initiated replication. This choice ensures that a cell never divides before the chromosome has been replicated (Fig. VIII.4). When the next division time is reached, the cell volume is divided by two, and one of the two daughter chromosomes is kept at random for the next cell cycle. When the cell inherits a chromosome that is already being replicated but has not yet reached its division time, it also inherits the next division time (Fig. VIII.4).
For synchronous replication initiation at several origins, the initiation potential must remain high after the first initiation event during a licensing time \(\tau_{\rm l}\), and already initiated origins must be prevented from reinitiation during a blocking time \(\tau_{\rm b}>\tau_{\rm l}\). To study the effect of stochastic replication initiation on the degree of synchrony, we consider the fast growth regime (\(\tau_{\rm d}<\tau_{\rm cc}\)), where there are typically two or more origins in the cell at the moment of replication initiation. In what follows below, we focus on the regime with two origins at the beginning of the cell cycle, yet also argue that these results generalize to regimes with more origins at the beginning of the cell cycle. At the critical volume per origin \(v^{*}\), the activation potential \(y(t)\) rises, and the probability to initiate replication \(p_{\rm o}(t)\) increases strongly (Fig. 2, lowest panel). When the first origin fires, the number of origins in the cell increases stepwise, and the volume per origin \(v(t)\) drops instantaneously (Fig. 2, second and third panel). If the initiation potential \(y(t)\) (and therefore also the opening probability \(p_{\rm o}(y)\)) followed the change in the volume per origin instantaneously, it would become very unlikely for the second origin to initiate replication as well, resulting in asynchronous replication initiation. We therefore introduce a licensing time \(\tau_{\rm l}\), during which the initiation potential does not yet sense the change in the volume per origin \(v(t)\) and continues to rise (Fig. 2). The opening probability \(p_{\rm o}(t)\) therefore rises sharply during this licensing time and the second origin also initiates replication stochastically. To prevent already initiated origins from firing again, we additionally introduce a blocking period \(\tau_{\rm b}\), during which replication cannot be initiated again at the same origin (Fig. 2, red crossed origins in the cartoon). At the end of the licensing time, the activation potential is updated according to the current number of origins in the cell, and it thus drops instantaneously (Fig. 2, fourth panel).
Figure 2: **Replication is initiated synchronously at several origins by introducing a blocking and a licensing period**. The volume \(V(t)\), the number of origins \(n_{\rm ori}(t)\), the volume per number of origins \(v(t)=V(t)/n_{\rm ori}(t)\), the initiation potential \(y(t)\) and the opening probability \(p_{\rm o}(t)\) as a function of time (in units of the doubling time of the cell \(\tau_{\rm d}\)). Every origin is initiated stochastically (dashed vertical grey lines) and during the blocking period \(\tau_{\rm b}\) (light blue shaded area), the newly replicated origins cannot be re-initiated. The initiation potential \(y(t)\) and the opening probability \(p_{\rm o}(t)\) continue to increase during the licensing period \(\tau_{\rm l}\) (grey shaded area), such that the remaining origins that have not yet initiated replication are also initiated. At the end of the licensing period, the initiation potential \(y(t)\) and therefore also the opening probability \(p_{\rm o}(t)\) instantaneously decreases to a lower value, making re-initiation highly unlikely. At cell division (vertical solid grey lines), the cell volume is divided by two and one of the two chromosomes is chosen at random for the next cell cycle. The cartoon below shows cells and their circular chromosome at four different moments of the cell cycle. Replication is initiated at the origin (yellow circle) and advances to the terminus (black bar) via two replication forks (light blue triangles). Blocked origins are marked by red crosses and the shaded color of the cell indicates whether the initiation potential in the cell is low (grey color) or high (red color). (See Table 1 for all parameters.)
For a sufficiently long licensing time and a blocking period that is longer than the licensing time, \(\tau_{\rm b}>\tau_{\rm l}\), we indeed obtain stable cell cycles with synchronous replication initiation events (Fig. 2).
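The simulation just described can be sketched in a few dozen lines of Python. The sketch below is a heavily simplified, discrete-time version of the model under stated assumptions: all parameter values are illustrative (they are not those of Table 1), the opening probability is collapsed into a single effective Hill function of the volume per origin, and cell division simply keeps half of the origins rather than picking one of the two chromosomes at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters only (not the values of Table 1).
tau_d  = 1.0                    # doubling time [h]
lam    = np.log(2) / tau_d      # growth rate
tau_cc = 1.0                    # cycling time C + D [h]
tau_l  = 0.08                   # licensing period [h]
tau_b  = 0.17                   # blocking period [h]; tau_b > tau_l (regime 4)
v_star, n_eff, k_f0 = 1.0, 30, 50.0
dt, t_end = 1e-3, 20.0

def firing_rate(v):
    """Per-origin firing rate k_f(v) = k_f0 * p_o(v) with a single effective Hill exponent."""
    return k_f0 * v**n_eff / (v**n_eff + v_star**n_eff)

V, n_ori = 0.9, 1
blocked_until = [0.0]           # end of the blocking period of each origin
division_times = []             # divisions scheduled tau_cc after each initiation
n_frozen, licence_end = n_ori, -np.inf

t = 0.0
while t < t_end:
    V *= np.exp(lam * dt)
    # During the licensing period the initiation potential does not yet "see" the
    # origins that have just fired, so the rate is evaluated with the old origin number.
    denom = n_frozen if t < licence_end else n_ori
    rate = firing_rate(V / denom)
    for i in range(n_ori):
        if t >= blocked_until[i] and rng.random() < rate * dt:
            if t >= licence_end:                  # first firing of a new cascade
                n_frozen, licence_end = n_ori, t + tau_l
            blocked_until[i] = t + tau_b          # the fired origin and its new copy
            blocked_until.append(t + tau_b)       # are both blocked for tau_b
            n_ori += 1
            division_times.append(t + tau_cc)     # division tau_cc after initiation
    if division_times and t >= division_times[0]:
        division_times.pop(0)
        V /= 2.0
        n_ori = max(1, n_ori // 2)                # crude bookkeeping of the chromosomes
        blocked_until = blocked_until[:n_ori]
    t += dt

print(f"after {t:.1f} h: V = {V:.2f}, n_ori = {n_ori}")
```

The same loop could be reused to scan \(\tau_{\rm l}\) and \(\tau_{\rm b}\) (with a shorter doubling time, so that at least two origins are present at initiation) and thereby map out the synchronization regimes discussed below.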
To quantify the degree of synchrony of replication initiation for a given parameter set, we define the degree of synchrony \(s\) as the change of the number of origins \(\Delta n_{\rm ori}\) from the beginning of the initiation period \(t_{\rm i}\) to the end of the initiation period \(t_{\rm f}\), relative to the number of origins \(n_{\rm ori}(t_{\rm i})\) at the beginning of the initiation period (Fig. 3a):
\[s=\frac{\Delta n_{\rm ori}}{n_{\rm ori}(t_{\rm i})}=\frac{n_{\rm ori}(t_{\rm f} )-n_{\rm ori}(t_{\rm i})}{n_{\rm ori}(t_{\rm i})} \tag{4}\]
The beginning of the initiation period \(t_{\rm i}\) is given by the time at which the first origin fires and the initiation period ends at \(t_{\rm f}=t_{\rm i}+\tau_{\rm l}\), when the licensing period of the first origin that has fired is over. When the degree of synchrony \(s\) is one, replication is initiated synchronously, as all origins that were present at the beginning of the initiation period have fired (Fig. 1a). For \(s<1\) or \(s>1\), replication is under- or over-initiated, respectively (Fig. 3a). We measure the degree of synchrony \(s\) for many cell cycles to obtain the average degree of synchrony \(\langle s\rangle\) for any parameter set.
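As a concrete illustration of Eq. 4, the short sketch below groups a list of simulated firing events into initiation cascades (each cascade starting when an origin fires and lasting for \(\tau_{\rm l}\)) and scores each cascade; the event log in the example is hypothetical.

```python
import numpy as np

def score_cascades(fire_times, ori_counts, tau_l):
    """Degree of synchrony s (Eq. 4) for every initiation cascade.

    fire_times: sorted firing times from a simulation run.
    ori_counts: number of origins present just before each firing event.
    tau_l:      licensing period; a cascade runs from t_i to t_f = t_i + tau_l.
    """
    s_values, i = [], 0
    while i < len(fire_times):
        t_i, n_i = fire_times[i], ori_counts[i]
        j = i
        while j < len(fire_times) and fire_times[j] < t_i + tau_l:
            j += 1
        s_values.append((j - i) / n_i)   # each firing event adds exactly one origin
        i = j
    return np.array(s_values)

# Hypothetical event log: two cascades, each starting with two origins that both fire.
fire_times = np.array([1.00, 1.03, 2.00, 2.05])
ori_counts = np.array([2, 3, 2, 3])
print(score_cascades(fire_times, ori_counts, tau_l=0.08))   # -> [1. 1.]
```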
By varying the duration of the licensing and the blocking period, we find four different synchronization regimes (Fig. 3). All simulations start with a single, fully replicated chromosome and if replication is initiated in perfect synchrony, the system settles to a state, where the number of origins oscillates between two and four. However only in regime four is replication initiated synchronously (Fig. 3b, regime 4). When the blocking period \(\tau_{\rm b}\) is zero but the licensing period \(\tau_{\rm l}\) is larger than zero, replication is severely over-initiated, such that no stable cell cycles can be obtained (Fig. 3b, regime 1, grey fields). If on the other hand the licensing period \(\tau_{\rm l}\) is zero or very short and the blocking period \(\tau_{\rm b}\) is larger than \(\tau_{\rm l}\), we obtain a highly under-synchronized cell-cycle: After each initiation event, the initiation potential drops rapidly, preventing further initiation events. This results in periodic, individual initiation events throughout the entire cell cycle (Fig. 3b, regime 2). When both the licensing and the blocking period are non-zero and the licensing period is longer than the blocking period, \(\tau_{\rm l}>\tau_{\rm b}\), origins that have already fired can fire again after the end of the licensing period. This results in initiation events where all origins fire synchronously, but with too many origin firing events. As can be seen in Fig. 3b, in this third regime the number of origins goes from one to four during one initiation duration instead of oscillating between two and four. We therefore call this regime "over-synchronized". Replication is only initiated synchronously once per cell cycle when the licensing period \(\tau_{\rm l}\) is sufficiently large and the blocking period is even larger \(\tau_{\rm b}>\tau_{\rm l}\) (Fig. 3b, regime 4).
## III A steep rise in the origin opening probability is essential
A key question is what controls the size of the synchronization regime 2. While the transition from the over-initiation (Fig. 3b, regime 3) to the perfect synchronization regime (Fig. 3b, regime 4) is sharp and clearly defined by the requirement that \(\tau_{\rm b}>\tau_{\rm l}\), the transition from the under-synchronization (Fig. 3b, regime 2) to the perfect synchronization regime (Fig. 3b, regime 4) is smooth and there is no clear separation between these two regimes. In the following, initially still focusing on the regime with two origins at the beginning of the cell cycle, we show that the average degree of synchrony \(\langle s\rangle\) at different delay periods \(\tau_{\rm l}\) and Hill coefficients \(n\) and \(m\) can be derived from the probability that two independent origins fire within a time interval \(\Delta t<\tau_{\rm l}\).
To calculate the probability that two independent firing events happen within a time interval \(\Delta t\), we first derive an analytical expression for the instantaneous firing probability \(k_{\rm f}(t)=k_{\rm f}^{0}\,p_{\rm o}(t)\). In our model, the opening probability \(p_{\rm o}(y)\) depends indirectly on the time-dependent volume per origin \(v(t)\) via the activation potential \(y(v)\), see equations 1 and 2. The opening probability as a function of the volume per origin \(v\) can however be approximated by a Hill function (see Appendix VIII.2 for derivation)
\[p_{\rm o}(v)\approx\frac{v^{n_{\rm eff}}}{v^{n_{\rm eff}}+v^{*n_{\rm eff}}} \tag{5}\]
with the effective Hill coefficient
\[n_{\rm eff}=\frac{n\,m}{2}. \tag{6}\]
This is a good approximation for the opening probability \(p_{\rm o}(y(v))\), when both the Hill coefficient of the activation potential and that of the opening probability, \(n\) and \(m\), respectively, are relatively high (see Eqs. 1 and 2 and Fig. VIII.1c and d). The firing rate is then given by equation 3 with the approximate opening probability \(p_{\rm o}(v)\) from equation 5. In the following, we use the procedure proposed in Ref. [2] where the maximal firing rate \(k_{\rm f}^{0}\) is chosen such that the average initiation volume \(\langle v^{*}\rangle\) equals the theoretical initiation volume \(v^{*}\) in equation 5 (See Appendix VIII.3). Using this analytical approximation for the opening probability in the regime of sufficiently high Hill coefficients \(n\) and \(m\), we can now calculate the probability that two independent initiation events at times \(t_{1}\) and \(t_{2}>t_{1}\) happen within a time interval \(\Delta t=t_{2}-t_{1}\leq\tau\) (Appendix VIII.4). In order to compare this probability \(\langle P(\Delta t<\tau_{\rm l})\rangle\) to the degree of synchrony obtained from the simulations in the growth regime where two origins are present at the beginning of an initiation event, we re-scale the probability to range from \(s_{\rm min}=0.5\) to \(s_{\rm max}=1\):
\[s_{\rm th}=0.5+\langle P(\Delta t<\tau_{\rm l})\rangle\times 0.5 \tag{7}\]
The average degree of synchrony \(\langle s\rangle\) as a function of the licensing period \(\tau_{\rm l}\) for different Hill coefficients \(n\) and
\(m\) is indeed very well approximated by \(s_{\text{th}}\) (Fig. 4a). The transition from the under-synchronized to perfect synchronization regime in Figure 3b is therefore given by the probability that two independent origin firing events happen within a short time window given by the licensing time \(\tau_{\text{l}}\). The higher the effective Hill coefficient \(n_{\text{eff}}\), the higher the degree of synchrony for a given delay period \(\tau_{\text{l}}\) (Fig. 4a). The degree of synchrony \(\langle s\rangle\) increases with the effective Hill coefficient because that raises the firing rate more steeply, making it more likely that the two origins fire within the licensing time \(\tau_{\text{l}}\).
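A quick numerical sanity check of the approximation in Eqs. 5 and 6 is sketched below; the Hill coefficients are arbitrary illustrative values, not the fitted ones, and the script simply prints the largest deviation between the composed and the effective Hill curves over a window around \(v^{*}\).

```python
import numpy as np

def p_open_exact(v, v_star=1.0, n=8, m=8, y_star=0.5):
    """Composed Hill functions p_o(y(v)) from Eqs. 1 and 2."""
    y = v**n / (v**n + v_star**n)
    return y**m / (y**m + y_star**m)

def p_open_approx(v, v_star=1.0, n_eff=32.0):
    """Single effective Hill function of Eq. 5 with n_eff = n * m / 2 (Eq. 6)."""
    return v**n_eff / (v**n_eff + v_star**n_eff)

n, m = 8, 8
v = np.linspace(0.8, 1.2, 401)
deviation = np.abs(p_open_exact(v, n=n, m=m) - p_open_approx(v, n_eff=n * m / 2))
print(f"max |exact - approx| for n = m = {n}: {deviation.max():.3f}")

# Eq. 7 then converts a probability of co-firing within tau_l into the rescaled synchrony:
s_th = 0.5 + 0.5 * 0.9   # e.g. P(Delta t < tau_l) = 0.9 gives s_th = 0.95
```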
While the degree of synchrony \(\langle s\rangle\) increases with \(\tau_{\text{l}}\), \(\tau_{\text{l}}\) cannot be longer than the blocking period \(\tau_{\text{b}}\), because otherwise, origins that have already fired will fire again. The blocking period thus bounds \(\tau_{\text{l}}\). In the bacterium _E. coli_, the origin of the blocking period \(\tau_{\text{b}}\) is well understood: The protein SeqA can bind to specific sites on the newly replicated origins which overlap with the binding sites for the initiation protein and thus prevent it from reinitiating. After about ten minutes, the protein SeqA unbinds and new rounds of replication can start again [12; 29; 30]. As the licensing time must be shorter than the blocking period to prevent over-initiation (Fig. 3b, regime 3), the licensing time in _E. coli_ must be less than \(\tau_{\text{b}}^{\text{exp}}=\)10 min. Fig. 4a shows this puts a major constraint on the Hill coefficient: to get a degree of synchrony \(\langle s\rangle\) that is above 95%, the effective Hill coefficient must be at least \(n_{\text{eff}}=30\).
The question remains what effective Hill coefficient \(n_{\text{eff}}\) is consistent with experiments. Interestingly, Skarstad et al. have measured the time between the first and the last firing event, which we can compare against our theoretical prediction [9]. However, to do so, we must first examine the dependence on the growth rate, because Skarstad et al. performed their measurements at a higher growth rate. Fig. VIII.5 shows that while the average degree of synchrony \(\langle s\rangle\) as a function of the licensing time \(\tau_{\text{l}}\) varies strongly with the effective Hill coefficient \(n_{\text{eff}}\), it
Figure 3: **Replication is only initiated synchronously when the licensing period is sufficiently long, yet shorter than the blocking period.** (a) The degree of synchrony \(s\) of an initiation cascade is given by the number of origins at the end of the initiation cascade \(n_{\text{ori}}(t_{\text{f}})\) minus the number of origins at the beginning of the initiation cascade \(n_{\text{ori}}(t_{\text{i}})\), relative to the number of origins at the beginning of the initiation cascade \(n_{\text{ori}}(t_{\text{i}})\). An initiation cascade begins at the moment when the first origin fires and ends after the licensing time \(\tau_{\text{l}}\). Replication is initiated synchronously when all origins that were present at the beginning of the cascade have fired exactly once during the cascade (\(s=1\)), and replication is under- or over-initiated when fewer or more origins have initiated, respectively. (b) The average degree of synchrony \(\langle s\rangle\) as a function of the licensing period \(\tau_{\text{l}}\) and the blocking period \(\tau_{\text{b}}\). The effective Hill coefficient \(n_{\text{eff}}\) is obtained by fitting the opening probability \(p_{\text{o}}(y(v))\) to a Hill function \(p_{\text{o}}(v)\) (Appendix VIII.2). For each parameter set, the average degree of synchrony was obtained from \(N=5000\) consecutive cell cycles. We show example time traces of the number of origins as a function of time (in units of the doubling time of the cell \(\tau_{\text{d}}\)) for four different synchronization regimes as indicated in the heatmap. When no cell cycle could be obtained, the field in the heatmap is marked in grey. (See Table 1 for all parameters.)
is fairly independent of the doubling time of the cell \(\tau_{\rm d}\).
Given that \(\langle s\rangle\) as a function of \(\tau_{\rm l}\) is fairly independent of the growth rate, we now examine the data of Skarstad et al. [9]. To obtain an experimental estimate for the effective Hill coefficient and thus for the average degree of synchrony, we calculate the average time interval between the first and last initiation event \(\langle\Delta t\rangle\) (see Appendix VIII.5) and compare it to the experiments. Skarstad et al. find that this time is on average \(\langle\Delta t_{\rm exp}\rangle\approx 3\) min with an upper limit of \(\Delta t_{\rm exp}^{\rm max}\approx 4\) min [9]. Our theory predicts that to fire two initiation events within an average time interval of \(\langle\Delta t\rangle=3-4\) min, the effective Hill coefficient must be in the range \(n_{\rm eff}=29-38\) (Fig. 4b, vertical grey dotted lines). Interestingly, the dependence of \(\langle\Delta t\rangle\) on \(n_{\rm eff}\) closely tracks that of the coefficient of variation (CV) of the initiation volume (Fig. 4b). The \(\langle\Delta t\rangle=3-4\) min measured by Skarstad et al. corresponds to a coefficient of variation of the initiation volume of CV\(\approx 0.05-0.06\). This agrees fairly well with the experimental finding that the initiation volume is one of the most tightly controlled cell cycle parameters with CV=0.08-0.1 [2, 37]. A CV of 0.1 as measured by Ref. [2] corresponds to \(n_{\rm eff}\approx 20\) (Fig. 4b, see also calculation in Ref. [2]) and would thus only result in a relatively low degree of synchrony of less than \(\langle s\rangle=0.92\), corresponding to a probability of initiating synchronously of \(\langle P(\Delta t<\tau_{\rm l})\rangle\equiv P_{\rm s}=84\%\) (see equation 7). Recent experiments show however that the contribution from the intrinsic noise in replication initiation to the CV is only about CV\({}_{\rm int}\)=0.04-0.05 [7], in even better agreement with the Skarstad data (Fig. 4b). Our model, which only concerns the effect of intrinsic noise, then predicts that for this low CV\({}_{\rm int}\) the effective Hill coefficient \(n_{\rm eff}\) must be at least 40 (Fig. 4b), which means that at least \(P_{\rm s}=\)98% of the initiation events happen synchronously within a period of 10 min, corresponding to \(\langle s\rangle=0.99\) (Fig. 4a).
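The mapping between \(n_{\rm eff}\), \(\langle\Delta t\rangle\) and the CV of the initiation volume can also be explored numerically. The sketch below draws first-firing times for independent origins whose volume per origin grows exponentially, using the standard inversion of the cumulative hazard; \(k_{\rm f}^{0}\), \(v^{*}\) and the other numbers are illustrative placeholders (in particular, \(k_{\rm f}^{0}\) is held fixed here rather than calibrated so that \(\langle v^{*}\rangle=v^{*}\)), so the printed values only illustrate the qualitative trend that both \(\langle\Delta t\rangle\) and the CV shrink as \(n_{\rm eff}\) grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_firing_times(n_eff, n_samples, k_f0=50.0, v_star=1.0,
                        tau_d=1.0, v0=0.5, dt=1e-4, t_max=3.0):
    """Sample first-firing times of independent origins with rate k_f0 * Hill_{n_eff}(v(t)),
    where v(t) = v0 * 2**(t / tau_d), by inverting the cumulative hazard."""
    t = np.arange(0.0, t_max, dt)
    v = v0 * 2.0 ** (t / tau_d)
    rate = k_f0 * v**n_eff / (v**n_eff + v_star**n_eff)
    cum_hazard = np.cumsum(rate) * dt
    u = rng.exponential(size=n_samples)          # P(T > t) = exp(-cumulative hazard)
    idx = np.minimum(np.searchsorted(cum_hazard, u), len(t) - 1)
    return t[idx], v[idx]

def synchrony_statistics(n_eff, n_pairs=5000):
    """<Delta t> between two independent origins and the CV of the initiation volume."""
    t_all, v_all = sample_firing_times(n_eff, 2 * n_pairs)
    mean_dt = np.abs(t_all[:n_pairs] - t_all[n_pairs:]).mean()
    cv = v_all.std() / v_all.mean()
    return mean_dt, cv

for n_eff in (10, 20, 40):
    mean_dt, cv = synchrony_statistics(n_eff)
    print(f"n_eff = {n_eff:3d}:  <Delta t> = {mean_dt:.3f} tau_d,  CV(v*) = {cv:.3f}")
```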
Before we conclude, we must discuss one key parameter, which is the maximal firing rate \(k_{\rm f}^{0}\). In our theoretical model, we covaried \(k_{\rm f}^{0}\) with \(n_{\rm eff}\) to keep the average
Figure 4: **The experimentally observed high precision of replication initiation is required to ensure synchronous replication initiation at multiple origins.** (a) The average degree of synchrony \(\langle s\rangle\) as a function of the licensing time \(\tau_{\rm l}\) for varying effective Hill coefficients \(n_{\rm eff}\) (with \(n=m=\sqrt{2\,n_{\rm eff}}\)) from the simulations (solid lines) agrees well with the theoretical prediction derived in the Appendix VIII.4 (dashed lines). The small difference between the simulations and theory at very low delay periods arises from the fact that while in the theory for two synchronous firing events, the minimal degree of synchrony is \(s_{\rm min}=0.5\), in the simulations there can be more origins at the beginning of an initiation cascade, leading to a lower degree of synchrony \(s_{\rm min}<0.5\). In these simulations, the blocking period \(\tau_{\rm b}\) is set larger than all tested licensing \(\tau_{\rm l}\) periods (\(\tau_{\rm b}=0.25\) h), such that over-initiation events cannot occur. The maximal firing rate \(k_{\rm f}^{0}\) is set such that the average initiation volume \(\langle v^{*}\rangle\) equals the theoretical initiation volume \(v^{*}\) in equation 5 as explained in Appendix VIII.3. The experimentally measured blocking period \(\tau_{\rm b}^{\rm exp}\) is marked as a grey vertical dotted line. For each parameter set, the average degree of synchrony was obtained from \(N=5000\) consecutive cell cycles. (b) The theoretical average time interval between two consecutive firing events \(\langle\Delta t\rangle\) (pink line and axes) and the coefficient of variation of the initiation volume (CV, blue line and axes) as a function of the effective Hill coefficient \(n_{\rm eff}\) (see Appendix VIII.5 for derivation). Skarstad et al. [9] found experimentally that the average time interval to fire all origins in the B/r A. _coli_ strain is about \(\langle\Delta t_{\rm exp}\rangle=\)3 min with an upper estimate of \(\Delta t_{\rm exp}^{\rm max}=\)4 min (horizontal dotted pink lines). The effective Hill coefficient lies therefore in the range \(n_{\rm eff}=29-38\) (vertical dotted grey lines), corresponding to a coefficient of variation of CV=0.05-0.07. This agrees well with the experimentally measured precision of the initiation volume of CV\(\leq\)0.1 [2; 7; 37]. Interestingly, the average degree of synchrony at \(n_{\rm eff}=30\) and \(n_{\rm eff}=40\), respectively, is given by \(\langle s\rangle(n_{\rm eff}=30)=0.975\) and \(\langle s\rangle(n_{\rm eff}=40)=0.996\), corresponding to the probabilities to fire all origins synchronously of \(\langle P(\Delta t<\tau_{\rm l})\rangle(n_{\rm eff}=30)\equiv P_{\rm s}(n_{ \rm eff}=30)=95.5\%\) and \(P_{\rm s}(n_{\rm eff}=40)=98.9\%\). This prediction of the degree of synchrony agrees well with the qualitative experimental observation that in _E. coli_ DNA replication is typically initiated synchronously at multiple origins. Our model therefore provides a rationale for the experimentally observed high precision of replication initiation. (See Table 1 for all parameters.)
initiation volume per origin \(\langle v^{*}\rangle\) constant and equal to \(v^{*}\) of Eq. 5, following the procedure of Wallden et al. [2]. Fig. VIII.6a shows \(\langle\Delta t\rangle\) in our theoretical model as a function of \(k_{\rm f}^{0}\) and \(n_{\rm eff}\) separately (thus without enforcing the constraint \(\langle v^{*}\rangle=v^{*}\)). While \(\langle\Delta t\rangle\) decreases with both \(k_{\rm f}^{0}\) and \(n_{\rm eff}\), there is a minimal \(n_{\rm eff}\) that is necessary to reach a given \(\langle\Delta t\rangle\), corresponding to the limit \(k_{\rm f}^{0}\to\infty\) (see inset Fig. VIII.6a). The Hill coefficient necessary to reach the \(\langle\Delta t\rangle\) that matches the value measured by Skarstad et al. is lower than that in the above procedure in which \(k_{\rm f}^{0}\) and \(n_{\rm eff}\) are covaried (corresponding to the diagonal in Fig. VIII.6a), but it is still very high, around \(n_{\rm eff}\approx 20\) (Fig. VIII.6a). Fig. VIII.6b shows that in this limit, \(k_{\rm f}^{0}\to\infty\) and \(n_{\rm eff}=20\), the degree of synchrony is very high, with \(P_{\rm s}=92\%\). Our model of stochastic replication initiation thus provides a rationale for the experimentally observed high precision of DNA replication initiation in _E. coli_: Given the constraint set by the duration of the blocking period, the system requires a very high Hill coefficient in order to initiate replication synchronously. Since increasing the Hill coefficient beyond this already large value becomes progressively harder, it seems that the system operates close to what is theoretically possible given the duration of the blocking period.
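The calibration step mentioned above, choosing \(k_{\rm f}^{0}\) such that the mean initiation volume per origin equals \(v^{*}\), can be sketched as follows; this is our own minimal reimplementation of the idea (with illustrative growth parameters), mimicking the procedure of Ref. [2] rather than reproducing the code used for the paper.

```python
import numpy as np

def mean_initiation_volume(k_f0, n_eff, v_star=1.0, tau_d=1.0, v0=0.7,
                           dt=1e-4, t_max=4.0):
    """Mean volume per origin at first firing for the rate k_f0 * Hill_{n_eff}(v(t)),
    with v(t) = v0 * 2**(t / tau_d), computed from the first-passage density on a grid."""
    t = np.arange(0.0, t_max, dt)
    v = v0 * 2.0 ** (t / tau_d)
    rate = k_f0 * v**n_eff / (v**n_eff + v_star**n_eff)
    survival = np.exp(-np.cumsum(rate) * dt)     # probability that the origin has not yet fired
    pdf = rate * survival                        # first-firing time density
    return np.sum(v * pdf) / np.sum(pdf)

def calibrate_k_f0(n_eff, v_star=1.0, lo=1.0, hi=1e8):
    """Log-scale bisection for the k_f0 at which the mean initiation volume equals v_star."""
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        if mean_initiation_volume(mid, n_eff) > v_star:
            lo = mid                             # origins fire too late on average: raise k_f0
        else:
            hi = mid
    return np.sqrt(lo * hi)

for n_eff in (10, 20, 40):
    print(f"n_eff = {n_eff:3d}:  calibrated k_f0 ~ {calibrate_k_f0(n_eff):.3g} per tau_d")
```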
## IV Initiation synchrony in molecular activation switch model for _E. coli_
Our coarse-grained model of replication initiation revealed general requirements for initiating replication synchronously at several origins. It remains however an open question how these requirements are implemented on a molecular level in different organisms. In _E. coli_, both a protein activation cycle and a concentration cycle are required for robust replication initiation at all growth rates [26]. In the following, we first address the question whether a protein activation cycle alone, i.e. without the help of a concentration cycle, can yield synchronous replication. To this end, we will study the so-called LDDR model, which we have developed previously [26] (Fig. 5a). This model contains activation of DnaA via the lipids and the chromosomal sites DARS1/2 and deactivation of DnaA via the chromosomal site datA and the replication-associated mechanism of Regulatory Inactivation of DnaA (RIDA) [26]. We show that this cycle alone can induce synchronous replication initiation, but only over a very limited parameter regime. In a second step, we show that adding a concentration cycle via titration sites can significantly enhance the degree of synchrony.
To test the effect of stochastic origin firing in the LDDR model, we replace the abstract initiation potential \(y(v)\) we used in the coarse-grained model with the LDDR model for the ATP-DnaA fraction \(f(t)\) in the cell. The opening probability \(p_{\rm o}(f)\) is again modelled as a simple Hill function according to equation 2. Motivated by the experimental observation that there are 10 sites for DnaA binding to the origin [13], the Hill coefficient was chosen to be \(m=10\); in addition, the critical fraction in Eq. 2 was chosen to be \(f^{*}=0.5\). Moreover, the maximal firing rate \(k_{\rm f}^{0}\) was set to a large value, i.e. \(k_{\rm f}^{0}=1000\) h\({}^{-1}\), such that the system is in the regime where the degree of synchrony is not limited by \(k_{\rm f}^{0}\), but only by the dynamics of the activation cycle \(f(t)\). As in the coarse-grained model, already initiated origins are blocked transiently during a blocking period of \(\tau_{\rm b}=\)10 min. In contrast to the coarse-grained model, where the initiation potential drops instantaneously to a very low value after the end of the licensing period, in the LDDR model the active fraction \(f\) follows from the temporal dynamics of the antagonistic interplay between DnaA activation and deactivation. In the LDDR model, the licensing period is thus not imposed, as in the coarse-grained model above, but is implicit in the dynamics of the LDDR model. Yet, to quantify the degree of synchrony, we need to define an effective initiation period \(\tau_{\rm i}\), akin to the licensing period \(\tau_{\rm l}\) in the coarse-grained model (see Eqs. 4 and 7). We define \(\tau_{\rm i}\) to be a fraction of the cell cycle time \(\tau_{\rm d}\): \(\tau_{\rm i}=\alpha\,\tau_{\rm d}\). While \(\alpha\) cannot be defined uniquely, we show in Appendix VIII.6 that the degree of synchrony is fairly robust to the precise choice of \(\alpha\). In the following, we therefore choose \(\alpha=0.4\), such that \(\tau_{\rm i}=0.4\,\tau_{\rm d}\).
The LDDR model can indeed give rise to synchronous replication initiation at multiple origins, but only for a small range of parameters: When the (de)activators _DARS1_, _DARS2_ and _datA_ are located at the experimentally measured positions on the chromosome, replication is initiated asynchronously when RIDA starts directly after an origin has fired (Fig. 5b at \(\tau_{\rm rida}=0\) h). As RIDA is a strong deactivator, it causes the active fraction to drop rapidly after the first origin has been initiated and thus prevents other origins from firing as well. By varying both the position of _datA_ on the chromosome and the time at which RIDA starts after an origin has fired, we find that replication can be initiated synchronously in the LDDR model for a small range of parameters: At the experimentally measured replication time of _datA_ of \(\tau_{\rm datA}=0.13\,\)h \(\approx 8\) min [13, 33], replication is initiated with a high degree of synchrony when the deactivation rate of RIDA becomes high with a delay of \(\tau_{\rm rida}=0.1\) h=6 min after the origin has fired (Fig. 5b and c). The closer the site _datA_ is to the origin, the later RIDA should start for synchronous replication initiation (Fig. 5b).
It remains however unclear what molecular mechanism could cause a delay in the onset of RIDA of about 6 minutes. In RIDA, the DNA polymerase clamp on newly synthesized DNA forms a complex with ADP and the Hda protein. The resulting ADP-Hda-clamp-DNA complex can bind ATP-DnaA and stimulates ATP hydrolysis, yielding ADP-DnaA [32, 31]. It is conceivable that Hda binding is slow, but whether it would yield a delay of about 6 minutes is far from clear. For experimentally realistic parameters, the LDDR model therefore appears insufficient to explain how replication is initiated synchronously in _E. coli_.
## V Titration can enhance the degree of synchrony of an activation switch
In _E. coli_, DNA replication initiation is not only controlled via an activation switch but also via titration [10, 11]. To study the effect of titration on the degree of synchrony, we add homogeneously distributed titration sites on the chromosome to the LDDR model [26]. In the LDDR-titration model, the initiation potential is given by the free ATP-DnaA concentration \([D]_{\mathrm{ATP,f}}\) in the cell and both oscillations in the active fraction \(f\) and in the free DnaA concentration \([D]_{\mathrm{T,f}}\) contribute to regulating replication initiation. We again model the stochastic opening probability of the origin as a Hill function (equation 2) with Hill coefficient \(m=10\). The critical initiation potential \(y^{*}\) is now given by a critical free ATP-DnaA concentration \([D]_{\mathrm{ATP,f}}^{*}\) at which ATP-DnaA binds cooperatively to the origin. We here neglect the effect of the relatively small number of about 10-20 DnaA proteins that are bound to the origin on the free DnaA concentration. As explained in Ref. [26], we set the parameters (by varying the lipid activation rate \(\alpha_{\mathrm{l}}\)) such that the initiation volume of the switch \(v_{\mathrm{s}}^{*}\) and the initiation volume of the titration mechanism \(v_{\mathrm{t}}^{*}\) are approximately the same. This optimal choice ensures that both the free concentration and the active fraction rise at the same critical volume per origin, thus increasing the amplitude of the oscillations in the free ATP-DnaA concentration.
Fig. 6a/c show the time traces of the model that combines titration with the activation switch. The small jump in the total free DnaA concentration upon cell division results from the following interplay. Firstly, only one out of two chromosomes is selected per daughter cell
Figure 5: **The LDDR model can ensure a high degree of synchronous replication initiation for a narrow range of parameters.** (a) In the Lipid-_DatA-DARS1/2_-RIDA (LDDR) model, replication forks overlap and RIDA is the main deactivator in combination with the activators _DARS1_ and _DARS2_. (b) The average degree of synchrony \(\langle s\rangle\) as a function of the replication time of the site _datA_, \(\tau_{\mathrm{datA}}\), and the onset time of RIDA, \(\tau_{\mathrm{rida}}\). The sites _DARS1_ and _DARS2_ are replicated at the experimentally measured times \(\tau_{\mathrm{d1}}=0.25\,\mathrm{h}=15\) min and \(\tau_{\mathrm{d2}}=0.4\,\mathrm{h}=24\) min, respectively. Replication is only initiated synchronously for a small range of parameters. When the site _datA_ is replicated after the experimentally measured time of \(\tau_{\mathrm{datA}}=0.13\,\mathrm{h}\approx 8\) min (red horizontal line), replication in the LDDR model is only initiated synchronously if RIDA starts only about 6 minutes after the origin is initiated. It is however not clear what could cause a delay of 6 minutes in the onset of RIDA. (c) The volume \(V(t)\), the number of origins \(n_{\mathrm{ori}}(t)\) and the ATP-DnaA fraction \(f(t)\) as a function of time (in units of the doubling time of the cell \(\tau_{\mathrm{d}}\)) for the parameter combination marked in b. The large amplitude oscillations in the active fraction in combination with a long delay in the onset of deactivation via RIDA and _datA_ can give rise to a high degree of synchrony for a small range of parameters. For each parameter set in b, the average degree of synchrony was obtained from \(N=5000\) consecutive cell cycles. (See Table 1 for all parameters.)
(Fig. 6a/c, second panel). The stochastic firing of the origins causes a temporal delay between the initiation of replication at the respective origins. Moreover, in the growth-rate regime of overlapping replication forks considered here, not all chromosomes have been fully replicated at the moment of cell division. Taken together, this means that at the moment of cell division not all chromosomes have the same number of titration sites (the sites are distributed uniformly). The difference in the number of titration sites per chromosome causes a slight change in the free concentration upon cell division.
Adding titration sites to the LDDR model has little effect on the degree of synchrony when the critical free ATP-DnaA concentration at which replication is initiated is high. When a new round of replication is initiated, new titration sites are generated and the free DnaA concentration drops. As discussed in [26], at high growth rates, where multiple chromosomes are present in the cell, new
Figure 6: **Adding titration sites to the LDDR model enhances initiation synchrony for low critical free DnaA concentrations.** (a, c) The volume \(V(t)\), the free DnaA concentration (independent of whether DnaA is bound to ATP or ADP) \([D]_{\mathrm{T,f}}(t)\), the ATP-DnaA fraction \(f(t)\), and the free ATP-DnaA concentration \([D]_{\mathrm{ATP,f}}(t)\) as a function of time (in units of the doubling time of the cell \(\tau_{\mathrm{d}}=0.67\) h=40 min) for a critical free ATP-DnaA concentration of \([D]^{*}_{\mathrm{f,ATP}}=50\,\mu\mathrm{m}^{-3}\) (a) and \([D]^{*}_{\mathrm{f,ATP}}=10\,\mu\mathrm{m}^{-3}\) (c). During the blocking period \(\tau_{\mathrm{b}}\) (light blue shaded area), the newly replicated origins cannot be re-initiated. (a) When the critical free ATP-DnaA concentration is relatively high, the free DnaA concentration \([D]_{\mathrm{T,f}}(t)\) oscillates only weakly and decreases slightly after new rounds of replication are initiated due to the synthesis of new sites. The shape of the oscillations in the free ATP-DnaA concentration \([D]_{\mathrm{ATP,f}}(t)\) is therefore mainly determined by the oscillations in the ATP-DnaA fraction \(f(t)\). (b, d) The average degree of synchrony \(\langle s\rangle\) as a function of the replication time of the site _datA_, \(\tau_{\mathrm{datA}}\), and the onset time of RIDA, \(\tau_{\mathrm{rida}}\), for \([D]^{*}_{\mathrm{f,ATP}}=50\,\mu\mathrm{m}^{-3}\) (b) and \([D]^{*}_{\mathrm{f,ATP}}=10\,\mu\mathrm{m}^{-3}\) (d). The sites _DARS1_ and _DARS2_ are replicated at the experimentally measured times \(\tau_{\mathrm{d1}}=0.25\,\mathrm{h}=15\) min and \(\tau_{\mathrm{d2}}=0.4\,\mathrm{h}=24\) min, respectively. (b) When the critical free ATP-DnaA concentration is high, the effect of the titration sites on the degree of synchrony is small and almost indistinguishable from the scenario without titration sites (compare to Fig. 5b). (c) At a lower critical free ATP-DnaA concentration, the oscillations in the free concentration are larger and lead to sharper oscillations of the free ATP-DnaA concentration. This causes a broader range of parameters for which replication is initiated synchronously (d). For each parameter pair in b and d, the average degree of synchrony was obtained from \(N=5000\) consecutive cell cycles. (See Table 1 for all parameters.)
titration sites are however replicated at a similar rate as new DnaA proteins are synthesized. Titration therefore introduces only weak oscillations in the free total DnaA concentration (Fig. 6a). If the critical free DnaA concentration at which DNA replication is initiated is relatively high, the oscillations in the free DnaA concentration contribute only little to the oscillations in the initiation potential (Fig. 6a). In this scenario, adding titration to the LDDR model does not significantly change the degree of synchrony, the optimal position of _datA_ on the chromosome or the optimal onset time of RIDA (compare Fig. 6b to Fig. 5b).
When the free DnaA concentration \([D]_{\rm T,f}\) is low, however, titration can significantly enhance the degree of synchrony of the LDDR model. Setting the critical free ATP-DnaA concentration \([D]_{\rm f,ATP}^{*}\) to a value that is comparable to the affinity of the titration sites increases the oscillations in the free DnaA concentration (Fig. 6c). The resulting sharper rise of the free ATP-DnaA concentration gives rise to a higher degree of synchrony at all positions of _datA_ and onset times of RIDA (Fig. 6c). The regime of parameters in which replication is initiated with a high degree of synchrony now also extends to shorter and more realistic onset times of RIDA than in the LDDR model (Fig. 6d). In summary, the full titration-switch model is able to initiate replication synchronously.
## VI Discussion
The bacterium _E. coli_ initiates replication at several origins synchronously with high precision. How it achieves this high degree of synchrony remained however unknown. In this work, we have revealed several general principles that govern whether replication is initiated synchronously at several origins: (1) the initiation potential must remain high after the first origin has fired so that the remaining origins can fire; (2) origins that have already fired must be prevented from reinitiating immediately as long as not all origins have fired; this necessitates a blocking period and (3) the initiation potential must come down before the blocking period is over to prevent reinitiation of the newly replicated origins. The licensing period, during which the origins can fire, must thus be shorter than the blocking period. The blocking period, in turn, is limited to only 10 minutes [12, 29, 30], which means that the licensing period must be shorter than 10 minutes. To ensure that all origins fire during this short licensing period, the initiation potential must rise sharply, and to guarantee that the initiation potential is low again before the blocking period is over, it also must fall sharply. Synchronous replication initiation thus requires sharp oscillations in the initiation potential. Such oscillations will also give rise to small variations in the initiation volume. Our results therefore predict that the experimentally observed small variation in the initiation volume is a result of the requirement of synchronous replication initiation.
We showed that the previously presented model for the regulation of replication initiation in the bacterium _E. coli_ [26] can ensure a high degree of initiation synchrony for a range of parameters that agree with the experimentally measured ones. We find that if replication initiation is governed by a protein activation switch only, the optimal onset time of the RIDA mechanism would have to be about 6 min in order to ensure synchronous replication initiation. As RIDA is coupled to active replication [38], protein diffusion in cells is typically on the order of seconds rather than minutes [39], and binding of Hda to the replication clamps is rather strong [31, 32], it seems natural to assume that the deactivation via RIDA becomes strong directly after a new round of replication starts. It is however conceivable that the Hda concentration rises slowly, that Hda binding is slow, or that several RIDA complexes are required for a strong deactivation rate of RIDA [31, 32]. Adding a concentration cycle based on titration sites to the activation switch and bringing the system to a regime where the free DnaA concentration is low during the entire cell cycle enhances the degree of synchrony significantly for a broad range of parameters. Importantly, in the combined model, replication is initiated with a high degree of synchrony also for shorter onset times of RIDA. Combining an activation cycle with a concentration cycle is therefore likely to be vital to synchronous replication initiation in _E. coli_.
Increasing the duration of the blocking period would be an easy way for the cell to increase the degree of synchrony. However, even at very fast growth rates, where the doubling time of _E. coli_ is about 20 min, the blocking period must remain shorter than the doubling time in order to allow a new round of replication to start in time. This imposes a natural bound on the duration of the blocking period. Since the duration of the blocking period imposes a hard constraint on synchronous replication initiation, it is tempting to speculate that the requirement of synchronous replication initiation limits the maximal growth rate of _E. coli_.
Also other organisms such as the bacteria _Bacillus subtilis_[3, 4], _Mycobacterium smegmatis_[5] and _Vibrio cholerae_[6] initiate multiple chromosomes synchronously in certain growth conditions. These bacteria are evolutionarily divergent and have different molecular mechanisms to control the initiation of replication. Nevertheless, the general principles for synchronous replication initiation presented in this work should also remain valid for these organisms. For example, while the bacterium _B. subtilis_ lacks the protein SeqA, it instead contains the protein Spo0A, which can inhibit replication initiation in the _B. subtilis_ phage \(\phi 29\) in vivo and has been shown to bind to specific sites on the origin in vitro [12]. These experiments suggest that Spo0A, similar to SeqA in _E. coli_, represses chromosomal replication by binding directly to the origin region of _B. subtilis_.
Finally, we have not modelled the binding of about 10-20 ATP-DnaA proteins to the origin explicitly. It has however been proposed in the so-called 'initiation cascade
model' that initiating replication at the first origin could cause other origins to fire as well by releasing the bound initiator proteins into the cytoplasm [40, 41]. The resulting higher concentration of free DnaA proteins could lead to a redistribution of the free DnaA proteins to the remaining origins, making the next replication initiation event more likely [40]. We tested this idea by introducing weak, cooperative origin binding sites to which only ATP-DnaA can bind into our model. When in this extended model the concentration of ATP-DnaA in the cytoplasm rises, the weak binding sites at the respective origins begin to fill up and then trigger the initiation of replication at a randomly selected origin (see Appendix VIII.7). After replication has been initiated, the binding sites at the origin that fired become unavailable for binding DnaA for the duration of the blocking period, causing a rise in the free DnaA concentration, as predicted by Ref. [40]. We find, however, that the ATP-DnaA binding to the origin has two opposing effects: on the one hand, the initiation potential indeed increases right after the first initiation event due to the released ATP-DnaA proteins, making the next initiation event more likely (see Fig. VIII.3a and b). On the other hand, binding of ATP-DnaA proteins to the origin leads to a less sharp rise in the free DnaA concentration right before the first origin initiates replication (see Fig. VIII.3a and b). A sharp rise of the initiation potential right before replication initiation is however a necessary requirement for synchronous replication initiation. Therefore, the net effect of the initiation cascade on the degree of synchrony is approximately zero and we do not find a significant increase in the degree of synchrony (see Fig. VIII.3c).
## VII Acknowledgements
We want to thank Vahe Galstyan for the fruitful discussions and his mathematical insights, and Lorenzo Olivi for inspiring discussions. We acknowledge financial support from The Netherlands Organization of Scientific Research (NWO/OCW) Gravitation program Building A Synthetic Cell (BaSyC) (024.003.019).
## VIII Appendix
### Coarse-grained model for origin opening
We describe the origin region as a two-state system that can switch between an open (O) or a closed (C) configuration with the opening rate \(k_{\mathrm{o}}\) and the closing rate \(k_{\mathrm{c}}\). If the origin is open, replication can be initiated (I) with a maximal firing rate \(k_{\mathrm{f}}^{0}\):
\[C\;\underset{k_{\mathrm{c}}}{\overset{k_{\mathrm{o}}}{\rightleftharpoons}}\;O\;\overset{k_{\mathrm{f}}^{0}}{\longrightarrow}\;I \tag{8}\]
In thermal equilibrium, the ratio of the transition rates between the open and closed state is given by the Boltzmann distribution of the energy difference between the two states:
\[\frac{k_{\mathrm{c}}}{k_{\mathrm{o}}}=\frac{e^{-\beta\,E_{\mathrm{c}}}}{e^{- \beta\,E_{\mathrm{o}}}}=e^{\beta\,\Delta G} \tag{9}\]
with \(\beta=1/(k_{\mathrm{B}}\,T)\) and the energy difference
\[\Delta G=E_{\mathrm{o}}-E_{\mathrm{c}} \tag{10}\]
where \(E_{\mathrm{o}}\) is the energy of the open state and \(E_{\mathrm{c}}\) is the energy of the closed state. The probability to be in the open state as a function of the energy difference \(\Delta G\) is given by
\[p_{\mathrm{o}}=\frac{e^{-\beta\,E_{\mathrm{o}}}}{e^{-\beta\,E_{\mathrm{o}}}+e ^{-\beta\,E_{\mathrm{c}}}}=\frac{1}{1+e^{\beta\,\Delta G}} \tag{11}\]
Assuming rapid opening and closing dynamics of the origin, the origin firing rate is given by equation 3. The higher the initiation potential \(f\) in the cell, the more likely it is that the origin is open and that replication can be initiated. We model this observation phenomenologically by assuming that the opening probability \(p_{\mathrm{o}}\) increases with the initiation potential \(f\) following a Hill function (see equation 2).
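For concreteness, the snippet below evaluates these ingredients numerically. It is only a minimal sketch: the Hill forms follow equations 12-13, the parameter values (\(n=5\), \(m=10\), \(v^{*}=1\,\mu\)m\({}^{3}\), \(f^{*}=0.5\), \(k_{\mathrm{f}}^{0}=1000\)) are taken from Table 1, and the function names are our own.

```
import numpy as np

# Initiation potential f(v): Hill function of the volume per origin (cf. equation 12)
def initiation_potential(v, v_star=1.0, n=5):
    return v**n / (v_star**n + v**n)

# Opening probability p_o(f): Hill function of the initiation potential
def opening_probability(f, f_star=0.5, m=10):
    return f**m / (f_star**m + f**m)

# Instantaneous firing rate k_f(v) = k_f^0 * p_o(f(v))
def firing_rate(v, k_f0=1000.0):
    return k_f0 * opening_probability(initiation_potential(v))

for v in (0.9, 1.0, 1.1):
    print(v, firing_rate(v))
```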
### Derivation of approximation for opening probability
We want to find an analytical expression for the opening probability \(p_{\mathrm{o}}\) and therefore also the instantaneous firing rate \(k_{\mathrm{f}}\) (equation 3) as a function of time. We therefore insert equation 1 into equation 2 to obtain:
\[p_{\mathrm{o}}(f(v))= \frac{v^{n\,m}}{f^{*\,m}\,(v^{*\,n}+v^{n})^{m}+v^{n\,m}} \tag{12}\] \[= \frac{v^{n\,m}}{f^{*\,m}\,v^{*\,n\,m}\,(1+\tilde{v}^{n})^{m}+v^{n \,m}}, \tag{13}\]
where we used \(\tilde{v}=v/v^{*}\). According to the binomial formula, we can write
\[(1+\tilde{v}^{n})^{m} =\sum_{k=0}^{m}\binom{m}{k}\,1^{k}\,(\tilde{v}^{n})^{m-k} \tag{14}\] \[=\sum_{k=0}^{m}\binom{m}{k}\,(\tilde{v}^{n})^{m-k}. \tag{15}\]
with the binomial coefficient
\[\binom{m}{k}\coloneqq\frac{m!}{k!\,(m-k)!}. \tag{16}\]
We introduce the shifted parameter \(k^{\prime}=k-m/2\), such that equation 15 can be rewritten as:
\[(1+\tilde{v}^{n})^{m}=\sum_{k^{\prime}=-m/2}^{m/2}\binom{m}{k^{\prime}+\frac{ m}{2}}\,(\tilde{v}^{n})^{\frac{m}{2}-k^{\prime}}. \tag{17}\]
By examining the first and the second term of the sum in equation 17 separately, we find that the binomial coefficient has a maximum at \(k^{\prime}=0\) and decays quickly for \(k^{\prime}\neq 0\) (see Fig. VIII.1a). Secondly, as can be seen in Figure VIII.1b, for small \(|k^{\prime}|\ll m/2\) and sufficiently large Hill coefficient \(m\), the second term is approximately given by
\[\tilde{v}^{n\,(\frac{m}{2}-k^{\prime})}\approx\tilde{v}^{\frac{mn}{2}}. \tag{18}\]
Combining these two observations, we can approximate equation 17 by
\[(1+\tilde{v}^{n})^{m}\approx\sum_{k^{\prime}=-m/2}^{m/2}\binom{m}{k^{\prime}+ \frac{m}{2}}\tilde{v}^{\frac{mn}{2}}. \tag{19}\]
Finally, using that
\[\sum_{k^{\prime}=-m/2}^{m/2}\binom{m}{k^{\prime}+\frac{m}{2}}=2^{m}, \tag{20}\]
we find
\[(1+\tilde{v}^{n})^{m}\approx\,2^{m}\,\tilde{v}^{\frac{mn}{2}} \tag{21}\]
Plugging this expression into equation 13 gives
\[p_{\rm o}(v)\approx \frac{v^{n\,m}}{f^{*\,m}\,v^{*\,n\,m}\,2^{m}\,\tilde{v}^{\frac{mn} {2}}+v^{n\,m}} \tag{22}\] \[= \frac{v^{n\,m}}{f^{*\,m}\,2^{m}\,v^{*\,\frac{mn}{2}}\,v^{\frac{mn} {2}}+v^{n\,m}}\] (23) \[= \frac{v^{\frac{nm}{2}}}{f^{*\,m}\,2^{m}\,v^{*\,\frac{mn}{2}}+v^{ \frac{mn}{2}}} \tag{24}\]
For \(f^{*}=0.5\) we then find equation 5 of the main text with the effective Hill coefficient \(n_{\rm eff}=n\,m/2\). By comparing the approximation of \(p_{\rm o}(v)\) in equation 5 to a function
\[p_{\rm o}^{\rm fit}(v)=a^{\rm fit}\,\frac{v^{n_{\rm eff}^{\rm fit}}}{v^{*\,n _{\rm eff}^{\rm fit}}+v^{n_{\rm eff}^{\rm fit}}} \tag{25}\]
that was fitted to \(p_{\rm o}(f(v))\) (equation 13), we find that the approximation in equation 5 is indeed a good approximation for sufficiently large Hill coefficients \(n\) and \(m\), especially at volumes close to the critical volume \(v^{*}\) (Fig. VIII.1c). Indeed, the fitted Hill coefficient agrees well with the approximated Hill coefficient in equation 6 for a broad range of Hill coefficients \(n\) and \(m\), respectively (Fig. VIII.1d).
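The quality of the approximation can also be checked directly in a few lines of code. The following is a minimal sketch (our own naming, assuming \(f^{*}=0.5\)) that compares the exact opening probability of equation 13 with the Hill approximation of equation 5 using \(n_{\rm eff}=n\,m/2\).

```
import numpy as np

def p_o_exact(v, v_star=1.0, f_star=0.5, n=5, m=10):
    # Equation 13: exact opening probability p_o(f(v))
    return v**(n * m) / (f_star**m * (v_star**n + v**n)**m + v**(n * m))

def p_o_approx(v, v_star=1.0, n=5, m=10):
    # Equation 5 with n_eff = n*m/2 (equation 6); valid for f* = 0.5
    n_eff = n * m / 2
    return v**n_eff / (v_star**n_eff + v**n_eff)

v = np.linspace(0.8, 1.2, 9)
print(np.round(p_o_exact(v), 3))
print(np.round(p_o_approx(v), 3))
```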
\begin{table}
\begin{tabular}{l l l l} Parameter & name & value & Motivation \\ \hline \hline
\(n\) & Hill coefficient of initiation potential & 5 & set to match initiation precision reported in [14] \\
\(v^{*}\) [\(\mu\)m\({}^{3}\)] & initiation volume per origin & 1 & set to match initiation volume reported in [14] \\
\(m\) & Hill coefficient of opening probability & 10 & [13] \\
\(y^{*}\) & critical initiation potential & 0.5 & set to maximal sharpness of opening probability \\
\(K_{\rm D}\) [\(\mu\)m\({}^{-3}\)] & dissociation constant of (de)activators & 5 & [23] \\
\(\alpha_{\rm l}\) [\(\mu\)m\({}^{-3}\) h\({}^{-1}\)] & activation rate lipids & LDDR: 500, LDDR+titration: 800 & set to match initiation volume reported in [14] \\
\(\beta_{\rm datA}\) [h\({}^{-1}\)] & deactivation rate _datA_ & 600 & [16] \\
\(\tau_{\rm datA}\) [h] & replication time _datA_ & 0.13 & [16] \\
\(f^{*}\) & critical initiator fraction & 0.5 & [18, 25] \\
\(\tau_{\rm i}\) [h] & initiation duration & 0.27 & see Fig. VIII.2 \\
\(\alpha_{\rm d1}\) [h\({}^{-1}\)] & activation rate _DARS1_ & 100 & [13, 17] \\
\(\tau_{\rm d1}\) [h] & replication time _DARS1_ & 0.4 & [13] \\
\(\alpha_{\rm d2}^{+}\) [h\({}^{-1}\)] & high activation rate _DARS2_ & 600 & combined with \(\beta_{\rm rida}\) \\
\(\alpha_{\rm d2}\) [h\({}^{-1}\)] & low activation rate _DARS2_ & 50 & set to arbitrary low value \\
\(\tau_{\rm d2}\) [h] & replication time _DARS2_ & 0.25 & [17] \\
\(\tau_{\rm d2}^{+}\) [h] & start high activation rate _DARS2_ & 0.2 & [17] \\
\(\tau_{\rm d2}^{-}\) [h] & end high activation rate _DARS2_ & 2/3 & [17] \\
\(\beta_{\rm rida}\) [h\({}^{-1}\)] & deactivation rate RIDA & 500 & [16, 32, 42] \\
\([D]_{\rm T}\) [\(\mu\)m\({}^{-3}\)] & total DnaA concentration & 400 & [20, 22] \\
\(\phi_{0}\) & gene allocation fraction & \(4\times 10^{-4}\) & set to match \([D]_{\rm T}\) \\
\(K_{\rm D}^{\rm s}\) [\(\mu\)m\({}^{-3}\)] & dissociation constant of titration sites & 1 & [23] \\
\(n_{\rm ori}^{\rm ori}\) & number of origin binding sites & 10 & [13] \\
\([D]_{\rm ATP,f}^{*}\) & critical free ATP-DnaA concentration & 10 & [23] \\
\(\rho\) [\(\mu\)m\({}^{-3}\)] & number density & \(10^{6}\) & [43] \\
\(k_{\rm f}^{0}\) [s\({}^{-1}\)] & maximal origin firing rate & 1000 & set such that degree of synchrony is maximal \\
\(\tau_{\rm b}\) [h] & blocking period & 0.17 & [28, 29, 30] \\
\(\lambda\) [h\({}^{-1}\)] & growth rate & 1.04 & [2, 3] \\
\(T_{\rm C}\) [h] & C-period & 2/3 & [8] \\
\(T_{\rm D}\) [h] & D-period & 1/3 & [8] \\
\end{tabular}
\end{table}
Table 1: **Parameters used in the simulations** One molecule per cubic micrometer corresponds to approximately one nM (1 \(\mu\)m\({}^{-3}=1.67\) nM).
### Parameter choice for maximal firing rate
Combining the approximation for the opening probability as a function of the volume per origin (equation 5), the exponentially growing cell-volume \(V(t)=V_{\rm b}\,e^{\lambda\,t}\), and the expression for the firing rate (equation 3), we find the following time-dependent firing rate of a single origin:
\[k_{\rm f}(t)=k_{\rm f}^{0}\frac{\left(V_{\rm b}\,e^{\lambda\,t}\right)^{n_{\rm eff }}}{v^{*\,n_{\rm eff}}+\left(V_{\rm b}\,e^{\lambda\,t}\right)^{n_{\rm eff}}} \tag{26}\]
From this rate, we can calculate the survival probability
\[S(t)= e^{-\int_{t_{0}}^{t}dt^{\prime}\,k_{\rm f}(t^{\prime})} \tag{27}\] \[= e^{-\frac{k_{\rm f}^{0}}{n_{\rm eff}\,\lambda}\,\ln\left(\frac{\left(V_{\rm b}\,e^{\lambda\,t}\right)^{n_{\rm eff}}+v^{*\,n_{\rm eff}}}{V_{\rm b}^{\,n_{\rm eff}}+v^{*\,n_{\rm eff}}}\right)} \tag{28}\]
where we solved the integral with the initial condition \(S(t_{0}=0)=1\). We now impose that at the theoretical initiation volume per origin \(v(t=t^{*})=v^{*}\), the survival probability is exactly \(S(t^{*})=0.5\). Using this constraint, we obtain the following expression for the maximal firing rate as a function of the effective Hill coefficient \(n_{\rm eff}\):
Figure VIII.1: **The instantaneous opening probability can be approximated by a Hill function with an effective Hill coefficient** (a) The binomial coefficient as defined in equation 16 as a function of the index \(k^{\prime}\) for \(m=10\). The binomial coefficient is centered around and maximal at \(k^{\prime}=0\) and becomes small for \(k^{\prime}\gg 0\). (b) The second term in equation 17 for different values of the index \(k^{\prime}\) as a function of the rescaled volume \(\tilde{v}=v/v^{*}\). For small values of \(k^{\prime}\), the second term in equation 17 is well approximated by the term \(\tilde{v}^{\frac{n}{2}}\) (dashed blue line). (c) The opening probability of the origin \(p_{\rm o}(f(v))\) (equation 13) as a function of the volume per origin \(v\) for different Hill coefficients \(n\) and \(m\) (solid lines). The effective Hill coefficient \(n_{\rm eff}^{\rm fit}\) is obtained from a fit of the function \(p_{\rm o}(f(v))\) to a Hill function (equation 25). The dashed lines show the approximated opening probability (equation 5) with the effective Hill coefficient as defined in equation 6. The vertical dotted line indicates the critical volume per origin \(v^{*}\) at which the opening probability equals \(1/2\). (d) The fitted (solid line) and the approximated (dashed line, equation 6) Hill coefficient as a function of the Hill coefficient \(n\) for different values of the Hill coefficient \(m\). Except for very low Hill coefficients \(n\) and \(m\), the approximated Hill coefficient agree well. In all graphs \(f^{*}=v^{*}=0.5\).(See Table 1 for all parameters.)
\[k_{\rm f}^{0}(n_{\rm eff})=\frac{n_{\rm eff}\,\lambda\,\ln(2)}{\ln\left(\frac{2\,v^{*\,n_{\rm eff}}}{V_{\rm b}^{\,n_{\rm eff}}+v^{*\,n_{\rm eff}}}\right)} \tag{29}\]
This parameter choice ensures that the average initiation volume \(\langle v^{*}\rangle\) is given by \(v^{*}\).
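As a sketch, equation 29 can be evaluated directly; the birth volume \(V_{\rm b}\) used below is an arbitrary illustrative value, while \(\lambda\) and \(v^{*}\) are taken from Table 1 (rates come out in h\({}^{-1}\)).

```
import numpy as np

def k_f0(n_eff, lam=1.04, v_star=1.0, V_b=0.5):
    # Equation 29: maximal firing rate such that S(t*) = 1/2 at v(t*) = v*
    return n_eff * lam * np.log(2) / np.log(2 * v_star**n_eff
                                             / (V_b**n_eff + v_star**n_eff))

print(k_f0(n_eff=25))   # n_eff = n*m/2 with n = 5, m = 10
```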
### Derivation of the theoretical prediction for the degree of synchrony
In the following, we derive a theoretical prediction for the probability that two initiation events happen within a time interval \(\tau_{l}\). We assume here that the two firing events are statistically independent, meaning that between the first initiation event at time \(t_{1}\) and the time \(t_{1}+\tau_{l}\), the change in the number of origins induced by the first event has no effect on the firing of the second event. Using the firing rate in equation 26, we can now calculate the error probability \(S_{\rm err}\) that the second event does _not_ happen within a time \(\tau_{1}\) after the first event, given that the first event happened at time \(t_{1}\):
\[S_{\rm err}(t_{2}-t_{1}>\tau_{l}|t_{1}) =e^{-\int_{t_{1}}^{t_{1}+\tau_{l}}dt^{\prime}k_{\rm f}(t^{\prime})} \tag{30}\] \[=e^{-\frac{k_{\rm f}^{0}}{n_{\rm eff}\,\lambda}\,\log\left(\frac{ \left(V_{\rm b}\,e^{\lambda\,(t_{1}+\tau_{l})}\right)^{n_{\rm eff}}+v^{*}n_{ \rm eff}}{\left(V_{\rm b}\,e^{\lambda\,t_{1}}\right)^{n_{\rm eff}}+v^{*}n_{ \rm eff}}\right)} \tag{31}\]
The average error probability \(\langle S_{\rm err}\rangle\) over all initiation times of the first event, \(t_{1}\), is then given by
\[\langle S_{\rm err}\rangle =\int_{0}^{\tau_{\rm d}}dt_{1}\,q_{1}(t_{1})\,S_{\rm err}(t_{2}-t _{1}>\tau_{1}|t_{1}) \tag{32}\] \[=\int_{0}^{\tau_{\rm d}}dt_{1}\,q_{1}(t_{1})\,e^{-\int_{t_{1}}^{t _{1}+\tau_{l}}dt^{\prime}k_{\rm f}(t^{\prime})} \tag{33}\]
where \(\tau_{\rm d}\) is the doubling time of the cell. The propensity \(q_{1}(t_{1})\) that one out of two origin events happens at time \(t_{1}\) is given by
\[q_{1}(t_{1})=2\,k_{\rm f}(t_{1})\,e^{-\int_{t_{0}}^{t_{1}}dt^{\prime}2\,k_{ \rm f}(t^{\prime})} \tag{34}\]
Therefore, the average error probability \(\langle S_{\rm err}\rangle\) that the second origin fires after a time \(\tau_{1}\) after the first event is given by plugging expression 34 into equation 33:
\[\langle S_{\rm err}\rangle =2\,\int_{0}^{\tau_{\rm d}}dt_{1}\,k_{\rm f}(t_{1})\,e^{-2\,\int_{t_{0}}^{t_{1}}dt^{\prime}k_{\rm f}(t^{\prime})}\,e^{-\int_{t_{1}}^{t_{1}+\tau_{l}}dt^{\prime}k_{\rm f}(t^{\prime})} \tag{35}\] \[=2\,k_{\rm f}^{0}\int_{0}^{\tau_{\rm d}}dt_{1}\,\frac{\left(V_{\rm b}\,e^{\lambda\,t_{1}}\right)^{n_{\rm eff}}}{v^{*\,n_{\rm eff}}+\left(V_{\rm b}\,e^{\lambda\,t_{1}}\right)^{n_{\rm eff}}}\] \[\qquad\qquad\times\,e^{-\frac{2\,k_{\rm f}^{0}}{n_{\rm eff}\,\lambda}\,\ln\left(\frac{\left(V_{\rm b}\,e^{\lambda\,t_{1}}\right)^{n_{\rm eff}}+v^{*\,n_{\rm eff}}}{V_{\rm b}^{\,n_{\rm eff}}+v^{*\,n_{\rm eff}}}\right)}\] \[\qquad\qquad\times\,e^{-\frac{k_{\rm f}^{0}}{n_{\rm eff}\,\lambda}\,\ln\left(\frac{\left(V_{\rm b}\,e^{\lambda\,(t_{1}+\tau_{l})}\right)^{n_{\rm eff}}+v^{*\,n_{\rm eff}}}{\left(V_{\rm b}\,e^{\lambda\,t_{1}}\right)^{n_{\rm eff}}+v^{*\,n_{\rm eff}}}\right)}, \tag{36}\]
where \(\tau_{\rm d}\) is the average division time of the cell and \(\tau_{\rm d}\gg\tau_{1}\), such that the probability that both origins have not yet fired at \(\tau_{\rm d}\) becomes negligible. The average probability that the second origin fires within a time interval \(\Delta t=t_{2}-t_{1}<\tau_{1}\) after the first has fired at \(t_{1}\), is then given by:
\[\langle P(\Delta t<\tau_{1})\rangle=1-\langle S_{\rm err}(\tau_{1})\rangle \tag{37}\]
We solve the integral in equation 36 numerically and use expression 7 to predict the degree of synchrony for two origins (see Fig. 4b).
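A minimal numerical sketch of this calculation is given below. It evaluates \(\langle S_{\rm err}\rangle\) from equation 36 with scipy's quad and applies equation 37; the birth volume \(V_{\rm b}\) and the identification \(\tau_{\rm d}=\ln 2/\lambda\) are illustrative assumptions, and \(k_{\rm f}^{0}\) is fixed by equation 29.

```
import numpy as np
from scipy.integrate import quad

lam, v_star, n_eff, V_b = 1.04, 1.0, 25, 0.5        # per hour, um^3 (V_b assumed)
tau_d = np.log(2) / lam                              # doubling time
k_f0 = n_eff * lam * np.log(2) / np.log(2 * v_star**n_eff
                                        / (V_b**n_eff + v_star**n_eff))

def k_f(t):
    # Equation 26: time-dependent firing rate of a single origin
    u = (V_b * np.exp(lam * t))**n_eff
    return k_f0 * u / (v_star**n_eff + u)

def int_k_f(t0, t1):
    # Closed form of the integral of k_f between t0 and t1 (cf. equation 28)
    A = lambda t: np.log((V_b * np.exp(lam * t))**n_eff + v_star**n_eff)
    return k_f0 / (n_eff * lam) * (A(t1) - A(t0))

def S_err(tau_l):
    # Equation 36: probability that the second origin fires later than t1 + tau_l
    integrand = lambda t1: 2 * k_f(t1) * np.exp(-2 * int_k_f(0, t1)) \
                             * np.exp(-int_k_f(t1, t1 + tau_l))
    return quad(integrand, 0, tau_d)[0]

print(1 - S_err(10 / 60))   # equation 37 for tau_l = 10 min
```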
One can also calculate analytically the degree of synchrony at higher growth rates where there are typically four or more origins per cell at the beginning of an initiation cascade. The probability that none of the \(n-1\) origins fire within the time \(\tau_{1}\) after the first origin has fired at \(t_{1}\) is similar to equation 36 and given by
\[\langle S_{\rm err}\rangle=n\,\int_{0}^{\tau_{\rm d}}dt_{1}\,k_{\rm f}(t_{1}) \,e^{-n\,\int_{t_{0}}^{t_{1}}dt^{\prime}k_{\rm f}(t^{\prime})}\,e^{-(n-1)\int_ {t_{1}}^{t_{1}+\tau_{l}}dt^{\prime}k_{\rm f}(t^{\prime})} \tag{38}\]
This is the probability that, given that the first origin fires at \(t_{1}\), all \(n-1\) other origins fire later than \(t_{1}+\tau_{1}\). Importantly, one now also needs to take into account the cases where only one or more origins fire after \(t_{1}+\tau_{1}\) and the others fire before. We do not derive an expression for the scenario \(n>2\) here.
### Derivation of theoretical prediction for \(\langle\Delta t\rangle\) and the CV of the initiation volume
The average time interval between two independent firing events \(\langle\Delta t\rangle\) can be calculated analytically for the approximate opening probability in equation 5 via
\[\langle\Delta t\rangle= \int_{0}^{\tau_{\rm d}}dt_{1}\,\int_{t_{1}}^{\tau_{\rm d}}dt_{2} \,2\,k_{\rm f}(t_{1})\,k_{\rm f}(t_{2})\,e^{-2\,\int_{t_{0}}^{t_{1}}dt^{\prime }k_{\rm f}(t^{\prime})}\] \[\times e^{-\int_{t_{1}}^{t_{2}}dt^{\prime\prime}k_{\rm f}(t^{ \prime\prime})} \tag{39}\]
Solving this integral numerically gives the pink line in Fig. 4b.
The theoretical coefficient of variation of the initiation volume \(V\) is given by
\[CV=\frac{\sigma}{\mu}=\frac{\sqrt{\langle V^{2}\rangle-\langle V\rangle^{2}}}{ \langle V\rangle} \tag{40}\]
where we use
\[\langle V\rangle=\int_{0}^{\tau_{\rm d}}dt\,k_{\rm f}(t)\,e^{-\int_{t_{0}}^{t} dt^{\prime}k_{\rm f}(t^{\prime})}\,V_{\rm b}\,e^{\lambda\,t} \tag{41}\]
and
\[\langle V^{2}\rangle=\int_{0}^{\tau_{\rm d}}dt\,k_{\rm f}(t)\,e^{-\int_{t_{0}}^{t} dt^{\prime}k_{\rm f}(t^{\prime})}\,\left(V_{\rm b}\,e^{\lambda\,t}\right)^{2} \tag{42}\]
In this theoretical model, the only source of noise is intrinsic noise and the CV in equation 40 therefore also corresponds to the intrinsic noise as defined by Ref. [7] based on the derivation of Elowitz et al. [44].
### Definition of the initiation duration in LDDR and LDDR-titration models
While a synchronization parameter cannot be defined uniquely, we will define one to quantify the degree to which replication is initiated synchronously and then show that the result is fairly robust to the precise definition. Specifically, the degree of synchrony is obtained by counting the number of origin firing events per initiation event, where the initiation duration \(\tau_{\mathrm{i}}\) is a parameter that we will choose carefully (Fig. 3a). In the coarse-grained model, an initiation event starts when the first origin initiates and ends after the licensing period is over. As after the end of the licensing period the initiation potential drops instantaneously to a very low value, re-initiation events after the end of the licensing period are very unlikely in the coarse-grained model. In the LDDR model, the active fraction \(f\) does however not decrease instantaneously after RIDA has started and the site _datA_ has been doubled (Fig. 5c). Therefore, it is less clear what the initiation period should be. We test the effect of varying the initiation duration \(\tau_{\mathrm{i}}\) on the average degree of synchrony \(\langle s\rangle\) for different starting times of RIDA \(\tau_{\mathrm{rida}}\) (Fig. 5). The average degree of synchrony \(\langle s\rangle\) varies strongly with the initiation duration in parameter regimes where replication is initiated asynchronously: At very low starting times of RIDA \(\tau_{\mathrm{rida}}\) replication is under-initiated (Fig. 5b), but the degree of synchrony nevertheless becomes larger than one at high initiation durations \(\tau_{\mathrm{i}}>0.6\,\tau_{\mathrm{d}}\) (Fig. VIII.2). The larger the initiation duration \(\tau_{\mathrm{i}}\) the more origin firing events are counted per initiation event, leading to an average degree of synchrony that is larger than one. Conversely, when the RIDA is starting too late and replication is over-initiated (Fig. 5b), the degree of synchrony can nevertheless be smaller than one if the initiation duration is chosen too short. At the optimal starting time of RIDA of \(\tau_{\mathrm{rida}}=0.1\,\mathrm{h}\approx 6\) min, where replication is initiated synchronously, the choice of the initiation duration becomes however less relevant: Because all origin firing events happen within a relatively small time window, increasing the initiation duration further does not change the degree of synchrony significantly. Only if the initiation duration is chosen way too small (\(\tau_{\mathrm{i}}=0.2\,\tau_{\mathrm{d}}\approx 8\) min) or too large (\(\tau_{\mathrm{i}}=0.9\,\tau_{\mathrm{d}}\approx 36\) min) becomes the average degree of synchrony smaller or larger than one. We therefore in the following choose an intermediate initiation duration of \(\tau_{\mathrm{i}}=0.4\,\tau_{\mathrm{d}}\approx 16\) min.
### Model for the initiation cascade
To initiate DNA replication, 8 ATP-DnaA proteins and 3 DnaA proteins independent of the nucleotide-binding state form a cooperative complex at the origin, which induces a conformational change and leads to the opening of the origin [13]. Consequently, the DNA replication machinery binds to the open origin, and replication is initiated. Upon replication initiation, the DnaA proteins that were bound to the origin are likely being released into the cytoplasm, leading to a transient rise in the free DnaA concentration. It has been suggested that the release of origin-bound DnaA proteins from one origin triggers replication initiation at the remaining origins
Figure VIII.2: **The average degree of synchrony \(\langle s\rangle\) depends strongly on the initiation duration when replication is initiated asynchronously, but not when replication is initiated synchronously.** The average degree of synchrony \(\langle s\rangle\) as a function of the starting time of RIDA \(\tau_{\mathrm{rida}}\) for varying initiation durations \(\tau_{\mathrm{i}}\) (in units of the doubling time \(\tau_{\mathrm{d}}\)) for the LDDR model. The degree of synchrony \(s\) is obtained by counting the number of origin firing events from the first origin firing until the end of the initiation duration \(\tau_{\mathrm{i}}\). When replication is initiated synchronously at all origins within a short time interval (at \(\tau_{\mathrm{rida}}\approx 0.1\) h), the average degree of synchrony does not depend strongly on the initiation period \(\tau_{\mathrm{i}}\). When origins are however initiated asynchronously over the course of the cell cycle, the average degree of synchrony can either be smaller or larger than one, depending on the duration \(\tau_{\mathrm{i}}\). In the rest of this manuscript, we use an intermediate initiation duration of \(\tau_{\mathrm{i}}=0.4\,\tau_{\mathrm{d}}\). (See Table 1 for all parameters.)
in a so-called 'initiation cascade' [40]. Here we propose a model to test whether the rise in the free DnaA concentration upon replication initiation at one origin could be sufficient to trigger replication initiation at the other origins.
So far, we have modelled the origin opening and firing process in a coarse-grained manner using a Hill function as a function of the initiation potential for the opening probability of the origin (see Appendix VIII.1). Now, we instead model the binding of ATP-DnaA to the origin explicitly by introducing weak, cooperative binding sites for ATP-DnaA proteins at the origin. Specifically, we neglect the three strong binding sites to which both ATP- and ADP-DnaA can bind and assume that there are \(n\) weak binding sites with the dissociation constant \(K_{\rm D}^{\rm ori}\) to which only ATP-DnaA can bind cooperatively. The probability that \(n\) ATP-DnaA proteins are bound to the
origin is given by
\[p_{\rm b}^{\rm n}=\frac{Z_{\rm b}^{\rm n}}{\sum_{i=0}^{N}Z_{\rm i}} \tag{43}\]
where \(Z_{\rm b}^{\rm n}\) is the partition function of \(n\) proteins bound to the origin and \(\sum_{i=0}^{N}Z_{\rm i}\) is the sum over all possible configurations the origin can be in. Let us first consider the scenario of only two cooperative binding sites. This gives rise to the following probability that two ATP-DnaA proteins are bound to the origin:
\[p_{\rm b}^{2}=\frac{Z_{\rm b}^{2}}{Z_{\rm b}^{0}+2\,Z_{\rm b}^{1}+Z_{\rm b}^{2}} \tag{44}\]
The statistical weight of zero bound ATP-DnaA proteins is normalized to one \(Z_{\rm b}^{0}=1\) and the weight of one bound ATP-DnaA protein is given by \(Z_{\rm b}^{1}=[D]_{\rm f,ATP}/K_{\rm D}\) with the dissociation constant \(K_{\rm D}=c_{0}^{-1}\,e^{-\beta\,\Delta G}\) and the free ATP-DnaA concentration \([D]_{\rm f,ATP}\). The weight of two bound ATP-DnaA proteins is then given by \(Z_{\rm b}^{2}=w\,[D]_{\rm f,ATP}^{2}/K_{\rm D}^{2}\) where \(w=e^{\beta\,\Delta E}\) accounts for the additional energy gain from cooperative binding of two ATP-DnaA proteins. When cooperative binding is very strong then \(\Delta E\gg\Delta G\) and we can neglect terms with lower powers of \(w\):
\[p_{\rm b}^{2}\approx\frac{Z_{\rm b}^{2}}{Z_{\rm b}^{0}+Z_{\rm b}^{2}}=\frac{w \,[D]_{\rm f,ATP}^{2}/K_{\rm D}^{2}}{1+w\,[D]_{\rm f,ATP}^{2}/K_{\rm D}^{2}}= \frac{[D]_{\rm f,ATP}^{2}}{\left(\frac{K_{\rm D}}{\sqrt{w}}\right)^{2}+[D]_{ \rm f,ATP}^{2}} \tag{45}\]
This expression can be generalized to the case of \(n\) strongly cooperative ATP-DnaA origin binding sites:
\[p_{\rm b}^{\rm n}\approx\frac{[D]_{\rm f,ATP}^{n}}{\left(\frac{K_{\rm D}}{\sqrt[n]{w}}\right)^{n}+[D]_{\rm f,ATP}^{n}} \tag{46}\]
We therefore recover the expression 2 for the origin opening probability in the coarse-grained model where now the critical free ATP-DnaA concentration is given by \([D]_{\rm ATP,f}^{*}=K_{\rm D}/\sqrt[n]{w}\).
In order to calculate the free DnaA concentration \([D]_{\rm f}\) in the scenario where both DnaA forms can bind to the 300 homogeneously distributed strong binding sites on the chromosome and ATP-DnaA can additionally bind cooperatively to \(n\) weak binding sites on the origin, we write down the following expression
\[[D]_{\rm f}=[D]_{\rm T}-[D]_{\rm s}-[D]_{\rm o} \tag{47}\]
where \([D]_{\rm T}\) is the total DnaA concentration in the cell, \([D]_{\rm s}\) is the concentration of titration-site bound DnaA and \([D]_{\rm o}\) is the origin-bound concentration of ATP-DnaA. An expression for the titration-site bound concentration \([D]_{\rm s}\) as a function of the free DnaA concentration \([D]_{\rm f}\) is obtained from the quasi-equilibrium approximation as explained in [26]
\[[D]_{\rm s}=\frac{[s]_{\rm T}\,[D]_{\rm f}}{K_{\rm D}^{\rm s}+[D]_{\rm f}} \tag{48}\]
with the total titration site concentration \([s]_{\rm T}\) and the dissociation concentration of the titration sites \(K_{\rm D}^{\rm s}\). The origin-bound ATP-DnaA concentration \([D]_{\rm o}\) is given by the probability \(p_{\rm b}^{\rm n}\) that \(n\) ATP-DnaA proteins are bound to the origin times the total concentration of proteins that can be bound to these origin sites. This total concentration is given by the concentration of origins that are available for ATP-DnaA binding \([n_{\rm ori}^{\rm f}]\) times the number of binding sites per origin \(n\). Therefore, we obtain the following expression for the free DnaA concentration:
\[[D]_{\rm f}=[D]_{\rm T}-\frac{[s]_{\rm T}\,[D]_{\rm f}}{K_{\rm D}^{\rm s}+[D]_{\rm f}}-\frac{n\,[n_{\rm ori}^{\rm f}]\,\left([D]_{\rm f}\,f\right)^{n}}{\left(\frac{K_{\rm D}}{\sqrt[n]{w}}\right)^{n}+\left([D]_{\rm f}\,f\right)^{n}} \tag{49}\]
We here made the simplifying assumption that the free ATP-DnaA concentration \([D]_{\rm f,ATP}\) is given by the ATP-DnaA fraction \(f\) times the free DnaA concentration \([D]_{\rm f}\). This is a reasonable approximation because the number of origin binding sites is small compared to the total number of DnaA proteins and the total number of titration sites, so that the fraction of ATP-DnaA among the free DnaA proteins is approximately equal to the total fraction of ATP-DnaA proteins in the cytoplasm. Importantly, as explained in [26], we assume that the switch components (de)activate DnaA independent of whether it is bound to the chromosome (either titration sites or origin sites) or freely diffusing in the cytoplasm. We solve equation 49 numerically at every time step of the simulations given the total titration site concentration \([s]_{\rm T}\), the total DnaA concentration \([D]_{\rm T}\) and the concentration of origin binding sites available for ATP-DnaA, \([n_{\rm ori}^{\rm f}]\). Replication is again initiated stochastically at every origin with a rate \(k_{\rm f}=k_{\rm f}^{0}\,p_{\rm b}^{\rm n}\). We model the effect that DnaA proteins are released to the cytoplasm upon replication initiation by transiently reducing the number of available origin binding sites for a duration of \(\tau_{\rm b}=10\) min after an origin has initiated replication.
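As a minimal sketch of this step, the self-consistent equation 49 can be solved with a standard root finder. The parameter values below are illustrative: the titration-site concentration \([s]_{\rm T}\), the concentration of available origins \([n_{\rm ori}^{\rm f}]\) and the active fraction \(f\) are assumed values for the example, and the critical concentration \(K_{\rm D}/\sqrt[n]{w}\) is set to the value of \([D]_{\rm ATP,f}^{*}\) in Table 1.

```
import numpy as np
from scipy.optimize import brentq

D_T = 400.0      # total DnaA concentration (um^-3, Table 1)
K_D_s = 1.0      # dissociation constant of the titration sites (um^-3, Table 1)
D_crit = 10.0    # critical free ATP-DnaA concentration K_D / w**(1/n) (Table 1)
n = 10           # number of weak origin binding sites (Table 1)
s_T = 300.0      # assumed titration-site concentration for this example
n_ori_f = 2.0    # assumed concentration of origins available for ATP-DnaA binding
f = 0.5          # assumed ATP-DnaA fraction at this time step

def residual(D_f):
    # Equation 49 rewritten as residual(D_f) = 0
    bound_titration = s_T * D_f / (K_D_s + D_f)
    bound_origin = n * n_ori_f * (D_f * f)**n / (D_crit**n + (D_f * f)**n)
    return D_T - bound_titration - bound_origin - D_f

D_f = brentq(residual, 1e-9, D_T)
print("free DnaA concentration:", D_f)
```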
Modelling the ATP-DnaA binding to the origins explicitly does not significantly increase the degree of synchrony for a broad range of parameters. When the free ATP-DnaA concentration rises and approaches the critical free ATP-DnaA concentration \([D]_{\rm ATP,f}^{*}\), ATP-DnaA begins to bind cooperatively to the origin binding sites. This causes a weaker rise in the free DnaA concentration right before replication initiation as compared to a system without these origin binding sites (Fig. VIII.3a and b). After an origin has been initiated, the origin binding sites become unavailable, causing an increase in the free DnaA concentration after replication initiation (Fig. VIII.3a and b). While this increase should enhance the probability of other origins to fire replication as well, the weaker rise in the free concentration before replication initiation reduces the sharpness of the rise in the free ATP-DnaA concentration and should therefore lead to a decrease in the degree of synchrony. Indeed, comparing the simulations in which the origin-binding is modelled explicitly (Fig. VIII.3c) to the previous model in which we simply used a Hill function for the opening probability
(Fig. 6d) shows that the degree of synchrony is not significantly enhanced by the initiation cascade. The reason likely is that the positive effect of the initiation cascade is counterbalanced by the negative effect of a lower rise in the free concentration before replication initiation.
|
2306.08595
|
TensorKrowch: Smooth integration of tensor networks in machine learning
|
Tensor networks are factorizations of high-dimensional tensors into networks
of smaller tensors. They have applications in physics and mathematics, and
recently have been proposed as promising machine learning architectures. To
ease the integration of tensor networks in machine learning pipelines, we
introduce TensorKrowch, an open source Python library built on top of PyTorch.
Providing a user-friendly interface, TensorKrowch allows users to construct any
tensor network, train it, and integrate it as a layer in more intricate deep
learning models. In this paper, we describe the main functionality and basic
usage of TensorKrowch, and provide technical details on its building blocks and
the optimizations performed to achieve efficient operation.
|
José Ramón Pareja Monturiol, David Pérez-García, Alejandro Pozas-Kerstjens
|
2023-06-14T15:55:19Z
|
http://arxiv.org/abs/2306.08595v3
|
# TensorKrowch: Smooth integration of tensor networks in machine learning
###### Abstract
Tensor networks are factorizations of high-dimensional tensors into networks of smaller tensors. They have applications in physics and mathematics, and recently have been proposed as promising machine learning architectures. To ease the integration of tensor networks in machine learning pipelines, we introduce TensorKrowch, an open source Python library built on top of PyTorch. Providing a user-friendly interface, TensorKrowch allows users to construct any tensor network, train it, and integrate it as a layer in more intricate deep learning models. In this paper, we describe the main functionality and basic usage of TensorKrowch, and provide technical details on its building blocks and the optimizations performed to achieve efficient operation.
## I Introduction
Tensor networks are factorizations of high-dimensional tensors into network-like structures composed of smaller tensors. Originating from condensed matter physics and acclaimed for their efficient representation of quantum many-body systems [1; 2; 3; 4; 5; 6; 7; 8; 9; 10], these structures have allowed researchers to comprehend the intricate properties of such systems and, additionally, simulate them using classical computers [11; 12; 13]. Notably, tensor networks are the most successful method for simulating the results of quantum advantage experiments [14; 15; 16]. Furthermore, tensor networks were rediscovered within the numerical linear algebra community [17; 18; 19], where the techniques have been adapted to other high-dimensional problems such as numerical integration [20], signal processing [21], or epidemic modelling [22].
With the advent of machine learning and the quest for expressive yet easy-to-train models, tensor networks have been suggested as promising candidates, due to their ability to parameterize regions of the complex space of size exponential in the number of input features. Since the pioneering works [23; 24] that used simple, 1-dimensional networks known as Matrix Product States (MPS) in the physics literature [4; 25] and as Tensor Trains in the numerical linear algebra literature [18], these have been applied in both supervised and unsupervised learning settings [26; 27; 28]. Recent studies have also delved into alternative architectures, including Tree Tensor Networks (TTN) [29; 30] and Projected Entangled Pair States (PEPS) [31; 32].
While there exist indications that tensor network architectures may outperform neural networks in certain scenarios [33], neural networks still hold the upper hand both in versatility and efficiency. However, there exists a growing number of cases where tensor networks seem to provide advantages. First, tensor networks offer a means to compress the matrices used in existing neural networks. This process, known as tensorization, reduces the amount of memory required to store the model, and improves the efficiency of the model in both training and inference. The potential of tensorization has already been explored in several studies [34; 35; 36], offering a way to execute complex models in edge computing devices [37]. Second, the large expertise of the community of quantum many-body physics in tensor networks, and their inspiration in real physical systems, allows to better understand questions related to explainability [29; 38]. Third, this expertise can also bring novel features, such as guarantees on privacy that do not compromise on model performance [39]. Finally, another promising research line involves the integration of tensor network layers with neural network layers. For instance, Ref. [26] proposes using the output of a convolutional neural network, treated as a feature extractor, as the input to four 1-D tensor networks. Remarkably, this straightforward model achieves near-state-of-the-art performance on the FashionMNIST dataset [40].
Therefore, there are several reasons to believe that the integration of tensor networks into deep learning pipelines can enhance the capabilities of current models. Several libraries exist [41; 42; 43; 44; 45] that allow to use some concrete tensor network architectures as machine learning models, or that use gradient-based methods to optimize tensor-network ansatzes for quantum many-body calculations. However, from the point of view of machine learning, there is still a need for extensive research to determine, for instance, the situations in which the properties of tensor networks can be maximally leveraged, which are the most effective training methods, optimal architectures (and architectures beyond those used by physicists), and so on. Consequently, there is a demand for user-friendly tools that enable rapid experimentation in this field.
To address this demand, here we introduce TensorKrowch [46], a Python library built on top of PyTorch [47] that aims to bring the full power of tensor networks to machine learning practitioners. TensorKrowch allows to construct any tensor network model using the familiar language and capabilities of PyTorch. The key strength of TensorKrowch lies in defining a solid set of basic components, namely Nodes and Edges, upon which the entire tensor network can be built. By connecting these Nodes, a complete TensorNetwork model can be created, that integrates smoothly with other PyTorch modules. Consequently, TensorKrowch leverages the full power of PyTorch, including
GPU acceleration, automatic differentiation, compilation to XLA, and easy composition of multi-layer networks. Additionally, TensorKrowch incorporates built-in implementations of widely used tensor networks such as MPS, TTN and PEPS.
This work is structured as follows: Section II introduces tensor networks and their graphical notation. Section III presents the library, discussing its underlying philosophy, basic requirements, and main components. Section IV provides a detailed explanation of the components comprising TensorKrowch, such as Nodes, Edges, and TensorNetworks, and how they are interconnected. Section V discusses the operations one can perform between nodes. Section VI combines all the previously described pieces to guide readers in building their own custom models and training them. Section VII covers advanced concepts like memory management. Lastly, Section VIII contains additional software information such as future development and contribution guidelines, and Section IX presents some concluding remarks.
This paper aims to be self-contained, providing a glimpse of the basics of tensor networks for readers from both the machine learning and quantum physics backgrounds, and introducing the fundamental components of TensorKrowch in order to enable readers to create and train their own models. However, it is not intended as a comprehensive tutorial on all the capabilities of the library or a complete description of all its functionality. Such information is available in the TensorKrowch documentation at: [https://joserapa98.github.io/tensorkrowch](https://joserapa98.github.io/tensorkrowch).
## II Tensor Networks
In this section, we will introduce the concept of a tensor network and the basic operations that are relevant in the context of machine learning. For a more in-depth analysis, we refer the reader to [4; 48; 49; 25].
Tensors are an extension of vectors and matrices to higher dimensions. They can be visualized as collections of indexed numbers arranged in multi-dimensional arrays. In general, a rank-\(r\) tensor with dimensions \(d_{1}\times\dots\times d_{r}\) belongs to the tensor product vector space \(\bigotimes_{i=1}^{r}\mathbb{C}^{d_{i}}\simeq\mathbb{C}^{\times_{i=1}^{r}d_{i}}\). Its elements are represented by \(T_{i_{1}\dots i_{r}}\), where each \(i_{j}\in[d_{j}]\).
The literature on tensor networks also presents a useful and practical graphical notation, where tensors are represented as the nodes of a graph with edges corresponding to their indices. In this notation, vectors, matrices and arbitrary tensors take the following form:
In certain areas where tensors are relevant, for instance in geometry or general relativity, subscripts and superscripts are employed to denote indices in covariant or contravariant spaces, and changing the position of the indices requires using the associated metric. However, in the tensor network literature it is customary to obviate all these subtleties, and make no distinctions between sub- and superscript indices. Moreover, it is standard to group and ungroup sets of indices whenever it is convenient. In this way, the same collection of numbers can be arranged in a matrix \(A_{ij}\in\mathbb{C}^{d_{1}}\otimes\mathbb{C}^{d_{2}}\), or in a vector \(A_{k}\in\mathbb{C}^{d_{1}\times d_{2}}\) by defining \(k=(i-1)d_{2}+j\).
[Graphical equation (1): nodes-and-edges depiction of vectors, matrices and general tensors; diagram not reproduced.]
This flexibility in representation makes it clear that tensors are essentially linear mappings between tensor product spaces, similar to how matrices represent linear mappings between vector spaces. Furthermore, it provides a convenient framework to decompose tensors via matrix factorization algorithms, as we will see shortly below.
Indices of tensors can be _contracted_. The contraction of indices (of the same or different tensors) consists of a generalization of the scalar product between vectors, this is, the sum of products of the elements in the corresponding axes. For two arbitrary tensors, \(R\) and \(S\), of ranks \(r\) and \(s\), respectively, the contraction over one of its indices is given by
\[\sum_{\alpha=1}^{d}R_{i_{1}\dots i_{m-1}\,\alpha\,i_{m+1}\dots i_{r}}\,S_{j_{1}\dots j_{n-1}\,\alpha\,j_{n+1}\dots j_{s}}=T_{i_{1}\dots i_{m-1}i_{m+1}\dots i_{r}}^{j_{1}\dots j_{n-1}j_{n+1}\dots j_{s}}, \tag{2}\]
where we have assumed that the dimensions of the indices \(i_{m}\) and \(j_{n}\) are both \(d\). This tensor \(T\) is of rank \((r+s-2)\). In the graphical notation, the contraction is represented by connecting the corresponding edges of the tensors:
[Graphical equation (3): the contraction of Eq. (2) drawn by joining the corresponding edges of the two tensors; diagram not reproduced.]
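In code, such a contraction is a single einsum call. The following minimal PyTorch sketch contracts a rank-3 tensor with a rank-2 tensor over one shared index of dimension 4, as in Eq. (2); the shapes are arbitrary.

```
import torch

R = torch.randn(3, 4, 5)   # indices (i1, alpha, i2)
S = torch.randn(4, 6)      # indices (alpha, j1)

# Sum over the shared index alpha; the result has rank 3 + 2 - 2 = 3
T = torch.einsum("iaj,ak->ijk", R, S)
print(T.shape)             # torch.Size([3, 5, 6])
```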
Contraction enables connecting nodes to form graphs, with the only requirement being that the nodes must have the same dimensions along the connected axes. By contracting all the connected edges in the graph, one can then contract a whole tensor network to obtain a single tensor that preserves the remaining dangling edges:
[Graphical equation (4): contraction of a full tensor network into a single tensor that keeps the remaining dangling edges; diagram not reproduced.]
Conversely, one may obtain a tensor network by splitting tensors via the various existing decomposition methods [50]. A commonplace method is exploiting grouping and ungrouping of indices to reshape tensors as matrices and applying the singular value decomposition,
[Graphical equation (5): splitting of a tensor via the singular value decomposition; diagram not reproduced.]
since this provides, in many cases of relevance, a form of the tensor with desirable properties [4]. In particular, truncations of the singular value decomposition give the best low-rank approximations of the original tensor under the Frobenius and \(\ell_{2}\) norms.
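A minimal PyTorch sketch of this reshape-and-split step is shown below; the tensor shapes and the truncation rank chi are arbitrary choices for illustration.

```
import torch

T = torch.randn(4, 5, 6)
M = T.reshape(4 * 5, 6)                         # group the first two indices
U, S, Vh = torch.linalg.svd(M, full_matrices=False)

chi = 3                                         # keep the chi largest singular values
A = (U[:, :chi] * S[:chi]).reshape(4, 5, chi)   # absorb singular values into U
B = Vh[:chi, :]

print(A.shape, B.shape)   # torch.Size([4, 5, 3]) torch.Size([3, 6])
```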
In machine learning, there are two main applications of tensor networks. The first is the decomposition, or approximation, of large tensors in networks with a smaller number of entries. This procedure is known as tensorization [34, 35, 36]. Tensorization allows not only to reduce the number of elements to be stored (and thus reducing the storage memory), but also to reduce the computation time. Consider a linear layer within a neural network, where both the input and output dimensions are \(d^{n}\). The multiplication of the corresponding weight matrix with the input data vector requires \(O(d^{2n})\) operations.
On the other hand, consider the same operation between factorizations (or approximations) of the vector and the matrix into tensor networks. For this example, let us use the well-known Matrix Product State (MPS) [4, 25] form for the vector and a Matrix Product Operator (MPO) [51] form for the matrix, both with \(n\) nodes, each equipped with dangling edges of dimension \(d\) (this is known as the _physical dimension_ in the physics literature) and connected to their neighbors via edges of dimension \(D\) (this is known as the _bond dimension_). Thus, the MPS requires only \(O(ndD^{2})\) elements to represent the vector, and the MPO requires \(O(nd^{2}D^{2})\) elements to represent the matrix.
[Graphical equation (6): MPS and MPO factorizations of the vector and the matrix; diagram not reproduced.]
The vector-matrix multiplication is carried out by contracting the MPS with the MPO, connecting all the dangling edges of the MPS to the input edges of the MPO:
[Graphical equation (7): contraction of the MPS with the MPO along the physical edges; diagram not reproduced.]
The contraction of these connected edges requires only \(O(nd^{2}D^{4})\) operations. Note that the contractions in the boxes in Eq. (7) all involve different tensors and thus can be performed in parallel, thereby giving even greater savings in practice.
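The sketch below illustrates one of the boxed local contractions of Eq. (7) with plain PyTorch einsum; the physical dimension d and bond dimension D are arbitrary, and the index ordering of the site tensors is a convention chosen only for this example.

```
import torch

d, D = 2, 4
mps_site = torch.randn(D, d, D)       # (left bond, physical, right bond)
mpo_site = torch.randn(D, d, d, D)    # (left bond, input, output, right bond)

# Contract the physical edge of the MPS tensor with the input edge of the MPO
# tensor; this costs O(d^2 D^4) operations per site
out = torch.einsum("asb,cste->actbe", mps_site, mpo_site)
new_site = out.reshape(D * D, d, D * D)   # MPS tensor with bond dimension D*D
print(new_site.shape)                     # torch.Size([16, 2, 16])
```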
The second main application of tensor networks in machine learning is as architectures themselves. In tensor network architectures, every cell of every tensor is a trainable parameter, and so they implement linear models operating in high-dimensional tensor-product spaces. Taking the example of classification, the contraction of a tensor form of the input with the tensor network gives the corresponding prediction, that is used to compute a loss function whose gradients are used to adjust the model parameters. This approach was pioneered by [23; 24] with MPS, and since has been applied to many different problems [27; 33; 52] and network architectures [29; 30; 31; 53].
## III TensorKrowch
### Motivation
The works [23; 24] gave rise to the area of tensor network machine learning, where nowadays many different network architectures are being used for different purposes. However, the common choices for architectures are inherited from the physics community, which has broad experience in tensor networks but is restricted to 1-D and select 2-D architectures. It is for this reason that, currently, there are libraries that are well-integrated into deep learning frameworks but are optimized for specific tensor networks [41] or put their focus on tensorization [42; 43]. On the other hand, there exist libraries like TensorNetwork [44], that offer more generality in terms of architectures at the cost of more complications at the time of integrating them into machine learning pipelines. Furthermore, the available libraries tend to have a broader focus on physics applications, and thus features that are of interest in machine learning, such as custom parameter initializations, hyperparameter optimization, or even the construction of complicated models, are not considered.
TensorKrowch is developed with the aim of providing a comprehensive framework where one can rapidly prototype tensor network layers as standalone models and integrate them in deep machine learning models. Its main characteristics are the following:
**Generality:** TensorKrowch allows users to construct any tensor network by creating and connecting nodes. Users can selectively choose which nodes to train and to define arbitrary operations between their components. In addition to this, TensorKrowch has pre-built classes for the most common types of networks.
**Ease of use:** At the core of TensorKrowch is a Pythonic approach, presenting a simple interface with building blocks and operations that are combined in order to create complex models and training pipelines. These primary objects and operations are described in sections IV and V, respectively.
**Optimization:** While the interface remains simple, numerous optimizations are implemented in the background in order to perform operations efficiently and to reduce redundant computations during training. Section VII details the optimizations that are performed.
**Integration:** TensorKrowch is written on top of a well-established deep learning framework, namely PyTorch [47]. This integration enables the creation of TensorNetwork models that function like any other PyTorch layer, seamlessly combining with existing components. Consequently, TensorKrowch fully leverages the capabilities of PyTorch, including GPU acceleration, automatic differentiation, and easy composition of multi-layer networks.
Moreover, TensorKrowch can be also used for tensorization purposes, by substituting or approximating dense matrices in deep PyTorch models and applying the built-in matrix factorization techniques.
### Installation and requirements
TensorKrowch is a Python library available on Linux, Mac and Windows operating systems. It can be installed via pip with the following command line:
```
pip install tensorkrowch
```
The basic requirement of TensorKrowch is PyTorch [47], which is used as machine learning backend. Additionally, TensorKrowch requires opt_einsum [54], used in the implementation of the einsum operation and that allows for the automatic search of good network contraction paths via greedy algorithms.
The source code for TensorKrowch is hosted on GitHub at
```
[https://github.com/joserapa98/tensorkrowch](https://github.com/joserapa98/tensorkrowch)
```
and is distributed under the MIT License. More details about the software, packaging information, and guidelines for contributing to TensorKrowch are included in Sec. VIII.
### Basic Usage
TensorKrowch provides a set of basic components, namely Node, Edge, TensorNetwork, and variants of these, that can be combined to build trainable models. The usual workflow consists in the following steps: 1) Define the structure of the graph by creating nodes and connecting them; and initialize the tensors within those. 2) Specify which nodes will be used to store the input tensors coming from the training dataset. 3) Define the contraction algorithm to reduce the whole network to a single output tensor, which one can input to any other layer that might follow the tensor network.
To carry out these steps, one needs to know how to create and combine the different building blocks of the model, and how to operate them to contract the network. These topics will be covered in detail in Section IV and Section V, respectively.
Once the custom tensor network is defined, the process of training it is analogous to what one would do in vanilla PyTorch. To illustrate this with an example, let us introduce one of the built-in classes provided by TensorKrowch, MPSLayer. This class is a variation of the traditional MPS with an additional node that has a dangling edge representing an output dimension. As a result, the MPSLayer is contracted into a vector of the specified dimension.
```
import torch
import tensorkrowch as tk

# Instantiate model
mps = tk.models.MPSLayer(n_features=1000,
                         in_dim=2,
                         out_dim=10,
                         bond_dim=10)
```
Tensor network models expect as input a rank-3 tensor with dimensions \(b\times n\times d\), where \(b\) is the batch size, \(n\) the number of feature vectors, and \(d\) their dimension. This tensor, which is a batch of sequences of vectors, represents a tensor network itself, given by the tensor product of all the vectors corresponding to each batch element. These vectors will be placed in specific nodes so that the network can be contracted with these new data.
However, data tensors tend to come in the form of \(b\times n\) matrices, requiring a previous transformation to get the proper rank-3 tensor. That is, for each batch element there is a vector of features, that has to be turned into a sequence of feature vectors. To accomplish this, each feature has to be embedded into a vector space. TensorKrowch provides three common embedding functions, namely unit, add_ones and poly, the first two being introduced in [23] and [24], respectively.
```
# Shape: batch_size x n_features
data = torch.randn(100, 1000)

# Shape: batch_size x n_features x in_dim = 2
data = tk.embeddings.unit(data)

# Shape: batch_size x out_dim = 10
labels = torch.randn(100, 10)
```
To reduce repetition of ancillary computations due to dealing with the tensor network structure, TensorKrowch introduces some extra steps that need to be carried out before starting training. These involve setting memory modes and tracing the model; both advanced features will be covered in Section VII.
```
# Set memory modes
mps.auto_stack = True
mps.auto_unbind = False

# Trace
# Shape: batch_size = 1 x n_features x in_dim = 2
example = torch.zeros(1, 1000, 2)
mps.trace(example)
```
To be able to train, one needs to set a loss function to minimize and an optimizer to update the parameters of the model according to their gradients. It is important that the optimizer is set after the model has been traced, since the parameters of the model might change during this process.
```
loss_fun = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(mps.parameters(),
                             lr=1e-4,
                             weight_decay=1e-2)
```
Finally, the above ingredients can be put together in the training loop.
```
for epoch in range(n_epochs):
    # Contract tensor network with input data
    scores = mps(data)

    # Compute the loss
    loss = loss_fun(scores, labels)

    # Backpropagate and update parameters
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
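Once trained, the model can be used for inference like any other PyTorch module. The following lines are a possible sketch (the new data and batch size are arbitrary), reusing the mps model and the embedding from above.

```
# Evaluate the trained model on new data
with torch.no_grad():
    new_data = tk.embeddings.unit(torch.randn(20, 1000))
    scores = mps(new_data)
    predictions = scores.argmax(dim=1)
```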
## IV The building blocks
The main structure of TensorKrowch is similar to that of TensorNetwork [44]. Namely, the main object in TensorKrowch is a TensorNetwork, that is populated with Nodes. These have Edges that are connected according to the desired structure. Creating a TensorNetwork is done as follows:
```
import torch
import tensorkrowch as tk

net = tk.TensorNetwork()
```
Nodes are the basic elements that make up a TensorNetwork. They serve as containers for PyTorch's torch.Tensor objects and hold essential information for building, operating, and training the network. Key aspects associated with nodes include shape, tensor, axes, edges, network membership, and successors. Tensors themselves are not actually contained within nodes; instead, they are stored in the shared memory system that is accessible to all nodes in the TensorNetwork (see Section VII for details). To create a Node one can specify its name, shape, names for its axes, network, and an initialization method to create a new tensor for it. Together with the code we present a graphical notation to depict not only the tensor network but all the elements that form the TensorNetwork object.
```
node1 = tk.Node(name="node1",
                shape=(2, 5, 2),
                axes_names=("left", "input", "right"),
                network=net,
                init_method="randn")
```
Specifying a shape automatically creates a set of edges for the Node. An Edge is nothing more than an object that wraps references to the nodes it connects. Thus it stores information like the nodes it connects, the corresponding nodes' axes it is attached to, its size, etc.
To connect nodes, one can access the desired edges using their names, and connect them with the caret operator:
```
# Equivalent to initializing Node with init_method="randn"
node2 = tk.randn(name="node2",
                 shape=(2, 5, 2),
                 axes_names=("left", "input", "right"),
                 network=net)

node1["right"] ^ node2["left"]
```
It is important to note that the ^ operation does not perform contractions, since it may be done on edges of empty nodes. As will be shown below, contraction between tensors is performed using the @ operator, which contracts tensors along all the edges that connect both nodes.
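For instance, once node1 and node2 hold tensors and are connected as above, a single contraction can be sketched as:

```
# Contract node1 and node2 along their connected edge; the resultant node keeps
# the remaining dangling edges of both nodes
result = node1 @ node2
```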
Edges can be designated as _batch_ edges if one includes the string "batch" in the name of the corresponding axis. This allows for performing batch contractions if the nodes involved in the operation both have batch edges sharing the same name. This functionality is analogous to that of PyTorch functions like, for instance, torch.bmm, used to perform batch matrix multiplication.
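As a point of reference, the plain-PyTorch analogue of such a batch contraction is shown below (shapes are arbitrary): the batch index is carried along and never contracted.

```
import torch

b, n, m, k = 100, 3, 4, 5
A = torch.randn(b, n, m)
B = torch.randn(b, m, k)
C = torch.bmm(A, B)   # shape (b, n, k); each batch element is multiplied independently
print(C.shape)
```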
### Types of nodes
TensorKrowch features several types of nodes, that have different functionalities and goals within the library.
**Parameter nodes:** In analogy with PyTorch having torch.Tensor to denote a tensor of non-trainable entries and torch.nn.Parameter for tensors of trainable parameters, TensorKrowch has Node to denote containers of tensors with non-trainable entries and ParamNode to denote containers of tensors with trainable entries. One can instantiate a ParamNode with:
```
paramnode = tk.ParamNode(name="paramnode",
                         shape=(2, 2),
                         axes_names=("left", "right"),
                         network=net,
                         init_method="randn")

paramnode["left"] ^ node1["left"]
paramnode["right"] ^ node2["right"]
```
In the graphical notation we will use squares for denoting trainable nodes.
**Leaf nodes:** These are the nodes that form the TensorNetwork, along with the data nodes (see below). Usually, leaf nodes will be the trainable nodes. All leaf nodes contain PyTorch leaf tensors, this is, tensors that are not generated by operations tracked by PyTorch's automatic differentiation engine. By default, all nodes are leaf. In the graphical notation, these are the blue nodes.
**Data nodes:** These are nodes that are used to store the tensors coming from input data. It is possible to instantiate data nodes directly by specifying data=True in the initialization of Node. However, data nodes will be usually created when specifying where they should be put in the network using set_data_nodes (see Section VI). Graphically, data nodes will be depicted as red circles.
```
data1 = tk.Node(name="data1",
                shape=(5,),
                axes_names=("feature",),
                data=True,
                network=net)
data2 = tk.Node(name="data2",
                shape=(5,),
                axes_names=("feature",),
                data=True,
                network=net)

# We can add input data tensors to the nodes
data1.tensor = torch.randn(5)
data2.tensor = torch.randn(5)

node1["input"] ^ data1["feature"]
node2["input"] ^ data2["feature"]
```
**Virtual nodes:** These nodes are internal nodes that TensorKrowch uses as shortcuts when one wishes to have the same tensor in different nodes, or to process auxiliary information without adding new leaf nodes to the network. This is useful when defining uniform tensor networks [55], as all their nodes simply use a single tensor stored in a virtual node instead of one tensor per node. Virtual nodes are intended to be mainly internal, but they can be created manually using the argument virtual=True. This is needed when defining custom uniform architectures beyond those natively implemented, such as UMPS, UPEPS or UTree. Virtual nodes are represented by empty nodes with dashed borders.
```
virtual = tk.Node(name="virtualnode",
                  shape=(2, 5, 2),
                  axes_names=("left", "input", "right"),
                  virtual=True,
                  network=net,
                  init_method="randn")

# Put virtual's tensor in node1 and node2
node1.set_tensor_from(virtual)
node2.set_tensor_from(virtual)
```
**Resultant nodes:** The result of a contraction of tensors (stored in their respective nodes) is another tensor that must be stored in a new node. These nodes are resultant nodes, which coexist in the TensorNetwork object together with the original nodes, inheriting their edges. This allows for subsequent contraction with other neighbouring nodes. Resultant nodes are automatically created upon contracting two nodes on a same network, or when tracing the model on its first run (see Section V.1). Resultant nodes are displayed as green circles.
## V Operations between nodes
Operations between TensorKrowch Nodes are instances of a class, Operation, that is optimized to avoid unnecessary repeating of basic computations (see Section VII for details). Operations create resultant nodes that inherit information from the parent nodes. TensorKrowch implements many operations that are extensions of those in vanilla PyTorch, like permute, tprod (outer in PyTorch), mul, add and sub. The complete list of implemented operations is available in the corresponding page of the documentation. In addition to these, TensorKrowch includes implementations of operations that are inherent to tensor networks. These are:
**Contract:** Enables contraction of all the edges connecting two nodes, although it is also possible to use it to contract only a selected range of edges. Furthermore, batch contraction is automatically performed if some edge is of batch type (recall, with the string "batch" in its name). This is the most basic operation one can use, allowing for the contraction of the whole network just by implementing a contraction path.
```
node1 = tk.randn(shape=(2, 4, 3, 6, 2))
node2 = tk.randn(shape=(3, 2, 5, 4))

# Edges can also be accessed by index
node1[2] ^ node2[0]
node1[4] ^ node2[1]

# node1 and node2 will belong to the same network upon connection
result = node1 @ node2

# result will have all the non-contracted edges from node1 and node2
```
**Split:** As an inverse of the contract operation, split factorizes the tensor in a node into two, selecting which edges should go to each resultant node. Factorization algorithms include singular value and QR decompositions, with additional functionalities such as selecting the desired amount of singular values to keep, modifying the rank of the resultant nodes accordingly. This can be useful to reduce the bond dimension in cases where it might explode during the contraction algorithm. Furthermore, by iterative application of split, arbitrary tensors can be decomposed into tensor network formats, which enables defining custom tensorization routines [34; 35; 36].
```
node = tk.randn(
    shape=(2, 7, 3, 4),
    axes_names=("left_0", "left_1", "right_0", "right_1"))

node_left, node_right = tk.split(
    node,
    ["left_0", "right_0"], ["left_1", "right_1"],
    mode="svd",
    rank=5)
```
**Stack:** In many situations, stacking a set of tensors into a larger tensor with one extra dimension speeds up operations by allowing for parallelizing contractions (see Section VII for more details). Creating such stacks is achieved with stack. The node resulting from this operation is a special type of node, namely a StackNode or ParamStackNode,
that is only intended for internal use.
```
net = tk.TensorNetwork()

nodes = [tk.randn(shape=(2, 4, 2),
                  axes_names=("left", "input", "right"),
                  network=net)
         for _ in range(n_nodes)]

stacknode = tk.stack(nodes)
```
**Unbind:** As the counterpart of stack, this enables one to unbind a stack of resultant nodes, returning them in a list. Only StackNodes and ParamStackNodes can be unbound.
```
stacknode = tk.stack(nodes)
result = tk.unbind(stacknode)
```
**Einsum:** Allows for the implementation of complex contractions along sets of edges. Instead of contracting along a connected edge, one can define a contraction path following the Einstein summation convention to contract along several edges at once. This operation uses opt_einsum [54] at its core, making specific checks and simplifications beforehand to adhere to the rules and structures defined in TensorKrowch. For example, indices used twice can only correspond to nodes already connected by an edge in the specified axes. There is a variant of this operation, stacked_einsum, which allows using lists of nodes as inputs to perform stack followed by einsum in the same operation.
```
node1 = tk.randn(shape=(10, 15, 100),
                 axes_names=("left", "right", "batch"))
node2 = tk.randn(shape=(15, 7, 100),
                 axes_names=("left", "right", "batch"))
node3 = tk.randn(shape=(7, 10, 100),
                 axes_names=("left", "right", "batch"))

node1["right"] ^ node2["left"]
node2["right"] ^ node3["left"]
node3["right"] ^ node1["left"]

result = tk.einsum("ijb,jkb,kib->b", node1, node2, node3)
```
For a comprehensive explanation of all these operations and the arguments they admit, the reader is referred to the TensorKrowch documentation.
### Trace and Reset
Every operation in TensorKrowch returns a new resultant Node that stores the output tensor and inherits the non-contracted edges of its parents. Therefore, in order to reduce the creation of redundant nodes and the amount of memory used for this purpose, it is useful to generate void containers for the resultant tensors. This is achieved with the trace operation, which can be called using an example input with batch dimension 1 in order to create all the necessary resultant nodes in the fastest way possible. Further details and explicit comparisons can be found in Section VII.
```
# Shape: batch_size x n_features x feature_dim
data = torch.randn(100, 20, 5)
example = torch.zeros(1, 20, 5)

net.trace(example)
```
The inverse of the trace function is reset. This function deletes all the resultant nodes created during training, resetting the network to its initial state. This is useful when one wants to make changes to the structure of the network, to switch on/off the memory modes, or to save a trained model (otherwise, calling torch.save(net.state_dict())
will also export the tensors in the resultant nodes).
```
# After training
net.reset()

# Save model
torch.save(net.state_dict(), "net.pt")

# Load model
new_net = tk.TensorNetwork()
new_net.load_state_dict(torch.load("net.pt"))
```
## VI Building and training tensor network models
Sections IV and V showed how one can create nodes belonging to a TensorNetwork, and operate them to contract the network. However, although this functionality might be useful for experimentation, the main usage of TensorKrowch will be to define custom models as subclasses of TensorNetwork. This allows, for instance, instantiating tensor networks that work like any other PyTorch layer.
The workflow to define custom tensor networks is similar to how one defines custom layers in PyTorch. There, one needs to subclass torch.nn.Module, to define the parameters and architecture of the layer in the __init__ method, and to specify how input data is processed by the layer in the forward method.
Similarly, for defining a custom tensor network in TensorKrowch one needs to subclass TensorNetwork, overriding the following methods:
__init__: Defines the nodes of the network and how they are connected.

```
class CustomNetwork(tk.TensorNetwork):

    def __init__(self):
        super().__init__(name="CustomNetwork")

        self.node1 = tk.randn(shape=(2, 5, 2),
                              axes_names=("left", "input", "right"),
                              name="node1",
                              network=self)
        self.node2 = tk.randn(shape=(2, 5, 2),
                              axes_names=("left", "input", "right"),
                              name="node2",
                              network=self)
        self.paramnode = tk.randn(shape=(2, 2),
                                  axes_names=("left", "right"),
                                  name="paramnode",
                                  network=self,
                                  param_node=True)

        self.node1["right"] ^ self.node2["left"]
        self.paramnode["left"] ^ self.node1["left"]
        self.paramnode["right"] ^ self.node2["right"]
```
set_data_nodes (optional): Creates the data nodes where the data tensors will be placed. Usually, it will just select the edges to which the data nodes should be connected, and call the parent method.
```
class CustomNetwork(tk.TensorNetwork):
    # ...
    def set_data_nodes(self):
        # Collect edges to which data nodes will be connected
        input_edges = [self.node1["input"], self.node2["input"]]

        # Define number of batch indices for the input
        num_batch_edges = 1

        # Call parent method
        super().set_data_nodes(input_edges, num_batch_edges)
```
add_data (optional): Places input data into the previously specified data nodes. Commonly, all data nodes will have the same shape, namely \(b\times d\), where \(b\) is the batch size and \(d\) the feature dimension. Assuming there are \(n\) of these nodes (typically one per feature), the input to add_data must be a tensor of dimension \(b\times n\times d\). By default, the input tensor will then be unbound at its second axis, delivering each slice of shape \(b\times d\) to the corresponding data node. However, this method can be overridden to customize how data is set into the data nodes.
contract: Very much like the forward method in PyTorch, this is the main method that describes how the components of the network are combined. In contrast to vanilla PyTorch, however, in a TensorKrowch TensorNetwork the forward method shall not be overridden, since its goal is just to call set_data_nodes (if needed), add_data, and contract, and then return the tensor corresponding to the last resultant node. Instead, one should customize the contract method. TensorKrowch does not implement algorithms for searching optimal contraction paths [56]. Thus, one must specify custom contraction algorithms for each user-defined tensor network, via einsum (recall Section V) or by any other means. As will be detailed in Section VII, the order in which TensorKrowch Operations appear in the algorithm is significant: the last Operation must be the one returning the final node.
```
class CustomNetwork(tk.TensorNetwork):
    # ...
    def contract(self):
        stack_nodes = tk.stack([self.node1, self.node2])
        stack_data = tk.stack(list(self.data_nodes.values()))

        # Stacks need to be re-connected before contraction
        stack_nodes["input"] ^ stack_data["feature"]

        stack_result = stack_nodes @ stack_data
        stack_result = tk.unbind(stack_result)
        result = stack_result[0] @ stack_result[1]

        # Last operation must return the output node
        result @= self.paramnode
        return result
```
With the subclass correctly defined, one can now instantiate the custom network and feed it with new input data tensors:
```
net = CustomNetwork()

# Pass data to the model
# Shape: batch_size x n_features x feature_dim
data = torch.randn(100, 2, 5)
result = net(data)
```
As mentioned in the beginning of the section, creating a custom network as a subclass of TensorNetwork makes the integration of tensor network layers within PyTorch models straightforward:
```
import torch.nn as nn

# Combine built-in/custom TN layers with PyTorch layers
model = nn.Sequential(
    tk.models.MPSLayer(n_features=100,
                       in_dim=2,
                       out_dim=10,
                       bond_dim=5),
    nn.ReLU(),
    nn.Linear(10, 10))
```
The last code block contains a built-in class, MPSLayer, that readily implements an MPS architecture. Similar implementations are available for PEPS and TTNs, both uniform and non-uniform. In general, tensor networks have gauge symmetries, i.e., several collections of parameters describe the exact same final tensor. In certain tensor network architectures there exist ways to define a preferred set of parameters. This is known as choosing a _canonical form_ [4, 57], and is desirable in some physics applications. In machine learning, canonical forms have been associated with benefits in terms of privacy preservation [39]. TensorKrowch allows, in the built-in MPS and TTN implementations, to compute canonical forms using the function canonicalize (or canonicalize_univocal for the univocal canonical form described in [39]).
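To make the workflow concrete, the following is a minimal training-loop sketch for a hybrid model like the one above (re-created here so the snippet is self-contained). The data is synthetic, the input shape follows the batch_size x n_features x in_dim convention used in the previous examples, the hyperparameters are illustrative, and the explicit trace/reset calls follow Section V.1:

```
import torch
import torch.nn as nn
import torch.optim as optim
import tensorkrowch as tk

model = nn.Sequential(
    tk.models.MPSLayer(n_features=100, in_dim=2, out_dim=10, bond_dim=5),
    nn.ReLU(),
    nn.Linear(10, 10))

# Synthetic data: 500 samples, 100 features of dimension 2, 10 classes
data = torch.randn(500, 100, 2)
labels = torch.randint(0, 10, (500,))

# Trace the tensor-network layer once before training (Section V.1)
model[0].trace(torch.zeros(1, 100, 2))

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(data), labels)
    loss.backward()
    optimizer.step()

# Reset before saving, so that resultant nodes are not exported
model[0].reset()
torch.save(model.state_dict(), "model.pt")
```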
## VII Time and Memory Optimizations
Operating tensor networks requires a careful handling of memory, since the memory requirement may vary drastically with the contraction path. In addition to this, it is always desirable to have fast and efficient operations in machine learning pipelines. To ensure efficient memory utilization, TensorKrowch employs a memory management scheme where nodes do not possess their own memory. Instead, the memory is stored within the TensorNetwork object, and nodes are just pointers to the corresponding addresses in this shared memory. This design choice enables memory sharing among all elements of the model, facilitating operations and allowing nodes to utilize tensors from other nodes. However, it is important to note that memory sharing is limited to elements within the same object, meaning that nodes created in different TensorNetworks will not share memory. By adopting this memory management approach, TensorKrowch incorporates a range of optimizations that effectively reduce both time and memory overheads:
### Operations
Tensor network operations in TensorKrowch are not simple functions, but rather instances of a class, Operation, that is designed to minimize redundant steps during training. Each node operation consists of two functions: one that is executed the first time the operation is called, and one that is executed in every subsequent call with the same arguments. Although these functions are similar, the former makes extra computations regarding the creation of the resultant nodes and some auxiliary operations that yield the same result in every call. For instance, when contracting two nodes, tensors are typically permuted first; how this permutation is carried out is always the same, in spite of the fact that the tensors themselves may be different.
Furthermore, to keep track of repeated calls to an Operation, a new object is created during the first run: Successor. This is a class intended for internal use that acts as a cache memory, storing the arguments that were used to call the operation, some hints regarding the auxiliary tensor-network-related computations, and a reference to the resultant nodes created. Hence, once an operation has been called, both the parent and children nodes are determined, and only their tensors will change in further contractions. This makes it possible to reduce all the code of the contraction algorithm, which may include plain Python code to collect parent nodes, into a sequence of calls to TensorKrowch Operations. Because of this simplification, the order in which these operations are called is relevant. Consequently, the last operation must always be the one returning the final node that corresponds to the contraction of the entire network.
These two optimizations (having different functions for different calls, and exploiting cache memory) break the whole contraction into a set of basic tensor operations that are computed sequentially, thus improving the efficiency of the training process.
### Trace
In Section V.1 the trace operation was introduced as a means of keeping heavy auxiliary operations involved in the first run of the contraction out of the training loop. Tracing the model not only saves time, but also saves memory. While tracing a TensorNetwork model, a new memory is created to keep track of which nodes are involved in each operation of the contraction algorithm. This makes it possible to free up the memory of data or resultant nodes that have already taken part in some operation but are not going to be needed any more in the contraction. An explicit example of the difference in memory usage that tracing the model produces can be found in Figure 1.
### Memory Modes
Additionally, there are two modes that can change how tensor networks utilize their memory. These modes, auto_stack and auto_unbind, allow the stack and unbind operations, respectively, to reuse information stored in other parts of the memory instead of recalculating it in every contraction. This helps accelerate both training and inference. To illustrate how these modes affect the efficiency of the tensor network model, Figure 2 presents a comparison of running times for the built-in MPSLayer class when these modes are activated or deactivated.
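A minimal sketch of toggling these modes is shown below; it assumes that auto_stack and auto_unbind are exposed as boolean attributes of the TensorNetwork object (the exact interface, as well as the contraction options inline_input and inline_mats used in Figure 2, should be checked against the documentation):

```
import torch
import tensorkrowch as tk

mps = tk.models.MPSLayer(n_features=100, in_dim=2, out_dim=10, bond_dim=5)

# Assumed boolean attributes controlling the memory modes
mps.auto_stack = True
mps.auto_unbind = False

mps.trace(torch.zeros(1, 100, 2))
result = mps(torch.randn(100, 100, 2))
```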
## VIII Additional Library Information
### Limitations and future extensions
Tensor networks have shown promise in various fields and possess valuable properties for machine learning. However, their application in this domain is relatively recent, leaving much to be explored in terms of best practices such as initialization techniques, optimizers, and architectures. In fact, although TensorKrowch allows the creation of any tensor network, some limitations exist for using certain graphs as models. For example, contracting PEPS is known to be #P-hard [32], and finding optimal contraction paths in arbitrary graphs is also #P-hard [58]. Therefore, TensorKrowch is presented as a tool that enables rapid prototyping to explore the best techniques to be used in this domain.
We have discussed the various optimizations carried out by TensorKrowch to avoid redundant calculations as much as possible, ensuring both efficiency and simplicity. However, for scenarios where efficiency is paramount, such as quantum computer simulation [14; 15] or tensorization of neural networks for edge computing [59], there may be
Figure 1: Comparison of the maximum memory usage of one contraction of the built-in MPSLayer tensor network, when the model is traced or not, using different bond dimensions. Contraction is performed in a training regime: 1) an example data tensor is passed through the model, 2) gradients are computed via backpropagation, and 3) parameters are updated according to the gradients. All the contractions are computed in CPU using a batch size of 500, with both memory modes auto_stack and auto_unbind set to True, both contraction arguments inline_input and inline_mats also set to True, and the following arguments for the MPSLayer model: n_features=1000, in_dim=2, out_dim=10. All the experiments were run on an Intel Core i7-11800H CPU with 32GB of RAM.
libraries with better resource management. Nevertheless, TensorKrowch remains useful even in those cases, as it can be part of a preliminary prototyping step.
Since TensorKrowch has a strong focus on machine learning, the priority has been to implement tensor networks understood as architectures that parameterize complex families of functions. Alternative perspectives, such as considering tensor networks as quantum states, have assumed a secondary role. Therefore, there are currently no native implementations of many operations that are interesting from a physical perspective and that allow to explore questions regarding explainability [29, 38]. This will be one of the first steps to be addressed in the future.
On the other hand, other interesting operations include multiple tensor decompositions known in the numerical linear algebra community [50]. Although TensorKrowch allows for manual tensorization of neural network layers through iterative splitting, it does not yet have native implementations that directly transform linear or convolutional layers into their tensorized counterparts. This is another upcoming objective to develop.
Finally, there are numerous other methods that will progressively be implemented, such as allowing for the modification of the number of edges a node has through a reshape operation, or incorporating visual feedback for observing the constructed graph, akin to the graphical notation used in this paper.
Figure 2: Comparison of running times of one contraction of the built-in MPSLayer tensor network, using different bond dimensions. Contraction is performed in different regimes: training/inference, parallel/inline algorithm, CPU/GPU execution, and using different combinations for the options auto_stack and auto_unbind. For training, 1) an example data tensor is passed through the model, 2) gradients are computed via backpropagation and 3) parameters are updated according to the gradients. For inference, only the example data tensor is passed to perform one contraction of the model. Parallel and inline refer to the two possible algorithms that can be used to contract MPSLayer, specified by the argument inline_mats. The argument inline_input is always set to True. Solid lines represent CPU times, dashed lines represent GPU times. All the contractions are computed using a batch size of 100 and the following arguments for the MPSLayer model: n_features=100, in_dim=2, out_dim=10. All the experiments were run on an Intel Core i7-11800H CPU with 32GB of RAM and an NVIDIA GeForce RTX 3070 laptop GPU.
### Documentation for TensorKrowch
The documentation for TensorKrowch is available online at [https://joserapa98.github.io/tensorkrowch](https://joserapa98.github.io/tensorkrowch). It consists of a comprehensive user's guide and an API glossary. The user's guide provides detailed information that expands upon the topics covered in this paper. It includes more information on the installation and in-depth tutorials with examples ranging from the basic usage of Nodes and Edges to building advanced hybrid neural-tensor networks like the one discussed in [26]. The API glossary is automatically generated from the docstrings (formatted comments to code objects), containing detailed information about the public functions and classes defined in TensorKrowch.
### Contribution guidelines
We welcome contributions to TensorKrowch from the wider communities interested in integrating tensor networks within machine learning frameworks, and in quantum information theory. Contributions can include feedback about the library, feature requests, bug fixes, or code contributions via pull requests.
Feedback and feature requests can be done by opening an issue on the TensorKrowch GitHub repository [46]. Bug fixes and other pull requests can be done by forking the TensorKrowch source code, making changes, and then opening a pull request to the TensorKrowch GitHub repository. Pull requests are peer-reviewed by TensorKrowch's core developers to provide feedback and/or request changes.
Contributors are expected to adhere to TensorKrowch development practices including style guidelines and unit tests. Tests are written with the PyTest Python framework and are implemented outside the module. To test installation or changes, one can download the source code from the repository, and use standard PyTest functions. For example, executing the following in a Unix terminal in the test folder runs all the tests:
```
python -m pytest -v
```
## IX Concluding remarks
Machine learning research relies heavily on rapid prototyping and iteration. By building on top of PyTorch, TensorKrowch enables these features for machine learning architectures based on tensor networks. With it, it is possible to use, off the shelf, standard tensor networks (MPS, PEPS and TTNs) either as standalone architectures or as layers in deep networks, as well as to define custom networks. In the latter case, the user can customize fine details of how the network processes input data, such as specifying which parts of the input are sent to which nodes, or fully customizing the contraction path. Our aim is that TensorKrowch contributes to the wide adoption of tensor networks by the machine learning community, and that it allows going beyond (and helps advance) the body of knowledge generated by the communities working in quantum information theory and quantum many-body physics.
In this work we have described the main logic behind TensorKrowch, as well as the broad families of cases in which it can be used. We have also detailed the features of its building blocks, and many optimizations that are performed behind the scenes in order to obtain a fast and efficient operation. Further information on these topics, as well as end-to-end examples of use with state-of-the-art tensor-network and hybrid architectures in standard datasets can be found in the documentation website. We strongly encourage any willing user to contribute to the development of the library via its repository [46].
###### Acknowledgements.
This work is supported by the Spanish Ministry of Science and Innovation MCIN/AEI/10.13039/501100011033 (CEX2019-000904-S, CEX2019-000904-S-20-4 and PID2020-113523GB-I00), the Spanish Ministry of Economic Affairs and Digital Transformation (project QUANTUM ENIA, as part of the Recovery, Transformation and Resilience Plan, funded by EU program NextGenerationEU), Comunidad de Madrid (QUITEMAD-CM P2018/TCS-4342), Universidad Complutense de Madrid (FEI-EU-22-06), and the CSIC Quantum Technologies Platform PTI-001.
|
2302.02229
|
Entanglement capacity of fermionic Gaussian states
|
We study the capacity of entanglement as an alternative to entanglement
entropies in estimating the degree of entanglement of quantum bipartite systems
over fermionic Gaussian states. In particular, we derive the exact and
asymptotic formulas of average capacity of two different cases - with and
without particle number constraints. For the later case, the obtained formulas
generalize some partial results of average capacity in the literature. The key
ingredient in deriving the results is a set of new tools for simplifying finite
summations developed very recently in the study of entanglement entropy of
fermionic Gaussian states.
|
Youyi Huang, Lu Wei
|
2023-02-04T19:53:51Z
|
http://arxiv.org/abs/2302.02229v1
|
# Entanglement capacity of fermionic Gaussian states
###### Abstract
We study the capacity of entanglement as an alternative to entanglement entropies in estimating the degree of entanglement of quantum bipartite systems over fermionic Gaussian states. In particular, we derive the exact and asymptotic formulas of average capacity of two different cases - with and without particle number constraints. For the latter case, the obtained formulas generalize some partial results of average capacity in the literature. The key ingredient in deriving the results is a set of new tools for simplifying finite summations developed very recently in the study of entanglement entropy of fermionic Gaussian states.
_Keywords_: quantum entanglement, entanglement capacity, fermionic Gaussian states, random matrix theory, orthogonal polynomials, special functions
## 1 Introduction
Entanglement is a fundamental feature of quantum mechanics and it is also the resource that enables quantum information processing as an emerging technology. The understanding of entanglement is crucial to a successful exploitation of advances of the quantum revolution. In the past decades, there has been considerable progress in estimating the degree of entanglement over different models of generic states, where one of the most extensively studied areas is entropy-based estimation using, for example, von Neumann entropy [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], quantum purity [7, 12, 13, 14, 15, 16], and Tsallis entropy [17, 18] as entanglement indicators. These results mainly focus on the statistical behavior of entanglement entropies over generic state models, such as the well-known Hilbert-Schmidt ensemble [1, 2, 3, 4, 5, 6, 8, 11, 12, 15, 17, 18], the Bures-Hall ensemble [7, 9, 10, 13, 14, 16, 19], and the fermionic Gaussian ensemble [20, 21, 22, 23]. Besides entropies, there is a growing interest in understanding the capacity of entanglement as another entanglement quantifier. Similarly to entanglement entropy as an analogy to the thermal entropy, the entanglement capacity introduced in [24] serves as an analogy to thermal heat capacity. It is also identified as a critical value to distinguish integrable systems from chaotic ones [25]. In the literature, different properties of entanglement capacity have been numerically studied in [25, 26]. Moreover, exact formulas of the average capacity have recently been obtained for the Hilbert-Schmidt ensemble [27, 28, 29] and the Bures-Hall ensemble [29]. For the fermionic Gaussian ensemble without particle number constraint, the average capacity of equal subsystem dimensions is derived in [21, 30], whereas the corresponding exact formula in the general case of unequal dimensions remains open.
In this work, we compute the exact average entanglement capacity valid for any subsystem dimensions of fermionic Gaussian states for the cases of with and without particle number constraints. A key ingredient in obtaining the results is the set of tools for simplifying finite summations developed very recently [23] in the study of von Neumann entropy of the fermionic Gaussian ensemble. Our exact results also lead to the limiting values of average capacity when the subsystem dimensions approach infinity with a fixed dimension difference. Simulations are performed to numerically verify the derived results.
The rest of the paper is organized as follows. In section 2, we first outline the problem formulation before presenting our main results of the exact mean capacity of fixed particle numbers and arbitrary particle numbers in proposition 1 and proposition 2, respectively. The corresponding asymptotic capacity formulas are given in corollary 1. Proofs to the results are provided in section 3. In appendix A, we list summation representations of the integrals involved in the proofs. Summation identities utilized in the simplification are listed in appendix B. The coefficients of some intermediate results appeared in the derivation are provided in appendix C.
## 2 Problem formulation and main results
### Problem formulation
We first introduce the formulation that leads to the entanglement capacity of fermionic Gaussian states with and without particle number constraints as well as the corresponding statistical ensembles.
A system of \(N\) fermionic degrees of freedom can be formulated in terms of a set of fermionic creation and annihilation operators \(\hat{a}_{i}\) and \(\hat{a}_{i}^{\dagger}\), \(i=1,\ldots,N\), which obey the canonical anti-commutation relation,
\[\{\hat{a}_{i},\hat{a}_{j}^{\dagger}\}=\delta_{ij}\mathbb{I},\qquad\{\hat{a}_{i },\hat{a}_{j}\}=0=\{\hat{a}_{i}^{\dagger},\hat{a}_{j}^{\dagger}\}, \tag{1}\]
where \(\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}\) denotes the anti-commutation relation and \(\mathbb{I}\) is an identity operator. These fermionic modes can be equivalently described via the Majorana operators \(\gamma_{l}\), \(l=1,\ldots,2N\), and
\[\hat{\gamma}_{2i-1}=\frac{\hat{a}_{i}^{\dagger}+\hat{a}_{i}}{\sqrt{2}},\qquad \hat{\gamma}_{2i}=\imath\frac{\hat{a}_{i}^{\dagger}-\hat{a}_{i}}{\sqrt{2}} \tag{2}\]
with \(\imath=\sqrt{-1}\) denoting the imaginary unit. The Majorana operators also satisfy the anti-commutation relation
\[\{\hat{\gamma}_{l},\hat{\gamma}_{k}\}=\delta_{lk}\mathbb{I}. \tag{3}\]
By collecting the Majorana operators into a \(2N\) dimensional operator-valued column vector \(\gamma=(\hat{\gamma}_{1},\ldots,\hat{\gamma}_{2N})^{\dagger}\), a system of fermionic Gaussian state is then characterized by the density operator of the form [22, 31]
\[\rho(\gamma)=\frac{\rme^{-\gamma^{\dagger}Q\gamma}}{\tr(\rme^{-\gamma^{ \dagger}Q\gamma})}, \tag{4}\]
where the coefficient matrix \(Q\) is a \(2N\times 2N\) imaginary anti-symmetric matrix as a consequence of the anti-commutation relation (3).
- _Entanglement capacity over fermionic Gaussian states without particle number constraint_
There always exists an orthogonal matrix \(M\) that diagonalizes the coefficient matrix \(Q\) by transforming \(\gamma\) into another Majorana basis \(\mu=(\hat{\mu}_{1},\ldots,\hat{\mu}_{2N})^{\dagger}=M\gamma\). A fermionic Gaussian state of arbitrary particle numbers is determined by the anti-symmetric covariance matrix [22]
\[J=-\imath\tanh(Q)=M^{T}J_{0}M, \tag{5}\]
where \(\tanh(x)\) denotes the hyperbolic tangent function [32], the matrix \(J_{0}\) takes the block diagonal form
\[J_{0}=\left(\begin{array}{ccc}\tanh(\lambda_{1})\mathbb{A}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\tanh(\lambda_{N})\mathbb{A}\end{array}\right), \tag{6}\]
and
\[\mathbb{A}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right). \tag{7}\]
In the setting of the quantum bipartite model [33], the system of \(N\) fermionic degrees of freedom can be decomposed into two subsystems \(A\) and \(B\) of dimension \(m\) and \(n\), respectively, with \(m+n=N\). We assume \(m\leq n\) without loss of generality. By restricting the matrix \(J\) to the entries from subsystem \(A\), the restricted covariance matrix \(J_{A}\) is the \(2m\times 2m\) left-upper block of \(J\). The entanglement capacity can be represented via the real positive eigenvalues \(x_{i},\ i=1,\ldots,m\) of \(\imath J_{A}\) as [25, 26, 30]
\[C=\,\sum_{i=1}^{m}u(x_{i}) \tag{8}\]
with
\[u(x)=\frac{1-x^{2}}{4}\ln^{2}\frac{1+x}{1-x}. \tag{9}\]
The resulting joint probability density of the eigenvalues \(x_{i},\ i=1,\ldots,m\) is proportional to [20]
\[\prod_{1\leq i<j\leq m}\left(x_{i}^{2}-x_{j}^{2}\right)^{2}\prod_{i=1}^{m} \left(1-x_{i}^{2}\right)^{n-m},\qquad x_{i}\in[0,1], \tag{10}\]
which is obtained by recursively applying the result in [34, proposition A.2].
- _Entanglement capacity over fermionic Gaussian states with particle number constraint_
For a fermionic Gaussian state \(\ket{F}\) with a fixed particle number \(p\), it is more convenient to formulate it with the fermionic creation and annihilation operators, and the corresponding covariance matrix can be expressed as [22, 35, 36]
\[H_{ij}=-\imath\left\langle F\right|\hat{a}_{i}^{\dagger}\hat{a}_{j}-\hat{a}_{ j}\hat{a}_{i}^{\dagger}\ket{F}. \tag{11}\]
Using the anti-commutation relation (1), the entries of the matrix \(H\) then become
\[H_{ij}=-2\imath G_{ij}+\imath\delta_{ij}, \tag{12}\]
where \(G_{ij}=\left\langle F\right|\hat{a}_{i}^{\dagger}\hat{a}_{j}\left|F\right\rangle\) denotes the entries of an \(N\times N\) matrix \(G\). There always exists a unitary transformation \(U\) that diagonalizes \(G\). In the resulting diagonal form, the first \(p\) elements are equal to \(1\) and the rest are \(0\). Therefore, one can write
\[G=U_{N\times p}U_{N\times p}^{\dagger}. \tag{13}\]
Denoting \(y_{i}\), \(i=1,\ldots,m\) the eigenvalues of the restricted matrix \(G_{A}=U_{m\times p}U_{m\times p}^{\dagger}\), the entanglement capacity can be represented as the function of \(y_{i}\) as [26]
\[C=\sum_{i=1}^{m}u(2y_{i}-1),\qquad y_{i}\in[0,1]. \tag{14}\]
The eigenvalue distribution of the random matrix \(U_{m\times p}U_{m\times p}^{\dagger}\) is the well-known Jacobi unitary ensemble [37, 38]. Here, it is more convenient to use the eigenvalues of the matrix \(\imath H\). Denoting \(x_{i}\), \(i=1,\ldots,m\), as the eigenvalues of the \(m\times m\) upper-left block of the matrix \(\imath H\), the change of variables \(x_{i}=2y_{i}-1\) in (14) leads to the entanglement capacity (8) for the case of fixed particle number. The resulting joint probability density of the eigenvalues \(x_{i},i=1,\ldots,m\), is proportional to [39]
\[\prod_{1\leq i<j\leq m}\left(x_{i}-x_{j}\right)^{2}\prod_{i=1}^{m}\left(1+x_{i} \right)^{p-m}\left(1-x_{i}\right)^{n-p},\qquad x_{i}\in[-1,1]. \tag{15}\]
It has been introduced in [23] that the joint probability densities (10) and (15) can be compactly represented by a single joint density as
\[f_{\rm FG}(x)\propto\prod_{1\leq i<j\leq m}\left(x_{i}^{\gamma}-x_{j}^{\gamma} \right)^{2}\prod_{i=1}^{m}\left(1-x_{i}\right)^{a}\left(1+x_{i}\right)^{b}. \tag{16}\]
The two considered scenarios of fermionic Gaussian states can now be conveniently identified by the above density (16), where we have
\[\gamma=1,\quad a=n-p\geq 0,\quad b=p-m\geq 0,\quad x\in[-1,1] \tag{17}\]
for fermionic Gaussian states with an arbitrary number of particles, and
\[\gamma=2,\quad a=b=n-m\geq 0,\quad x\in[0,1] \tag{18}\]
for fermionic Gaussian states with a fixed number of particles. Note that the computation of the average capacity for the two cases will be performed separately below, since the computation for an arbitrary \(\gamma\) in (16) appears difficult. We omit the normalization constants in (16) as they will not be utilized in the calculation.
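For orientation, the fixed-particle-number case admits a direct sampling scheme, since \(G_{A}=U_{m\times p}U_{m\times p}^{\dagger}\) can be built from a truncated Haar-random unitary. The following is a minimal Monte Carlo sketch (the QR-based Haar sampling and the chosen dimensions are illustrative assumptions); its output can be compared against the exact formula derived below:

```
import numpy as np

def u(x):
    # u(x) = (1 - x^2)/4 * ln^2((1+x)/(1-x)), cf. Eq. (9)
    x = np.clip(x, -1 + 1e-12, 1 - 1e-12)
    return (1.0 - x**2) / 4.0 * np.log((1.0 + x) / (1.0 - x))**2

def capacity_fixed_particle_number(m, n, p, rng):
    # One draw of C = sum_i u(x_i), x_i = 2 y_i - 1, with y_i the
    # eigenvalues of G_A = U_{m x p} U_{m x p}^dagger for Haar-random U
    N = m + n
    z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix phases -> Haar unitary
    u_mp = q[:m, :p]
    y = np.linalg.eigvalsh(u_mp @ u_mp.conj().T)
    return np.sum(u(2.0 * y - 1.0))

rng = np.random.default_rng(0)
m, n, p = 8, 12, 10
samples = [capacity_fixed_particle_number(m, n, p, rng) for _ in range(5000)]
print("Monte Carlo estimate of E[C]:", np.mean(samples))
```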
### Main results
We now present our main results on the exact and asymptotic average capacity of the fermionic Gaussian states for the cases of fixed and arbitrary number of particles.
**Proposition 1**: _Denote the summation \(\Phi_{c,d}\) as_
\[\Phi_{c,d}=\frac{c!}{(c+d)!}\sum_{k=1}^{c}\frac{(c+d-k)!}{(c-k)!}\frac{1}{k^{ 2}},\qquad c,d\in\mathbb{Z}^{+}, \tag{19}\]
_and the function \(F(a,b)\) as_
\[F(a,b) = \alpha_{0}\Big{(}2\Phi_{a+m,b}+2\Phi_{m,a}+\psi_{1}(a+b+m+1)+\psi _{1}(a+m+1)+(\psi_{0}(a+m+1) \tag{20}\] \[-\psi_{0}(a+b+m+1))^{2}-\psi_{1}(1)\Big{)}+\alpha_{1}\psi_{0}(a+ m+1)+\alpha_{2}\psi_{0}(a+1)+\alpha_{3},\]
_where the coefficients \(\alpha_{i}\) are_
\[\alpha_{0} = \frac{m(a+m)(b+m)(a+b+m)}{(a+b+2m-1)_{3}} \tag{21}\] \[\alpha_{1} = \frac{(a+b)(a+m-1)(a+m)}{(a+b+2m-1)_{2}} \tag{22}\]
\[\alpha_{2} = -\frac{a\left(a^{2}+ab+2am-a+2bm-b+2m^{2}-2m\right)}{(a+b+2m-1)_{2}} \tag{23}\] \[\alpha_{3} = \frac{m(a+m-1)}{(a+b+2m-1)_{2}}-\frac{m}{2}. \tag{24}\]
_Then, for any subsystem dimensions \(m\leq n\), the mean value of entanglement capacity (8) of fermionic Gaussian states with a fixed particle number (17) is given by_
\[\mathbb{E}[C]=F(p-m,n-p)+F(n-p,p-m). \tag{25}\]
In proposition 1,
\[\psi_{0}(x)=\frac{\mathrm{d}\ln\Gamma(x)}{\mathrm{d}x} \tag{26}\]
and
\[\psi_{1}(x)=\frac{\mathrm{d}^{2}\ln\Gamma(x)}{\mathrm{d}^{2}x} \tag{27}\]
denote respectively the digamma and trigamma functions, and
\[(a)_{n}=\frac{\Gamma(a+n)}{\Gamma(a)} \tag{28}\]
denotes the Pochhammer symbol. The proof of proposition 1 can be found in section 3.1. Note that the summation \(\Phi_{c,d}\) in (19) does not in general admit a closed-form representation for arbitrary \(c\) and \(d\). On the other hand, the sum \(\Phi_{c,d}\) may be further simplified in some special cases as discussed in the following remark.
**Remark 1**: _Substituting \(i\to k\), \(m\to c\), \(n\to c+d\) in the identity (B.12), the summation \(\Phi_{c,d}\) in (19) admits an alternative form_
\[\sum_{k=1}^{c}\frac{\psi_{0}(k+d)}{k}+\mathrm{CF}, \tag{29}\]
_where \(\mathrm{CF}\) denotes the closed-form terms in the bracket of (B.12). The sum in (29) may not be summable into a closed-form expression and is referred to as an unsimplifiable basis [6, 8, 10, 11, 21, 23, 29]. However, in the special cases of a given integer \(d\), it permits closed-form representation as a result of the identity (B.3). This corresponds to the case of fixed differences \(a=n-p\), \(b=p-m\), where the average capacity (25) admits more explicit expressions. The cases \(a=b=0,1,2\) are provided below as examples, respectively_
\[\mathbb{E}[C] = -\frac{2m^{3}}{(2m-1)(2m+1)}\left(\psi_{1}(m+1)-\frac{\pi^{2}}{4 }\right)-\frac{2m^{2}-2m+1}{2m-1} \tag{30}\] \[\mathbb{E}[C] = -\frac{2m(m+1)(m+2)}{(2m+1)(2m+3)}\left(\psi_{1}(m+1)-\frac{\pi^ {2}}{4}\right)-\frac{m(2m(m+3)+5)}{(m+1)(2m+3)}\] (31) \[\mathbb{E}[C] = -\frac{2m(m+2)(m+4)}{(2m+3)(2m+5)}\left(\psi_{1}(m+1)-\frac{\pi^ {2}}{4}\right)+\frac{4}{(m+1)(m+3)}\] (32) \[\times\left(\psi_{0}(m+1)-\psi_{0}(1)\right)-\frac{m\left(m^{2}+4 m+5\right)(4m^{3}+30m^{2}+72m+57)}{(2m+3)(2m+5)(m+1)_{3}}.\]
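As a quick numerical illustration of the first of these special cases, the expression (30) can be evaluated directly with SciPy (polygamma(1, ·) is the trigamma function), and one can watch the per-dimension value approach the volume-law coefficient obtained in corollary 1 below:

```
import numpy as np
from scipy.special import polygamma

def mean_capacity_a0_b0(m):
    # Eq. (30): exact E[C] for a = b = 0, i.e. m = n = p
    psi1 = polygamma(1, m + 1)  # trigamma function psi_1(m+1)
    return (-2.0 * m**3 / ((2*m - 1) * (2*m + 1)) * (psi1 - np.pi**2 / 4)
            - (2*m**2 - 2*m + 1) / (2*m - 1))

for m in (2, 8, 32, 128, 512):
    print(m, mean_capacity_a0_b0(m) / m)
print("pi^2/8 - 1 =", np.pi**2 / 8 - 1)
```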
**Proposition 2**: _For any subsystem dimensions \(m\leq n\), the mean value of entanglement capacity (8) of fermionic Gaussian states with an arbitrary particle number (18) is given by_
\[\mathbb{E}[C] = \beta\left(\Phi_{2m-1,n-m}+\Phi_{m+n-1,n-m}\right)+\frac{1}{4} \left(\Phi_{m-1,n}+\Phi_{m-1,n-m}\right)+\left(\frac{\beta}{2}+\frac{1}{8}\right) \tag{33}\] \[\times\psi_{1}(m+n)+\frac{1}{8}\psi_{1}(n)+\frac{\beta}{2}\left( \left(\psi_{0}(2n)-\psi_{0}(m+n)\right){}^{2}+\psi_{1}(2n)-\psi_{1}(1)\right)\] \[+\frac{1}{8}\left(\psi_{0}(n)-\psi_{0}(m+n)\right){}^{2}+\frac{n- m}{2}\left(\psi_{0}(m+n)-\psi_{0}(n-m)\right)-m,\]
_where \(\Phi_{a,b}\) is defined in (19) and the coefficient \(\beta\) is given by_
\[\beta=\frac{(2m-1)(2n-1)}{4m+4n-2}. \tag{34}\]
Proposition 2 is proved in section 3.2. It is important to point out that in deriving the results (25) and (33), we make use of the lemmas 1-4 in [23] as will also be discussed in section 3.1.2. The four lemmas are examples of a new simplification framework recently developed in [23] when studying the exact variance of von Neumann entropy. This new framework consists of a set of novel tools useful in simplifying the summations involved, including (A.3), (A.4), and (A.7) in appendix A. These summations do not permit further simplifications when using the existing simplification tools for the computation over Hilbert-Schmidt ensemble [6, 8, 11] or the Bures-Hall ensemble [9, 10, 16]. For proposition 2, we also have the following remark.
**Remark 2**: _For the same reason as in remark 1, the result (33) admits closed-form representations for the special cases when the subsystem dimension difference \(a=n-m\) is fixed. For example, by fixing \(a=0,1,2,3\) in (33), we recover the recently obtained mean capacity values in [21, equations (27)-(30)]._
Based on the two propositions, the limiting behavior of the average capacity can now be obtained. The results are summarized in corollary 1 below, and the corresponding proof can be found in section 3.3.
**Corollary 1**: _For any subsystem dimensions \(m\leq n\) in the asymptotic regime_
\[m\rightarrow\infty,\quad n\rightarrow\infty,\quad with\ a\ fixed\ n-m, \tag{35}\]
_the average entanglement capacity of fermionic Gaussian states with a fixed particle number (25) and with an arbitrary particle number (33) approach the same limit_
\[\frac{\mathbb{E}[C]}{m}\longrightarrow\frac{\pi^{2}}{8}-1. \tag{36}\]
In corollary 1, we note that for the case of fixed particle number, the particle number \(p\) also goes to infinity at the same rate as \(m\) and \(n\) in the limit (35). For the case of an arbitrary number of particles, the limiting value (36), also known as the leading volume-law coefficient, was first obtained in [30] for equal subsystem dimensions. Here, we have extended it rigorously to a more general regime (35) starting from our explicit
result (33). We also observe the interesting fact that the limiting value (36) is the same for the cases (17) and (18) despite the fundamental difference of the two underlying models.
To illustrate the obtained results, we plot in figure 1 the exact formulas (25) and (33) per dimension \(m\) for fixed subsystem dimension differences \(n-m=0,\ 4,\ 8\), along with the asymptotic value (36). The left-hand side figure corresponds to the case of a fixed particle number \(p=(m+n)/2\), and the right-hand side corresponds to the case of an arbitrary particle number. It is observed that as the dimension difference \(n-m\) increases, the average capacity (25) and (33) approach the limiting value (36) more slowly. This fact indicates that the finite-size capacity formulas are more useful when the dimension difference \(n-m\) is large, cf. [29], and otherwise the asymptotic value (36) serves as a reasonably accurate approximation. We also plot the simulated values of mean capacity in figure 1, which match well with the analytical results.
Figure 1: Average of entanglement capacity (per dimension) of fermionic Gaussian states with and without particle number constraints: analytical results versus simulations. The solid lines are drawn by the exact capacity formulas (25) and (33), while the dash-dot horizontal lines represent the limiting behaviors of average capacity (36). The corresponding scatters in the symbols of circle, diamond, and asterisk are obtained from numerical simulations.
## 3 Computation of average capacity
In this section, we prove the results presented in the previous section. The mean formula of entanglement capacity for fermionic Gaussian states with a fixed particle number in proposition 1 is calculated in section 3.1. The computation for the case of an arbitrary particle number in proposition 2 is performed in section 3.2. The limiting value of the average capacity in corollary 1 is proved in section 3.3.
### Average capacity over fermionic Gaussian states with particle number constraint
Here, we compute the mean value of entanglement capacity (8) over fermionic Gaussian states with particle number constraint (17). The computation mainly consists of two parts. The first part is to obtain a summation representation of the average capacity as shown in section 3.1.1. In section 3.1.2, we then simplify the summations in arriving at the desired result (25) in proposition 1.
#### 3.1.1 Correlation functions and integral calculations
Recall the definition (8) of entanglement capacity
\[C=\sum_{i=1}^{m}u(x_{i}) \tag{37}\]
with
\[u(x)=\frac{1-x^{2}}{4}\ln^{2}\frac{1+x}{1-x}, \tag{38}\]
computing its average requires the probability density function of one arbitrary eigenvalue of the fermionic Gaussian ensemble. Denoting \(g_{l}(x_{1},\ldots,x_{l})\) as the joint density of \(l\) arbitrary eigenvalues, the average capacity is written as
\[\mathbb{E}[C]=m\int_{-1}^{1}u(x)g_{1}(x)\,\mathrm{d}x. \tag{39}\]
When \(\gamma=1\), the ensemble (16) is the well-known Jacobi unitary ensemble. In this case, the joint density \(g_{l}(x_{1},\ldots,x_{l})\) can be written in terms of an \(l\times l\) determinant as [37, 38]
\[g_{l}(x_{1},\ldots,x_{l})=\frac{(m-l)!}{m!}\det\left(K\left(x_{i},x_{j}\right) \right)_{i,j=1}^{l}. \tag{40}\]
The determinant in (40) is known as the \(l\)-point correlation function [38], where
\[K\left(x,y\right)=\sqrt{w(x)w(y)}\sum_{k=0}^{m-1}\frac{J_{k}^{(a,b)}(x)J_{k}^{ (a,b)}(y)}{h_{k}} \tag{41}\]
is the correlation kernel with the weight function
\[w(x)=\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{b}. \tag{42}\]
In (41), the polynomial \(J_{k}^{(a,b)}(x)\) is the Jacobi polynomial supported in \(x\in[-1,1]\), and
\[h_{k}=\frac{2\Gamma(k+a+1)\Gamma(k+b+1)}{(2k+a+b+1)\Gamma(k+1)\Gamma(k+a+b+1)} \tag{43}\]
is the normalization constant, which is obtained by the orthogonality relation of Jacobi polynomials [38]
\[\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right) ^{b}J_{k}^{(a,b)}(x)J_{l}^{(a,b)}(x)\,\mathrm{d}x\] \[=\frac{2\Gamma(k+a+1)\Gamma(k+b+1)}{(2k+a+b+1)\Gamma(k+1)\Gamma(k +a+b+1)}\delta_{kl},\quad\Re(a,b)>-1. \tag{44}\]
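As a sanity check of these conventions, the normalization (43) and the orthogonality relation (44) can be verified numerically; the sketch below assumes that \(J_{k}^{(a,b)}\) coincides with the standard Jacobi polynomial provided by SciPy as eval_jacobi, as the normalization constant (43) suggests:

```
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi, gammaln

def overlap(k, l, a, b):
    # Left-hand side of Eq. (44)
    f = lambda x: (((1 - x) / 2)**a * ((1 + x) / 2)**b
                   * eval_jacobi(k, a, b, x) * eval_jacobi(l, a, b, x))
    val, _ = quad(f, -1, 1)
    return val

def h(k, a, b):
    # Normalization constant h_k of Eq. (43)
    return 2.0 / (2*k + a + b + 1) * np.exp(
        gammaln(k + a + 1) + gammaln(k + b + 1)
        - gammaln(k + 1) - gammaln(k + a + b + 1))

a, b = 2, 3
print(overlap(3, 3, a, b), h(3, a, b))  # these should agree
print(overlap(3, 5, a, b))              # this should be ~0
```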
By rewriting the function \(u(x)\) in (9) as
\[u(x) = \frac{1+x}{2}\ln^{2}\frac{1+x}{2}+\frac{1-x}{2}\ln^{2}\frac{1-x}{ 2} \tag{45}\] \[-\left(\frac{1+x}{2}\ln\frac{1+x}{2}+\frac{1-x}{2}\ln\frac{1-x}{ 2}\right)^{2},\]
the average capacity (39) boils down to computing two integrals involving the one-point correlation function, cf. [21], as
\[\mathbb{E}[C]=\mathrm{I}_{\mathcal{C}}-\mathrm{I}_{\mathcal{A}}, \tag{46}\]
where
\[\mathrm{I}_{\mathcal{C}} = \int_{-1}^{1}\left(\frac{1+x}{2}\ln^{2}\frac{1+x}{2}+\frac{1-x}{ 2}\ln^{2}\frac{1-x}{2}\right)K(x,x)\,\mathrm{d}x \tag{47}\] \[\mathrm{I}_{\mathcal{A}} = \int_{-1}^{1}\left(\frac{1+x}{2}\ln\frac{1+x}{2}+\frac{1-x}{2}\ln \frac{1-x}{2}\right)^{2}K(x,x)\,\mathrm{d}x. \tag{48}\]
By the definition of the correlation kernel (41), the integral \(\mathrm{I}_{\mathcal{C}}\) in (47) is further written as
\[\mathrm{I}_{\mathcal{C}} = \sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\frac{1+x}{2} \ln^{2}\frac{1+x}{2}+\frac{1-x}{2}\ln^{2}\frac{1-x}{2}\right) \tag{49}\] \[\times\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{b }J_{k}^{(a,b)}(x)^{2}\,\mathrm{d}x.\]
Similarly, the integral \(\mathrm{I}_{\mathcal{A}}\) in (48) now consists of two parts
\[\mathrm{I}_{\mathcal{A}}=\mathcal{A}_{1}+\mathcal{A}_{2}, \tag{50}\]
where
\[\mathcal{A}_{1} = \sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\left(\frac{1+ x}{2}\right)^{2}\ln^{2}\frac{1+x}{2}+\left(\frac{1-x}{2}\right)^{2}\ln^{2} \frac{1-x}{2}\right) \tag{51}\] \[\times\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{b }J_{k}^{(a,b)}(x)^{2}\,\mathrm{d}x\] \[\mathcal{A}_{2} = \sum_{k=0}^{m-1}\frac{2}{h_{k}}\int_{-1}^{1}\left(\frac{1-x}{2} \right)^{a+1}\left(\frac{1+x}{2}\right)^{b+1}\ln\frac{1-x}{2}\ln\frac{1+x}{2} J_{k}^{(a,b)}(x)^{2}\,\mathrm{d}x. \tag{52}\]
Here, we recall that \(a=n-p\geq 0\) and \(b=p-m\geq 0\) in (17). Due to the parity property of Jacobi polynomials [40]
\[J_{k}^{(a,b)}(-x)=(-1)^{k}J_{k}^{(b,a)}(x), \tag{53}\]
the integrals \(\mathrm{I}_{\mathcal{C}}\) and \(\mathcal{A}_{1}\) admit the following symmetric structures
\[\mathrm{I}_{\mathcal{C}} =\mathrm{I}_{\mathcal{C}}{}^{(a,b)}+\mathrm{I}_{\mathcal{C}}{}^{(b,a)} \tag{54}\] \[\mathcal{A}_{1} =\mathcal{A}_{1}^{(a,b)}+\mathcal{A}_{1}^{(b,a)}, \tag{55}\]
where
\[\mathrm{I}_{\mathcal{C}}{}^{(a,b)} = \sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{b+1}\ln^{2}\frac{1+x}{2}\,J_{k}^{(a,b)}(x)^{2}\,\mathrm{d}x \tag{56}\] \[\mathcal{A}_{1}^{(a,b)} = \sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{b+2}\ln^{2}\frac{1+x}{2}\,J_{k}^{(a,b)}(x)^{2}\,\mathrm{d}x. \tag{57}\]
The summations in (52), (56), and (57) can be evaluated by using the confluent form of Christoffel-Darboux formula [38]
\[\sum_{k=0}^{m-1}\frac{J_{k}^{(a,b)}(x)^{2}}{h_{k}}=\alpha_{1}J_{m-1}^{(a+1,b+1) }(x)J_{m-1}^{(a,b)}(x)-\alpha_{2}J_{m-2}^{(a+1,b+1)}(x)J_{m}^{(a,b)}(x), \tag{58}\]
where
\[\alpha_{1} = \frac{m(a+b+m)(a+b+m+1)}{h_{m-1}(a+b+2m-1)_{2}} \tag{59}\] \[\alpha_{2} = \frac{m(a+b+m)^{2}}{h_{m-1}(a+b+2m-1)_{2}}. \tag{60}\]
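A numerical spot check of the confluent Christoffel-Darboux formula (58) with the coefficients (59)-(60); this is a sketch assuming SciPy, with illustrative parameter values and evaluation point.

```python
from scipy.special import eval_jacobi, gamma, poch

a, b, m, x = 2, 1, 4, 0.3   # illustrative parameters

def h(k):
    # normalization constant (43)
    return (2 * gamma(k + a + 1) * gamma(k + b + 1)
            / ((2 * k + a + b + 1) * gamma(k + 1) * gamma(k + a + b + 1)))

lhs = sum(eval_jacobi(k, a, b, x) ** 2 / h(k) for k in range(m))

alpha1 = m * (a + b + m) * (a + b + m + 1) / (h(m - 1) * poch(a + b + 2 * m - 1, 2))
alpha2 = m * (a + b + m) ** 2 / (h(m - 1) * poch(a + b + 2 * m - 1, 2))
rhs = (alpha1 * eval_jacobi(m - 1, a + 1, b + 1, x) * eval_jacobi(m - 1, a, b, x)
       - alpha2 * eval_jacobi(m - 2, a + 1, b + 1, x) * eval_jacobi(m, a, b, x))

print(lhs, rhs)   # the two values should agree
```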
Consequently, we have
\[\mathrm{I}_{\mathcal{C}}{}^{(a,b)} = \alpha_{1}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{ 1+x}{2}\right)^{b+1}\ln^{2}\frac{1+x}{2}J_{m-1}^{(a+1,b+1)}(x)J_{m-1}^{(a,b)}( x)\,\mathrm{d}x \tag{61}\] \[-\alpha_{2}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left( \frac{1+x}{2}\right)^{b+1}\ln^{2}\frac{1+x}{2}J_{m-2}^{(a+1,b+1)}(x)J_{m}^{(a, b)}(x)\,\mathrm{d}x\] \[\mathcal{A}_{1}^{(a,b)} = \alpha_{1}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{ 1+x}{2}\right)^{b+2}\ln^{2}\frac{1+x}{2}J_{m-1}^{(a+1,b+1)}(x)J_{m-1}^{(a,b)}( x)\,\mathrm{d}x\] (62) \[-\alpha_{2}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left( \frac{1+x}{2}\right)^{b+2}\ln^{2}\frac{1+x}{2}J_{m-2}^{(a+1,b+1)}(x)J_{m}^{(a, b)}(x)\,\mathrm{d}x\]
and
\[\mathcal{A}_{2}=2\alpha_{1}\mathcal{A}_{2}(m-1,m-1)-2\alpha_{2}\mathcal{A}_{2 }(m-2,m), \tag{63}\]
where
\[\mathcal{A}_{2}(m-1,m-1) = \int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a+1}\left(\frac{1+x}{2} \right)^{b+1} \tag{64}\] \[\times\ln\frac{1-x}{2}\ln\frac{1+x}{2}J_{m-1}^{(a+1,b+1)}(x)J_{m- 1}^{(a,b)}(x)\,\mathrm{d}x\] \[\mathcal{A}_{2}(m-2,m) = \int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a+1}\left(\frac{1+x}{2} \right)^{b+1}\] (65) \[\times\ln\frac{1-x}{2}\ln\frac{1+x}{2}J_{m-2}^{(a+1,b+1)}(x)J_{m} ^{(a,b)}(x)\,\mathrm{d}x.\]
Computing the above integrals \(\mathrm{I}_{\mathcal{C}}{}^{(a,b)}\) and \(\mathcal{A}_{1}^{(a,b)}\) in (61)-(62) requires the integral identity
\[\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a_{1}}\left(\frac{1+x}{2} \right)^{c}J_{k_{1}}^{(a_{1},b_{1})}(x)J_{k_{2}}^{(a_{2},b_{2})}(x)\,\mathrm{d}x\] \[=\frac{2\left(k_{1}+1\right)_{a_{1}}}{\left(b_{2}+k_{2}+1\right)_ {a_{2}}}\sum_{i=0}^{k_{2}}\frac{(-1)^{i+k_{2}}(i+1)_{c}\left(i+b_{2}+1\right)_ {a_{2}+k_{2}}}{\Gamma\left(k_{2}-i+1\right)\Gamma\left(a_{1}+c+i+k_{1}+2 \right)}\] \[\quad\times\left(c+i-b_{1}-k_{1}+1\right)_{k_{1}},\quad\Re(a_{1}, a_{2},b_{1},b_{2},c)>-1. \tag{66}\]
To show this identity, we first note that the Jacobi polynomial \(J_{k}^{(a,b)}(x)\) supported in \(x\in[-1,1]\) admits different representations [38, 40]
\[J_{k}^{(a,b)}(x) =\frac{(-1)^{k}(b+1)_{k}}{k!}\sum_{i=0}^{k}\frac{(-k)_{i}(k+a+b+1 )_{i}}{(b+1)_{i}\Gamma(i+1)}\left(\frac{1+x}{2}\right)^{i} \tag{67}\] \[=\,\sum_{i=0}^{k}\frac{(-1)^{i}\Gamma(a+k+1)(k+b-i+1)_{i}}{\Gamma (i+1)\Gamma(a+i+1)\Gamma(k-i+1)}\left(\frac{1-x}{2}\right)^{i}\left(\frac{1+x }{2}\right)^{k-i}. \tag{68}\]
The identity (66) is then obtained by using the definition (67) for the polynomial \(J_{k_{2}}^{(a_{2},b_{2})}\) before applying the well-known integral identity [38, 40]
\[\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2} \right)^{c}J_{k}^{(a,b)}(x)\,\mathrm{d}x\] \[=\frac{2\Gamma(c+1)(k+1)_{a}(c-b-k+1)_{k}}{\Gamma(a+c+k+2)},\quad \Re(a,b,c)>-1. \tag{69}\]
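The integral identity (69) is easy to test numerically before it is differentiated with respect to \(c\); a sketch assuming SciPy, with an illustrative non-integer \(c\) so that all gamma arguments stay regular:

```python
from scipy.integrate import quad
from scipy.special import eval_jacobi, gamma, poch

a, b, k, c = 2, 1, 3, 2.5   # illustrative parameters

lhs = quad(lambda x: ((1 - x) / 2) ** a * ((1 + x) / 2) ** c
           * eval_jacobi(k, a, b, x), -1.0, 1.0)[0]
rhs = 2 * gamma(c + 1) * poch(k + 1, a) * poch(c - b - k + 1, k) / gamma(a + c + k + 2)
print(lhs, rhs)   # should agree
```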
In (66), by specializing
\[a_{1}=a,\quad a_{2}=a+1,\quad b_{1}=b,\quad b_{2}=b+1,\quad k_{1}=k_{2}=m-1 \tag{70}\]
so that
\[J_{k_{1}}^{(a_{1},b_{1})}(x)\to J_{m-1}^{(a,b)}(x),\qquad J_{k_{2}}^{(a_{2},b _{2})}(x)\to J_{m-1}^{(a+1,b+1)}(x), \tag{71}\]
the first integral in (61) can now be computed by taking the second derivative with respect to the parameter \(c\) of the specialized identity (66) before setting \(c=b+1\). The other integrals in (61)-(62) are calculated in the same manner.
To compute the integral \(\mathcal{A}_{2}\) in (63), one will need another integral identity
\[\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{d}\left(\frac{1+x}{2} \right)^{c}J_{k_{1}}^{(a_{1},b_{1})}J_{k_{2}}^{(a_{2},b_{2})}(x)\,\mathrm{d}x\] \[=\frac{2\Gamma\left(a_{2}+k_{2}+1\right)\Gamma\left(b_{2}+k_{2}+1 \right)}{\Gamma\left(c+d+k_{1}+k_{2}+2\right)}\sum_{i=0}^{k_{2}}\frac{(-1)^{i }\Gamma\left(d-a_{1}+i+1\right)}{\Gamma(i+1)\Gamma\left(a_{2}+i+1\right)}\] \[\quad\times\frac{\Gamma\left(c-b_{1}-i+k_{2}+1\right)}{\Gamma \left(k_{2}-i+1\right)\Gamma\left(b_{2}-i+k_{2}+1\right)}\sum_{j=0}^{k_{1}} \frac{(-1)^{j}\left(k_{1}-j+1\right)_{d+i}}{\Gamma(j+1)}\] \[\quad\times\frac{(c-i+j-b_{1}-k_{1}+k_{2}+1)_{b_{1}+k_{1}}}{ \Gamma\left(d-a_{1}+i-j+1\right)},\quad\Re(a_{1},a_{2},b_{1},b_{2},c,d)>-1, \tag{72}\]
which is obtained by using the definition (68) for the polynomial \(J_{k_{2}}^{(a_{2},b_{2})}\) before applying the identity [21, equation (62)]
\[\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{d}\left(\frac{1+x}{2} \right)^{c}J_{k}^{(a,b)}(x)\,\mathrm{d}x\] \[=\frac{2\Gamma(c-b+1)\Gamma(d-a+1)}{\Gamma(c+d+k+2)}\sum_{i=0}^{k }\frac{(-1)^{i}\Gamma(c+i+1)\Gamma(d-i+k+1)}{\Gamma(i+1)\Gamma(k-i+1)}\] \[\quad\times\frac{1}{\Gamma(d-a-i+1)\Gamma(c-b+i-k+1)},\quad\Re(a,b,c,d)>-1. \tag{73}\]
The two integrals (64) and (65) in \(\mathcal{A}_{2}\) are calculated by taking derivatives of \(c\) and \(d\) of identity (72) with the specialization (70) and the specialization
\[a_{1}=a,\quad b_{1}=b,\quad a_{2}=a+1,\quad b_{2}=b+1,\quad k_{1}=m,\quad k_{2 }=m-2, \tag{74}\]
respectively, before setting \(c=b+1\), \(d=a+1\).
In writing down the summation forms of \(\mathrm{I}_{\mathcal{C}}{}^{(a,b)}\), \(\mathcal{A}_{1}^{(a,b)}\), and \(\mathcal{A}_{2}\), one will also have to resolve the indeterminacy by using the following asymptotic expansions of gamma and polygamma functions of negative arguments [32] when \(\epsilon\to 0\),
\[\Gamma(-l+\epsilon) =\frac{(-1)^{l}}{l!\epsilon}\left(1+\psi_{0}(l+1)\epsilon+o\left( \epsilon^{2}\right)\right) \tag{75}\] \[\psi_{0}(-l+\epsilon) =-\frac{1}{\epsilon}+\psi_{0}(l+1)+\left(2\psi_{1}(1)-\psi_{1}(l+ 1)\right)\epsilon+o\left(\epsilon^{2}\right)\] (76) \[\psi_{1}(-l+\epsilon) =\frac{1}{\epsilon^{2}}-\psi_{1}(l+1)+\psi_{1}(1)+\zeta(2)+o\left( \epsilon\right). \tag{77}\]
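The expansions (75)-(77) can be checked, at least for the orders used here, with arbitrary-precision arithmetic; a minimal sketch assuming mpmath:

```python
import mpmath as mp

mp.mp.dps = 40
l, eps = 3, mp.mpf('1e-12')

# (75): Gamma(-l + eps) ~ (-1)^l / (l! eps) * (1 + psi_0(l+1) eps)
lhs = mp.gamma(-l + eps)
rhs = (-1) ** l / (mp.factorial(l) * eps) * (1 + mp.digamma(l + 1) * eps)
print(lhs / rhs)                                      # ~1 up to O(eps^2)

# (76): psi_0(-l + eps) + 1/eps ~ psi_0(l+1)
print(mp.psi(0, -l + eps) + 1 / eps, mp.digamma(l + 1))

# (77): psi_1(-l + eps) - 1/eps^2 ~ psi_1(1) - psi_1(l+1) + zeta(2)
print(mp.psi(1, -l + eps) - 1 / eps ** 2,
      mp.psi(1, 1) - mp.psi(1, l + 1) + mp.zeta(2))
```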
The resulting summation forms of \(\mathrm{I}_{\mathcal{C}}{}^{(a,b)}\), \(\mathcal{A}_{1}^{(a,b)}\), and \(\mathcal{A}_{2}\) are summarized in (A.1)-(A.4) in appendix A.1.
#### 3.1.2 Simplification of summations
The remaining task in computing the average capacity
\[\mathbb{E}[C]=\mathrm{I}_{\mathcal{C}}-\mathrm{I}_{\mathcal{A}}, \tag{78}\]
is to simplify the summations in (A.1)-(A.4). In the subsequent calculation, we first simplify the summation (A.1) in obtaining \(\mathrm{I}_{\mathcal{C}}\), whereas \(\mathrm{I}_{\mathcal{A}}\) is obtained by simplifying the summations (A.2)-(A.4).
We first simplify the summations in (A.1). Note that the first two sums in (A.1) are single sums consisting of polygamma and rational functions, and the last sum can be directly reduced to a closed-form expression. The two single summations are simplified by using the identities in appendix B, while keeping in mind the symmetric structure (54)
\[\mathrm{I}_{\mathcal{C}}=\mathrm{I}_{\mathcal{C}}^{(\mathrm{a,b})}+\mathrm{I} _{\mathcal{C}}^{(\mathrm{b,a})}, \tag{79}\]
as
\[\mathrm{I}_{\mathcal{C}}^{(\mathrm{a,b})}=a_{0}\sum_{k=1}^{m}\frac{\psi_{0}(a+ b+k+m)}{b+k}-a_{1}\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{k}+a_{1}\sum_{k=1}^{m} \frac{\psi_{0}(b+k)}{k}+a_{2}\]
\[\times\Big{(}\psi_{0}^{2}(a+b+2m)-\psi_{0}(a+b+m)\psi_{0}(a+b+2m)- \psi_{0}(a+b+2m)\] \[\times\psi_{0}(b+m)\Big{)}+a_{0}\psi_{0}(b)\psi_{0}(a+b+m)+\frac{a_ {1}}{2}\Big{(}\psi_{1}(b)-\psi_{1}(a+b+m)\] \[+\psi_{0}(a+b+m)\left(\psi_{0}(a+b+m)+2\psi_{0}(m)-2\psi_{0}(1) \right)+2\psi_{0}(b)(\psi_{0}(b+m)\] \[-\psi_{0}(m)+\psi_{0}(1))-\psi_{0}^{2}(b)\Big{)}+a_{3}\psi_{0}(a+ b+2m)+a_{4}\psi_{0}(a+b+m)\] \[+a_{5}\psi_{0}(b+m)+a_{6}\psi_{0}(b)+a_{7}, \tag{80}\]
where the coefficients \(a_{i}\) are summarized in (C.1)-(C.8) of appendix C.1.
We now simplify the summations (A.2)-(A.4) in obtaining \({\rm I}_{\mathcal{A}}\). The summation (A.2) is simplified into a form similar to the result (80) by using the identities (B.1)-(B.8). The integral \({\mathcal{A}}_{1}\) is then obtained by adding the result of (A.2) and its symmetric form according to (55). Continuing to simplify the summations (A.3) and (A.4) will require the following four lemmas.
**Lemma 1**: _For any complex numbers \(a,b,c\notin{\mathbb{Z}}^{-}\), we have_
\[\sum_{i=1}^{m}\frac{1}{\Gamma(i)\Gamma(a+i)\Gamma(m+1-i)\Gamma(m+ b+1-i)(c+i)}\] \[=\frac{1}{\Gamma(b+m)\Gamma(c+m+1)\Gamma(a+b+m)}\sum_{i=1}^{m} \frac{\Gamma(c-i+m+1)\Gamma(a+b-i+2m)}{\Gamma(m-i+1)\Gamma(a-i+m+1)}. \tag{81}\]
**Lemma 2**: _For any complex numbers \(a,b\notin{\mathbb{Z}}^{-}\), and any \(c\in{\mathbb{Z}}^{+}\), we have_
\[\sum_{i=1}^{m}\frac{1}{\Gamma(c+i)\Gamma(a+i)\Gamma(m+1-i)\Gamma( m+b+1-i)}\] \[=\frac{1}{\Gamma(m+b)\Gamma(m+a+b)\Gamma(c)\Gamma(m+c)}\sum_{i=1} ^{m}\frac{\Gamma(m+a+b+i-1)\Gamma(m+c-i)}{\Gamma(a+i)\Gamma(m-i+1)}. \tag{82}\]
**Lemma 3**: _For any complex numbers \(a,b\notin{\mathbb{Z}}^{-}\), and any \(c\in{\mathbb{Z}}^{+}\), we have_
\[\sum_{i=1}^{m}\frac{1}{\Gamma(c+i)\Gamma(a+i)\Gamma(m-i+1)\Gamma( b-i+m+1)i}\] \[=\frac{1}{\Gamma(a)\Gamma(a+m)\Gamma(1+b+m)\Gamma(b+c+m)}\sum_{i=1 }^{m}\frac{\Gamma(a-i+m)\Gamma(b+c+i+m)}{\Gamma(c+i)\Gamma(m-i+1)i}\] \[\quad+\frac{\psi_{0}(a)-\psi_{0}(a+m)}{\Gamma(a)\Gamma(c)\Gamma( m+1)\Gamma(b+m+1)}. \tag{83}\]
**Lemma 4**: _For any complex numbers \(a,b\notin{\mathbb{Z}}^{-}\), and any \(c,d\in{\mathbb{Z}}^{+}\), we have_
\[\sum_{i=1}^{m}\frac{1}{\Gamma(c+i)\Gamma(a+i)\Gamma(d+m-i+1)\Gamma(b+m-i+1)}\] \[=\frac{1}{\Gamma(d)\Gamma(a+m)\Gamma(a+b+m)\Gamma(c+d+m)}\sum_{i=1}^{m}\frac{\Gamma(c+d+i-1)\Gamma(a+b-i+2m)}{\Gamma(c+i)\Gamma(b-i+m+1)}\] \[\quad+\frac{1}{\Gamma(c)\Gamma(b+m)\Gamma(a+b+m)\Gamma(c+d+m)}\sum_{i=1}^{m}\frac{\Gamma(c+d+i-1)\Gamma(a+b-i+2m)}{\Gamma(d+i)\Gamma(a-i+m+1)}. \tag{84}\]
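The lemmas are finite-sum identities, so each can be spot-checked numerically before use; a sketch for lemma 1, assuming SciPy and illustrative parameter values:

```python
from scipy.special import gamma

a, b, c, m = 1.3, 0.7, 2.2, 5   # illustrative parameters

lhs = sum(1 / (gamma(i) * gamma(a + i) * gamma(m + 1 - i)
               * gamma(m + b + 1 - i) * (c + i)) for i in range(1, m + 1))
rhs = (1 / (gamma(b + m) * gamma(c + m + 1) * gamma(a + b + m))
       * sum(gamma(c - i + m + 1) * gamma(a + b - i + 2 * m)
             / (gamma(m - i + 1) * gamma(a - i + m + 1)) for i in range(1, m + 1)))
print(lhs, rhs)   # should agree
```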
The proof of the above four lemmas is based on a new simplification framework proposed in [23, section 2.2.2]. Equipped with these tools, the summations (A.3) and (A.4) can now be simplified. In the following, we first present the simplification of (A.4); the summation (A.3) is simplified in the same manner.
Note that (A.4) consists of one single summation and two double summations. To proceed with the single summation
\[\sum_{i=1}^{m-1}\frac{1}{\Gamma(i)\Gamma(a+i+1)\Gamma(m-i)\Gamma( b-i+m+1)}\Big{(}(\psi_{0}(a+b+2m+2)-\psi_{0}(a+m+1)\] \[-\psi_{0}(i+1)+\psi_{0}(1))(\psi_{0}(m-i+1)-\psi_{0}(a+b+2m+2)+ \psi_{0}(b+m+1)-\psi_{0}(1))\] \[+\psi_{1}(a+b+2m+2)\Big{)}, \tag{85}\]
we first rewrite it as
\[(s_{0}-s_{1}s_{2})\sum_{i=1}^{m-1}\frac{1}{\Gamma(i)\Gamma(a+i+1) \Gamma(m-i)\Gamma(b-i+m+1)}\] \[+\left(s_{1}-\frac{1}{m}\right)\sum_{i=1}^{m-1}\frac{1}{\Gamma(i )\Gamma(a+i+1)\Gamma(m-i+1)\Gamma(b-i+m+1)}\] \[+\left(s_{2}-\frac{1}{m}\right)\sum_{i=1}^{m-1}\frac{1}{\Gamma(i +1)\Gamma(a+i+1)\Gamma(m-i)\Gamma(b-i+m+1)}\] \[+s_{1}\sum_{i=1}^{m-1}\frac{\psi_{0}(i)}{\Gamma(i)\Gamma(b+i+1) \Gamma(m-i)\Gamma(a-i+m+1)}\] \[-\sum_{i=1}^{m-1}\frac{\psi_{0}(i)}{\Gamma(i)\Gamma(b+i+1) \Gamma(m-i+1)\Gamma(b-i+m+1)}\] \[+s_{2}\sum_{i=1}^{m-1}\frac{\psi_{0}(i)}{\Gamma(i)\Gamma(a+i+1) \Gamma(m-i)\Gamma(b-i+m+1)}\] \[-\sum_{i=1}^{m-1}\frac{\psi_{0}(i)}{\Gamma(i)\Gamma(a+i+1) \Gamma(m-i)\Gamma(b-i+m+1)}\] \[-\sum_{i=1}^{m-1}\frac{\psi_{0}(i)\psi_{0}(m-i)}{\Gamma(i) \Gamma(a+i+1)\Gamma(m-i)\Gamma(b-i+m+1)}, \tag{86}\]
where
\[s_{0} =\psi_{1}(a+b+2m+2) \tag{87}\] \[s_{1} =\psi_{0}(a+b+2m+2)-\psi_{0}(a+m+1)+\psi_{0}(1) \tag{88}\]
\[s_{2}=\psi_{0}(a+b+2m+2)-\psi_{0}(b+m+1)+\psi_{0}(1). \tag{89}\]
The summations in (86) are then simplified into single sums of the forms
\[\sum_{j=1}^{m}\frac{\Gamma(a+b-j+2m-1)}{\Gamma(a-j+m)j} \tag{90}\] \[\sum_{j=1}^{m}\frac{\Gamma(a+b-j+2m-1)}{\Gamma(a-j+m)j^{2}} \tag{91}\]
by using lemma 2, lemma 4, and the closed-form identity [41]
\[\sum_{i=1}^{m}\frac{1}{\Gamma(i)\Gamma(a+i)\Gamma(m-i+1)\Gamma(m+ b+1-i)}\] \[=\frac{\Gamma(a+b+2m-1)}{\Gamma(m)\Gamma(a+m)\Gamma(b+m)\Gamma(a+ b+m)}. \tag{92}\]
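The closed form (92) is likewise straightforward to confirm numerically; a sketch assuming SciPy, with illustrative parameter values:

```python
from scipy.special import gamma

a, b, m = 1.5, 0.5, 6   # illustrative parameters

lhs = sum(1 / (gamma(i) * gamma(a + i) * gamma(m - i + 1) * gamma(m + b + 1 - i))
          for i in range(1, m + 1))
rhs = gamma(a + b + 2 * m - 1) / (gamma(m) * gamma(a + m) * gamma(b + m) * gamma(a + b + m))
print(lhs, rhs)   # should agree
```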
More specifically, the first three summations in (86) are simplified into closed-form expressions by using the identity (92), and the next four summations are simplified by taking the derivative with respect to \(c\) of the identity (82) in lemma 2 before setting \(c=0\). The last summation in (86) is simplified by taking derivatives with respect to \(c\) and \(d\) of the identity (84) in lemma 4 before setting \(c=d=0\).
We now move on to the double summations in (A.4), which are
\[\sum_{i=1}^{m-1}\frac{i(m-i)}{\Gamma(b+i+1)\Gamma(a-i+m+1)}\sum_{ j=1}^{m-i}\frac{\Gamma(a+j+m+1)\Gamma(b-j+m+1)}{j\Gamma(i+j+1)\Gamma(m-i-j+1)}\] \[\times(\psi_{0}(a+j+m+1)-\psi_{0}(a+b+2m+2)+\psi_{0}(m-i+1)-\psi_ {0}(j+1)) \tag{93}\]
and
\[\sum_{i=1}^{m-1}\frac{i(m-i)}{\Gamma(a+i+1)\Gamma(b-i+m+1)}\sum_{ j=1}^{m-i}\frac{\Gamma(a-j+m+1)\Gamma(b+j+m+1)}{j\Gamma(i+j+1)\Gamma(m-i-j+1)}\] \[\times(\psi_{0}(b+j+m+1)-\psi_{0}(a+b+2m+2)+\psi_{0}(m-i+1)-\psi_ {0}(j+1)). \tag{94}\]
The two summations (93) and (94) admit a similar symmetric structure as (54)-(55). Therefore, by simplifying the summation (93), the summation (94) can be directly obtained by switching \(a\) and \(b\). We start with the summation (93) by dividing it into two parts
\[\sum_{i=1}^{m-1}\frac{i(m-i)}{\Gamma(b+i+1)\Gamma(a-i+m+1)}\sum_{ j=1}^{m-i}\frac{\Gamma(a+j+m+1)\Gamma(b-j+m+1)}{j\Gamma(i+j+1)\Gamma(m-i-j+1)}\] \[\times(-\psi_{0}(a+b+2m+2)-\psi_{0}(j+1)) \tag{95}\]
and
\[\sum_{i=1}^{m-1}\frac{i(m-i)}{\Gamma(b+i+1)\Gamma(a-i+m+1)}\sum_{ j=1}^{m-i}\frac{\Gamma(a+j+m+1)\Gamma(b-j+m+1)}{j\Gamma(i+j+1)\Gamma(m-i-j+1)}\] \[\times(\psi_{0}(m-i+1)+\psi_{0}(a+j+m+1))\,. \tag{96}\]
In (95), after changing the summation order as
\[\sum_{j=1}^{m-1}\frac{\Gamma(a+j+m+1)\Gamma(b-j+m+1)}{j}\left(-\psi_ {0}(a+b+2m+2)-\psi_{0}(j+1)\right)\] \[\times\sum_{i=1}^{m-j}\frac{i(m-i)}{\Gamma(b+i+1)\Gamma(i+j+1) \Gamma(a-i+m+1)\Gamma(m-i-j+1)}, \tag{97}\]
we evaluate the sum over \(i\) by using lemma 2. The double sum becomes
\[\frac{1}{\Gamma(b)\Gamma(a+m)}\biggl{(}(1-a-m)\sum_{j=1}^{m-1} \frac{(a+j+m)(b-j+m)}{j}(\psi_{0}(a+b+2m+2)\] \[+\psi_{0}(j+1))\times\sum_{i=1}^{m-j}\frac{\Gamma(b+i-1)\Gamma(a-i +2m)}{\Gamma(i)\Gamma(m-i+2)}+\frac{a(a+m)}{a+b+m}\sum_{j=1}^{m-1}\frac{b-j+m} {j}(\psi_{0}(j+1)\] \[+\psi_{0}(a+b+2m+2))\sum_{i=1}^{m-j}\frac{\Gamma(b+i-1)\Gamma(a-i +2m+1)}{\Gamma(i)\Gamma(m-i+2)}+\frac{(a+m-1)(b+m)}{a+b+m}\] \[\times\sum_{j=1}^{m-1}\frac{(a+j+m)}{j}(\psi_{0}(a+b+2m+2)+\psi_ {0}(j+1))\sum_{i=1}^{m-j}\frac{\Gamma(b+i)\Gamma(a-i+2m)}{\Gamma(i)\Gamma(m-i +2)}\biggr{)}, \tag{98}\]
where the sums over \(j\) can be further simplified into closed-form expressions by using the identity (B.3). As a result, the remaining summations only involve single sums as in (86) that are simplified similarly.
The sum (96) is simplified by first using lemma 3 along with its derivative with respect to \(b\) to evaluate the inner sum over \(j\). As a result, the remaining sums are reduced to single sums after computing the sum over \(i\) except for the sum
\[\sum_{j=1}^{m}\frac{1}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma( b-j+m+2)}\] \[\times\sum_{i=1}^{m-j+1}\left(\frac{\psi_{0}(a+i+j)}{i}+\frac{ \psi_{0}(i+j)}{i}\right). \tag{99}\]
To proceed with (99), we first use the identity (B.9) to rewrite the inner sum
\[\sum_{i=1}^{m-j+1}\frac{\psi_{0}(a+i+j)}{i} \tag{100}\]
as
\[\sum_{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i}-\sum_{l=1}^{a}\frac{ \psi_{0}(l+m+1)}{j+l-1}+\frac{1}{2}\Bigl{(}(\psi_{0}(a+j)-\psi_{0}(j))\] \[\times(\psi_{0}(a+j)+2\psi_{0}(m-j+2)+\psi_{0}(j)-2\psi_{0}(1))- \psi_{1}(a+j)+\psi_{1}(j)\Bigr{)}. \tag{101}\]
Inserting the result (101) into (99), the double sum in (99) now boils down to simplifying the three sums
\[\frac{1}{2}\sum_{j=1}^{m}\frac{1}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma( b-j+m+2)}\Bigl{(}\psi_{1}(j)+(\psi_{0}(a+j)-\psi_{0}(j))\]
\[(\psi_{0}(a+j)+2\psi_{0}(m-j+2)+\psi_{0}(j)-2\psi_{0}(1))-\psi_{1}(a+j)\Big{)}, \tag{102}\]
\[\sum_{l=1}^{a}\psi_{0}(l+m+1)\sum_{j=1}^{m}\frac{-1}{\Gamma(j-1)\Gamma(a+j) \Gamma(m-j+1)\Gamma(b-j+m+2)(j+l-1)}, \tag{103}\]
and
\[\sum_{j=1}^{m}\frac{2}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma(b-j+m+2)}\sum _{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i}. \tag{104}\]
The single sum (102) is simplified in the same manner as (86). For the double summation in (103), after evaluating the inner sum over \(j\) by using lemma 1, we arrive at
\[-\frac{1}{\Gamma(b+m)\Gamma(a+b+m+1)}\sum_{l=1}^{a}\frac{\psi_{0} (l+m+1)}{\Gamma(l+m)}\] \[\times\sum_{j=1}^{m-1}\frac{\Gamma(m-j+l)\Gamma(a+b-j+2m)}{\Gamma (m-j)\Gamma(a-j+m+1)}. \tag{105}\]
The above sum (105) can now be simplified into single sums by using the identities (B.13)-(B.14) to evaluate the sum over \(l\), where the remaining single sums are
\[\sum_{j=1}^{m}\frac{\Gamma(a+b-j+2m-1)}{\Gamma(a-j+m)j}\psi_{0}(a+b-j+2m-1), \tag{106}\]
and
\[\sum_{j=1}^{m}\frac{\psi_{0}(a+b+j+m)}{j}. \tag{107}\]
So far, the only part that remains to be simplified in (93) is the double sum (104). We first point out that the sum (104) has to be treated together with its symmetric part in (94), which is
\[\sum_{j=1}^{m}\frac{2}{\Gamma(j-1)\Gamma(b+j)\Gamma(m-j+1)\Gamma(a-j+m+2)}\sum _{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i}. \tag{108}\]
The two summations (104) and (108) may not be further simplified individually. However, we observe cancellations between the two sums when adding them up, where the key ingredient is the identity (B.9). Specifically, we evaluate the inner summation
\[\sum_{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i} \tag{109}\]
in (104) by the identity (B.9) with the specialization
\[a\to j,\qquad b\to 0,\qquad m\to m-j+1. \tag{110}\]
The sum (104) becomes
\[-\sum_{j=1}^{m}\frac{2}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma(b+m- j+2)}\sum_{i=1}^{j-1}\frac{\psi_{0}(m-j+i+2)}{i}\] \[+\sum_{j=1}^{m}\frac{1}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma( b+m-j+2)}\Big{(}(\psi_{0}(m-j+2)+\psi_{0}(j))\] \[\times(\psi_{0}(m-j+2)+\psi_{0}(j)-2\psi_{0}(1))-\psi_{1}(m-j+2)- \psi_{1}(j)+2\psi_{1}(1)\Big{)}. \tag{111}\]
Shifting the index \(j\to m+2-j\) of the double sum in (111) as
\[-\sum_{j=1}^{m}\frac{2}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma( b+m-j+2)}\sum_{i=1}^{j-1}\frac{\psi_{0}(m-j+i+2)}{i}\] \[= -\sum_{j=2}^{m+1}\frac{2}{\Gamma(j-1)\Gamma(b+j)\Gamma(m-j+1) \Gamma(a-j+m+2)}\sum_{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i}, \tag{112}\]
which now has the same form as (108). Inserting the result (112) into (104) and then adding (108), we obtain
\[\sum_{j=1}^{m}\frac{2}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1)\Gamma (b-j+m+2)}\sum_{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i}\] \[+\sum_{j=1}^{m}\frac{2}{\Gamma(j-1)\Gamma(b+j)\Gamma(m-j+1) \Gamma(a-j+m+2)}\sum_{i=1}^{m-j+1}\frac{\psi_{0}(i+j)}{i}\] \[=\sum_{j=1}^{m}\frac{1}{\Gamma(j-1)\Gamma(a+j)\Gamma(m-j+1) \Gamma(b-j+m+2)}\Big{(}\left(\psi_{0}(m-j+2)+\psi_{0}(j)\right)\] \[\quad\times(\psi_{0}(m-j+2)+\psi_{0}(j)-2\psi_{0}(1))-\psi_{1}(m- j+2)-\psi_{1}(j)+2\psi_{1}(1)\Big{)}, \tag{113}\]
which is simplified into single sums of the forms (90), (91), (106), and
\[\sum_{j=1}^{m}\frac{\Gamma(a+b-j+2m-1)}{\Gamma(a-j+m)j}\psi_{0}(j), \tag{114}\]
by using lemma 2, lemma 4, and their derivatives with respect to \(c\). After inserting the simplified results of (85), (93), and (94) into (A.4), we observe complete cancellations of the single sums (90), (91), (106), and (114). The sum \(\mathcal{A}_{2}(m-2,m)\) is simplified to
\[\mathcal{A}_{2}(m-2,m) =\frac{4\Gamma(a+m+1)\Gamma(b+m+1)}{\Gamma(m-1)\Gamma(a+b+m+1)(a+ b+2m-1)_{3}}\] \[\quad\times\sum_{j=1}^{m}\frac{\psi_{0}(a+b+j+m)}{j}+\mathrm{CF}, \tag{115}\]
where the shorthand notation \(\mathrm{CF}\), different in each use, denotes some closed-form terms omitted due to the length. Using the same approach, one is able to simplify \(\mathcal{A}_{2}(m-1,m-1)\) in (A.3) into a similar form, which completes the simplification of \(\mathcal{A}_{2}\) as per (63).
Now inserting the resulting forms of \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) into (50), \(\mathrm{I}_{\mathcal{A}}\) is finally obtained as
\[\mathrm{I}_{\mathcal{A}}=f_{\mathcal{A}}(a,b)+f_{\mathcal{A}}(b,a), \tag{116}\]
where
\[f_{\mathcal{A}}(a,b) = b_{0}\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{a+k}-m\sum_{k=1}^{m} \frac{\psi_{0}(a+b+k+m)}{k}+b_{1}\sum_{k=1}^{m}\frac{\psi_{0}(a+k)}{k} \tag{117}\] \[+\frac{m}{2}\left(\psi_{0}^{2}(a+b+m)-\psi_{1}(a+b+m)\right)+b_{2 }\left(\psi_{0}(a+b+2m)-\psi_{0}(a+b+m)\right)\] \[\times\psi_{0}(a+b+2m)+b_{3}\psi_{0}(a+m)\psi_{0}(a+b+2m)+b_{4} \psi_{0}(a+b+2m)\] \[+b_{0}\psi_{0}(a)\psi_{0}(a+b+m)+m\left(\psi_{0}(m)-\psi_{0}(1) \right)\psi_{0}(a+b+m)+b_{5}\psi_{0}(a+m)\] \[\times(2\psi_{0}(a+b+m)+\psi_{0}(b+m))+b_{6}\psi_{0}(a+b+m)+\frac {b_{1}}{2}(2\psi_{0}(a)\psi_{0}(a+m)\] \[-2\psi_{0}(a)\psi_{0}(m)-\psi_{0}^{2}(a)+2\psi_{0}(1)\psi_{0}(a)+ \psi_{1}(a))+b_{7}\psi_{0}(a+m)+b_{8}\psi_{0}(a)\] \[+b_{9}.\]
The coefficients \(b_{i}\) in (117) are summarized in appendix C.2.
By inserting (80) and (117) into (46), we obtain
\[\mathbb{E}[C] = \frac{2m(a+m)(b+m)(a+b+m)}{(a+b+2m-1)_{3}}\left(\sum_{k=1}^{m} \frac{\psi_{0}(a+k)}{k}+\sum_{k=1}^{m}\frac{\psi_{0}(b+k)}{k}\right. \tag{118}\] \[\left.+\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{a+k}+\sum_{k=1}^{m} \frac{\psi_{0}(a+b+k+m)}{b+k}\right)+\mathrm{CF}.\]
The remaining task in obtaining (25) is to represent the single summations
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+k)}{k} \tag{119}\] \[\sum_{k=1}^{m}\frac{\psi_{0}(b+k)}{k}\] (120) \[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{a+k}\] (121) \[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{b+k} \tag{122}\]
in (118) into (19) as reproduced below
\[\Phi_{c,d}=\frac{c!}{(c+d)!}\sum_{k=1}^{c}\frac{(c+d-k)!}{(c-k)!}\frac{1}{k^{2 }},\qquad c,d\in\mathbb{Z}^{+}. \tag{123}\]
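For concreteness, \(\Phi_{c,d}\) is a finite sum of rationals and can be evaluated exactly; a minimal sketch in exact arithmetic:

```python
from fractions import Fraction
from math import factorial

def Phi(c, d):
    # Phi_{c,d} as in (19)/(123): c!/(c+d)! * sum_{k=1}^{c} (c+d-k)!/((c-k)! k^2)
    pref = Fraction(factorial(c), factorial(c + d))
    return pref * sum(Fraction(factorial(c + d - k), factorial(c - k) * k * k)
                      for k in range(1, c + 1))

print(Phi(3, 2), float(Phi(3, 2)))   # exact value and its decimal approximation
```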
By utilizing the identity (B.12), the summations in (119) and (120) are respectively computed into the summations \(\Phi_{m,a}\) and \(\Phi_{m,b}\) as
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+k)}{k} = \Phi_{m,a}+\mathrm{CF} \tag{124}\] \[\sum_{k=1}^{m}\frac{\psi_{0}(b+k)}{k} = \Phi_{m,b}+\mathrm{CF}. \tag{125}\]
To proceed with the summations (121) and (122), we have to consider their combination
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{a+k}+\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)} {b+k}. \tag{126}\]
We first rewrite (122) as
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{b+k}=\sum_{k=1}^{m}\frac{\psi_{0}(b+k)}{b+k}+\sum_{k=1}^{m}\frac{1}{b+k}\sum_{l=0}^{a+m-1}\frac{1}{b+k+l}, \tag{127}\]
where we have used the finite sum form of digamma function [42]
\[\psi_{0}(l)=-\gamma+\sum_{k=1}^{l-1}\frac{1}{k} \tag{128}\]
to replace
\[\psi_{0}(a+b+k+m) \tag{129}\]
by
\[\psi_{0}(b+k)+\sum_{l=0}^{a+m-1}\frac{1}{b+k+l}. \tag{130}\]
We then change the order of summation of the double sum in (127) to evaluate the sum over \(k\) first, where the remaining sums are further evaluated by the identity (B.3), leading to
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{b+k} = \sum_{k=1}^{a+m-1}\frac{\psi_{0}(b+k+1)}{k}-\sum_{k=1}^{a+m-1}\frac{\psi_{0}(b+k+m+1)}{k}\] \[\quad+\frac{1}{2}\Big{(}(2\psi_{0}(1)-2\psi_{0}(a+m)-\psi_{0}(b+m+1)-\psi_{0}(b+1))\] \[\quad\times(\psi_{0}(b+1)-\psi_{0}(b+m+1))-\psi_{1}(b+m+1)+\psi_{1}(b+1)\Big{)}. \tag{131}\]
Similarly, one has (121) manipulated to
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{a+k}=\sum_{k=1}^{b+m-1}\frac{\psi_{0}(a+k+1)}{k}-\sum_{k=1}^{b+m-1}\frac{\psi_{0}(a+k+m+1)}{k}+\mathrm{CF}. \tag{132}\]
Here, we also need the result
\[\sum_{k=1}^{a+m-1}\frac{\psi_{0}(b+k+m+1)}{k}+\sum_{k=1}^{b+m-1} \frac{\psi_{0}(a+k+m+1)}{k}\] \[= -\frac{1}{2}(\psi_{1}(a+m+1)+\psi_{1}(b+m+1))-\frac{(a+b+2m)\psi_ {0}(a+b+2m)+1}{(a+m)(b+m)}\] \[-\frac{1}{2}\left(2\psi_{0}(1)-\psi_{0}(a+m+1)-\psi_{0}(b+m+1) \right)(\psi_{0}(a+m+1)+\psi_{0}(b+m+1))\] \[+\psi_{1}(1), \tag{133}\]
which is obtained by evaluating the summation
\[\sum_{k=1}^{a+m-1}\frac{\psi_{0}(b+k+m+1)}{k}=\sum_{k=1}^{a+m-1}\frac{\psi_{0}(k) }{k}+\sum_{k=1}^{a+m-1}\frac{1}{k}\sum_{l=0}^{b+m}\frac{1}{(k+l)} \tag{134}\]
in the same manner as we have processed (127). Finally, by adding (131) and (132) before using (133), we obtain
\[\sum_{k=1}^{m}\frac{\psi_{0}(a+b+k+m)}{a+k}+\sum_{k=1}^{m}\frac{ \psi_{0}(a+b+k+m)}{b+k} \tag{135}\] \[= \sum_{k=1}^{b+m}\frac{\psi_{0}(a+k)}{k}+\sum_{k=1}^{a+m}\frac{\psi _{0}(b+k)}{k}+\mathrm{CF}.\] \[= \Phi_{b+m,a}+\Phi_{a+m,b}+\mathrm{CF}, \tag{136}\]
where the last equality (136) is obtained by using the identity (B.12). Inserting the results (124), (125), and (136) into (118), we complete the proof of proposition 1.
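Many of the intermediate identities above have fully explicit right-hand sides and can be spot-checked numerically; for instance (133), in a sketch assuming SciPy with illustrative integer parameters:

```python
from scipy.special import digamma, polygamma

a, b, m = 2, 3, 4   # illustrative non-negative integers

lhs = (sum(digamma(b + k + m + 1) / k for k in range(1, a + m))
       + sum(digamma(a + k + m + 1) / k for k in range(1, b + m)))
rhs = (-0.5 * (polygamma(1, a + m + 1) + polygamma(1, b + m + 1))
       - ((a + b + 2 * m) * digamma(a + b + 2 * m) + 1) / ((a + m) * (b + m))
       - 0.5 * (2 * digamma(1) - digamma(a + m + 1) - digamma(b + m + 1))
             * (digamma(a + m + 1) + digamma(b + m + 1))
       + polygamma(1, 1))
print(lhs, rhs)   # should agree
```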
### Average capacity over fermionic Gaussian states without particle number constraint
In this section, we compute the mean value of entanglement capacity (8) over fermionic Gaussian states without particle number constraint (18), proving proposition 2. As in the previous section, we first discuss the computation that leads to the summation representations in section 3.2.1. Simplification of the summations is performed in section 3.2.2.
#### 3.2.1 Correlation functions and integral calculations
For fermionic Gaussian states of arbitrary number of particles, by definition the average capacity is given by the integral
\[\mathbb{E}[C]=m\int_{0}^{1}u(x)g_{1}(x)\,\mathrm{d}x, \tag{137}\]
where \(g_{l}(x_{1},\ldots,x_{l})\) denotes the joint probability density of \(l\) arbitrary eigenvalues. Similar to the previous case, the density \(g_{l}(x_{1},\ldots,x_{l})\) can be written in terms of the \(l\)-point correlation function as
\[g_{l}(x_{1},\ldots,x_{l})=\frac{(m-l)!}{m!}\det\left(K\left(x_{i},x_{j}\right) \right)_{i,j=1}^{l}, \tag{138}\]
where
\[K\left(x,y\right)=\sqrt{w(x)w(y)}\sum_{k=0}^{m-1}\frac{J_{2k}^{(a,a)}(x)J_{2k }^{(a,a)}(y)}{h_{k}} \tag{139}\]
with the weight function being
\[w(x)=\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{a}. \tag{140}\]
By rewriting the orthogonality relation (44) as
\[\int_{0}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right) ^{a}J_{2k}^{(a,a)}(x)J_{2l}^{(a,a)}(x)\,\mathrm{d}x\] \[=\frac{\Gamma(2k+a+1)\Gamma(2k+a+1)}{(4k+2a+1)\Gamma(2k+1)\Gamma(2 k+2a+1)}\delta_{kl},\quad\Re(a)>-1, \tag{141}\]
we obtain the normalization constant \(h_{k}\) of the polynomials \(J_{2k}^{(a,a)}(x)\)
\[h_{k}=\frac{\Gamma(2k+a+1)\Gamma(2k+a+1)}{(4k+2a+1)\Gamma(2k+1)\Gamma(2k+2a+1)}. \tag{142}\]
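A quick numerical check of (141)-(142); a sketch assuming SciPy, with illustrative parameter values:

```python
from scipy.integrate import quad
from scipy.special import eval_jacobi, gamma

a, k = 2, 3   # illustrative parameters

lhs = quad(lambda x: ((1 - x) / 2) ** a * ((1 + x) / 2) ** a
           * eval_jacobi(2 * k, a, a, x) ** 2, 0.0, 1.0)[0]
rhs = (gamma(2 * k + a + 1) ** 2
       / ((4 * k + 2 * a + 1) * gamma(2 * k + 1) * gamma(2 * k + 2 * a + 1)))
print(lhs, rhs)   # should agree
```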
By using (45) and (139), the computation of the average capacity (137) boils down to computing two integrals
\[\mathbb{E}[C]=\mathrm{I}_{\mathrm{C}}-\mathrm{I}_{\mathrm{A}}, \tag{143}\]
where
\[\mathrm{I}_{\mathrm{C}} =\sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\frac{1-x}{2}\right)^{a}\left(\frac{1+x}{2}\right)^{a+1}\ln^{2}\frac{1+x}{2}J_{2k}^{(a,a)}(x)^{2}\,\mathrm{d}x \tag{144}\] \[\mathrm{I}_{\mathrm{A}} =\mathrm{A}_{1}+\mathrm{A}_{2} \tag{145}\]
with
\[\mathrm{A}_{1} =\sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\frac{1-x}{2} \right)^{a}\left(\frac{1+x}{2}\right)^{a+2}\ln^{2}\frac{1+x}{2}J_{2k}^{(a,a)}( x)^{2}\,\mathrm{d}x \tag{146}\] \[\mathrm{A}_{2} =\sum_{k=0}^{m-1}\frac{1}{h_{k}}\int_{-1}^{1}\left(\frac{1-x}{2} \right)^{a+1}\left(\frac{1+x}{2}\right)^{a+1}\ln\frac{1-x}{2}\ln\frac{1+x}{2} J_{2k}^{(a,a)}(x)^{2}\,\mathrm{d}x. \tag{147}\]
The integral in \(\mathrm{I}_{\mathrm{C}}\) is calculated by applying the identity (66), where we need to assign
\[a_{1}=b_{1}=a_{2}=b_{2}=a,\qquad k_{1}=k_{2}=2k, \tag{148}\]
and take the second derivative with respect to \(c\) before setting \(c=a+1\). Under the same specialization (148), the integral in \(\mathrm{A}_{1}\) is calculated by taking the second derivative with respect to \(c\) of the identity (66) before setting \(c=a+2\), whereas the integral in \(\mathrm{A}_{2}\) is calculated by taking derivatives with respect to both \(c\) and \(d\) of the identity (72) before setting \(c=d=a+1\). After resolving the indeterminacy of gamma and polygamma functions by using (75)-(77), one arrives at the summation representations (A.5)-(A.7) of the above integrals, as listed in appendix A.2.
#### 3.2.2 Simplification of summations
The remaining task in computing the mean value (143) is to simplify the summation representations (A.5)-(A.7) of the integrals \(\mathrm{I}_{\mathrm{C}}\) and \(\mathrm{I}_{\mathrm{A}}\).
We first compute \(\mathrm{I}_{\mathrm{C}}\) by simplifying the summations in (A.5). Note that (A.5) consists of two double summations. The first double summation is readily reduced to a single sum by evaluating the inner sum over \(j\). The resulting single sum is further
simplified by using the identities (B.1)-(B.8) similarly to the simplification of (A.1). Here, one will also need the results
\[\psi_{0}(mk) =\ln m+\frac{1}{m}\sum_{i=0}^{m-1}\psi_{0}\bigg{(}k+\frac{i}{m} \bigg{)}\,,\qquad m\in\mathbb{Z}^{+} \tag{149}\] \[\psi_{1}(mk) =\frac{1}{m^{2}}\sum_{i=0}^{m-1}\psi_{1}\bigg{(}\frac{i}{m}+k \bigg{)}\,,\qquad m\in\mathbb{Z}^{+} \tag{150}\]
to evaluate the sums involving polygamma functions with even argument. In (A.5), the second double sum is
\[\sum_{k=1}^{m-1}2(2a+4k+1)\sum_{j=0}^{2k-2}\frac{2(j+1)(a+j+1)}{(2k -j-1)_{2}(2a+j+2k+1)_{2}}\] \[\times\left(\psi_{0}(a+j+2)-\psi_{0}(2a+j+2k+3)-\psi_{0}(2k-j-1)+ \psi_{0}(j+2)\right). \tag{151}\]
By the partial fraction decomposition
\[\frac{2(j+1)(a+j+1)}{(2k-j-1)_{2}(2a+j+2k+1)_{2}}\] \[=\frac{1}{2a+4k+1}\left(\frac{-2a-2k-1}{2a+j+2k+2}+\frac{2(a+k)} {2a+j+2k+1}-\frac{2k}{j-2k+1}+\frac{2k+1}{j-2k}\right), \tag{152}\]
we rewrite (151) as the sum of the following five double summations (153)-(157),
\[2\sum_{k=1}^{m-1}\sum_{j=0}^{2k-2}\left(\frac{2a+2k+1}{2a+j+2k+2 }-\frac{2(a+k)}{2a+j+2k+1}\right)\psi_{0}(2a+j+2k+3) \tag{153}\] \[2\sum_{k=1}^{m-1}\sum_{j=0}^{2k-2}\left(\frac{2k}{j-2k+1}-\frac{2 k+1}{j-2k}\right)\psi_{0}(2k-j-1)\] (154) \[2\sum_{k=1}^{m-1}\sum_{j=0}^{2k-2}\left(\frac{2k}{2k-j-1}-\frac{2 k+1}{2k-j}\right)\psi_{0}(2a+j+2k+3)\] (155) \[2\sum_{k=1}^{m-1}\sum_{j=0}^{2k-2}\left(\frac{2a+2k+1}{2a+j+2k+2 }-\frac{2(a+k)}{2a+j+2k+1}\right)\psi_{0}(2k-j-1)\] (156) \[2\sum_{k=1}^{m-1}\sum_{j=0}^{2k-2}\left(\frac{-2a-2k-1}{2a+j+2k+ 2}+\frac{2(a+k)}{2a+j+2k+1}-\frac{2k}{j-2k+1}+\frac{2k+1}{j-2k}\right)\] \[\times(\psi_{0}(a+j+2)+\psi_{0}(j+2)). \tag{157}\]
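The decomposition (152) used above is a purely rational identity and can be confirmed symbolically; a sketch assuming SymPy:

```python
import sympy as sp

a, j, k = sp.symbols('a j k', positive=True)

lhs = (2 * (j + 1) * (a + j + 1)
       / ((2*k - j - 1) * (2*k - j) * (2*a + j + 2*k + 1) * (2*a + j + 2*k + 2)))
rhs = ((-2*a - 2*k - 1) / (2*a + j + 2*k + 2) + 2*(a + k) / (2*a + j + 2*k + 1)
       - 2*k / (j - 2*k + 1) + (2*k + 1) / (j - 2*k)) / (2*a + 4*k + 1)

print(sp.cancel(lhs - rhs))   # prints 0 if the decomposition holds
```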
We now simplify each of the summations (153)-(157) into single sums. Specifically, the summation (153) is simplified by using the identity (B.3) to evaluate the sum over \(j\). The summation (154) is simplified similarly after shifting the index \(j\to 2k-2-j\). The summation (155) is simplified by using the identity (B.1) to evaluate the sum over \(k\) after shifting the index \(j\to 2k-2-j\) and changing the summation order as
\[2\sum_{j=0}^{m-1}\sum_{k=j+1}^{m-1}\left(\frac{2k}{2j+1}-\frac{2k+1}{2j+2} \right)\psi_{0}(2a-2j+4k+1)\]
\[+2\sum_{j=0}^{m-1}\sum_{k=j+1}^{m-1}\left(\frac{2k}{2j+2}-\frac{2k+1}{2j+3}\right) \psi_{0}(2a-2j+4k), \tag{158}\]
where one has divided the summation over \(j\) into even and odd ones. The remaining two sums (156)-(157) are simplified by an approach similar to that used for (155). For (156), one needs to shift the index \(j\to 2k-2-j\) before changing the summation order to evaluate the sum over \(k\). For (157), one directly evaluates the sum over \(k\) by changing the summation order.
Putting together the results of (153)-(157), the summation (A.5) now consists of single sums, cf. (A.1), which are further simplified by the identities (B.1)-(B.8). This leads to
\[\mathrm{I}_{\mathrm{C}}= \sum_{k=1}^{m-1}\left(\left(-\frac{1}{4k}-\frac{1}{4k+2}\right) \psi_{0}(a+k)+\left(\frac{4m-3}{4k+2}+\frac{4m+1}{4k}\right)\psi_{0}(a+2k)\right.\] \[+\left(\frac{4a+4m-1}{2a+4k+2}+\frac{4a+4m-1}{2a+4k}-\frac{2a}{2k +1}+\frac{1-2a}{2k}+\frac{1}{2(a+k)}\right)\psi_{0}(2a+2k)\] \[+\left(\frac{2a-1}{2k}+\frac{2a+1}{2k+1}+\frac{-2a-1}{2a+2k}+ \frac{1-2a}{2a+2k+1}\right)\psi_{0}(2a+4k)+\left(\frac{1}{4k+2}\right.\] \[\left.-\frac{1}{4k}\right)\psi_{0}(a+k+m)+\left(\frac{-2a-2m+1}{ 2k}-\frac{2a+2m}{2k+1}\right)\psi_{0}(2a+2k+2m)\right)\] \[+c_{0}\psi_{1}(2a+2m)-\frac{1}{4}\psi_{1}(a+m)+c_{1}\left(\psi_{ 1}(2a)-\psi_{0}^{2}(2a)\right)+c_{2}\psi_{1}(a)\] \[+c_{3}\psi_{0}(2a+4m)\left(\psi_{0}(a+2m)+\psi_{0}(2a+2m)-\psi_{0 }(2a+4m)\right)-2c_{0}\psi_{0}(2a+2m)\] \[\times(\psi_{0}(a)+\psi_{0}(2m)-\psi_{0}(1))-c_{0}\psi_{0}^{2}(2a +2m)+c_{4}\psi_{0}^{2}(a+2m)+c_{5}\psi_{0}(a)\] \[\times\psi_{0}(a+2m)+\frac{1}{2}\left(\psi_{0}(a)+\psi_{0}(2m)- \psi_{0}(1)\right)\psi_{0}(a+m)-\frac{1}{2}\psi_{0}(a)\psi_{0}(m)\] \[+c_{6}\psi_{0}(a)\psi_{0}(2m)+c_{7}\psi_{0}^{2}(a)+c_{8}\psi_{0}(2 a+4m)+c_{9}\psi_{0}(2a+2m)+c_{10}\psi_{0}(a+2m)\] \[+c_{11}\psi_{0}(a+m)+c_{12}\psi_{0}(2a)+c_{13}\psi_{0}(1)\psi_{0} (a)+c_{14}\psi_{0}(a)+c_{15}\left(\psi_{0}\left(\frac{a}{2}+m+\frac{1}{4} \right)\right.\] \[\left.-\,\psi_{0}\left(\frac{a}{2}+\frac{1}{4}\right)\right)+c_{ 16}\psi_{0}\left(\frac{a}{2}+m\right)+c_{17}\left(\psi_{0}(m)-2\psi_{0}(2m)+ \psi_{0}(1)\right)+c_{18}\psi_{0}\left(\frac{a}{2}\right)\] \[-\,2m, \tag{159}\]
where the coefficients \(c_{i}\) are listed in (C.19)-(C.37) in appendix C.3.
The simplification of (A.6) and (A.7) in computing \(\mathrm{I}_{\mathrm{A}}\) is parallel to that of (A.5) and (A.4), respectively, and much of the detail is omitted here. However, we note that when first evaluating the inner summations over \(i\) and \(j\) in (A.7), the resulting sum simply becomes
\[-r(k)\frac{2\left(2a^{2}+4ak+a+4k^{2}+2k-1\right)}{(2a+4k-1)(2a+4k+3)}\sum_{j= 1}^{2k}\frac{\psi_{0}(2a+j+2k)}{j}+\mathrm{CF}, \tag{160}\]
where the term
\[r(k)=\frac{\Gamma(2a+4k+4)}{(2a+4k+1)\Gamma(2k+1)\Gamma(2a+2k+1)} \tag{161}\]
cancels completely with that in (A.7). The remaining sums now only consist of rational functions and polygamma functions, which are readily simplifiable. Inserting the resulting forms of (A.6) and (A.7) into (145), we obtain
\[\mathrm{I_{A}} = \sum_{k=1}^{m-1}\left(\left(-\frac{1}{2(2k+1)}-\frac{1}{4k}\right) \psi_{0}(a+k)+\left(\frac{2am-2a+6m^{2}-6m+1}{(2k+1)(2a+4m-1)}\right.\right. \tag{162}\] \[\left.\left.+\frac{1}{4(a+k)}+\frac{4am+2a+12m^{2}-1}{4k(2a+4m-1) }\right)\psi_{0}(a+2k)+\left(\frac{1-2a}{2k}+\frac{1}{2(a+k)}-\frac{2a}{2k+1}\right.\right.\] \[\left.\left.+\frac{2\left(2a^{2}+5am-a+3m^{2}-m\right)}{2a+4m-1} \left(\frac{1}{a+2k+1}+\frac{1}{a+2k}\right)\right)\psi_{0}(2a+2k)\right.\] \[\left.\left.+\left(\frac{2a-1}{2k}+\frac{-2a-1}{2(a+k)}+\frac{2a+ 1}{2k+1}+\frac{1-2a}{2a+2k+1}\right)\psi_{0}(2a+4k)+\left(\frac{1}{2(2k+1)} \right.\right.\right.\] \[\left.\left.-\,\frac{1}{4k}\right)\psi_{0}(a+k+m)+\left(\frac{-2 a-2m+1}{2k}-\frac{2a+2m}{2k+1}\right)\psi_{0}(2a+2k+2m)\right)\] \[\left.\left.+\,d_{0}\psi_{1}(2a+2m)+d_{1}\left(\psi_{1}(2a)-\psi _{0}^{2}(2a)\right)+d_{2}\psi_{1}(a)-\frac{1}{4}\psi_{1}(a+m)\right.\right.\] \[\left.\left.+\,d_{3}\left(\psi_{0}(a+2m)+\psi_{0}(2a+2m)-\psi_{0} (2a+4m)\right)\psi_{0}(2a+4m)+d_{0}\psi_{0}(2a+2m)\right.\right.\] \[\left.\left.\times\left(-\psi_{0}(2a+2m)-2\psi_{0}(2m)+2\psi_{0} (1)\right)+d_{4}\psi_{0}(a+2m)\psi_{0}(2a+2m)\right.\right.\] \[\left.\left.+\,d_{5}\psi_{0}^{2}(a+2m)+d_{6}\psi_{0}(a)\psi_{0}(2 a+2m)+a\psi_{0}(a)\left(\psi_{0}(a)-2\psi_{0}(a+2m)\right)\right.\right.\] \[\left.\left.+\,\frac{1}{4}\left(\psi_{0}(a)+2\psi_{0}(2m)-2\psi_{ 0}(1)\right)\psi_{0}(a+m)+d_{7}\psi_{0}(a)\psi_{0}(2m)-\frac{1}{4}\psi_{0}(a) \psi_{0}(m)\right.\right.\] \[\left.\left.+\,d_{8}\psi_{0}(2a+4m)+d_{9}\psi_{0}(2a+2m)+d_{10} \psi_{0}(a+2m)+d_{11}\psi_{0}(a+m)\right.\right.\] \[\left.\left.+\,d_{12}\left(\psi_{0}(m)-2\psi_{0}(2m)+\psi_{0}(1) \right)+d_{13}\psi_{0}(2a)+d_{14}\psi_{0}(1)\psi_{0}(a)+d_{15}\psi_{0}(a)+d_{ 16}\right.\right.\] \[\left.\left.\times\left(\psi_{0}\left(\frac{a}{2}+m+\frac{1}{4} \right)-\psi_{0}\left(\frac{a}{2}+\frac{1}{4}\right)\right)+d_{17}\left(\psi_{ 0}\left(\frac{a}{2}\right)-\psi_{0}\left(\frac{a}{2}+m\right)\right)-m,\right.\]
where the coefficients \(d_{i}\) are listed in (C.38)-(C.55) in appendix C.4.
Inserting the results (159) and (162) into (143), the mean capacity becomes
\[\mathbb{E}[C] = \frac{m(a+m)}{2a+4m-1}\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k)}{k}- \frac{1}{4}\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k)}{a+k}+\frac{(2m-1)(2a+2m-1)} {2(2a+4m-1)} \tag{163}\] \[\times\sum_{k=1}^{m-1}\left(\frac{\psi_{0}(a+2k+1)}{2k+1}+\frac{ \psi_{0}(2a+2k)}{a+2k}+\frac{\psi_{0}(2a+2k+1)}{a+2k+1}\right)+\mathrm{CF},\]
where we recall that the shorthand notation CF denotes the closed-form terms omitted. In the above result (163), we rewrite the single summations
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k+1)}{2k+1} \tag{164}\]
and
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(2a+2k+1)}{a+2k+1} \tag{165}\]
as
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k+1)}{2k+1}=\sum_{k=2}^{2m}\frac{\psi_{0}(a+k)}{k }-\frac{1}{2}\sum_{k=1}^{m}\frac{\psi_{0}(a+2k)}{k} \tag{166}\]
and
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(2a+2k+1)}{a+2k+1} =\sum_{k=2}^{2m}\frac{\psi_{0}(2a+k)}{a+k}-\sum_{k=1}^{m}\frac{\psi_{0}(2a+2k)}{a+2k} \tag{167}\] \[=\sum_{k=1}^{a+2m}\frac{\psi_{0}(a+k)}{k}-\sum_{k=1}^{m}\frac{\psi_{0}(2a+2k)}{a+2k}+\mathrm{CF}, \tag{168}\]
respectively. Here, the equality (168) is obtained by shifting the summation index as
\[\sum_{k=2}^{2m}\frac{\psi_{0}(2a+k)}{a+k}=\sum_{k=2+a}^{2m+a}\frac{\psi_{0}(a+ k)}{k}=\sum_{k=1}^{2m+a}\frac{\psi_{0}(a+k)}{k}-\sum_{k=1}^{a+1}\frac{\psi_{0}(a +k)}{k}, \tag{169}\]
before evaluating the last sum by the identity (B.5). Moreover, for the summation
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k)}{k}, \tag{170}\]
we have
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k)}{k}=\sum_{k=1}^{m-1}\left(\frac{\psi_{0} (a+k+m)}{k}+\frac{\psi_{0}(a+k)}{k}+\frac{\psi_{0}(a+2k)}{a+k}\right)+\mathrm{ CF}, \tag{171}\]
which is obtained by the fact that
\[\sum_{k=1}^{m-1}\frac{\psi_{0}(a+2k)}{a+k}=\sum_{k=1}^{m-1}\sum_{l=0}^{k-1} \frac{1}{(a+k)(a+k+l)}+\sum_{k=1}^{m-1}\frac{\psi_{0}(a+k)}{a+k} \tag{172}\]
similarly to the identity (127). By replacing the sums (164), (165), and (170) in (163) with their equivalent forms (166), (168), and (171), respectively, we arrive at
\[\mathbb{E}[C] =\frac{(2m-1)(2a+2m-1)}{4a+8m-2}\left(\sum_{k=1}^{2m-1}\frac{ \psi_{0}(a+k)}{k}+\sum_{k=1}^{2m+a-1}\frac{\psi_{0}(a+k)}{k}\right) \tag{173}\] \[\quad+\frac{1}{4}\left(\sum_{k=1}^{m-1}\frac{\psi_{0}(a+k+m)}{k} +\sum_{k=1}^{m-1}\frac{\psi_{0}(a+k)}{k}\right)+\mathrm{CF}.\]
Finally, replacing the single sums in (173) by the short-hand notation \(\Phi_{c,d}\) defined in (19), the claimed result (33) is obtained. This completes the proof of proposition 2.
### Asymptotic capacity
In this section, we compute the limiting average capacity in corollary 1. This is a rather straightforward task starting from the exact formula of average capacity. Specifically, the limiting values in (36) are obtained by computing the limits of the exact capacity (25)
and (33) in the regime (35). To this end, the following asymptotic results are needed. The first one is the limiting behavior of polygamma functions
\[\psi_{0}(x) = \ln(x)-\frac{1}{2x}-\sum_{l=1}^{\infty}\frac{B_{2l}}{2lx^{2l}}, \qquad x\to\infty, \tag{174}\] \[\psi_{1}(x) = \frac{1+2x}{2x^{2}}+\sum_{l=1}^{\infty}\frac{B_{2l}}{x^{2l+1}}, \qquad x\to\infty, \tag{175}\]
where \(B_{k}\) is the \(k\)-th Bernoulli number [32]. The second one is the fact that in the asymptotic regime
\[c\to\infty,\qquad\mbox{with a fixed }d, \tag{176}\]
one has
\[\Phi_{c,d}\longrightarrow\psi_{1}(1)=\frac{\pi^{2}}{6}. \tag{177}\]
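The convergence in (177) is rapid and easy to observe numerically; a sketch assuming SciPy for the Pochhammer symbol:

```python
import numpy as np
from scipy.special import poch

def Phi(c, d):
    # Phi_{c,d} from (19): equivalent to sum_k poch(c-k+1, d) / (poch(c+1, d) k^2)
    return sum(poch(c - k + 1, d) / (poch(c + 1, d) * k ** 2) for k in range(1, c + 1))

for c in (10, 100, 1000):
    print(c, Phi(c, 3))
print(np.pi ** 2 / 6)   # Phi_{c,d} approaches pi^2/6 as c grows with d fixed
```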
For the exact capacity formula (25) of fermionic Gaussian states with fixed particle number (17), we now have in the limit (35),
\[\frac{\alpha_{0}}{m} = \frac{1}{8}+o\left(\frac{1}{m}\right) \tag{178}\] \[\frac{\alpha_{1}}{m} = o\left(\frac{1}{m}\right)\] (179) \[\frac{\alpha_{2}}{m} = o\left(\frac{1}{m}\right)\] (180) \[\frac{\alpha_{3}}{m} = -\frac{1}{2}+o\left(\frac{1}{m}\right), \tag{181}\]
and
\[\psi_{1}(a+b+m+1)+\psi_{1}(a+m+1) = o\left(\frac{1}{m}\right) \tag{182}\] \[\psi_{0}(a+m+1)-\psi_{0}(a+b+m+1) = o\left(\frac{1}{m}\right)\] (183) \[\psi_{0}(a+m+1)=o(\ln m), \tag{184}\]
where we recall \(a=n-p\) and \(b=p-m\). Consequently, we obtain
\[\mathbb{E}[C] = 2\left(\frac{1}{8}+o\left(\frac{1}{m}\right)\right)\left(\frac{ \pi^{2}}{2}+o\left(\frac{1}{m}\right)\right) \tag{185}\] \[+2o\left(\frac{1}{m}\right)o(\ln m)-1+o\left(\frac{1}{m}\right),\]
where, by using the fact that
\[\lim_{m\to\infty}\frac{\ln m}{m}=0, \tag{186}\]
one arrives at the claimed asymptotic result
\[\mathbb{E}[C]\stackrel{{(35)}}{{\longrightarrow}}\frac{\pi^{2}}{8}-1. \tag{187}\]
For the exact capacity (33) of fermionic Gaussian states with arbitrary particle number (18), similarly we have in the limit (35),
\[\psi_{1}(m+n)=o\left(\frac{1}{m}\right) \tag{188}\] \[\psi_{1}(n)=o\left(\frac{1}{m}\right)\] (189) \[\psi_{0}(2n)-\psi_{0}(m+n)=o\left(\frac{1}{m}\right)\] (190) \[\psi_{0}(m+n)-\psi_{0}(n)=\ln 2+o\left(\frac{1}{m}\right)\] (191) \[\psi_{0}(m+n)-\psi_{0}(n-m)=-\psi_{0}(n-m)+\ln 2+o(\ln m). \tag{192}\]
As a result, we have
\[\mathbb{E}[C] = \frac{1}{3}\pi^{2}\left(o\left(\frac{1}{m}\right)+\frac{1}{2} \right)+\left(o\left(\frac{1}{m}\right)+\frac{1}{4}\right)\left(o\left(\frac{ 1}{m}\right)-\frac{\pi^{2}}{6}\right) \tag{193}\] \[+o\left(\frac{1}{m}\right)o(\ln m)+o\left(\frac{1}{m}\right)-1,\]
which leads to the claimed result
\[\mathbb{E}[C]\stackrel{{(35)}}{{\longrightarrow}}\frac{\pi^{2}}{8}-1.\]
This completes the proof of corollary 1.
## Appendix A Summation representations of integrals

_Appendix A.1. Summation representations of integrals \(\mathrm{I}_{\mathcal{C}}{}^{(a,b)}\), \(\mathcal{A}_{1}^{(a,b)}\), \(\mathcal{A}_{2}(m-1,m-1)\), and \(\mathcal{A}_{2}(m-2,m)\)_
\[\mathrm{I}_{\mathcal{C}}{}^{(a,b)} =\frac{2m(b+m)}{a+b+2m}\sum_{i=1}^{m-2}\frac{i}{(m-i-1)_{2}}\left( \psi_{0}(b+i+1)-\psi_{0}(m-i-1)+\psi_{0}(i+1)\right.\] \[\quad-\psi_{0}(a+b+i+m+1))-\frac{(a+m)(a+b+m)}{a+b+2m}\sum_{i=1}^{ m-1}\frac{2i}{(a+b+i+m)_{2}}\] \[\quad\times\left(\psi_{0}(b+i+1)-\psi_{0}(m-i)+\psi_{0}(i+1)-\psi _{0}(a+b+i+m+2)\right)\] \[\quad+\frac{m(b+m)}{a+b+2m}\sum_{i=m}^{m+1}\frac{(i-1)(-1)^{i+m-1} }{\Gamma(i-m+1)\Gamma(m-i+2)}\Big{(}\psi_{1}(b+i)-\psi_{1}(i-m+1)\] \[\quad+\psi_{1}(i)-\psi_{1}(a+b+i+m)+(\psi_{0}(b+i)-\psi_{0}(i-m+1 )+\psi_{0}(i)\] \[\quad-\psi_{0}(a+b+i+m))^{2}\Big{)} \tag{24}\]
\[\mathcal{A}_{1}^{(a,b)} =\] \[\quad-\psi_{0}(a+b+i+m+2)-\psi_{0}(m-i-2))+\frac{2(a+m)(a+b+m)}{a +b+2m}\] \[\quad\times\sum_{i=1}^{m-2}\frac{(b+i+1)(i)_{2}}{(m-i-1)(a+b+i+m) _{3}}(\psi_{0}(b+i+2)-\psi_{0}(a+b+i+m+3)\] \[\quad-\psi_{0}(m-i-1)+\psi_{0}(i+2))-\frac{m(b+m)}{a+b+2m}\sum_{i =m-3}^{m-1}\frac{(b+i+2)(-1)^{i+m}(i+1)_{2}}{\Gamma(m-i)\Gamma(i-m+4)}\] \[\quad\times\frac{1}{a+b+i+m+2}(\psi_{1}(b+i+3)-\psi_{1}(i-m+4)- \psi_{1}(a+b+i+m+3)\] \[\quad+(\psi_{0}(i+3)-\psi_{0}(a+b+i+m+3)-\psi_{0}(i-m+4)+\psi_{0} (b+i+3))^{2})\] \[\quad+\psi_{1}(i+3)-\frac{(a+m)(a+b+m)(b+m)(m-1)_{2}}{(a+b+2m)(a +b+2m-1)_{3}}\Big{(}-\psi_{1}(a+b+2m+2)\] \[\quad+\psi_{1}(b+m+1)+\psi_{1}(m+1)-\psi_{1}(1)+\psi_{0}^{2}(1)+( \psi_{0}(b+m+1)+\psi_{0}(m+1)\] \[\quad-\psi_{0}(a+b+2m+2))(\psi_{0}(b+m+1)+\psi_{0}(m+1)-\psi_{0}(a +b+2m+2)\] \[\quad-2\psi_{0}(1))\Big{)} \tag{25}\]
\[\mathcal{A}_{2}(m-1,m-1)\] \[=\frac{2\Gamma(a+m+1)\Gamma(b+m+1)}{\Gamma(a+b+2m+2)}\bigg{(} \sum_{i=1}^{m}\frac{i(m-i+1)(-1)^{i}}{\Gamma(a+i+1)\Gamma(b-i+m+2)}\sum_{j=i-2 }^{i}(-1)^{j}\] \[\quad\times\frac{\Gamma(a+i-j+m)\Gamma(b-i+j+m+2)}{\Gamma(j+1) \Gamma(i-j+1)\Gamma(j-i+3)\Gamma(m-j)}\Big{(}\psi_{1}(a+b+2m+2)\] \[\quad+(\psi_{0}(a+i-j+m)-\psi_{0}(i-j+1)+\psi_{0}(i+1)-\psi_{0}( a+b+2m+2))\]
\[\times\left(\psi_{0}(a+b+2m+2)-\psi_{0}(b-i+j+m+2)+\psi_{0}(j-i+3)\right.\] \[-\psi_{0}(m-i+2))\Big{)}+\sum_{i=1}^{m-2}\frac{i(m-i+1)}{\Gamma(b+ i+1)\Gamma(a-i+m+2)}\sum_{j=1}^{m-i-1}\frac{\Gamma(b-j+m)}{\Gamma(m-i-j)}\] \[\times\frac{\Gamma(a+j+m+2)}{(j)_{3}\Gamma(i+j+1)}(\psi_{0}(a+j+m +2)+\psi_{0}(m-i+2)-\psi_{0}(j+3)\] \[-\psi_{0}(a+b+2m+2))+\sum_{i=1}^{m-2}\frac{i(m-i+1)}{\Gamma(a+i+1 )\Gamma(b-i+m+2)}\] \[\times\sum_{j=1}^{m-i-1}\frac{\Gamma(a-j+m)\Gamma(b+j+m+2)}{(j)_{ 3}\Gamma(i+j+1)\Gamma(m-i-j)}(\psi_{0}(b+j+m+2)+\psi_{0}(m-i+2)\] \[-\psi_{0}(j+3)-\psi_{0}(a+b+2m+2))\Bigg{)} \tag{23}\]
\[\mathcal{A}_{2}(m-2,m)\] \[=\frac{2\Gamma(a+m)\Gamma(b+m)}{\Gamma(a+b+2m+2)}\bigg{(}\sum_{i= 1}^{m-1}\frac{\Gamma(a+m+1)\Gamma(b+m+1)}{\Gamma(i)\Gamma(a+i+1)\Gamma(m-i) \Gamma(b-i+m+1)}\] \[\times\Big{(}(\psi_{0}(a+b+2m+2)-\psi_{0}(a+m+1)-\psi_{0}(i+1)+ \psi_{0}(1))(\psi_{0}(m-i+1)\] \[-\psi_{0}(a+b+2m+2)+\psi_{0}(b+m+1)-\psi_{0}(1))+\psi_{1}(a+b+2m+ 2)\Big{)}\] \[+\sum_{i=1}^{m-1}\frac{i(m-i)}{\Gamma(b+i+1)\Gamma(a-i+m+1)}\sum_ {j=1}^{m-i}\frac{\Gamma(a+j+m+1)\Gamma(b-j+m+1)}{j\Gamma(i+j+1)\Gamma(m-i-j+1)}\] \[\times(\psi_{0}(a+j+m+1)-\psi_{0}(a+b+2m+2)+\psi_{0}(m-i+1)-\psi_ {0}(j+1))\] \[+\sum_{i=1}^{m-1}\frac{i(m-i)}{\Gamma(a+i+1)\Gamma(b-i+m+1)}\sum_ {j=1}^{m-i}\frac{\Gamma(a-j+m+1)\Gamma(b+j+m+1)}{j\Gamma(i+j+1)\Gamma(m-i-j+1)}\] \[\times(\psi_{0}(b+j+m+1)-\psi_{0}(a+b+2m+2)+\psi_{0}(m-i+1)-\psi_ {0}(j+1))\Bigg{)} \tag{24}\]
_Appendix A.2. Summation representations of integrals \(\rm I_{C}\), \(\rm A_{1}\), and \(\rm A_{2}\)_
\[\rm I_{C}=\left(\psi_{0}(a+2)-\psi_{0}(2a+3)\right)^{2}+\psi_{1}(a +2)-\psi_{1}(2a+3)+\sum_{k=1}^{m-1}2(2a+4k+1)\] \[\times\Bigg{(}\sum_{j=2k-1}^{2k}\frac{(-1)^{j}(j+1)(a+j+1)}{(2a+j+ 2k+1)_{2}}\Big{(}(\psi_{0}(j+2)-\psi_{0}(2a+j+2k+3)\] \[+\psi_{0}(a+j+2)-\psi_{0}(j-2k+2))^{2}+\psi_{1}(a+j+2)-\psi_{1}(2 a+j+2k+3)\] \[+\psi_{1}(j+2)-\psi_{1}(j-2k+2)\Big{)}+\sum_{j=0}^{2k-2}\frac{2(j +1)(a+j+1)}{(2k-j-1)_{2}(2a+j+2k+1)_{2}}\]
\[\times\left(\psi_{0}(a+j+2)-\psi_{0}(2a+j+2k+3)-\psi_{0}(2k-j-1)+ \psi_{0}(j+2)\right)\Bigg{)}\] (A.5)
\[\mathrm{A_{1}}= \sum_{k=0}^{m-1}2(2a+4k+1)\left(\sum_{j=2k-2}^{2k}\frac{(-1)^{j}(j +1)_{2}(a+j+1)_{2}}{\Gamma(2k-j+1)\Gamma(j-2k+3)(2a+j+2k+1)_{3}}\right.\] \[\times\left((\psi_{0}(a+j+3)-\psi_{0}(2a+j+2k+4)-\psi_{0}(j-2k+3)+ \psi_{0}(j+3))^{2}\right.\] \[-\psi_{1}(2a+j+2k+4)+\psi_{1}(a+j+3)-\psi_{1}(j-2k+3)+\psi_{1}(j+ 3)\Big{)}\] \[+\sum_{j=0}^{2k-3}\frac{2(j+1)_{2}(a+j+1)_{2}}{(2k-j-2)_{3}(2a+j +2k+1)_{3}}(\psi_{0}(2a+j+2k+4)-\psi_{0}(a+j+3)\] \[\left.+\,\psi_{0}(2k-j-2)-\psi_{0}(j+3))\right)\] (A.6)
\[\mathrm{A_{2}}= \sum_{k=0}^{m-1}\frac{(2a+4k+1)\Gamma(2k+1)\Gamma(2a+2k+1)}{ \Gamma(2a+4k+4)}\left(\,\sum_{i=0}^{2k}\frac{2(i+1)(2k-i+1)}{\Gamma(i+1)\Gamma (a+i+1)}\right.\] \[\times\frac{\Gamma^{2}(a+2k+2)}{\Gamma(2k-i+1)\Gamma(a+2k-i+1)}( (\psi_{0}(a+2k+2)-\psi_{0}(2a+4k+4)-\psi_{0}(2)\] \[+\psi_{0}(2k-i+2))(\psi_{0}(a+2k+2)-\psi_{0}(2a+4k+4)+\psi_{0}(i+ 2)-\psi_{0}(2))\] \[-\psi_{1}(2a+4k+4))-\sum_{j=0}^{2k}\frac{(j+1)\Gamma(a+2k+1) \Gamma(a+2k+3)}{\Gamma(j)\Gamma(a+j+1)\Gamma(2k-j+1)\Gamma(a-j+2k+1)}\] \[\times((\psi_{0}(a+2k+1)-\psi_{0}(2a+4k+4)+\psi_{0}(2k-j+2)-\psi_ {0}(1))(\psi_{0}(a+2k+3)\] \[-\psi_{0}(2a+4k+4)+\psi_{0}(j+2)-\psi_{0}(3))-\psi_{1}(2a+4k+4))- \sum_{j=0}^{2k}\frac{(2k-j+1)}{\Gamma(j+1)}\] \[\times\frac{\Gamma(a+2k+1)\Gamma(a+2k+3)}{\Gamma(a+j+1)\Gamma(2k -j)\Gamma(2k-j+a+1)}((\psi_{0}(a+2k+3)-\psi_{0}(2a+4k+4)\] \[+\psi_{0}(2k-j+2)-\psi_{0}(3))(\psi_{0}(a+2k+1)-\psi_{0}(2a+4k+4)+ \psi_{0}(j+2)\] \[-\psi_{0}(1))-\psi_{1}(2a+4k+4))+4\sum_{j=0}^{2k}\frac{\Gamma(a-j +2k)\Gamma(a+j+2k+4)}{(j+1)_{3}}\] \[\times\sum_{i=0}^{2k-j-2}\frac{(2k-i-j-1)(i+j+3)}{\Gamma(i+1) \Gamma(2k-i+1)\Gamma(a+i+j+3)\Gamma(a-i-j+2k-1)}\] \[\times(\psi_{0}(a+j+2k+4)-\psi_{0}(2a+4k+4)+\psi_{0}(i+j+4)-\psi_ {0}(j+4))\Bigg{)}\,.\] (A.7)
## Appendix B List of summation identities
In this appendix, we list the finite sum identities useful in simplifying the summations in appendix A. Proofs of these identities can be found, for example, in [6, 8, 10, 11, 21, 23, 43].
Here, it is sufficient to assume \(a,b\geq 0,a\neq b\) in identities (B.1)-(B.3), (B.6)-(B.7), \(a>m\) in (B.8), and \(a,b\geq 0\), \(n>m\) in (B.9)-(B.14).
\[\sum_{i=1}^{m}\psi_{0}(i+a) =(m+a)\psi_{0}(m+a+1)-a\psi_{0}(a+1)-m\] (B.1) \[\sum_{i=1}^{m}\psi_{1}(i+a) =(m+a)\psi_{1}(m+a+1)-a\psi_{1}(a+1)+\psi_{0}(m+a+1)-\psi_{0}(a+1)\] (B.2) \[\sum_{i=1}^{m}\frac{\psi_{0}(i+a)}{i+a} =\frac{1}{2}\left(\psi_{1}(m+a+1)-\psi_{1}(a+1)+\psi_{0}^{2}(m+a+1 )-\psi_{0}^{2}(a+1)\right)\] (B.3) \[\sum_{i=1}^{m}\frac{\psi_{0}(m+1-i)}{i} =\psi_{0}^{2}(m+1)-\psi_{0}(1)\psi_{0}(m+1)+\psi_{1}(m+1)-\psi_{1 }(1)\] (B.4) \[\sum_{i=1}^{m}\frac{\psi_{0}(m+1+i)}{i} =\psi_{0}^{2}(m+1)-\psi_{0}(1)\psi_{0}(m+1)-\frac{1}{2}\psi_{1}(m+ 1)+\frac{\psi_{1}(1)}{2}\] (B.5) \[\sum_{i=1}^{m}\psi_{0}(i+a)\psi_{0}(i+b) =(b-a)\sum_{i=1}^{m-1}\frac{\psi_{0}(a+i)}{b+i}+(m+a)\psi_{0}(m+a) \psi_{0}(m+b)-a\times\] \[\psi_{0}(a+1)\psi_{0}(b+1)-(m+a-1)\psi_{0}(m+a)+a\psi_{0}(a+1)-\] \[(m+b)\psi_{0}(m+b)+(b+1)\psi_{0}(b+1)+2m-2\] (B.6) \[\sum_{i=1}^{m}\frac{\psi_{0}(i+b)}{i+a} = -\sum_{i=1}^{m}\frac{\psi_{0}(i+a)}{i+b}+\psi_{0}(m+a+1)\psi_{0}(m +b+1)-\psi_{0}(a+1)\times\] (B.7) \[\psi_{0}(b+1)+\frac{1}{a-b}(\psi_{0}(m+a+1)-\psi_{0}(m+b+1)-\psi_ {0}(a+1)+\] \[\psi_{0}(b+1))\] \[\sum_{i=1}^{m}\frac{\psi_{0}(a+1-i)}{i} = -\sum_{i=1}^{m}\frac{\psi_{0}(i+a-m)}{i}+(\psi_{0}(a-m)+\psi_{0}( a+1))(\psi_{0}(m+1)-\] (B.8) \[\psi_{0}(1))+\frac{1}{2}\left((\psi_{0}(a-m)-\psi_{0}(a+1))^{2}+ \psi_{1}(a+1)-\psi_{1}(a-m)\right)\]
\[\sum_{i=1}^{m}\frac{\psi_{0}(a+b+i)}{i} = \sum_{i=1}^{m}\frac{\psi_{0}(b+i)}{i}-\sum_{i=1}^{a}\frac{\psi_{0 }(b+i+m)}{b+i-1}+\frac{1}{2}\Big{(}\psi_{1}(b)+(\psi_{0}(a+b)-\psi_{0}(b))\] (B.9) \[\times(\psi_{0}(a+b)+\psi_{0}(b)+2(\psi_{0}(m+1)-\psi_{0}(1)))- \psi_{1}(a+b)\Big{)}\]
\[\sum_{i=1}^{m}\frac{(n-i)!}{(m-i)!} = \frac{n!}{(m-1)!(n-m+1)}\] (B.10)
\[\sum_{i=1}^{m}\frac{(n-i)!}{(m-i)!i^{2}}=\frac{n!}{m!}\bigg{(}\sum_{i=1}^{m}\frac{ \psi_{0}(i+n-m)}{i}+\frac{1}{2}(\psi_{1}(n-m+1)-\psi_{1}(n+1)-\psi_{0}^{2}(n+1)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\quad+ \psi_{0}^{2}(n-m+1))+\psi_{0}(n-m)(\psi_{0}(n+1)-\psi_{0}(n-m+1)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\quad\quad\quad-\psi_{0}(m+ 1)+\psi_{0}(1))\bigg{)}\] (B.12)
\[\sum_{i=1}^{m}\frac{(n-i)!}{(m+a-i)!}=\frac{1}{n-m-a+1}\left(\frac{n!}{(a+m-1)! }-\frac{(n-m)!}{(a-1)!}\right)\] (B.13)
\[\sum_{i=1}^{m}\frac{(n-i)!}{(m+a-i)!}\psi_{0}(m+a-i+1)\] \[=\frac{1}{1-a-m+n}\left(\frac{n!}{(a+m-1)!}\left(\psi_{0}(a+m)- \frac{1}{1-a-m+n}\right)-\frac{(n-m)!}{(a-1)!}\right.\] \[\quad\left.\times\left(\psi_{0}(a)-\frac{1}{1-a-m+n}\right)\right)\] (B.14)
## Appendix C Coefficients of results in section 3
In this appendix, we list the coefficients in the results (80), (117), (159), and (162).
### Coefficients in (80)
\[a_{0} =\frac{2(a+m)(a+b+m)}{a+b+2m}\] (C.1) \[a_{1} =\frac{2m(b+m)}{a+b+2m}\] (C.2) \[a_{2} =\frac{2\left(a^{2}+b(a+2m)+2am+2m^{2}\right)}{a+b+2m}\] (C.3) \[a_{3} =\ -\frac{2}{(b+m)(a+b+2m)^{2}}\Big{(}b^{2}\left(a^{2}+8am+a+10m^{2} \right)+2a^{2}m(m+2)+a^{3}+b^{4}\] \[\quad+b^{3}(2a+5m)+b\left(a^{2}(3m+2)+6am(2m+1)+2m^{2}(5m+1) \right)+6am^{2}(m+1)\] \[\quad+2m^{3}(2m+1)\Big{)}\] (C.4) \[a_{4} =\frac{2}{b(a+b+2m)^{2}}\Big{(}b^{2}\left(a^{2}+a(5m+2)+m(5m+3) \right)+b(m+2)\left(a^{2}+3am+2m^{2}\right)\] \[\quad+b^{3}(2a+4m+1)+(a+m)^{2}(a+2m)+b^{4}\Big{)}\] (C.5)
\[a_{5} = \frac{2(b+m)\left(a^{2}+b(2a+3m)+3am+b^{2}+m(2m-1)\right)}{(a+b+2m)^{2}}\] (C.6) \[a_{6} = -\frac{2b(a+b+2m+1)}{a+b+2m}\] (C.7) \[a_{7} = -\frac{2m(a+m)\left(a^{2}+2ab+4am+a+b^{2}+4bm+b+4m^{2}+2m+1\right) }{(a+b+2m)^{3}}\] (C.8)
### Coefficients in (117)
\[b_{0} = \frac{2(b+m)}{(a+b+2m-1)_{3}}\left(a^{2}(3b+4m)+a^{3}+a\left(3b^{2 }+9bm+6m^{2}-1\right)+5b^{2}m+b^{3}\right.\] (C.9) \[\left.+7bm^{2}-b+3m^{3}-m\right)\] \[b_{1} = \frac{2m(a+m)\left(a^{2}+a(b+3m)+2bm+3m^{2}-1\right)}{(a+b+2m-1) _{3}}\] (C.10) \[b_{2} = 2(a+m)\] (C.11) \[b_{3} = -\frac{2\left(a(b+2m)+b^{2}+2bm+2m^{2}\right)}{a+b+2m}\] (C.12) \[b_{4} = \frac{m-m^{2}}{2(a+b+2m-1)}+\frac{m^{2}+m}{2(a+b+2m+1)}-\frac{2b }{a+m}-\frac{2m}{a+b+2m}-2a-2m\] (C.13) \[b_{5} = \frac{m(a+m)(b+m)(a+b+m)}{(a+b+2m-1)_{3}}\] (C.14) \[b_{6} = \frac{1}{8a}\left(\frac{-2a^{2}b^{2}+a^{4}+4a^{2}+b^{4}+4b^{2}}{ a+b+2m}-\frac{1}{2}(a-b-1)(a-b+1)(a+b-1)(a+b+1)\right.\] (C.15) \[\left.\times\left(\frac{1}{a+b+2m+1}+\frac{1}{a+b+2m-1}\right)+1 6ab+8am+7a+11b+6m\right)\] \[b_{7} = \frac{b}{a+b+2m}+\frac{(a-b)(a+b)}{2(a+b+2m)^{2}}+\frac{1}{4}(5a+ b-1)+m+\frac{1}{8}((a-b-1)(a-b+1)\] (C.16) \[\left.\times\left(a+b+1\right)\right)\left(\frac{1}{a+b+2m+1}- \frac{1}{a+b+2m-1}\right)\] (C.17) \[b_{8} = -\frac{a\left(a^{2}+3a(b+2m+1)+2b^{2}+b(6m+3)+6m^{2}+6m+2\right)}{ (a+b+2m)(a+b+2m+1)}\] (C.18) \[b_{9} = -\frac{m}{(a+b+2m)^{2}}-\frac{m}{2(a+b+2m+1)}-\frac{m}{2}\] (C.19)
### Coefficients in (159)
\[c_{0} = -\frac{1}{2}(2a+2m-1)\] (C.20) \[c_{1} = \frac{1}{2}(2a-1)\] (C.21) \[c_{2} = \frac{1}{4}(4m+1)\] (C.22) \[c_{3} = -2a-4m+1\] (C.23)
\[c_{4} = \frac{1}{4}(4a+4m-1)\] (C.23) \[c_{5} = -2a\] (C.24) \[c_{6} = 1-2m\] (C.25) \[c_{7} = \frac{1}{4}(4a-1)\] (C.26) \[c_{8} = -2(a+2m)\] (C.27) \[c_{9} = -\frac{-12a^{3}-6a^{2}+4a+1}{4a^{3}+6a^{2}+2a}\] (C.28) \[c_{10} = \frac{a^{2}(4m-1)+4a^{3}+a-1}{2(a-1)a}\] (C.29) \[c_{11} = \frac{a^{4}(12-8m)+a^{3}(3-8m)+a^{2}(2m-13)-12a^{5}+2am+1}{2\left( 4a^{5}-5a^{3}+a\right)}\] (C.30) \[c_{12} = \frac{a(8(a+1)m+2a+3)+2m}{a(a+1)(2a+1)}\] (C.31) \[c_{13} = \frac{1}{2}(4m-1)\] (C.32) \[c_{14} = \frac{-4m-3}{4(a-1)}-2a-\frac{3}{4(a+1)}-\frac{1}{2a-1}-\frac{1}{ 2a+1}-\frac{1}{a}+\frac{1}{2}(4m-3)\] (C.33) \[c_{15} = -\frac{4a^{2}+1}{4a^{2}-1}\] (C.34) \[c_{16} = -\frac{a^{2}(4m+1)+4a^{3}+a(4m-1)-1}{2a\left(a^{2}-1\right)}\] (C.35) \[c_{17} = \frac{m}{1-a}\] (C.36) \[c_{18} = \frac{8m+3}{4(a-1)}+\frac{1}{2a}-\frac{3}{4(a+1)}+2\] (C.37)
### Coefficients in (162)
\[d_{0} = -\frac{1}{2}(2a+2m-1)\] (C.38) \[d_{1} = a-\frac{1}{2}\] (C.39) \[d_{2} = \frac{4a^{2}-1}{16(2a+4m-1)}-\frac{a}{8}+\frac{3m}{4}+\frac{3}{16}\] (C.40) \[d_{3} = -(2a+4m-1)\] (C.41) \[d_{4} = \frac{(2m-1)(2a+2m-1)}{4a+8m-2}\] (C.42) \[d_{5} = \frac{1}{4}(4a+4m-1)\] (C.43) \[d_{6} = \frac{(2a+2m-1)(4a+6m-1)}{2(2a+4m-1)}\] (C.44)
\[d_{7} = \frac{1-4a^{2}}{8(2a+4m-1)}+\frac{a}{4}+\frac{1}{8}(5-12m)\] (C.45) \[d_{8} = -2(a+2m)\] (C.46) \[d_{9} = -\frac{-12a^{3}-6a^{2}+4a+1}{4a^{3}+6a^{2}+2a}\] (C.47) \[d_{10} = \frac{4a^{2}\left(4m^{2}-2m+1\right)+a^{3}(20m-3)+6a^{4}-4a(m-1)^ {2}-4m+1}{2(a-1)a(2a+4m-1)}\] (C.48) \[d_{11} = \frac{88a^{5}-76a^{4}-10a^{3}+83a^{2}-3a-4}{16(1-a)a(a+1)(2a-1)(2 a+1)}-\frac{(2a-1)(2a+1)}{16(a-1)(2a+4m-1)}-\frac{3m}{4(a-1)}\] (C.49) \[d_{12} = -\frac{a(4m+2)+12m^{2}-1}{4(a-1)(2a+4m-1)}\] (C.50) \[d_{13} = \frac{8a^{3}+4a^{2}-2a-1}{8a(a+1)(2a+4m-1)}+\frac{3(4m-1)}{8(a+1) }+\frac{3(4m+1)}{8a}+\frac{4}{2a+1}-\frac{1}{2}\] (C.51) \[d_{14} = \frac{1-4a^{2}}{-16a-32m+8}-\frac{a}{4}+\frac{1}{8}(12m-3)\] (C.52) \[d_{15} = \frac{8a^{3}-12a^{2}-2a+3}{16(a-1)(2a+4m-1)}-\frac{36a^{3}+24a^{2 }-5a-2}{4a(a+1)(2a-1)(2a+1)}-\frac{12m+13}{16(a-1)}-\frac{7a}{4}\] (C.53) \[+\frac{1}{4}(6m-4)\] \[d_{16} = \frac{1+4a^{2}}{1-4a^{2}}\] (C.54) \[d_{17} = \frac{4a^{2}-1}{8(a-1)(2a+4m-1)}+\frac{12m+7}{8(a-1)}+\frac{1}{4a }-\frac{3}{4(a+1)}+\frac{7}{4}.\] (C.55)
|
2308.08039
|
Pore-resolved investigation of turbulent open channel flow over a
randomly packed permeable sediment bed
|
Pore-resolved direct numerical simulations (DNS) are performed to investigate
the interactions between streamflow turbulence and groundwater flow through a
randomly packed porous sediment bed for three permeability Reynolds numbers,
$Re_K$, of 2.56, 5.17, and 8.94, representative of natural stream or river
systems. Time-space averaging is used to quantify the Reynolds stress,
form-induced stress, mean flow and shear penetration depths, and mixing length
at the sediment-water interface (SWI). The mean flow and shear penetration
depths increase with $Re_K$ and are found to be nonlinear functions of
non-dimensional permeability. The peaks and significant values of the Reynolds
stresses, form-induced stresses, and pressure variations are shown to occur in
the top layer of the bed, which is also confirmed by conducting simulations of
just the top layer as roughness elements over an impermeable wall. The
probability distribution functions (PDFs) of normalized local bed stress are
found to collapse for all Reynolds numbers and their root mean-squared
fluctuations are assumed to follow logarithmic correlations. The fluctuations
in local bed stress and resultant drag and lift forces on sediment grains are
mainly a result of the top layer, their PDFs are symmetric with heavy tails,
and can be well represented by a non-Gaussian model fit. The bed stress
statistics and the pressure data at the SWI can potentially be used in
providing better boundary conditions in modeling of incipient motion and
reach-scale transport in the hyporheic zone.
|
Shashank K. Karra, Sourabh V. Apte, Xiaoliang He, Timothy Scheibe
|
2023-08-15T20:59:21Z
|
http://arxiv.org/abs/2308.08039v1
|
[
###### Abstract
Pore-resolved direct numerical simulations (DNS) are performed to investigate the interactions between streamflow turbulence and groundwater flow through a randomly packed porous sediment bed for three permeability Reynolds numbers, \(Re_{K}\), of 2.56, 5.17, and 8.94, representative of natural stream or river systems. Time-space averaging is used to quantify the Reynolds stress, form-induced stress, mean flow and shear penetration depths, and mixing length at the sediment-water interface (SWI). The mean flow and shear penetration depths increase with \(Re_{K}\) and are found to be nonlinear functions of non-dimensional permeability. The peaks and significant values of the Reynolds stresses, form-induced stresses, and pressure variations are shown to occur in the top layer of the bed, which is also confirmed by conducting simulations of just the top layer as roughness elements over an impermeable wall. The probability distribution functions (PDFs) of normalized local bed stress are found to collapse for all Reynolds numbers and their root mean-squared fluctuations are assumed to follow logarithmic correlations. The fluctuations in local bed stress and resultant drag and lift forces on sediment grains are mainly a result of the top layer; their PDFs are symmetric with heavy tails and can be well represented by a non-Gaussian model fit. The bed stress statistics and the pressure data at the SWI can potentially be used in providing better boundary conditions in modeling of incipient motion and reach-scale transport in the hyporheic zone.
Pore-resolved investigation of turbulent open channel flow over a randomly packed permeable sediment bed
Shashank K. Karra\({}^{1}\), Sourabh V. Apte\({}^{1}\)†, Xiaoliang He\({}^{2}\), and Timothy D. Scheibe\({}^{2}\)
Footnote †: Email address for correspondence: [email protected]
## 1 Introduction
The interchange of mass and momentum between streamflow and groundwater occurs across the sediment-water interface (SWI) and into the porous bed underneath, termed the hyporheic zone. Hyporheic transient storage or retention and transport of solutes such as chemicals and pollutants, dissolved oxygen, nutrients, and heat across the SWI is one of the most important concepts for stream ecology, and has enormous societal value in predicting sources of fresh drinking water, transport, biogeochemical processing of nutrients, and sustaining diverse aquatic ecosystems (Bencala _et al._, 1983; D'angelo _et al._, 1993; Valett _et al._, 1996; Harvey _et al._, 1996; Anderson _et al._, 2008; Briggs _et al._, 2009; Grant _et al._, 2018).
A broad range of spatio-temporal scales corresponding to disparate physical and chemical processes contribute to the mixing within the hyperheic zone (Hester _et al._, 2017). Turbulent transport across the SWI, coherent flow structures, and non-Darcy
flow within the sediment bed have been hypothesized as critical mechanisms impacting transient storage. Penetration of turbulence within the bed and near-bed pressure fluctuations are crucial, yet their impact on hyporheic transient storage remains poorly understood (Hester _et al._, 2017). Moreover, turbulence characteristics over a permeable bed differ from those over a rough, impermeable wall (Jimenez, 2004), impacting the long time-scales of retention within the bed.
Mass and momentum transport in turbulent flow over a naturally occurring permeable sediment bed are characterized by bed permeability, \(K\) (which depends upon its porosity and average grain size according to the Carman-Kozeny relation), sediment bed arrangement (flat versus complex bedforms), and friction or shear velocity, \(u_{\tau}\) (based on equation 3). Permeability Reynolds number, \(Re_{K}=u_{\tau}\sqrt{K}/\nu\), (where \(\nu\) is the kinematic viscosity) representing the ratio between the permeability scale (\(\sqrt{K}\)) to the viscous scale (\(\nu/u_{\tau}\)), is typically used for granular beds to identify different flow regimes based on the dominant transport mechanisms across the SWI. For flat beds, three flow regimes (see figure 1) characterized by Voermans _et al._ (2017, 2018); Grant _et al._ (2018) have been identified: (i) the molecular regime, \(Re_{K}<0.01\), where the bed is nearly impermeable and the transport is governed by molecular diffusion; (ii) the dispersive regime, \(0.01<Re_{K}<1\), where dispersive transport associated with laminarization of stream turbulence is important; and (iii) the turbulent regime, \(Re_{K}>1\), where turbulence is dominant near the highly permeable interface. Based on the data collected for flat beds from several local streams and rivers near Oregon State University by coworkers (Jackson _et al._, 2013, 2015), it is found that the gravel grain sizes varied over the range 5-70mm, with friction velocity, \(u_{\tau}=0.004\)-0.088 m/s, and \(Re_{K}=2\)-70 (figure 1). Free surface flow and waves typically do not affect the hyporheic exchange in natural stream and rivers under subcritical conditions with small Froude numbers.
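As a simple numerical illustration of this classification (a sketch, not taken from the paper's data: it assumes the Carman-Kozeny form \(K=\phi^{3}D_{p}^{2}/[180(1-\phi)^{2}]\) with a water-like viscosity, and the grain sizes and friction velocities below are hypothetical):

```python
import numpy as np

def carman_kozeny_permeability(d_p, phi):
    """Bed permeability K [m^2] from grain diameter and porosity
    (Carman-Kozeny form with the conventional constant 180 -- an assumption here)."""
    return phi**3 * d_p**2 / (180.0 * (1.0 - phi)**2)

def permeability_reynolds(u_tau, d_p, phi, nu=1.0e-6):
    """Re_K = u_tau * sqrt(K) / nu, with nu defaulting to water (~1e-6 m^2/s)."""
    return u_tau * np.sqrt(carman_kozeny_permeability(d_p, phi)) / nu

def regime(re_k):
    """Flat-bed regime classification used in the text."""
    if re_k < 0.01:
        return "molecular"
    if re_k < 1.0:
        return "dispersive"
    return "turbulent"

# Hypothetical beds spanning the three regimes (fine sand, coarse sand, gravel).
for d_p, u_tau in [(0.0002, 0.0005), (0.002, 0.005), (0.03, 0.03)]:
    re_k = permeability_reynolds(u_tau, d_p, phi=0.41)
    print(f"D_p = {d_p*1e3:5.1f} mm, u_tau = {u_tau:6.4f} m/s -> Re_K = {re_k:7.3f} ({regime(re_k)})")
```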
Few experimental studies have evaluated turbulence characteristics over flat permeable beds (Zagni & Smith, 1976; Zippe & Graf, 1983; Manes _et al._, 2009, 2011; Suga _et al._, 2010; Voermans _et al._, 2017; Kim _et al._, 2020; Rousseau & Ancey, 2022). Bed permeability was found to increase friction coefficient, reduce the wall-blocking effect due to impermeable rough walls, and reduce near-bed anisotropy in turbulence intensities. Manes _et al._ (2009) studied turbulent flow over uniform cubic pattern of single and multiple layers of spheres at \(Re_{K}\) of 31.2 and 44.6. Permeability was shown to influence flow resistance dramatically,
Figure 1: Effective dispersion coefficient versus \(Re_{K}\) (modified based on Voermans _et al._ (2017); Grant _et al._ (2018))
and the conventional assumption of hydraulically-rough regime, wherein the friction factor is dependent upon the relative submergence or the ratio of the roughness size to flow thickness, does not apply to permeable beds. Friction factor was shown to progressively increase with increasing Reynolds number.
Voermans _et al._ (2017) studied the influence of different \(Re_{K}\) on the interaction between surface and subsurface flows at the SWI of a synthetic sediment bed composed of randomly-arranged monodispersed spheres using refractive-index matched particle tracking velocimetry. Their experiments covered a wide range of \(Re_{K}=0.36\)-\(6.3\) and varied the permeability of the beds by investigating three different sphere sizes. The results demonstrated a strong relationship between the structure of the mean and turbulent flow at the SWI and \(Re_{K}\). Their data shows that for \(Re_{K}=\mathcal{O}(1-10)\), the turbulence shear penetration depth, a measure of true roughness felt by the flow, normalized by the permeability scale (\(\sqrt{K}\)) is a non-linear function of \(Re_{K}\), as opposed to a commonly assumed linear relationship for \(Re_{K}<\mathcal{O}(100)\) (Ghisalberti, 2009; Manes _et al._, 2012). Kim _et al._ (2020) also investigated, through experimental observations at \(Re_{K}=50\), the dynamic interplay between surface and subsurface flow in the presence of smooth and rough permeable walls, composed of a uniform cubic arrangement of packed spheres. They confirmed the existence of amplitude modulation, a phenomenon typically identified in impermeable boundaries, whereby the outer large scales modulate the intensity of the near-wall small scale turbulence. They postulated that amplitude modulation of subsurface flow is driven by large-scale pressure fluctuations at the SWI, which are generated by the passage of large-scale motions in the log-law region of the surface flow. However, detailed data on the pressure field at the SWI is needed to confirm these findings, a task that is difficult for experimental measurements and thus requires pore-resolved direct numerical simulations.
There have been a few pore-resolved DNS or large-eddy simulations (LES) of turbulent flow over permeable flat beds (Breugem & Boersma, 2005; Breugem _et al._, 2006; Kuwata & Suga, 2016; Leonardi _et al._, 2018; Fang _et al._, 2018). However, all of these studies used structured, arranged packings of either non-touching particles or compactly packed particles. Although these numerical studies provide considerable insights into the fundamental aspects of turbulent flow over permeable beds, natural systems involve randomly packed beds with varying particle shapes and sizes. Furthermore, with structured packings such as simple cubic or body-centered cubic, there are open flow pathways that can lead to significant flow penetration and transport, which are not generally present in randomly packed natural systems (Finn, 2013; Finn & Apte, 2013; Fang _et al._, 2018). At the time of writing, there has only been one pore-resolved DNS study (Shen _et al._, 2020) involving randomly packed arrangement of monodispersed spheres with multiple layers at \(Re_{K}=2.62\) and bed porosity of \(0.41\). They provided significant insights into the flow physics of turbulence over randomly packed beds. To investigate the effect of bed-roughness in regular versus randomly packed spheres, they changed only the top layer of the bed to regularly arranged spheres. More intense mixing was observed near the random interface due to increased Reynolds and form-induced stresses, which resulted in a deeper penetration of turbulence (\(44\%\) higher) than the uniform, regularly arranged interface. Although this is one of the first studies of flat beds closely resembling natural systems, the investigation was only carried out at low \(Re_{K}\) that falls in the marginally turbulent regime. In addition, since only the top layer of the sediment bed was changed from random to arranged, the open flow pathways that are characteristics of arranged packing were potentially absent in their study.
To the authors' best knowledge, pore-resolved DNS of randomly packed flat beds over \(Re_{K}\sim\mathcal{O}(10)\) have not been conducted. This range is of critical importance to stream
flows as measurements in local creeks show \(Re_{K}\) values on this order (Jackson _et al._, 2013\(b\), 2015) (see figure 1). The sweep-ejection events over permeable beds can cause significant spatio-temporal variability in the bed shear stresses and pressure forces on the sediment grains as well as at the sediment-water interface. Direct measurements of these quantities in experiments pose significant challenges and hence have not been conducted. The present pore-resolved DNS studies aim to provide detailed data on local distribution of the bed stresses as well as drag and lift forces which can be of importance for incipient motion models. Similarly, models for spatio-temporal pressure variation on the sediment bed can be used as boundary conditions in reach-scale modeling of hyporheic exchange (Chen _et al._, 2018). Thus, conducting a detailed analysis of the data for a range of \(Re_{K}\) representative of stream and creek flows is of direct relevance in modeling transport across the sediment-water interface.
In the present study, pore-resolved DNS of flow over a bed of randomly packed, monodispersed, spherical particles for \(Re_{K}=2.56\), \(5.17\), and \(8.94\), representative of transitional to fully turbulent flows are performed. The main goals of this study are to (i) first characterize the nature of the turbulent flow, Reynolds and form-induced stresses, turbulence penetration depths, and sweep-ejection events as a function of \(Re_{K}\), (ii) quantify the spatio-temporal variability of the bed stress and resultant forces on the sediment grains and pressure fluctuations at the SWI using higher-order statistics and propose a model fit for the probability distribution functions that can be used in reduced-order models, and (iii) quantify the contribution of the top sediment layer on the turbulent flow characteristics over permeable beds.
The rest of the paper is arranged as follows. The methodology, flow domain, and simulation parameters are described in section 2. To focus on main results and insights from this work, details on grid refinement and validation studies are presented in the Appendix. Details of turbulence structure, mean, turbulent, and dispersive stresses, turbulence penetration depths, quadrant analysis, and the role of the top sediment layer followed by detailed statistics of the bed stress, drag and lift forces on the sediment grains, and pressure variations at the SWI are presented in section 3. Importance of the results to hyporheic exchange and transport across the SWI is summarized in section 4.
## 2 Simulation Setup and Mathematical Formulation
In this section, the flow domain and parameters for cases studied, numerical approach, grid resolution, and averaging procedure for analysis are described.
### Simulation domain and parameters
Various non-dimensional parameters relevant to the turbulent flow over a permeable bed, made of monodispersed spherical particles, are the permeability Reynolds number (\(Re_{K}=u_{\tau}\sqrt{K}/\nu\)), the friction or turbulent Reynolds number (\(Re_{\tau}=u_{\tau}\delta/\nu\)), the roughness Reynolds number \(D^{+}=D_{p}u_{\tau}/\nu\), the bulk Reynolds number (\(Re_{b}=\delta U_{b}/\nu\)), the ratio of sediment depth to the free-stream height (\(H_{s}/\delta\)), the ratio of the sediment grain diameter to the free-stream height (\(D_{p}/\delta\)), bed porosity (\(\phi\)), type of particle packing (random versus arranged), and the domain lengths in the streamwise and spanwise directions normalized by the free-stream height (\(L_{x}/\delta\), \(L_{z}/\delta\)). Here, \(u_{\tau}\) is the friction velocity, \(U_{b}\) is the bulk velocity, \(K\) is the bed permeability and \(\nu\) is the kinematic viscosity. For monodispersed, spherical particles, the size of the roughness element, \(k_{s}\), scales with the permeability (\(k_{s}/\sqrt{K}\approx 9\)) (Wilson _et al._, 2008; Voermans _et al._, 2017). It should be noted that for a given mono-dispersed, randomly packed bed of certain porosity and flow rate, only one of the non-dimensional Reynolds numbers is an independent parameter.
Figure 2a shows the schematic of the sediment bed and the computational domain used in the present study. A doubly periodic domain in streamwise (\(x-\)) and spanwise (\(z-\)) directions with four layers of randomly packed, monodispersed sediment grains of porosity \(\phi=0.41\) is shown in figure 2c. The random packing of monodispersed, spherical particles is generated using the code developed by Dye _et al._ (2013). The bed porosity profile for low, medium and high permeability Reynolds number cases is shown in figure 2b. In order to quantify the role and influence of the roughness due to only the top layer of the sediment bed on turbulence structure, and bed pressure and shear stresses, a rough impermeable wall configuration is generated as shown in figure 2d. The roughness elements match exactly with the top layer of the porous sediment bed and are placed on top of a no-slip solid wall.
Table 1 shows detailed simulation parameters for the cases used to investigate the structure and dynamics of turbulence over a porous sediment bed. Four permeable bed cases (VV, PBL, PBM, and PBH) are studied by varying the permeability Reynolds number. Case VV is used to verify and validate the DNS simulations of turbulent flow over a sediment bed with experimental data from Voermans _et al._ (2017) as well as simulation data of Shen _et al._ (2020) and its details are given in Appendix C. The free-stream height for the VV case is based on the DNS study of Shen _et al._ (2020).
Cases PBL, PBM, and PBH correspond to low (2.56), medium (5.17), and high (8.94) permeability Reynolds numbers, and are used to investigate the influence of \(Re_{K}\) over
Figure 2: (a) Schematic of the permeable bed, (b) porosity profile, (c) permeable bed with randomly packed sediment particles (inset shows close-up view in xy-plane), and (d) impermeable-wall medium Reynolds number case with full layer of particles. Periodicity is used in the streamwise (\(x\)) and spanwise (\(z\)) directions. Top and bottom surfaces are defined as slip boundaries for the permeable bed cases, whereas, for the impermeable-wall full layer (IWM-F) case, the bottom surface is a no-slip wall.
the turbulent flow regime shown in figure 1. Finally, case IWM-F is an impermeable-wall case with only the top layer of the PBM sediment bed used as roughness elements over a no-slip surface. The free-stream height, \(\delta\), for the PBL, PBM, PBH and IWM-F cases is set to be \(3.5D_{p}\) and is similar to the experimental domains of Voermans _et al._ (2017); Manes _et al._ (2009). The length of the streamwise and spanwise domains is based on the DNS of smooth channel turbulent flow (Moser _et al._, 1999). Moreover, since roughness is expected to break the long, elongated streamwise flow structures commonly observed in smooth channel flow, the domain size used is sufficient to impose the periodicity condition, which was also confirmed by evaluating integral length scales (see Appendix A).
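As a quick consistency check on the non-dimensional groups in table 1 (a sketch; the ratio \(\sqrt{K}/D_{p}\) is inferred here from the tabulated \(Re_{K}\) and \(D^{+}\), not quoted from the text, and \(\delta=3.5D_{p}\) is used as stated above):

```python
# Relations between the groups of table 1 for the permeable-bed cases:
# Re_K/D+ = sqrt(K)/D_p, and Re_tau = (delta/D_p)*D+ with delta = 3.5*D_p.
cases = {        # Re_K, Re_tau, D+  (values from table 1)
    "PBL": (2.56, 270, 77),
    "PBM": (5.17, 545, 156),
    "PBH": (8.94, 943, 270),
}
for name, (re_k, re_tau, d_plus) in cases.items():
    print(f"{name}: sqrt(K)/D_p = {re_k / d_plus:.4f}, "
          f"3.5*D+ = {3.5 * d_plus:.0f} (tabulated Re_tau = {re_tau})")
```

The ratio \(\sqrt{K}/D_{p}\) comes out nearly identical for the three cases, consistent with the same bed geometry being used and only the flow rate being varied.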
### Numerical method
The numerical approach is based on a fictitious domain method to handle arbitrary shaped immersed objects without requiring the need for body-fitted grids (Apte _et al._, 2009). Cartesian grids are used in the entire simulation domain, including both fluid and solid phases. An additional body force is imposed on the solid part to enforce the rigidity constraint and satisfy the no-slip boundary condition. The absence of highly skewed unstructured mesh at the bead surface has been shown to accelerate the convergence and lower the uncertainty (Finn & Apte, 2013). The following governing equations are solved over the entire domain, including the region within the solid bed, and a rigidity constraint force, \(\mathbf{f}\), that is non-zero only in the solid region is applied to enforce the no-slip condition on the immersed object boundaries. The governing equations are
\[\nabla\cdot\mathbf{u} =0, \tag{1}\] \[\rho\biggl{[}\frac{\partial\mathbf{u}}{\partial t}+\left(\mathbf{ u}\cdot\nabla\right)\mathbf{u}\biggr{]} =-\nabla p+\mu\nabla^{2}\mathbf{u}+\mathbf{f}\;, \tag{2}\]
where \(\mathbf{u}\) is the velocity vector (with components given by \(\mathbf{u}=(u,v,w)\)), \(\rho\) the fluid density, \(\mu\) the fluid dynamic viscosity, and \(p\) the pressure. A fully parallel, structured, collocated grid, finite volume solver has been developed and thoroughly verified and validated for a range of test cases including flow over a cylinder and sphere for different Reynolds numbers, flow over touching spheres at different orientations, flow developed by an oscillating cylinder, among others. The details of the algorithm as well as very detailed verification and validation studies have been published elsewhere (Apte _et al._, 2009; Finn, 2013; Finn & Apte, 2013). The solver was used to perform direct one-to-one comparison with a body-fitted solver with known second-order accuracy for steady inertial, unsteady inertial, and turbulent flow through porous media (Finn & Apte, 2013) to show very good predictive capability. It has also been recently used for direct numerical simulations of
\begin{table}
\begin{tabular}{l c c c c c c c c c} Case & Domain & \(Re_{K}\) & \(Re_{\tau}\) & \(Re_{b}\) & \(D^{+}\) & \(\phi\) & \(H_{s}/\delta\) & \(D_{p}/\delta\) & \((L_{x},L_{z})/\delta\) \\ VV & permeable & 2.56 & 180 & 1,886 & 77 & 0.41 & 1.71 & 0.43 & (4\(\pi\),2\(\pi\)) \\ PBL & permeable & 2.56 & 270 & 2,823 & 77 & 0.41 & 1.14 & 0.29 & (4\(\pi\),2\(\pi\)) \\ PBM & permeable & 5.17 & 545 & 5,681 & 156 & 0.41 & 1.14 & 0.29 & (2\(\pi\),\(\pi\)) \\ PBH & permeable & 8.94 & 943 & 9,965 & 270 & 0.41 & 1.14 & 0.29 & (2\(\pi\),\(\pi\)) \\ IWM-F & impermeable & - & 545 & 5,683 & 156 & - & 0.29 & 0.29 & (2\(\pi\),\(\pi\)) \\ \end{tabular}
\end{table}
Table 1: Parameters used in the present pore-resolved DNS: \(D_{p}\) is the sphere diameter, \(\delta\) is the free-stream height, \(H_{s}\) is the sediment depth, \(\phi\) is the porosity, \(L_{x}\) and \(L_{z}\) are the streamwise and spanwise domain lengths, \(Re_{K}\), \(Re_{\tau}\), \(Re_{b}\) and \(D^{+}\) are the permeability, friction, bulk and roughness Reynolds numbers, respectively. ( )\({}^{+}\) denotes wall units.
oscillatory, turbulent flow over a sediment layer (Ghodke & Apte, 2016, 2018_a_), and pore-resolved simulations of turbulent flow within a porous unit cell with face-centered cubic packing (He _et al._, 2018, 2019).
### Averaging
Since the flow properties are highly spatially heterogeneous near rough sediment bed boundary, time averaging followed by spatial averaging is applied. Flow statistics such as Reynolds stresses, form-induced disturbances, shear stress and pressure fluctuations, among others are computed using the time-space averaging. This consecutive time-space averaging involves Reynolds decomposition (\(\psi=\overline{\psi}+\psi^{\prime}\)) accompanied by spatial decomposition of the time averaged variable \(\overline{\psi}=\langle\overline{\psi}\rangle+\widetilde{\overline{\psi}}\)(Nikora _et al._, 2007, 2013). Here \(\psi\) represents an instantaneous flow variable, \(\overline{\psi}\) is its temporal average, \(\psi^{\prime}=\psi-\overline{\psi}\) is the instantaneous turbulent fluctuation. The angular brackets, \(\langle\rangle\), denote spatial averaging operator. The quantity \(\widetilde{\overline{\psi}}\) known as the form-induced or dispersive disturbance in space is defined as \(\widetilde{\overline{\psi}}=\overline{\psi}-\langle\overline{\psi}\rangle\). This represents the deviation of the time-averaged variable, \(\overline{\psi}\), from its spatially averaged value, \(\langle\overline{\psi}\rangle\). Nikora _et al._ (2013) proposed to denote the form-induced disturbance quantity as \(\widetilde{\overline{\psi}}\), whereby a horizontal overbar is added, to emphasize that time-averaging has been done prior to spatial-averaging. This modified notation is adopted in this work. The original notation of this quantity proposed in Nikora _et al._ (2007) was without the overbar, \(\widetilde{\psi}\), and has been used in the literature published in this field (Voermans _et al._, 2017; Fang _et al._, 2018; Shen _et al._, 2020).
Quantities are averaged over the fluid domain, giving the intrinsic spatial average, of the time averaged variable, \(\langle\overline{\psi}\rangle=1/V_{f}\int_{V_{f}}\overline{\psi}dV\), where \(V_{f}\) is the volume occupied by the fluid. In other words, while calculating the volume average of variables in the particle bed regions, only the portion of the volume occupied by the fluid is taken into account. The representative averaging-volume lengths used for spatial averaging in the streamwise and spanwise directions are the same as the grid resolution in those directions. Since the grids are uniform in \(x\) and \(z\), this implies that the averaging volumes have the same lengths in the streamwise and spanwise direction. However, in the bed-normal direction, a variable volume averaging is used. In the boundary-layer region, especially near the crest of the bed where steep gradients in flow quantities are present, thin-volumes are used for averaging, whereas, deeper inside the bed thicker averaging volumes are used as described in detail in Appendix B.
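The double-averaging operations described above can be sketched as follows; the array layout and the synthetic data are illustrative assumptions, not the layout used by the actual solver.

```python
import numpy as np

def double_average(u, fluid_mask):
    """Time-space (double) averaging of a field u(t, x, y, z).

    fluid_mask(x, y, z) is 1 in the fluid and 0 inside sediment grains.
    Returns the time average, the turbulent fluctuation u' = u - time average,
    the intrinsic (fluid-only) spatial average per bed-normal plane, and the
    form-induced disturbance (time average minus its intrinsic spatial average).
    """
    u_bar = u.mean(axis=0)                                  # time average, shape (nx, ny, nz)
    u_prime = u - u_bar                                     # turbulent fluctuation u'(t, x, y, z)
    mask = fluid_mask.astype(float)
    fluid_cells = mask.sum(axis=(0, 2))                     # fluid cells per y-plane
    u_intr = (u_bar * mask).sum(axis=(0, 2)) / fluid_cells  # <u_bar>(y), intrinsic average
    u_wave = (u_bar - u_intr[None, :, None]) * mask         # form-induced disturbance (fluid cells only)
    return u_bar, u_prime, u_intr, u_wave

# Tiny synthetic example: 8 snapshots on a 16 x 12 x 16 grid with a partially
# solid lower region standing in for the sediment bed.
rng = np.random.default_rng(0)
u = rng.normal(size=(8, 16, 12, 16))
mask = np.ones((16, 12, 16))
mask[:, :4, ::2] = 0                                        # alternate solid cells below y-index 4
u_bar, u_prime, u_intr, u_wave = double_average(u, mask)
print(u_intr.shape)                                         # (12,) bed-normal profile
```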
### Grid resolution and flow setup
Table 2 gives the details of grid resolution used for all cases. The grid resolutions required for these configurations are based on two main considerations: (i) minimum bed-normal grid resolution near the bed to capture the bed shear stress, and (ii) minimum resolution required to capture all details of the flow over spherical particles. Since the intensity of turbulence and mean flow penetration into the bed is expected to reduce further deep into the bed, finer grid resolutions are used in the bed-normal region in the top layer compared to other layers.
For DNS of channel flows, the bed-normal grid resolution in wall units should be \(\Delta y^{+}<1\), in order to accurately capture the bed shear stress in the turbulent flow. The grid resolutions in the streamwise and spanwise directions are typically 3-4 times coarser, following the smooth channel flow simulations by Moser _et al._ (1999). Note that, the roughness features and permeability are known to break the elongated flow structures along the streamwise direction in smooth walls, reducing the anisotropy in the near-bed
region (Ghodke & Apte, 2016). To capture the inertial flow features within the pore and around spherical particles, grid refinement studies were conducted on flow over a single sphere at different Reynolds numbers representative of the cases studied here (details in Appendix A). Accordingly, roughly 90 (PBL), 180 (PBM), and 548 (PBH) grid points are used in the bed-normal direction in a region covering the _top layer_ and extending slightly into the free-stream. In the \(x\) and \(z\) directions, a uniform grid with a minimum of 26 (PBL), 38 (PBM), and 40 (PBH) grid points per-sediment grain are used to resolve the bed geometry. Effect of uniform, but non-cubic grids within the sediment bed was thoroughly evaluated by comparing drag coefficients to those obtained from cubic grids over a single and a layer of particles to show no discernible differences over the range of Reynolds numbers studied here (see appendix A).
Below the top layer, the bed-normal resolution is slowly reduced and nearly uniform grid in all directions is used deep inside the bed, as the frictional Reynolds number in the bottom layers of sediment decreases significantly. From the crest of the top sediment layer, the grid is stretched, coarsening it gradually towards the top of the channel using a standard hyperbolic tangent function (Moser _et al._, 1999). Based on these grid resolutions, the total grid count for the PBL case is \(\sim\) 232 million cells, for the PBM case is \(\sim\) 200 million cells, and for the PBH case is \(\sim\) 428 million cells as given in Table 2.
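A minimal sketch of such a hyperbolic-tangent stretched grid is given below; the stretching factor and point count are illustrative and not the values used in the production grids.

```python
import numpy as np

def tanh_stretched_grid(y_crest, y_top, n, gamma=2.0):
    """Grid points clustered near y_crest and coarsening toward y_top via a
    standard hyperbolic-tangent mapping (gamma controls the clustering)."""
    s = np.linspace(0.0, 1.0, n)                      # uniform computational coordinate
    stretch = 1.0 + np.tanh(gamma * (s - 1.0)) / np.tanh(gamma)
    return y_crest + (y_top - y_crest) * stretch

y = tanh_stretched_grid(y_crest=0.0, y_top=1.0, n=64)
dy = np.diff(y)
print(f"finest dy = {dy[0]:.4f}, coarsest dy = {dy[-1]:.4f}")   # fine near the crest, coarse at the top
```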
The flow in the simulations is driven by a constant mass flow rate. The target mass flow rate is adjusted until the friction velocity, \(u_{\tau}\), that results in the required \(Re_{K}\) is obtained. \(Re_{\tau}\) is then calculated based on the free-stream height \(\delta\). Pokrajac _et al._ (2006) noted that there is a lack of a general definition of \(u_{\tau}\) applicable to the boundary layers with variable shear stress where the roughness height is comparable with the boundary layer thickness. Accordingly, \(u_{\tau}\) can be specified based on (i) bed shear stress, (ii) total fluid shear stress based on the roughness crest, (iii) fluid shear stress extrapolated to the zero-displacement plane, or (iv) shear stress obtained by fitting data to the log-law. Pokrajac _et al._ (2006) proposed to use \(u_{\tau}\) based on the fluid shear stress at the roughness crest, to obtain the least ambiguous definition. Since the present work builds upon the experimental data of Voermans _et al._ (2017), the friction velocity, \(u_{\tau}\), is calculated from the maximum value of the time-space averaged total fluid stress which happens to be very close to the sediment crest. The friction velocity is then based on the sum of the viscous, turbulent, and the form-induced shear stresses (Nikora _et al._, 2004; Voermans _et al._, 2017),
\[\tau(y)=\rho\nu\partial(\phi\overline{u})/\partial y-\rho\phi\overline{\langle u ^{\prime}v^{\prime}\rangle}-\rho\phi\overline{\langle\widetilde{u}\widetilde {v}\rangle}. \tag{3}\]
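A discrete version of this friction-velocity extraction might look as follows; the profiles here are synthetic placeholders for the time-space averaged DNS data.

```python
import numpy as np

def friction_velocity(y, phi, u_mean, uv_turb, uv_form, nu=1.0e-6, rho=1000.0):
    """u_tau from the maximum of the total fluid shear stress, equation (3):
    tau(y) = rho*nu*d(phi*<u>)/dy - rho*phi*<u'v'> - rho*phi*<u~ v~>."""
    tau = rho * nu * np.gradient(phi * u_mean, y) - rho * phi * uv_turb - rho * phi * uv_form
    return np.sqrt(tau.max() / rho), tau

# Synthetic bed-normal profiles across the sediment-water interface (y = 0).
y = np.linspace(-0.02, 0.05, 400)
phi = 0.41 + 0.59 * 0.5 * (1.0 + np.tanh(y / 0.002))        # porosity: 0.41 in the bed, 1 above
u_mean = 0.3 * 0.5 * (1.0 + np.tanh(y / 0.004))             # mean velocity ramp across the SWI
uv_turb = -4.0e-4 * np.exp(-((y - 0.002) / 0.01) ** 2)      # Reynolds shear stress <u'v'>
uv_form = 0.2 * uv_turb                                     # form-induced shear stress

u_tau, tau = friction_velocity(y, phi, u_mean, uv_turb, uv_form)
print(f"u_tau ~ {u_tau:.4f} m/s, peak stress at y = {1000 * y[tau.argmax()]:.1f} mm")
```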
Following smooth wall DNS studies by Moser _et al._ (1999), between 20-25 flow-through times (computed as the length over average bulk velocity \(L_{x}/U_{b}\)) are needed for the turbulent flow to reach stationary state. Once a stationary flow field is obtained, computations were performed for an additional time period of \(T=25\delta/u_{\tau}\) to collect single-point and
\begin{table}
\begin{tabular}{l l l l l l} \hline Case & \(N_{x}\times N_{y}\times N_{z}\) & \multicolumn{3}{c}{Bed-Normal Grid Distribution} & \multicolumn{1}{c}{\((\Delta x^{+},\Delta y^{+},\Delta z^{+})\)} \\ & & Channel region & Top layer & Bottom layers & \\ VV & \(768\times 288\times 384\) & 96 & 86 & 106 & (2.94, 0.95, 2.94) \\ PBL & \(1152\times 350\times 576\) & 150 & 90 & 110 & (2.94, 0.95, 2.94) \\ PBM & \(846\times 530\times 448\) & 184 & 180 & 166 & (4.01, 0.95, 3.8) \\ PBH & \(882\times 1082\times 448\) & 342 & 548 & 192 & (6.74, 0.55, 6.63) \\ IWM-F & \(846\times 364\times 448\) & 184 & 180 & - & (4.01, 0.95, 3.8) \\ \hline \end{tabular}
\end{table}
Table 2: Grid parameters used in the present DNS. \((\ )^{+}\) denotes wall units. Details on the grid transition regions are given in appendix B.
two-point statistics. For the PBL case, where the domain in the streamwise and spanwise directions is twice that used in the PBM and PBH cases, the statistics were collected over a time period of \(T=13\delta/u_{\tau}\). The flow statistics were monitored to obtain statistically stationary values over the above averaging periods.
## 3 Results
The main results for the different cases studied in this work are discussed. The Reynolds and form-induced stresses are first compared for different Reynolds numbers (section 3.1). The structure of turbulence is visualized using vorticity contours followed by a detailed quadrant analysis describing the sweep and ejection events (section 3.2). Variation of the turbulence penetration depth, interfacial mixing length, and similarity relations are investigated as a function of \(Re_{K}\) (section 3.3). Next, the role of the top layer of the sediment is quantified by comparing the permeable bed statistics with an impermeable rough wall with roughness equivalent to the permeable bed (section 3.4). Finally, probability distribution function (PDF) of the viscous and pressure components of the normalized bed shear stress are presented in section 3.5 followed by the pressure fluctuations at the sediment-water interface in section 3.6.
### Reynolds and form-induced stresses
Figure 3 shows the bed-normal variation of all components of Reynolds and form-induced stresses for the PBL, PBM, and PBH cases. The zero-displacement plane (see Appendix D for details) defines the sediment-water interface (SWI) in this study and is used as a virtual origin while comparing and contrasting the primary and secondary statistics. The variables are normalized by \(u_{\tau}^{2}\) (pressure by \(\rho u_{\tau}^{2}\)), and \(y\) is shifted by \(d\), the zero-displacement thickness, and then normalized by \(\delta\), effectively making virtual origin the same for all the cases. This implies that SWI planes for all cases align. It is observed that the magnitude of the form-induced stresses is generally smaller than the Reynolds stresses. The location of the peaks are also different, with the form induced stresses peaking below the sediment-water interface, whereas the Reynolds stresses peak close to the bed crest.
The profiles of streamwise (\(x\)-direction), \(\langle\overline{u^{\prime 2}}\rangle^{+}\), and bed-normal (\(y\)-direction) Reynolds stress, \(\langle\overline{v^{\prime 2}}\rangle^{+}\), (figure 3a,b) exhibit similarity for \((y+d)/\delta>=0.4\), substantiating the wall similarity hypothesis reported by Raupach _et al._ (1991), Breugem _et al._ (2006), and Fang _et al._ (2018). Near the bed, the streamwise stress decreases and bed-normal and spanwise (\(z\)-direction, figure 3c) stresses increase with increasing Reynolds number, suggesting weakening of the wall-blocking effect with increase in non-dimensional bed permeability. Moreover, not only does the flow penetrate deeper inside the bed (quantified in section 3.3) but the intensity also increases with increasing \(Re_{K}\) as seen from the bed-normal component of the stress. However, this comes at the expense of loss in intensity in the streamwise direction. The peak values and their locations for the Reynolds stresses are shown in table 3. The peak values of \(\langle\overline{u^{\prime 2}}\rangle^{+}\) are larger than \(\langle\overline{v^{\prime 2}}\rangle^{+}\) for all Reynolds numbers, similar to turbulent flows over smooth or rough impermeable wall cases. The location of peak in \(\langle\overline{u^{\prime 2}}\rangle^{+}\) and \(\langle\overline{w^{\prime 2}}\rangle^{+}\) is close to the crest for all \(Re_{K}\), whereas that in \(\langle\overline{v^{\prime 2}}\rangle^{+}\) shifts downward starting from above the crest level for the PBL case and moving close to the crest for the PBH case. The Reynolds shear stress shown in figure 3d also peaks near the crest, increases with \(Re_{K}\), and shows similarity in the outer region.
The bed-normal variation of the form-induced stresses for the PBL, PBM and PBH cases is shown in figure 3d-g. The influence of \(Re_{K}\) is much more pronounced on the
form-induced stresses. While the magnitudes for the streamwise stresses are comparable, the bed-normal and spanwise stresses are much smaller than the corresponding Reynolds stresses. More importantly, the peak values occur significantly below sediment crest. Inside the bed, \(\langle\widetilde{\overline{u}}^{2}\rangle^{+}\) decreases with \(Re_{K}\), while both \(\langle\widetilde{\overline{v}}^{2}\rangle^{+}\) and \(\langle\widetilde{\overline{w}}^{2}\rangle^{+}\) values increase with \(Re_{K}\),
mimicking the trend of the Reynolds stresses. However, in contrast to \(\langle\overline{v^{\prime 2}}\rangle^{+}\), the peaks in \(\langle\widetilde{\overline{v}}^{2}\rangle^{+}\) occur at a similar location (table 3) for all Reynolds numbers. This suggests that the penetration depth of \(\langle\widetilde{\overline{v}}^{2}\rangle^{+}\) is independent of \(Re_{K}\) and depends on the local porosity distribution in the top layer of the bed, which is similar for the three \(Re_{K}\) cases studied in this work. It is interesting to note that the values of \(\langle\widetilde{\overline{w}}^{2}\rangle^{+}\) are larger than \(\langle\widetilde{\overline{v}}^{2}\rangle^{+}\) at lower \(Re_{K}\), but are comparable and slightly smaller than \(\langle\widetilde{\overline{v}}^{2}\rangle^{+}\) at higher \(Re_{K}\). This may be attributed to the increased flow penetration, causing the bed-normal form-induced stresses to peak further below the bed crest.
The turbulent fluctuations and form-induced disturbances in pressure, shown in figure 3h, exhibit a very strong correlation with \(Re_{K}\). Again, the form-induced pressure disturbances are generally much smaller than the turbulent fluctuations. There is more than a ten-fold increase in the peak value between the PBL and PBH cases for both the turbulent fluctuations and form-induced disturbances in pressure. The location of the peak in form-induced disturbances as well as turbulent fluctuations is just below the crest and is almost the same for all \(Re_{K}\) studied. The fluctuations quickly decay for \((y+d)/\delta<-0.3\), indicating that a significant magnitude contribution comes from the top layer of the sediment bed. This suggests that the local protrusions of partially exposed sediment particles in the top layer and resultant stagnation flow are responsible for altering the flow structures that in turn produce larger form-induced disturbances and pressure fluctuations. The increased pressure disturbances due to the bed roughness elements can lead to enhanced mass transport rates at the sediment-water interface with potentially reduced residence times for pollutants and contaminants.
### Turbulence structure and quadrant analysis
Distinct variations in the characteristics of primary turbulence structure are first shown in this section, followed by the quadrant analysis. Contours of instantaneous bed-normal vorticity, \(\omega_{y}^{+}=\omega_{y}\nu/u_{\tau}^{2}\), just above the crest (\(y/\delta=0.005\)) are shown in figure 4. Results from a simulated smooth wall case at \(Re_{\tau}=270\) (same \(Re_{\tau}\) as the PBL case) are also shown in figure 4a for comparison. In the smooth wall case, distinct long elongated streaky structures, which are a result of low- and high-speed streaks generated by quasi-streamwise vortices, are visible. The influence of strong mean gradient and an impenetrable smooth wall results in these long streaky structures (Lee _et al._, 1990). In the low Reynolds number permeable bed case (PBL), shown in figure 4b, the beginning of the breakdown of these structures can be seen. The roughness and permeability of the bed help in the breakdown.
\begin{table}
\begin{tabular}{l c c c c c} \hline Case & \(\langle\overline{u^{\prime 2}}\rangle^{+}\) & \(\langle\overline{v^{\prime 2}}\rangle^{+}\) & \(\langle\overline{w^{\prime 2}}\rangle^{+}\) & \(\langle\overline{u^{\prime}v^{\prime}}\rangle^{+}\) & \(\langle\overline{p^{\prime 2}}\rangle^{+}\) \\ PBL & 3.46 (0.15) & 1.03 (0.25) & 1.79 (0.18) & 0.80 (0.16) & 9.41 (0.094) \\ PBM & 3.19 (0.15) & 1.18 (0.19) & 1.88 (0.16) & 0.84 (0.15) & 40.25 (0.095) \\ PBH & 2.97 (0.15) & 1.23 (0.16) & 2.05 (0.16) & 0.88 (0.15) & 120.88 (0.097) \\ \hline Case & \(\langle\widetilde{\overline{u}}^{2}\rangle^{+}\) & \(\langle\widetilde{\overline{v}}^{2}\rangle^{+}\) & \(\langle\widetilde{\overline{w}}^{2}\rangle^{+}\) & \(\langle\widetilde{\overline{u}}\widetilde{\overline{v}}\rangle^{+}\) & \(\langle\widetilde{\overline{p}}^{2}\rangle^{+}\) \\ PBL & 2.71 (0.08) & 0.25 (-0.04) & 0.38 (0.014) & 0.21 (0.05) & 3.7 (0.07) \\ PBM & 2.45 (0.07) & 0.47 (-0.05) & 0.42 (0.01) & 0.29 (0.04) & 16.5 (0.063) \\ PBH & 2.33 (0.07) & 0.53 (-0.05) & 0.46 (0.009) & 0.31 (0.04) & 49.9 (0.06) \\ \hline \end{tabular}
\end{table}
Table 3: The peak value and location \([(y+d)/\delta]\) given in brackets, of Reynolds stresses and form-induced stresses for the PBL, PBM, and PBH cases. The peak values of Reynolds and form-induced stresses are normalized by \(u_{\tau}^{2}\) (pressure by \(\rho u_{\tau}^{2}\)).
Although the long elongated streaks in the PBL case are shortened due to roughness, at this \(Re_{K}\) the flow anisotropy is somewhat retained. With further increase in Reynolds number, the streaks are broken down even more, and in the PBH case (figure 4d) the streaky structures are significantly less pronounced, with the flow becoming more intermittent. As \(Re_{K}\) increases, weakening of the wall-blocking effect due to strong bed-normal velocities prevents the formation of these long streaky structures and leads to a reduction in flow anisotropy.
Quadrant analysis is performed to understand the influence of near bed flow structures on Reynolds stress. Joint probability distribution functions (PDFs) for turbulent velocity fluctuations, \(u^{\prime}\) and \(v^{\prime}\) calculated at different elevations from the sediment bed are shown in figure 5. The product of \(u^{\prime}v^{\prime}\) is negative in the second and fourth quadrants, representing turbulence production. The second quadrant, where \(u^{\prime}<0,v^{\prime}>0\), corresponds to an ejection event whereas, the fourth quadrant, where \(u^{\prime}>0,v^{\prime}<0\), corresponds to a sweep event. Figures 5(a-l) show the correlation for different \(Re_{K}\) at four bed-normal elevations within one particle diameter below and above the crest. Figures 5a-c are one diameter above the crest in the free-stream, figures 5d-f are at the bed crest, figures 5g-i are half-way into the top layer of the bed, and figures 5j-l are at the bottom of the top layer (\(y/D_{p}=-1\)), respectively.
Figure 5: Quadrant analysis of probability distribution functions of turbulent velocity fluctuations for the PBL (left panel), PBM (middle panel), and PBH (right panel) cases at different elevations: \(y/\delta\) (\(y/D_{p}\)) of (a–c) 0.143 (0.5), (d–f) 0 (0), (g–i) -0.143 (-0.5), (j–l) -0.286 (-1). The velocities have been normalized by \(u_{\tau}\).
With increasing Reynolds number, sweep events are enhanced, increasing the transport of momentum towards the bed. At the bed crest (figures 5d-f), both ejection and sweep events become dominant over a greater range of velocity fluctuations, showcasing the interaction between the fluid flow and the permeable sediment bed. For lower \(Re_{K}\), the joint PDFs are more concentrated in a narrow band, indicating that large fluctuations in \(u^{\prime}\) are associated with smaller excursions in \(v^{\prime}\). With increase in Reynolds number, the PDFs appear more diffused, highlighting an increase in bed-normal velocity fluctuations and a reduction in anisotropy at the SWI, which is confirmed by the vorticity contours discussed earlier. Further away from the bed, at \(y/D_{p}=0.5\) (figures 5a-c), the ejection and sweep events again decrease in intensity compared to those at the SWI.
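The quadrant decomposition used above can be sketched as below on synthetic, negatively correlated fluctuations; in practice \(u^{\prime}\) and \(v^{\prime}\) would be sampled from the DNS at a fixed elevation, and the covariance here is purely illustrative.

```python
import numpy as np

def quadrant_fractions(u_p, v_p):
    """Fraction of the total |u'v'| carried by each quadrant.
    Q2 (u'<0, v'>0) are ejections; Q4 (u'>0, v'<0) are sweeps."""
    uv = u_p * v_p
    total = np.abs(uv).sum()
    quads = {
        "Q1 (outward interaction)": (u_p > 0) & (v_p > 0),
        "Q2 (ejection)":            (u_p < 0) & (v_p > 0),
        "Q3 (inward interaction)":  (u_p < 0) & (v_p < 0),
        "Q4 (sweep)":               (u_p > 0) & (v_p < 0),
    }
    return {name: np.abs(uv[m]).sum() / total for name, m in quads.items()}

# Synthetic, negatively correlated fluctuations standing in for near-crest DNS samples.
rng = np.random.default_rng(1)
cov = [[1.0, -0.4], [-0.4, 0.35]]                     # illustrative u'-v' covariance
u_p, v_p = rng.multivariate_normal([0.0, 0.0], cov, size=50_000).T
for name, frac in quadrant_fractions(u_p, v_p).items():
    print(f"{name}: {frac:.2f}")
```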
### Penetration depths, mixing length, and similarity relations
Passage of turbulent structures over the sediment bed and resulting sweep events were shown to penetrate into the sediment bed in section 3.2. These sweep events induce momentum fluxes that can be on the order of the mean bed shear stress. The penetration of turbulence increases the flow resistance and effective roughness. The depth of turbulent shear penetration is associated with the characteristic size of the turbulent eddy across the sediment-water interface (SWI). Knowing how these scales are related to the permeability (\(\sqrt{K}\)) or the mean particle size (\(D_{p}\)) at different \(Re_{K}\) is important for reduced-order models of turbulent momentum and mass transport across the interface.
The mean flow penetration depth, \(\delta_{b}\), known as the Brinkman layer thickness, is calculated from the mean velocity profiles below the sediment crest. Deep inside the bed, the mean velocity reaches a constant value (Darcy velocity), which is denoted as \(U_{p}\). The Brinkman layer thickness is then calculated by measuring the vertical distance from the SWI (\(y=-d\)) to a location inside the bed, where the difference between the local mean velocity (\(\langle\overline{u}\rangle(y)\)) and Darcy velocity (\(U_{p}\)) has decayed to 1% of the velocity value at the SWI (\(U_{i}\)), i.e., \(\langle\overline{u}\rangle(y)_{y+d=-\delta_{b}}=0.01(U_{i}-U_{p})+U_{p}\) (where subscript '\(i\)' indicates the SWI location). The corresponding value of the Brinkman layer thickness measured from the crest of the bed is \(\delta_{b}^{*}=\delta_{b}+d\). Figure 6a shows a strong correlation of the normalized Brinkman layer thickness with permeability Reynolds number. Similar trends are observed in the experimental data of Voermans _et al._ (2017) despite the fact that the actual position and size of the particles in the random arrangement used in the present study are different from those in the experiments.
The turbulent shear stress penetration is defined as the depth at which the Reynolds stress is 1% of its value at the SWI, i.e., \(\langle\overline{u^{\prime}v^{\prime}}\rangle_{y=-\delta_{p}}=0.01\langle\overline{u^{\prime}v^{\prime}}\rangle_{i}\), and the value measured from the crest of the bed is \(\delta_{p}^{*}=\delta_{p}+d\). Following the work of Ghisalberti (2009) on obstructed shear flows, Manes _et al._ (2012) defined the penetration depth from the crest of the sediment (\(\delta_{p}^{*}\)), and showed that it is proportional to the drag length scale, i.e., \(\delta_{p}^{*}\sim f(C_{d}a)^{-1}\), where \(C_{d}\) is the drag coefficient of the medium, and \(a\) is a length scale obtained based on the frontal area per unit volume of the solid medium. For monodispersed spheres, \(a\) is proportional to the particle size which in turn is related to the permeability. Thus, using a drag-force balance, Manes _et al._ (2012) argued for a linear relation between \(\delta_{p}^{*}\) and \(\sqrt{K}\). However, figure 6b shows that the normalized \(\delta_{p}\) is a function of \(Re_{K}\). Both the mean flow (\(\delta_{b}\)) and turbulent shear stress penetration (\(\delta_{p}\)) show a non-linear correlation with the permeability, and increase with increasing \(Re_{K}\). A deterministic relation is observed for the ratio, \(\delta_{b}^{*}/\delta_{p}^{*}\), with the permeability Reynolds number as the ratio approaches a constant value of 1.1 as shown in figure 6c.
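Both 1% decay criteria can be implemented straightforwardly; the sketch below uses synthetic, exponentially decaying profiles purely for illustration (the DNS would supply the averaged mean-velocity and Reynolds-stress profiles).

```python
import numpy as np

def penetration_depth(y, profile, y_swi=0.0, value_deep=0.0, threshold=0.01):
    """Depth below the interface (y = y_swi, y increasing upward) at which
    (profile - value_deep) has decayed to `threshold` of its interface value."""
    value_swi = np.interp(y_swi, y, profile)
    target = value_deep + threshold * (value_swi - value_deep)
    y_b, p_b = y[y <= y_swi], profile[y <= y_swi]
    idx = np.argmax(p_b >= target)        # deepest point (scanning upward) above the target
    return y_swi - y_b[idx]

# Synthetic profiles decaying exponentially below the interface (illustration only).
y = np.linspace(-0.03, 0.02, 600)
u_mean = 0.02 + 0.28 * np.where(y < 0, np.exp(y / 0.004), 1.0)     # mean velocity, Darcy value 0.02
uv = -4.0e-4 * np.where(y < 0, np.exp(y / 0.003), 1.0)             # Reynolds shear stress

delta_b = penetration_depth(y, u_mean, value_deep=0.02)   # Brinkman layer thickness
delta_p = penetration_depth(y, -uv)                       # turbulent shear penetration depth
print(f"delta_b ~ {1000 * delta_b:.1f} mm, delta_p ~ {1000 * delta_p:.1f} mm")
```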
The dominant scale of the turbulent structures at the interface is affected by the interstitial spacing within the pore of which the permeability is a geometric measure, but it does not introduce any physical flow dependent measure. Hence, quantifying
the dependence of the interfacial mixing length, \(\langle L_{m,i}\rangle=(\langle\overline{u^{\prime}v^{\prime}}\rangle_{i}/(\partial_{y}\langle\overline{u}\rangle)_{i}^{2})^{1/2}\), on the permeability Reynolds number is important. The mixing length can be thought of as a representative length scale of the turbulent eddies at the SWI responsible for turbulent transport of mass and momentum. It is on the order of the bed permeability and is greater than the Kolmogorov length scale, \(L_{k}=(\nu^{3}/\varepsilon)^{1/4}\), where \(\varepsilon\approx u_{\tau}^{3}/\langle L_{m,i}\rangle\) is the dissipation rate of the turbulent kinetic energy (Tennekes _et al._, 1972). Figure 6d shows the interfacial mixing length, \(\langle L_{m,i}\rangle\), normalized by the Kolmogorov length scale (\(L_{k}\)) as well as normalized by the permeability (\(\sqrt{K}\)). The Brinkman layer thickness, the turbulent shear stress penetration, and the mixing length at the interface show very similar dependence on \(Re_{K}\) over the Reynolds numbers studied, suggesting that the mixing length is a relevant characteristic scale for transport of momentum and mass.
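A sketch of these interface scales is given below, with placeholder interface values rather than the paper's data.

```python
import numpy as np

def interface_scales(uv_swi, dudy_swi, u_tau, nu=1.0e-6):
    """Interfacial mixing length L_m,i = (<u'v'>_i / (d<u>/dy)_i^2)^(1/2) and the
    Kolmogorov scale L_k = (nu^3/eps)^(1/4), with eps estimated as u_tau^3 / L_m,i."""
    L_m = np.sqrt(abs(uv_swi) / dudy_swi**2)
    L_k = (nu**3 * L_m / u_tau**3) ** 0.25
    return L_m, L_k

# Placeholder interface values (not taken from the present simulations).
L_m, L_k = interface_scales(uv_swi=-4.0e-4, dudy_swi=40.0, u_tau=0.02)
print(f"L_m,i ~ {1000 * L_m:.2f} mm, L_k ~ {1e6 * L_k:.0f} microns, L_m,i/L_k ~ {L_m / L_k:.0f}")
```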
Flows over highly permeable boundaries share statistical similarities over different types of permeable geometries as shown by Ghisalberti (2009). Asymptotic values predicted from the present simulations for \(\sigma_{v,c}/\sigma_{u,c}\sim 0.6\), \(\sigma_{v,c}/u_{\tau}\sim 1.1\), \(\sigma_{u,c}/u_{\tau}\sim 1.8\), and \(U_{c}/u_{\tau}\sim 2.6\) match well with those observed by Voermans _et al._ (2017). Here, \(\sigma_{u,c}=\langle\overline{u^{\prime 2}}\rangle_{c}^{1/2}\), \(\sigma_{v,c}=\langle\overline{v^{\prime 2}}\rangle_{c}^{1/2}\), and \(U_{c}\) is the mean velocity at the crest. The data are normalized by \(-\langle\overline{u^{\prime}v^{\prime}}\rangle_{c}^{1/2}\), where the subscript '\(c\)' indicates that these similarity relations are defined at the crest of the sediment bed, \(y=0\). The successful comparison of various turbulent quantities with the experimental work of Voermans _et al._ (2017, 2018) has important implications for flows over monodispersed sediment beds in general.
Voermans _et al._ (2017) studied a range of \(Re_{K}\) between \((1-6.3)\) by varying the permeability, through the use of medium and large diameter particles, along with flow rates. In the present DNS, the particle sizes and permeability are kept constant and \(Re_{K}\) is varied by changing the bulk flow rate. Geometrical features of the sediment beds, such as size of particles, hence the permeability, local porosity variations, and tortuosity, between the experiments and the DNS are very different. The fact that the DNS predictions follow closely with the experimental data suggests that, for turbulent flow over randomly arranged monodispersed sediment beds, the actual location of sediment particles in the bed has little influence on the turbulence statistics. However, matching the \(Re_{K}\) is a necessary but not a sufficient condition. A random as opposed to structured arrangement of monodispersed particles in the top layer is also important to achieve statistical similarity and is investigated below.
### Role of the top layer of the sediment bed
To quantify the influence of the top layer of the sediment bed on flow turbulence and statistics at the SWI, a rough impermeable wall case is investigated by matching the entire top layer of the permeable bed at medium Reynolds number (PBM). The top layer of the sediment is placed over a no-slip wall as shown in figure 2d corresponding to the case IWM-F in table 1.
Figures 7a-h compare the bed-normal variation of mean velocity, Reynolds and form-induced stresses, and pressure disturbances for the PBM and impermeable wall IWM-F cases. The mean velocity, Reynolds stress, and turbulent pressure fluctuation profiles for both the cases overlap each other with no noticeable difference due to the presence of an underlying solid wall in IWM-F. The majority of high-magnitude bed-normal fluctuations are restricted to the top layer of the bed for both cases. The presence of a solid wall underneath the full layer of spherical roughness elements in IWM-F has minimal influence on both the magnitude and penetration of the mean flow and turbulent fluctuations. This is because the full layer of roughness elements creates pockets underneath where the flow can penetrate. Since the turbulent kinetic energy within this layer is still small, the flow characteristics and momentum transport mechanisms resemble that of a permeable bed. Similar behavior was observed and reported by Manes _et al._ (2009) in their experimental work.
As in the permeable bed cases, the form-induced stresses (figures 7e-g) have lower magnitudes compared to their corresponding Reynolds stresses, and the peak values occur significantly below the sediment crest even for the IWM-F case. While the peak value for the streamwise component (\(\widetilde{(\overline{u}^{2})}^{+}\)) is well captured, the peaks in the bed-normal (\(\widetilde{(\overline{v}^{2})}^{+}\)) and spanwise (\(\widetilde{(\overline{w}^{2})}^{+}\)) components show differences between the two cases. Deeper penetration and higher magnitudes of the form-induced normal stresses are observed in PBM compared to the IWM-F case. The additional layers of sediment grains underneath the top layer in PBM provide connected pathways for the bed-induced flow disturbances to penetrate deeper with a gradual loss in intensity. In IWM-F, by contrast, the underlying solid wall blocks the flow penetration and redistributes the stresses tangentially into the spanwise direction, which is evident from the larger peak magnitude of \(\widetilde{(\overline{w}^{2})}^{+}\) compared to \(\widetilde{(\overline{v}^{2})}^{+}\).
The form-induced pressure disturbances (\(\widetilde{(\overline{p}^{2})}^{+}\)), shown in figure 7h, penetrate deeper into the bed in PBM, resulting in a reduction of their peak value compared to the IWM-F case. The wall-blocking effect in IWM-F results in pressure disturbances extending above the crest level, up to about \((y+d)/\delta<0.3\). The turbulent pressure fluctuations, however, are well captured and are typically much larger than the form-induced pressure disturbances. The presence of high-magnitude pressure fluctuations only in the top layer is an important observation provided by the present DNS data, as it indicates that including the effect of a single layer of randomly arranged roughness elements in reach-scale hyporheic exchange models can potentially better capture the turbulent fluctuations.
Figure 7: Comparison of the mean velocity, Reynolds (lines) and form-induced (lines and symbols) stress profiles for the PBM and IWM-F cases: (a) mean velocity, (b-c) streamwise and bed-normal components of the spatially-averaged Reynolds stress tensor, (d) spatially averaged Reynolds shear stress and form-induced shear stress, (e-g) streamwise, bed-normal and spanwise components of the form-induced stresses, and (h) mean-square pressure fluctuations and form-induced pressure disturbances. The crest lines for the PBM and IWM-F cases overlap and are shown by the horizontal lines. Pressure is normalized by \(\rho u_{\tau}^{2}\).
### Stress and force statistics on the particle bed
Direct measurements of shear stress or the drag and lift forces on sediment grains in a laboratory or in the field are challenging. The present pore-resolved simulations provide access to the spatio-temporal variations in these variables. Specifically, knowing higher order statistics of bed shear stress as a function of Reynolds number can critically influence reduced order models for mass and momentum transport that are based on the friction velocity (\(u_{\tau}\)). Furthermore, incipient motion, sliding, rolling and saltation, driving the bedload transport are modeled based on the bed shear stress exceeding a critical value. Higher-order statistics of bed shear stress are important in stochastic modeling of incipient motion (Ghodke & Apte, 2018_b_). Motion and rearrangement of sediments can alter local bed porosity and effective permeability, directly impacting hyporheic exchange. The DNS data are used to compute probability distribution functions (pdfs) and statistics of the local variation of net bed shear stress on particle surfaces as well as the net drag and lift forces on the particle bed at different \(Re_{K}\).
The net stress (\(\boldsymbol{\tau}^{t}=\boldsymbol{\tau}^{v}-p\mathbf{I}\)) on the particle surface includes contribution from the viscous (skin-friction, \(\boldsymbol{\tau}^{v}\)) as well as pressure (form, \(p\mathbf{I}\)) stresses. The normal and tangential components of the net stress can be directly evaluated on the particle surface from the velocity and the pressure fields and then transformed into the Cartesian frame using the surface normal vector to obtain the streamwise (\(\tau_{x}^{t}\)), the bed-normal (\(\tau_{y}^{t}\)), and the spanwise (\(\tau_{z}^{t}\)) components, respectively. The local distribution of the net stress on the particle surface can be integrated over its surface area to obtain the net force on the particle,
\[\mathbf{F}=\int_{\Gamma}\boldsymbol{\tau}^{t}\cdot\mathbf{n}\ d\Gamma\equiv \int_{\Gamma}\left(\boldsymbol{\tau}^{v}-p\mathbf{I}\right)\cdot\mathbf{n}\ d\Gamma, \tag{1}\]
where \(d\Gamma\) is the differential surface area element of the particle and \(\mathbf{n}\) is the outward unit normal to the particle surface. Accordingly, statistics of the local distribution of the net stress as well as of the drag and lift force components are evaluated.
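A minimal sketch of this surface integration is given below, assuming a discretized particle surface with per-element normals, areas, viscous stress tensors, and pressures; all names are illustrative.

```python
import numpy as np

def particle_force(normals, areas, tau_visc, p):
    """Net hydrodynamic force on one particle, F = sum over surface elements of
    (tau^v - p I) . n dGamma.

    normals  : (N, 3) outward unit normals of the surface elements
    areas    : (N,)   areas of the surface elements
    tau_visc : (N, 3, 3) viscous stress tensor on each element
    p        : (N,)   pressure on each element
    """
    total_stress = tau_visc - p[:, None, None] * np.eye(3)      # tau^t = tau^v - p I
    traction = np.einsum('nij,nj->ni', total_stress, normals)   # tau^t . n per element
    return np.sum(traction * areas[:, None], axis=0)            # integrate over the surface

# The drag and lift forces discussed below are the x and y components of the result.
```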
#### 3.5.1 Local distribution of net stress on the particle surface
The probability distribution functions of the streamwise (\(\tau_{x}^{t}\)) and bed-normal (\(\tau_{y}^{t}\)) net local stress, normalized by the total stress in the \(x\)-direction integrated over all particles in the bed (\(\tau_{w}^{t}\)), are shown in figures 8a-b. It is found that these distributions collapse nicely for all Reynolds numbers, suggesting that they are independent of \(Re_{K}\). The dominance of positively skewed streamwise shear stress (\(\tau_{x}^{t}\)) events is clearly visible with this normalization. Positive skewness in the PDFs is associated with the exposed protrusions of the spherical roughness elements into the free-stream, as the instantaneous streamwise velocity generally increases in the bed-normal direction. For wall-bounded flows, the probability of negative wall shear stress is generally linked to the near-wall low- and high-speed regions; however, with protrusions from the rough sediment bed, these structures are destroyed, as was shown in section 3.2. The probability of negative \(\tau_{x}^{t+}\) fluctuations is then highly associated with the trough regions between the roughness elements and is representative of reverse flow behind the exposed sediment particles. As the net bed stress increases with increasing \(Re_{K}\), the probability of extreme streamwise stress events increases. The PDFs of the bed-normal (\(\tau_{y}^{t+}\)) and spanwise (\(\tau_{z}^{t+}\), not shown) stresses are more symmetric due to the absence of the directional influence of a strong mean flow gradient.
The relationship between the two viscous components of shear stress can be understood by their yaw angle, \(\psi_{\tau}=\mathrm{atan}(\tau_{z}^{v}(t)/\tau_{x}^{v}(t))\). Jeon _et al._ (1999) reported that the shear-stress yaw angles for smooth walls are within the range of \(-45\) to \(45\) degrees,
indicating that events with large values of \(\tau_{x}^{v}\) are associated with small \(\tau_{z}^{v}\). However, in flow over rough permeable beds, the probability of yaw angles above 45 degrees is much higher, as seen in figure 8c. This shows that comparable magnitudes of \(\tau_{x}^{v}\) and \(\tau_{z}^{v}\) occur more frequently in flows over sediment beds. However, the yaw angle is independent of \(Re_{K}\), suggesting that it is influenced more by the roughness distribution of the bed rather than by the flow. The roughness elements reduce the directional bias of near-bed vortex streaks, resulting in more isotropic vorticity fields, wherein the probability of large-scale fluctuations occurring simultaneously in both components of shear stress increases.
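The yaw-angle statistics quoted above can be assembled directly from samples of the two viscous stress components; the short sketch below is illustrative (the names and binning are assumptions, not from the paper).

```python
import numpy as np

def yaw_angle_pdf(tau_xv, tau_zv, bins=90):
    """PDF of the viscous shear-stress yaw angle psi_tau = atan(tau_z^v / tau_x^v),
    in degrees, from samples of the two stress components on the bed surface."""
    psi = np.degrees(np.arctan(tau_zv / tau_xv))              # values in (-90, 90)
    pdf, edges = np.histogram(psi, bins=bins, range=(-90.0, 90.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf
```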
As the normalized PDFs of the local distribution of the net bed stress collapse for different \(Re_{K}\), it is conjectured that a simple deterministic fit to the PDFs of the stress fluctuations is possible, and this is investigated next. The PDFs of the local distribution of the net stress fluctuations on the particle surface normalized by their rms values are shown in figures 9a-c. The higher order statistics for the net stress are given in table 4. The mean (not shown) and standard deviation of the shear stresses increase with \(Re_{K}\), suggesting a higher probability of extreme events at larger \(Re_{K}\). The fluctuation PDFs are symmetric but non-Gaussian, and show a peaky distribution with heavy tails and the high kurtosis values given in table 4. A \(t\) location-scale model fit based on the variance, zero skewness, and a shape factor determined by the excess kurtosis represents the distributions very well.
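One way to construct the \(t\) location-scale fit described here, assuming a symmetric distribution with heavy tails (positive excess kurtosis), is to match moments: zero location, shape set by the excess kurtosis, and scale set by the variance. The sketch below is illustrative and uses SciPy's Student-\(t\) distribution.

```python
import numpy as np
from scipy import stats

def t_location_scale_from_moments(tau_fluct):
    """Moment-matched t location-scale model for stress fluctuations normalized by
    their rms: location 0 (symmetry), dof nu from the excess kurtosis of the t
    distribution (6/(nu-4)), and scale from the variance (Var = scale^2 * nu/(nu-2))."""
    x = tau_fluct / np.std(tau_fluct)          # normalize by the rms value
    ku_ex = stats.kurtosis(x, fisher=True)     # excess kurtosis (assumed > 0 here)
    nu = 6.0 / ku_ex + 4.0                     # shape factor from excess kurtosis
    scale = np.sqrt(np.var(x) * (nu - 2.0) / nu)
    return nu, 0.0, scale

# Model PDF for comparison with the data, e.g.:
# nu, loc, scale = t_location_scale_from_moments(tau_x_prime)
# pdf = stats.t.pdf(np.linspace(-20, 20, 400), nu, loc=loc, scale=scale)
```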
For smooth wall-bounded flows, the root mean-squared fluctuations of streamwise
Figure 8: PDFs of the net bed stress components: (a) streamwise (\(\tau_{x}^{t}\)) and (b) bed-normal (\(\tau_{y}^{t}\)). The superscript \((.)^{+}\) denotes normalization by the total bed stress in the \(x\) direction (\(\tau_{w}^{t}\)). Also shown is the yaw angle \(\psi_{\tau}\) based on the viscous components of the stresses (c). Results are shown for the PBL, PBM, and PBH cases.
Figure 9: PDFs of net stress fluctuations normalized by their root-mean-square values for the (a) streamwise (\(\tau_{x}^{t\prime}\)), (b) bed-normal (\(\tau_{y}^{t\prime}\)), and (c) spanwise (\(\tau_{z}^{t\prime}\)) components for the PBL, PBM, and PBH cases. Also shown is a non-Gaussian \(t\) location-scale fit.
stress follow a logarithmic correlation as proposed by Orlu & Schlatter (2011). Accordingly, a logarithmic correlation between the root mean-squared fluctuations and the friction Reynolds number is also assumed in the present permeable bed DNS,
\[\tau^{t+}_{x,rms}=\tau^{t}_{x,rms}/\tau^{t}_{w} =2.10\ln Re_{\tau}-8.11, \tag{10}\] \[\tau^{t+}_{y,rms}=\tau^{t}_{y,rms}/\tau^{t}_{w} =2.4\ln Re_{\tau}-9.83,\] (11) \[\tau^{t+}_{z,rms}=\tau^{t}_{z,rms}/\tau^{t}_{w} =2.4\ln Re_{\tau}-10.10, \tag{12}\]
and is shown in figures 10a-c. Here, the root mean-squared fluctuations are obtained by computing the net stress fluctuations on the particle surfaces over the entire bed and then time-averaging. For the present monodispersed bed, the friction (\(Re_{\tau}\)) and permeability (\(Re_{K}\)) Reynolds numbers are related to each other, and thus the above relation can also be plotted in terms of the permeability Reynolds number, as shown.
The logarithmic dependence of shear stress fluctuations together with a symmetric, non-Gaussian distribution for local stress fluctuations, is an important result, as in the field measurements for natural stream or river bed studies, typically the friction velocity, \(u_{\tau}\), is measured (Jackson _et al._, 2013\(a\), 2015) to compute \(Re_{\tau}\). Equations 10-12 can then be used to evaluate the bed stress variability for different \(Re_{K}\). Together with the non-Gaussian model distribution for the PDFs of stress fluctuations, a stochastic approach for mobilization and incipient motion of sediment grain can be developed.
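For reference, the sketch below simply evaluates the fitted correlations in equations (10)-(12) for a given friction Reynolds number; the function name and example values are illustrative.

```python
import numpy as np

def bed_stress_rms(Re_tau):
    """Root-mean-squared net bed stress fluctuations normalized by the total bed
    stress tau_w^t, from the logarithmic fits of equations (10)-(12)."""
    Re_tau = np.asarray(Re_tau, dtype=float)
    tau_x = 2.10 * np.log(Re_tau) - 8.11
    tau_y = 2.40 * np.log(Re_tau) - 9.83
    tau_z = 2.40 * np.log(Re_tau) - 10.10
    return tau_x, tau_y, tau_z

# e.g. for the friction Reynolds numbers of the three cases:
# bed_stress_rms([270, 545, 943])
```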
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(\tau^{t^{\prime}}_{x}\)} & \multicolumn{3}{c|}{\(\tau^{t^{\prime}}_{y}\)} & \multicolumn{3}{c}{\(\tau^{t^{\prime}}_{z}\)} \\ \hline Case & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) \\ PBL & 3.3e-1 & -1.28e-1 & 26.51 & 3.2e-1 & 7.86e-2 & 26.67 & 3.0e-1 & 8.0e-3 & 22.75 \\ PBM & 1.38 & -5.16e-1 & 19.92 & 1.41 & 1.57e-1 & 20.95 & 1.29 & -7.89e-2 & 19.5 \\ PBH & 2.7 & 1.65e-1 & 14.86 & 2.79 & 1.3e-1 & 15.85 & 2.71 & -8.1e-2 & 15.2 \\ \end{tabular}
\end{table}
Table 4: Higher order statistics for the streamwise (\(\tau^{t^{\prime}}_{x}\)), bed-normal (\(\tau^{t^{\prime}}_{y}\)), and spanwise (\(\tau^{t^{\prime}}_{z}\)) stress fluctuations, showing the standard deviation \(\hat{\sigma}(\cdot)\), skewness \(Sk(\cdot)\), and kurtosis \(Ku(\cdot)\).
Figure 10: Variation of the root-mean-squared fluctuations of the net bed stress normalized by the total bed stress (\(\tau^{t}_{w}\)) with the friction and permeability Reynolds numbers: (a) \(\tau^{+}_{x,rms}\), (b) \(\tau^{+}_{y,rms}\), and (c) \(\tau^{+}_{z,rms}\). Symbols show the DNS data as functions of \(Re_{\tau}\) and \(Re_{K}\); dashed lines show the fitted logarithmic correlations for the permeable sediment beds.
#### 3.5.2 Net drag and lift force distribution
The local stress distributions on the particle surfaces are integrated over each individual particle and then time averaged to obtain the mean and fluctuating drag and lift forces. Percentage contributions of the viscous and pressure stresses to the average drag and lift forces in the full bed, as well as in just the top layer of the bed, are given in table 5. The majority of the contribution to the drag and lift forces comes from the pressure distribution. It was also found that the top layer of the bed experiences average forces that are 3-4 times the average in the full bed, indicating that a significant contribution to the lift and drag force comes from the top layer of the bed.
The probability distribution functions of the fluctuations of drag (\(F_{d}^{\prime}\), \(x-\)component) and lift forces (\(F_{\ell}^{\prime}\), \(y-\)component) on all particles within the bed normalized by their respective standard deviation values are shown in figures 11a,b. Similar to the local stress distributions, the drag and lift force fluctuations also exhibit a symmetric, non-Gaussian distribution with heavy tails. Higher-order statistics of the forces (see table 5) indicate minimal skewness, and very high kurtosis suggesting probability of extreme forces due to turbulence. A model \(t\)-location fit function based on the variance and excess kurtosis data fits well for all Reynolds numbers.
Finally, to assess the contribution of the top layer on force statistics, figures 11c,d compare the PDFs of the drag and lift forces for PBM full bed as well as only the top layer of the bed, referred to as the PBMTL data. Note that for the PBMTL data, force fluctuations are normalized by the total force only in the top layer. A close match suggests that the top layer of the bed contributes to the majority of the net drag and lift force fluctuations in the bed. This has implications for large scale Reynolds-averaged Navier-Stokes modeling where a low-Reynolds number model is used to estimate shear stress on the bottom solid wall of the domain. Including roughness effects through a single layer of sediments may help improve prediction of reduced-order models.
### Pressure distributions at SWI
Pressure fluctuations at the SWI play a critical role in hyporheic transport even for a flat bed. Specifically, pressure fluctuations due to turbulence are conjectured to have a significant impact on mass transport within the hyporheic zone, as they can directly influence the residence times through turbulent advection. Reach-scale modeling of hyporheic zone transport typically uses a one-way coupling approach, wherein the pressure
\begin{table}
\begin{tabular}{l l|c c c c c|c c c c c} \hline \hline & & \multicolumn{5}{c|}{Drag force} & \multicolumn{5}{c}{Lift force} \\ \hline & Case & \(\%F_{d}^{v}\) & \(\%F_{d}^{p}\) & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) & \(\%F_{\ell}^{v}\) & \(\%F_{\ell}^{p}\) & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) \\ \multirow{3}{*}{Full Bed} & PBL & 39.4 & 60.6 & 0.36 & -0.44 & 14.84 & 0.9 & 99.1 & 0.31 & 0.35 & 12.09 \\ & PBM & 23.7 & 76.3 & 1.59 & -0.88 & 14.49 & 9.9 & 90.1 & 1.26 & -0.15 & 11.15 \\ & PBH & 16.7 & 83.25 & 2.82 & 0.46 & 11.20 & 3.2 & 96.8 & 2.83 & 0.58 & 8.46 \\ \hline \multirow{3}{*}{Top Layer} & PBL & 36.8 & 63.2 & 0.6 & -0.35 & 6.79 & 20.1 & 79.9 & 0.62 & -3.0e-4 & 4.10 \\ & PBM & 22.15 & 77.85 & 2.53 & -0.63 & 5.35 & 19.3 & 80.7 & 2.5 & 0.11 & 3.47 \\ & PBH & 13.4 & 86.6 & 4.50 & -0.18 & 4.42 & 4.6 & 95.4 & 4.99 & -0.08 & 3.81 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Drag and lift force statistics for the full bed and the top layer, showing the percentage contributions of the viscous (\(F_{d}^{v}\), \(F_{\ell}^{v}\)) and pressure (\(F_{d}^{p}\), \(F_{\ell}^{p}\)) components to the mean force, and the standard deviation (\(\hat{\sigma}\)), skewness (\(Sk\)), and kurtosis (\(Ku\)) of the fluctuations in the force.
fields obtained from the streamflow calculations are used as boundary conditions for a separate mass-transport computation within the hyporheic zone using Darcy-flow like models (Chen _et al._, 2020). These studies have shown that better characterization of the pressure distributions at the SWI can have a significant impact on predicting transport. Specifically, the present DNS data are used to quantify the variation of pressure at the SWI with Reynolds number.
PDFs of pressure fluctuations and disturbances normalized by their respective standard deviations and averaged over multiple flow-through times for the PBL, PBM and PBH cases are compared at their respective zero-displacement planes (\(y=-d\)), i.e., the sediment-water interface. Figures 12a,b show the PDFs of the normalized turbulent pressure fluctuations, \(p^{\prime}\), and the normalized form-induced pressure disturbances, \(\widetilde{\overline{p}}\). The turbulent fluctuations exhibit close to a normal distribution, whereas the form-induced pressure disturbances have skewed distributions with longer positive tails. This is attributed to the roughness protrusions that create positive pressure stagnation regions. Figure 12c shows the PDFs of the sum of the normalized turbulent and form-induced pressures, \(p^{\prime}+\widetilde{\overline{p}}\), for the three cases. The pressure-sum PDFs for all cases are statistically similar and symmetric, slightly heavier-tailed than a Gaussian; however, a Gaussian distribution nearly captures the pressure data within \(\pm 3\hat{\sigma}\). This result suggests that the pressure behavior inside the bed can be approximated with a simpler Gaussian distribution across a range of permeability Reynolds numbers typical of natural stream or river beds. Table 6 lists the higher order statistics for the turbulent fluctuations, form-induced disturbances, and their sum. The skewness and kurtosis for \(\widetilde{\overline{p}}\) are higher than
Figure 11: PDFs of drag and lift force fluctuations, (a,c) drag force and (b,d) lift force, normalized by their respective standard deviation values. The top panel shows permeable bed results for the PBL, PBM, and PBH cases, together with a \(t\) location-scale model fit. The bottom panel compares the values for the full permeable bed (PBM) and the top layer of the permeable bed (PBMTL) for the medium \(Re_{K}\).
those for \(p^{\prime}\), in alignment with the skewed distribution. The mean and variance values for the two are roughly of the same order at the SWI. However, the peak in the turbulent pressure fluctuations is much larger than the peak in the form-induced disturbances, as seen from the bed-normal variations shown earlier in figure 3h.
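A minimal sketch of this comparison, standardizing the pressure sum and overlaying the zero-mean, unit-variance Gaussian model, is given below (the names and binning are illustrative).

```python
import numpy as np
from scipy import stats

def pressure_sum_pdf(p_turb, p_form, bins=101):
    """Standardized PDF of p' + ptilde at the SWI and the standard-normal curve
    used as the Gaussian model fit."""
    s = p_turb + p_form
    s = (s - np.mean(s)) / np.std(s)                  # standardize by sigma-hat
    pdf, edges = np.histogram(s, bins=bins, range=(-8, 8), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf, stats.norm.pdf(centers)      # data PDF and Gaussian model
```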
The PDFs of the normalized pressure fluctuations and disturbances for the PBM and IWM-F cases are also compared at their respective zero-displacement planes (\(y=-d\)), or SWI, in figures 12d-f. The variance in the turbulent pressure fluctuations and form-induced disturbances is generally larger in the IWM-F case than in the permeable bed case (PBM). The probability of large negative \(\widetilde{\overline{p}}\) increases in the IWM-F case due to the blocking effect of the impermeable wall. However, the sum of the normalized distributions of \(p^{\prime}\) and \(\widetilde{\overline{p}}\) shows a reasonable match between the PBM and IWM-F cases, as shown in figure 12f. Therefore, the distribution of the sum of normalized pressure fluctuation and
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline & \multicolumn{3}{c}{\(p^{\prime}\)} & \multicolumn{3}{c}{\(\widetilde{\overline{p}}\)} & \multicolumn{3}{c}{\(\widetilde{\overline{p}}+p^{\prime}\)} \\ \hline Case & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) & \(\hat{\sigma}(\cdot)\) & \(Sk(\cdot)\) & \(Ku(\cdot)\) \\ PBL & 1.12 & -1.84e-1 & 4.22 & 1.60 & 1.70 & 8.04 & 1.86 & 1.09 & 6.76 \\ PBM & 2.98 & -5.15e-1 & 5.68 & 2.38 & 2.07 & 9.76 & 3.68 & 0.48 & 6.65 \\ PBH & 6.11 & -5.66e-1 & 7.52 & 4.27 & 1.87 & 8.77 & 7.54 & 0.68 & 8.52 \\ \end{tabular}
\end{table}
Table 6: Higher order statistics for turbulent fluctuations and form induced pressure disturbances, showing the standard deviation \(\hat{\sigma}(\cdot)\), skewness \(Sk(\cdot)\), and kurtosis \(Ku(\cdot)\).
Figure 12: PDFs of (a,d) turbulent pressure fluctuations (\(p^{\prime}\)), (b,e) form-induced pressure disturbances (\(\widetilde{\overline{p}}\)), and (c,f) the sum of \(p^{\prime}\) and \(\widetilde{\overline{p}}\). The top panel shows data for the permeable bed cases PBL, PBM, and PBH, together with a Gaussian model fit. The bottom panel compares the PDFs for the full permeable bed (PBM) and the impermeable rough wall (IWM-F).
disturbances in the top layer of the sediment bed is found to be sufficient to capture the pressure variation at the sediment-water interface.
## 4 Conclusions
Pore-resolved direct numerical simulations of turbulent flow over a randomly packed, monodispersed bed of spherical particles are performed for three permeability Reynolds numbers, \(Re_{K}=2.56\), \(5.17\), and \(8.94\) (\(Re_{\tau}=270,545\), and \(943\)), in the hydrodynamically fully rough regime representative of natural stream systems. A thoroughly validated fictitious domain based numerical approach (Apte _et al._, 2009; Finn & Apte, 2013; Ghodke & Apte, 2016, 2018\(a\); He _et al._, 2019) is used to conduct these simulations. The numerical computations are first validated against the experimental data of Voermans _et al._ (2017) at \(Re_{K}\sim 2.56\), and are then used to investigate different Reynolds numbers. The time-space averaging methodology is used to compute the mean velocity, Reynolds stresses, and form-induced stresses. Differences in the near-bed turbulence structure, statistics of the local distribution of the net bed stress on the sediment grains and the resultant drag and lift forces, pressure distributions at the sediment-water interface, and the contribution of the top layer of sediment grains to the turbulence statistics are quantified in detail. The key findings of this work are summarized below.
(i) The peak and significant values of the Reynolds stresses occur in the top layer of the bed for all three \(Re_{K}\) cases; decreasing quickly below one grain diameter from the sediment crest. While peak values in streamwise stress decrease, those in bed-normal and spanwise stresses increase with increasing Reynolds number. Streamwise, bed-normal, and shear Reynolds stresses exhibit similarity in the free-stream region, substantiating the wall similarity hypothesis.
(ii) Form-induced stresses are typically lower in magnitude than their respective Reynolds stress counterparts, with the locations of their peak values occurring further below the crest. For low \(Re_{K}\), the spanwise form-induced stress is typically larger than the bed-normal values, a result similar to a rough, impermeable wall. However, at higher \(Re_{K}\), the bed-normal stresses are comparable to the spanwise stresses due to increased flow penetration.
(iii) Mean flow penetration (Brinkman layer thickness) and shear penetration show a non-linear, increasing correlation with \(Re_{K}\), their ratio approaching a constant deterministic value. The length scale of the dominant turbulent eddies at the sediment-water interface is better represented by the mixing length obtained from the Reynolds stress and the mean flow gradient. The normalized interfacial mixing length at the sediment-water interface increases with \(Re_{K}\) and shows similar behavior as the Brinkman layer thickness and the shear penetration depth, suggesting that the mixing length is relevant as the characteristic length scale for transport of momentum and mass across the SWI.
(iv) Quadrant analysis of the turbulent fluctuations shows the domination of ejection and sweep events at the SWI. Within one particle diameter inside the bed, the turbulent structures lose both their directional bias and strength, becoming more isotropic in nature.
(v) To quantify the role of the top layer of the sediment bed on the flow, an impermeable rough wall with the same roughness elements as the top layer was investigated. The mean velocity and Reynolds stress profiles show very little difference between the permeable bed and the impermeable rough wall. Form-induced stresses are, however, influenced by the impermeability, which redistributes the stresses tangentially along the wall.
(vi) The form-induced disturbances and the turbulent fluctuations in pressure are strongly dependent on Reynolds number, with a ten-fold increase in the peak value
between the lowest and highest \(Re_{K}\) cases studied. This increase is attributed to the nature of the flow over the exposed particles in the top layer. A majority of the high magnitude pressure fluctuations are restricted to the top layer of the bed for all Reynolds numbers. The standardized PDFs of the sum of the pressure fluctuations and form-induced pressure disturbances at the sediment-water interface are statistically similar, symmetric, and collapse for different \(Re_{K}\).
(vii) The PDFs of local distribution of the net bed stress computed directly on the sediment grains and normalized by the total bed stress in the streamwise direction, collapse for all Reynolds numbers. The PDFs of fluctuations in the bed stress normalized by their root-mean-squared values are symmetric and exhibit a peaky, non-Gaussian distribution with heavy tails. A logarithmic correlation between the root mean-squared stress fluctuations and the friction Reynolds number (as well as \(Re_{K}\)) is observed, which together with the non-Gaussian distribution for fluctuations in stress can be used to develop mechanistic force balance models for incipient motion of sediment grains.
(viii) The mean and fluctuating drag and lift forces on the particles are computed by integrating the local bed stress over the particle surface. The majority of the contribution to the drag and lift forces comes from the pressure distribution for all \(Re_{K}\). In addition, the top layer of the bed experiences average forces that are 3-4 times the average value in the full bed, indicating that a significant contribution to the lift and drag force comes from the top layer of the bed. Fluctuations in the drag and lift forces have minimal skewness and high kurtosis, indicative of a symmetric, non-Gaussian distribution with heavy tails. A \(t\) location-scale model based on the variance and excess kurtosis data fits well for all \(Re_{K}\). Since the local distribution of the net bed stress and the drag and lift forces on the particles are mainly influenced by the top layer of sediment grains, including the roughness effects through a single layer of randomly arranged sediments can potentially improve reach-scale predictions based on reduced-order models.
## Acknowledgements
This work was initiated as part of SKK's internship at Pacific Northwest National Laboratory. Simulations were performed at the Texas Advanced Computing Center's (TACC) Frontera system. Computing resource from Pacific Northwest National Laboratory's EMSL (Environmental Molecular Sciences Laboratory) is also acknowledged.
## Funding
SKK acknowledges support from Pacific Northwest National Laboratory (PNNL) as part of an internship program. SVA and SKK gratefully acknowledge funding from the US Department of Energy, Office of Basic Energy Sciences (Geosciences) under award number DE-SC0021626 as well as US National Science Foundation award #205324. The computing resources used were made available under NSF's Leadership Resources Allocation (LRAC) award. XH and TDS acknowledge funding from the DOE Office of Biological and Environmental Research, Subsurface Biogeochemical Research program, through the PNNL Subsurface Science Scientific Focus Area project ([http://sbrsfa.pnnl.gov/](http://sbrsfa.pnnl.gov/)).
## Declaration of Interests
The authors report no conflict of interest.
## Appendix A Grid refinement, integral scales, and domain sizes
The solver has been thoroughly verified and validated for a range of cases (Apte & Patankar, 2008) and has also been used for large-scale, parallel simulations of oscillatory flow over a layer of sediment particles (Ghodke & Apte, 2016, 2018_a_) and flow through porous media (Finn & Apte, 2013; He _et al._, 2019).
In turbulent flow over a sediment bed, there is a need to use non-isotropic and high-aspect ratio grids to minimize the total control volumes and yet provide sufficient resolution needed to capture all scales of turbulence. Specifically, for DNS of open channel flows, the resolution near the sediment bed and in the bed-normal direction should be such that \(\Delta y^{+}<1\), where \(\Delta y^{+}\) represents resolution in wall units. The code was used to predict flow over an isolated sphere at different Reynolds numbers using isotropic and non-isotropic, rectilinear grids. The drag force was compared with published data (Apte & Patankar, 2008) and is given in Table 7. It is observed that the high-aspect ratio grids are capable of predicting the drag forces accurately for \(Re_{D}\) up to 350, where \(Re_{D}=UD_{p}/\nu\) is Reynolds number based on the sphere diameter, \(D_{p}\), and the uniform undisturbed upstream velocity, \(U\). Also, the effectiveness of the non-isotropic grids in capturing vortex shedding was verified using the Strouhal number for vortex shedding at \(Re_{D}=350\). The obtained value for Strouhal number was 0.131, which compared reasonably well with the range of values between 0.135-0.14 predicted on finer, isotropic grids in literature (Mittal _et al._, 2008; Mittal, 1999; Bagchi _et al._, 2001).
In the present permeable bed cases, the flow velocity near the bed is much smaller than the free-stream velocity, and hence the relevant Reynolds number for the spherical roughness elements is the roughness Reynolds number \(D^{+}=D_{p}u_{\tau}/\nu\), where the particle diameter \(D_{p}\) is a measure of the roughness height for monodispersed particle beds. The \(D^{+}\) values for the PBL, PBM and PBH cases are 77, 156, and 270, respectively, which fall well within the range of Reynolds numbers of the above grid refinement study. The grid resolution used in the present DNS is thus sufficient to capture the inertial flow features around the particles, including flow separation and wakes.
Eulerian two-point auto-correlations are used to compute the integral length scales in streamwise and spanwise directions, and are compared with the domain sizes in those directions. With increase in Reynolds number, the vortical structures are broken down by the roughness elements, and the integral length scales in the streamwise and spanwise directions are expected to decrease. The time averaged Eulerian two-point auto-correlations were computed as
\[\rho_{ij}^{E}\left(|\mathbf{s}|\right)=\frac{\overline{\langle u_{i}^{\prime}(\bm {x},t)\ u_{j}^{\prime}(\mathbf{x}+\mathbf{s},t)\rangle}}{\overline{\langle u_{i}^{ \prime}(\mathbf{x},t)\ u_{j}^{\prime}(\mathbf{x},t)\rangle}},\]
where \(\rho_{ij}^{E}\) is the Eulerian auto-correlation and \(\mathbf{s}\) represents the set of all possible vector displacements for which the auto-correlation is calculated. The correlations were first
\begin{table}
\begin{tabular}{l c c c} \hline \(Re_{D}\) & 50 & 150 & 350 \\ & \multicolumn{3}{c}{Drag coefficient, \(C_{D}\)} \\ \cline{2-4} Isotropic, \(D_{p}/\Delta y=100\) & \(1.54\) & \(0.9\) & \(0.65\) \\ \(\Delta x=\Delta z=3\Delta y\) & \(1.58\) & \(0.91\) & \(0.66\) \\ \(\Delta x=\Delta z=4\Delta y\) & \(1.56\) & \(0.9\) & \(0.66\) \\ \hline \end{tabular}
\end{table}
Table 7: Grid refinement study for flow over an isolated sphere with non-isotropic, rectilinear grids similar to those used in the present study.
computed at 100,000 randomly picked locations (\(\mathbf{x}\)) in the fluid domain at one instant of time and then spatially averaged to obtain the overall representation. This procedure is then repeated over several flow-through times to obtain a temporal average of the spatially averaged values. The Eulerian integral length scales, \(L_{11}\) and \(L_{33}\), are then calculated by integrating the correlations over the respective abscissas, and the results are presented in table 8. The length scales for the PBL case are comparable to the values obtained by Krogstad & Antonia (1994) and Shen _et al._ (2020) at similar Reynolds numbers. The domain sizes for the PBL and PBM cases are approximately 11-13 times the integral length scale in the streamwise direction and 20-32 times that in the spanwise direction. The domain size for PBH is the same as that for PBM. The domain lengths \(L_{x}\times L_{z}\) of \(12.56\delta\times 6.28\delta\) for PBL and \(6.28\delta\times 3.14\delta\) for PBM are sufficient for the periodicity assumption and for obtaining mean and turbulence statistics without any domain confinement effects. In comparison, DNS of turbulent flow over rough, impermeable walls by Ma _et al._ (2021) used \(4\delta\times 2.4\delta\) for similar Reynolds numbers.
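A one-dimensional illustration of this procedure along a homogeneous, periodic direction is sketched below; the paper's calculation averages over many random points and flow-through times, and the names and the zero-crossing integration convention used here are assumptions.

```python
import numpy as np

def integral_length_scale(u_prime, dx):
    """Integral length scale from the two-point auto-correlation of u' sampled on a
    uniform, periodic grid of spacing dx (1-D analogue of rho_11^E)."""
    u = u_prime - np.mean(u_prime)
    n = u.size
    # periodic auto-correlation via circular shifts, normalized by the variance
    rho = np.array([np.mean(u * np.roll(u, k)) for k in range(n // 2)]) / np.mean(u * u)
    # integrate up to the first zero crossing (a common convention)
    cutoff = np.argmax(rho <= 0) if np.any(rho <= 0) else rho.size
    return np.trapz(rho[:cutoff], dx=dx)
```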
## Appendix B Variable volume averaging
Pokrajac and De Lemos (2015) used a variable volume averaging methodology, to spatially average time-averaged quantities, wherein thinner volumes were used to average the flow in the free-stream and thicker volumes were used to average inside the porous region with a smooth transition in volume height between the two regions. Following a similar approach, in this study thin volumes are used for averaging in the free-stream region near the crest of the bed where steep gradients in flow quantities are present. The averaging volume is gradually coarsened away from this region and deeper inside the bed thicker averaging volumes are used. For averaging purposes the domain in the vertical direction is divided into four regions: (Region 1) uniform averaging-volume height deep inside the bed, (Region 2) a transitioning averaging-volume height between the uniform region below and the top layer of the bed, (Region 3) a refined uniform averaging-volume height in the top layer and the crest region, (Region 4) a transitioning averaging-volume height in the free-stream region. The variable volume averaging approach across the various segments is given as
\[l_{y} =l_{1}\eta_{1} \qquad -1.14\leqslant y/\delta\leqslant-0.57\] \[l_{y} =\frac{l_{2}\tanh\left(\gamma_{1}\eta_{2}\right)}{\tanh\left(\gamma_{1}\right)} \qquad -0.57<y/\delta\leqslant-0.28\] \[l_{y} =l_{3}\eta_{3} \qquad -0.28<y/\delta\leqslant 0.031\] \[l_{y} =\frac{l_{4}\left(1-\tanh\left(\gamma_{2}-\gamma_{2}\eta_{3}\right)\right)}{\tanh\left(\gamma_{2}\right)} \qquad 0.031<y/\delta\leqslant 1,\]
where \(l_{1},l_{2},l_{3}\) and \(l_{4}\) are the vertical heights of each region, \(\gamma_{1}\) and \(\gamma_{2}\) control the rate of transitioning of the averaging volume height in the bed-normal direction,
\begin{table}
\begin{tabular}{c c c c c c c} Case & \(L_{11}/\delta\) & \(L_{33}/\delta\) & \(L_{11}/D_{p}\) & \(L_{33}/D_{p}\) & \(L_{x}/L_{11}\) & \(L_{z}/L_{33}\) \\ PBL & 0.953 & 0.2 & 3.33 & 0.7 & 13.18 & 31.45 \\ PBM & 0.558 & 0.151 & 1.95 & 0.53 & 11.25 & 20.79 \\ \end{tabular}
\end{table}
Table 8: Eulerian length scales in the streamwise (\(\rho_{11}^{E}\)) and spanwise (\(\rho_{33}^{E}\)) directions normalized by the free-stream height, \(\delta\), and the particle diameter, \(D_{P}\). Also shown are the streamwise (\(L_{x}\)) and spanwise (\(L_{z}\)) domain lengths normalized by \(L_{11}\) and \(L_{33}\), respectively.
where \(\eta_{1}=\eta/w_{1}\), \(\eta_{2}=(\eta-w_{1})/w_{2}\), \(\eta_{3}=(\eta-(w_{1}+w_{2}))/w_{3}\), and \(\eta=j/gnj\). Here, the values \(w\) are weights based on the ratio of the number of volumes assigned for averaging in each region to the total number of volumes, \(j\) is the index of the averaging volume, and \(gnj\) is the total number of averaging volumes used in the bed-normal direction. Typical \(\gamma_{1}\) and \(\gamma_{2}\) values are between \(1.5-3\) and \(0.7-1.3\), respectively. Similar variable volume averaging has been carried out in previous studies by Karra _et al._ (2022\(b\),\(a\)).
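The sketch below illustrates one way to evaluate the averaging-volume height from these expressions; it switches regions by the volume-index fraction \(\eta\) rather than by \(y/\delta\), and it uses a separate normalized coordinate in the free-stream region, both of which are assumptions made for the illustration.

```python
import numpy as np

def averaging_volume_height(j, gnj, l, w, gamma1, gamma2):
    """Bed-normal height l_y of the j-th of gnj averaging volumes, following the
    four-region variable-volume formula of appendix B. l = (l1, l2, l3, l4) are the
    region heights and w = (w1, w2, w3, w4) the fraction of volumes in each region."""
    eta = j / gnj
    l1, l2, l3, l4 = l
    w1, w2, w3, w4 = w
    if eta <= w1:                                  # region 1: deep inside the bed
        return l1 * (eta / w1)
    if eta <= w1 + w2:                             # region 2: tanh transition
        eta2 = (eta - w1) / w2
        return l2 * np.tanh(gamma1 * eta2) / np.tanh(gamma1)
    if eta <= w1 + w2 + w3:                        # region 3: refined, top layer and crest
        eta3 = (eta - (w1 + w2)) / w3
        return l3 * eta3
    eta4 = (eta - (w1 + w2 + w3)) / w4             # region 4: transition to the free stream
    return l4 * (1.0 - np.tanh(gamma2 - gamma2 * eta4)) / np.tanh(gamma2)
```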
## Appendix C Validation Study
Pore-resolved direct numerical simulation of turbulent flow over a sediment bed (case VV) was first validated with experimental data of Voermans _et al._ (2017) as well as DNS data of Shen _et al._ (2020). Permeable bed case with porosity of \(0.41\), \(Re_{K}=2.56\) and \(Re_{\tau}\sim 180\) matches with case L12 in Voermans _et al._ (2017). In the present work, the numerical algorithm developed by Dye _et al._ (2013) is used to generate a random distribution of monodispersed spheres for a given porosity. It uses the collective rearrangement algorithm introduced by Williams & Philipse (2003), coupled with a mechanism for controlling the overall system porosity, providing a periodic arrangement in the streamwise and spanwise directions. Although the average porosity in the sediment bed is matched, the actual random configuration of the sediment particles is likely different compared to both the experimental (Voermans _et al._, 2017) and published DNS data (Shen _et al._, 2020).
Voermans _et al._ (2017) defined the origin of the sediment bed to be the inflection point in the porosity profile, that is, where \(\partial_{yy}^{2}\phi=0\). Therefore, in order to facilitate comparison of the current DNS results with the experimental work, the origin for case VV is taken to be the inflection point (\(\partial_{yy}^{2}\phi=0\)) of its porosity profile. Also shown in the schematic in figure 13 is the zero-displacement plane, \(y=-d\), whose physical meaning and values are given in Appendix D. The time-space averaged mean velocity profile normalized by the channel free-stream velocity, \(U_{\delta}\), is shown in figure 14a. Excellent agreement is seen between the present DNS data, the experimental measurements, and the DNS data from Shen _et al._ (2020). Figures 14b, 14c, and 14d show a comparison of the turbulence intensities, namely the streamwise, bed-normal and shear stresses. Again, very good agreement between the DNS and the experiment is observed. The slight deviation in the Reynolds stress between the DNS and the experiment in the outer channel flow region can be attributed to the high measurement uncertainty (between \(6-30\%\), Voermans _et al._ 2017) in sampling this variable in the experiment.
Figure 13: Schematic showing positions of the sediment crest (\(y=0\)), the zero-displacement plane (\(y=-d\)), and particle diameter (\(D_{p}\)).
Figure 14: Comparison of (a) mean streamwise velocity and (b) streamwise, (c) wall-normal, and (d) shear components of the spatially averaged Reynolds stress tensor: experimental data by Voermans _et al._ (2017), DNS by Shen _et al._ (2020), and the present DNS.
Figure 15: Comparison of (a) streamwise, (b) wall-normal, and (c) shear components of the form-induced stress tensor: experimental data by Voermans _et al._ (2017), the present DNS emulating the experimental sampling, DNS by Shen _et al._ (2020), and the present DNS.
The normalized form-induced or dispersive stresses are shown in figures 15a, 15b, and 15c. The differences between the present DNS and the experimental results can be explained based on the sampling procedures used. Firstly, as mentioned in the previous section, spatial averaging is carried out over an entire \(x-z\) volume at a given \(y\) location for the DNS results. For the experimental data, however, spatial averaging was performed over three different spanwise locations over six different measurements. To quantify the differences in the sampling procedures between the experiments and the DNS, the experimental sampling process is replicated in the DNS data, whereby spatial averaging is carried out at a few finite uncorrelated spanwise locations and repeated over different streamwise locations. A family of curves, shown by grey squares, indicates the uncertainty associated with the sampling locations of the experimental data. The averaged experimental and DNS data are within this scatter for all streamwise locations. Secondly, it has been reported in the literature (Nikora _et al._, 2002; Fang _et al._, 2018) that the spanwise averaging is highly sensitive to the geometry at the sediment-water interface. For the present DNS, only the mean porosity of the randomly distributed arrangement of monodispersed spherical particles is matched with the experimental geometry. However, the exact sediment-grain distribution in the experiments is unknown and is likely different compared to that used in the DNS. This difference, especially near the top of the bed, can also contribute to differences in the form-induced or dispersive stresses.
In spite of the potential differences in the sediment bed distribution between DNS and experimental work, the present results reproduce the mean flow and turbulence stresses observed in the experiment. The form-induced stresses fall within the uncertainty associated with sampling locations in the experiments. In addition, turbulence statistics from the current work are compared with DNS predictions from Shen _et al._ (2020). Good agreement between the two sets of DNS results is observed. The consistency in predicted results with the published experimental and numerical studies persuasively validates the numerical approach used in this work.
## Appendix D The log-law and zero-displacement thickness
In turbulent flows over rough walls and permeable beds, the log-law has the following form
\[\frac{U(y)}{u_{\tau}}=\frac{1}{\kappa}\log\left(\frac{y+d}{y_{0}}\right),\]
where \(\kappa\) is the von-Karman constant, \(d\) is distance between the zero-displacement plane and the top of the sediment crest (see figure 13), and \(y_{0}\) is the equivalent roughness height which is related to the measure of the size of the roughness elements.
Although several techniques have been used to determine these parameters in litera
\begin{table}
\begin{tabular}{l c c c c c c c c} Case & \(\kappa\) & \(d/\delta\) & \(d/D_{p}\) & \(d^{+}\) & \(y_{0}/\delta\) & \(y_{0}/D_{p}\) & \(y_{0}^{+}\) & \(\delta_{b}/D_{p}\) & \(\delta_{p}/D_{p}\) \\ PBL & 0.325 & 0.1686 & 0.59 & 45 & 0.0244 & 0.085 & 6.57 & 0.76 & 0.76 \\ PBM & 0.31 & 0.1743 & 0.61 & 95 & 0.0298 & 0.10 & 16.22 & 0.94 & 0.94 \\ PBH & 0.2875 & 0.1829 & 0.64 & 172 & 0.0364 & 0.127 & 34.35 & 1.31 & 1.11 \\ \end{tabular}
\end{table}
Table 9: The von-Karman constant (\(\kappa\)), zero-displacement thickness (\(d\)), and equivalent roughness height (\(y_{0}\)) normalized by \(\delta\), \(D_{p}\) and \(\nu/u_{\tau}\). The last two columns show the Brinkman layer thickness, \(\delta_{b}\), and the shear penetration depth, \(\delta_{p}\). ( )\({}^{+}\) denotes wall units. Results are shown for the PBL, PBM, and PBH cases.
ture (Raupach _et al._, 1991), the procedure described by Breugem _et al._ (2006) is followed here. First, the extent of the logarithmic layer is determined by plotting \((y+d)^{+}\partial_{y^{+}}U^{+}\) against \(y^{+}\) for several values of \(d\). From the equation of log-law, it is easy to see that the value of \((y+d)^{+}\partial_{y^{+}}U^{+}\) is a constant equal to \(1/\kappa\) in the logarithmic layer. Therefore, the value of \(d\) is the one that gives a horizontal profile in the logarithmic layer. The values of \(d\), \(\kappa\), and \(y_{0}\) determined from a least squares fit of log-law equation to the velocity profile in the logarithmic layer are given in table 9. The von-Karman constant (\(\kappa\)) for the three permeable bed cases is lower than the value of 0.4 for flows over smooth walls. This decrease in \(\kappa\) has also been observed in flows over permeable walls by Suga _et al._ (2010); Manes _et al._ (2011); Breugem _et al._ (2006). Both the zero-displacement thickness and equivalent roughness height show a dependency on \(Re_{K}\) and increase with increasing Reynolds number. Their values in wall units, \(d^{+}\) and \(y_{0}^{+}\), compare reasonably well with the studies by Manes _et al._ (2011); Suga _et al._ (2010); Breugem _et al._ (2006).
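A compact sketch of this fitting procedure is given below; it assumes a spatially averaged velocity profile and a user-supplied log-layer extent and set of trial \(d\) values, with all names being illustrative.

```python
import numpy as np

def fit_loglaw(y, U, u_tau, nu, d_candidates, log_layer):
    """Estimate d, kappa and y0 following Breugem et al. (2006): choose d so that
    (y+d)^+ dU^+/dy^+ is flattest in the log layer, then least-squares fit
    U^+ = (1/kappa) ln((y+d)/y0) over that layer."""
    yp, Up = y * u_tau / nu, U / u_tau
    dUp = np.gradient(Up, yp)                         # dU^+/dy^+
    mask = (y >= log_layer[0]) & (y <= log_layer[1])  # extent of the logarithmic layer

    # 1. the value of d giving the flattest (y+d)^+ dU^+/dy^+ profile in the log layer
    flatness = [np.std(((y[mask] + d) * u_tau / nu) * dUp[mask]) for d in d_candidates]
    d = d_candidates[int(np.argmin(flatness))]

    # 2. least-squares fit of U^+ against ln(y+d): slope = 1/kappa, intercept gives y0
    slope, intercept = np.polyfit(np.log(y[mask] + d), Up[mask], 1)
    kappa = 1.0 / slope
    y0 = np.exp(-intercept * kappa)                   # intercept = -(1/kappa) ln(y0)
    return d, kappa, y0
```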
From the Nikuradse (1933) experiments of flows over impermeable, fully rough walls, the ratio \(y_{0}/D_{p}\) was found to be approximately 1/30. The values of \(y_{0}/D_{p}\) for the PBL, PBM, and PBH cases given in table 9 are approximately 2-4 times larger than the value observed for fully rough walls. Importantly, they show a correlation with \(Re_{K}\), due to the influence of the bed permeability. Hinze (1975) (p. 637) reports that for fully rough impermeable walls the ratio \(d/D_{p}\) is approximately 0.3. This ratio for the three permeable bed cases is roughly 2 times larger, as shown in table 9, and also shows a dependency on \(Re_{K}\). Therefore, the permeable bed cases in the current study show the characteristics of a fully rough wall regime (based on their roughness Reynolds numbers, \(D^{+}>70\), as shown in table 1) influenced by permeability. Breugem _et al._ (2006), in their study for cases with \(1<Re_{K}<10\), found the values of \(y_{0}/D_{p}\) and \(d/D_{p}\) to be orders of magnitude greater than 1/30 and 0.3, respectively, as their \(D^{+}\) values were \(<7\), which meant that the effects of surface roughness were negligible.
|
2301.04308
|
Self-consistent thermodynamic potential for magnetized QCD matter
|
Within the two-flavor Nambu--Jona-Lasinio model, we derive a self-consistent
thermodynamic potential $\Omega$ for a QCD matter in an external magnetic field
$B$. To be consistent with Schwinger's renormalization spirit, counter terms
with vacuum quark mass are introduced into $\Omega$ and then the explicit
$B$-dependent parts can be regularized in a cutoff-free way. Following that,
explicit expressions of gap equation and magnetization can be consistently
obtained according to the standard thermodynamic relations. The formalism is
able to reproduce the paramagnetic feature of a QCD matter without ambiguity.
For more realistic study, a running coupling constant is also adopted to
account for the inverse magnetic catalysis effect. It turns out that the
running coupling would greatly suppress magnetization at large $B$ and is
important to reproduce the temperature enhancement effect to magnetization. The
case with finite baryon chemical potential is also explored: no sign of
first-order transition is found by varying $B$ for the running coupling and the
de Haas-van Alphen oscillation shows up in the small $B$ region.
|
Gaoqing Cao, Jianing Li
|
2023-01-11T04:52:57Z
|
http://arxiv.org/abs/2301.04308v3
|
# A self-consistent thermodynamic potential for a magnetized QCD matter
###### Abstract
Within the two-flavor Nambu-Jona-Lasinio model, we derive a self-consistent thermodynamic potential \(\Omega\) for a QCD matter in an external magnetic field \(B\). To be consistent with Schwinger's renormalization spirit, counter terms with vacuum quark mass are introduced into \(\Omega\) and then the explicit \(B\)-dependent parts can be regularized in a cutoff-free way. Following that, explicit expressions of gap equation and magnetization can be consistently obtained according to the standard thermodynamic relations. The formalism is able to reproduce the paramagnetic feature of a QCD matter without ambiguity. For more realistic study, a running coupling constant is also adopted to account for the inverse magnetic catalysis effect. It turns out that the running coupling would greatly suppress magnetization at large \(B\) and is important to reproduce the temperature enhancement effect to magnetization. The case with finite baryon chemical potential is also explored: no sign of first-order transition is found by varying \(B\) for the running coupling and the de Haas-van Alphen oscillation shows up in the small \(B\) region.
pacs: 11.30.Qc, 05.30.Fk, 11.30.Hv, 12.20.Ds
## I Introduction
Extremely strong magnetic fields could be produced in peripheral relativistic heavy ion collisions (HICs) [1; 2] and are also expected to exist in magnetars [3; 4; 5] and the early Universe [6; 7; 8]. For these reasons, a lot of work has been carried out to understand the systematic features of quantum chromodynamics (QCD) matter under an external magnetic field. One important aspect is the study of the QCD phase transition in a strong magnetic field: as the magnitude of the magnetic field is of the order of the QCD energy scale, \(\Lambda_{\rm QCD}\sim 0.2\,\)GeV, the effect is expected to be considerable. At the end of the 20th century, the magnetic field was taken into account in the chiral effective Nambu-Jona-Lasinio model, which established the basic notion of the "magnetic catalysis effect" on the chiral condensate [9; 10; 11]. However, in 2012, first-principle lattice QCD (LQCD) simulations [12; 13] showed that the chiral condensate could decrease with a large magnetic field at the pseudo-critical temperature \(T\sim 0.155\,\)GeV, known as the "inverse magnetic catalysis effect". Such an anomalous feature has drawn much attention from researchers interested in the thermodynamic properties of QCD matter, and the QCD phase structure has been widely explored in circumstances where a magnetic field is involved; refer to the reviews in Refs. [14; 15; 16] and the literature therein.
Besides, magnetization is also an important thermodynamic quantity for understanding QCD matter. In 2013, both the hadron resonance gas model [17] and \(2+1\)-flavor LQCD [18] were adopted to study the magnetization, and the results showed that QCD matter is consistently paramagnetic at zero temperature. The \(2+1\)-flavor LQCD simulations were extended to finite temperature the following year, and the magnetization was found to be enhanced by thermal motion [19]. In the following years, only a few works addressed the magnetization in chiral models, such as the two-flavor chiral perturbation theory [20; 21], the three-flavor Polyakov-linear-sigma (PLS) model [22], and the two- and three-flavor (Polyakov-)NJL models [23; 24]. The studies in the PLS and (P)NJL models appear more realistic, as chiral symmetry breaking and restoration is self-consistently taken into account in the evaluation of the magnetization. However, compared to the previous thermodynamic potential [25], it is unsatisfactory that one had to introduce a cutoff for the explicitly magnetic-field-dependent terms to evaluate the magnetization in the PNJL model [23]. Furthermore, the definition of the magnetization seemed ambiguous, as one must apply the renormalization scheme of the LQCD simulations [18] to obtain the correct paramagnetic feature [23].
This work is devoted to solving the regularization problem of the (P)NJL model in a self-consistent way. In Sec. II, we derive a self-consistent thermodynamic potential at finite magnetic field, temperature, and baryon chemical potential. From it, explicit expressions for the gap equation and the magnetization are given. Numerical calculations are then carried out in Sec. III, where we compare results obtained with different regularization schemes and with different forms of the coupling constant. Finally, we summarize in Sec. IV.
## II The self-consistent formalism
The Lagrangian density of the two-flavor NJL model with baryon chemical potential \(\mu_{\rm B}\) can be given as [9; 26]
\[\mathcal{L}=\bar{\psi}\Big{[}i\not{D}-i\gamma^{4}\frac{\mu_{\rm B}}{3}-m_{0} \Big{]}\psi+G(eB)\Big{[}\big{(}\bar{\psi}\psi\big{)}^{2}+\big{(}\bar{\psi}i \gamma_{5}\mathbf{\tau}\psi\big{)}^{2}\Big{]} \tag{1}\]
in Euclidean space, where \(\psi=(u,d)^{T}\) represents the two-flavor quark field, \(m_{0}\) is its current mass, and \(\mathbf{\tau}\) are Pauli matrices in flavor space. In minimal coupling scheme, the covariant derivative is defined as \(D_{\mu}\equiv\partial_{\mu}-iqA_{\mu}\) with the electric charge matrix \(q\equiv{\rm diag}(q_{u},q_{\rm d})={\rm diag}(\frac{2}{3},-\frac{1}{3})e\) and the magnetic effect introduced through the vector potential \(A_{\mu}\). For more general consideration, we have introduced a coupling constant \(G(eB)\) that could run with the magnetic field \(B\) here.
To obtain the analytic form of the basic thermodynamic potential, we perform a Hubbard-Stratonovich transformation with the help of the auxiliary fields \(\sigma=-2G\bar{\psi}\psi\) and \(\mathbf{\pi}=-2G\bar{\psi}i\gamma^{5}\mathbf{\tau}\psi\)[9], and the Lagrangian becomes
\[\mathcal{L}=\bar{\psi}\Big{[}i\not{\mathcal{D}}-i\gamma^{4}\frac{\mu_{\rm B}} {3}-i\gamma^{5}\mathbf{\tau}\cdot\mathbf{\pi}-\sigma-m_{0}\Big{]}\psi-\frac{\sigma^{2} +\mathbf{\pi}^{2}}{4G(eB)}. \tag{2}\]
We assume \(\langle\sigma\rangle\equiv m-m_{0}\neq 0\) and \(\langle\mathbf{\pi}\rangle=0\) in mean field approximation, and then the quark degrees of freedom can be integrated out to give the thermodynamic potential formally as
\[\Omega=\frac{(m-m_{0})^{2}}{4G(eB)}-\frac{T}{V}{\rm Tr}\ln\Big{[}i\not{ \mathcal{D}}-m-i\gamma^{4}\frac{\mu_{\rm B}}{3}\Big{]}\]
with the trace \({\rm Tr}\) over the coordinate, spinor, flavor and color spaces. Recalling that the quark propagator in a magnetic field takes the form \(\mathcal{S}=-\left[i\not{\mathcal{D}}-m-i\gamma^{4}\frac{\mu_{\rm B}}{3} \right]^{-1}\), \(\Omega\) can be alternatively presented as
\[\Omega=\frac{(m-m_{0})^{2}}{4G(eB)}-\frac{T}{V}\int{\rm d}\,m\,{\rm Tr}\, \mathcal{S}. \tag{3}\]
At zero temperature and chemical potential, the full fermion propagator in a magnetic field was evaluated with the help of the proper-time method by Schwinger in 1951. In coordinate space, it takes the form [27]:
\[\mathcal{S}_{\rm f}(x,x^{\prime})= \frac{-i\,q_{\rm f}B}{(4\pi)^{2}}\int_{0}^{\infty}\frac{{\rm d}s}{s}\;e^{-iq_{\rm f}\int_{x^{\prime}}^{x}\,A\cdot dx}\exp\Big{\{}-im^{2}s+\frac{i}{4}\left[\frac{q_{\rm f}B}{\tan(q_{\rm f}Bs)}(y_{1}^{2}+y_{2}^{2})+\frac{1}{s}(y_{3}^{2}+y_{4}^{2})\right]\Big{\}}\] \[\left\{m-\frac{q_{\rm f}B}{2}\Big{[}\big{(}\cot(q_{\rm f}Bs)\gamma^{1}+\gamma^{2}\big{)}y_{1}+\big{(}\cot(q_{\rm f}Bs)\gamma^{2}-\gamma^{1}\big{)}y_{2}\Big{]}-\frac{1}{2s}\Big{[}\gamma^{3}y_{3}+\gamma^{4}y_{4}\Big{]}\right\}\Big{[}\cot(q_{\rm f}Bs)+\gamma^{1}\gamma^{2}\Big{]} \tag{4}\]
with \(y_{\mu}=x_{\mu}-x^{\prime}_{\mu}\) and \(s\) the proper time. For the calculation of \(\Omega\), the Schwinger phase term \(e^{-iq_{\rm f}\int_{x^{\prime}}^{x}\,A\cdot dx}\) is irrelevant since we take the limit \(x\to x^{\prime}\). After dropping this term, the remaining effective propagator becomes translation invariant and can be conveniently presented in energy-momentum space as
\[\hat{\mathcal{S}_{\rm f}}(p)= i\int_{0}^{\infty}{\rm d}s\exp\Big{\{}-i(m^{2}+p_{4}^{2}+p_{3}^{2 })s-i\frac{\tan(q_{\rm f}Bs)}{q_{\rm f}B}(p_{1}^{2}+p_{2}^{2})\Big{\}}\left[m- \gamma^{4}p_{4}-\gamma^{3}p_{3}-\gamma^{2}(p_{2}+\tan(q_{\rm f}Bs)p_{1})\right.\] \[\left.-\gamma^{1}(p_{1}-\tan(q_{\rm f}Bs)p_{2})\right]\Big{[}1+ \gamma^{1}\gamma^{2}\tan(q_{\rm f}Bs)\Big{]}. \tag{5}\]
In vanishing \(B\) limit, the well-known fermion propagator \(\mathcal{S}(p)=\frac{1}{m-\not{p}}\) can be reproduced by completing the integration over \(s\), hence the effective propagator is helpful for the discussion of regularization. Then, the bare thermodynamic potential follows directly as
\[\Omega_{0}= \frac{(m-m_{0})^{2}}{4G(eB)}+\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f =u,d}\int_{0}^{\infty}\frac{{\rm d}s}{s^{3}}\,e^{-m^{2}s}\frac{q_{\rm f}Bs}{ \tanh(q_{\rm f}Bs)} \tag{6}\]
after substituting the propagator Eq.(4) into Eq.(3).
The last term of Eq.(6) is divergent and must be regularized for exploring physics. If we formally expand it as a series in \(B^{2k}\) (\(k\in\mathbb{N}\)) around \(B\sim 0\), we find that only the \(B^{0}\) and \(B^{2}\) terms are divergent. According to Schwinger's initial proposal [27], the \(B^{0}\) term is physically irrelevant and the \(B^{2}\) term can be absorbed by performing renormalizations of the electric charges and the magnetic field. Then, the finite form of Eq.(6) would be
\[\Omega_{0}=\frac{(m-m_{0})^{2}}{4G(eB)}+\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u, d}\int_{0}^{\infty}\frac{{\rm d}s}{s^{3}}\,e^{-m^{2}s}\left[\frac{q_{\rm f}Bs}{ \tanh(q_{\rm f}Bs)}-1-\frac{1}{3}(q_{\rm f}Bs)^{2}\right].\]
This is correct when the magnetic field is much smaller than the current mass square \(m^{2}\) in QED systems. But for QCD systems, the dynamical mass \(m\) is itself determined by the minimum of the thermodynamic potential, so the \(B^{0}\) term cannot be dropped at all [25]. Moreover, the dynamical mass \(m\) is also \(B\)-dependent due to the magnetic catalysis effect [11], so the term \(e^{-m^{2}s}\frac{1}{3}(q_{\rm f}Bs)^{2}\) actually contains \(o(B^{4})\) terms which cannot be absorbed by the renormalizations of the electric charges and the magnetic field.
The solution could be the following. First, the \(B^{0}\) term can be recovered with a three-momentum cutoff according to the discussions in Ref. [25], so we have
\[\Omega_{0}=\frac{(m-m_{0})^{2}}{4G(eB)}+\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u, d}\int_{0}^{\infty}\frac{\mathrm{d}s}{s^{3}}\,e^{-m^{2}s}\left[\frac{q_{\rm f}Bs}{ \tanh(q_{\rm f}Bs)}-1\right]-4N_{c}\int^{\Lambda}\frac{\mathrm{d}^{3}p}{(2\pi )^{3}}E_{\rm p}(m) \tag{7}\]
with \(E_{\rm p}(m)=(p^{2}+m^{2})^{1/2}\). Next, to absorb the \(B^{2}\) divergent term but not \(o(B^{4})\) terms, we could refer to the term with vacuum quark mass \(m_{\rm v}\) for help. Then, a thermodynamic potential consistent with Schwinger's renormalization spirit can be given as
\[\Omega_{0} = \frac{(m-m_{0})^{2}}{4G(eB)}-4N_{c}\int^{\Lambda}\frac{\mathrm{d} ^{3}p}{(2\pi)^{3}}E_{\rm p}(m)+\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u,d}\int_ {0}^{\infty}\frac{\mathrm{d}s}{s^{3}}\,\left(e^{-m^{2}s}-e^{-m_{\rm v}^{2}s} \right)\left[\frac{q_{\rm f}Bs}{\tanh(q_{\rm f}Bs)}-1\right] \tag{8}\] \[+\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u,d}\int_{0}^{\infty} \frac{\mathrm{d}s}{s^{3}}\,e^{-m_{\rm v}^{2}s}\left[\frac{q_{\rm f}Bs}{\tanh( q_{\rm f}Bs)}-1-\frac{1}{3}(q_{\rm f}Bs)^{2}\right].\]
Note that the subtracted term with integrand \(e^{-m_{\rm v}^{2}s}\frac{1}{3}(q_{\rm f}Bs)^{2}\) only contains a \(B^{2}\) term, since \(m_{\rm v}\) is a constant.
Eventually, to make sure that the pressure is consistent with the one given in Ref. [27] when \(m=m_{\rm v}\) for any \(B\), the \(m\)-independent terms can be subtracted to get the physical thermodynamic potential as
\[\Omega_{0} = \frac{(m-m_{0})^{2}-(m_{\rm v}-m_{0})^{2}}{4G(eB)}-4N_{c}\int^{ \Lambda}\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}[E_{\rm p}(m)-E_{\rm p}(m_{\rm v})] +\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u,d}\int_{0}^{\infty}\frac{\mathrm{d}s }{s^{3}}\,\left(e^{-m^{2}s}-e^{-m_{\rm v}^{2}s}\right) \tag{9}\] \[\times\left[\frac{q_{\rm f}Bs}{\tanh(q_{\rm f}Bs)}-1\right]+ \frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u,d}\int_{0}^{\infty}\frac{\mathrm{d}s}{ s^{3}}\,e^{-m_{\rm v}^{2}s}\left[\frac{q_{\rm f}Bs}{\tanh(q_{\rm f}Bs)}-1- \frac{1}{3}(q_{\rm f}Bs)^{2}\right].\]
This form of \(\Omega_{0}\) will be adopted for the analytic derivations in the following and the numerical calculations in the next section. Finite temperature and chemical potential usually do not induce extra divergences, and the corresponding terms of the thermodynamic potential can be easily evaluated with the help of Landau levels as
\[\Omega_{T\mu}=-2N_{\rm c}T\sum_{\rm f=u,d}^{\rm t=\pm}\frac{|q_{\rm f}B|}{2\pi}\sum_{n=0}^{\infty}\alpha_{\rm n}\int_{-\infty}^{\infty}\frac{\mathrm{d}p_{3}}{2\pi}\ln\left[1+e^{-\frac{1}{T}\left(E_{\rm f}^{n}(p_{3},m)+t\frac{\mu_{\rm B}}{3}\right)}\right], \tag{10}\]
where \(\alpha_{\rm n}=1-\delta_{\rm n0}/2\) and \(E_{\rm f}^{n}(p_{3},m)=(2n|q_{\rm f}B|+p_{3}^{2}+m^{2})^{1/2}\). So the total thermodynamic potential of a magnetized QCD matter is \(\Omega=\Omega_{0}+\Omega_{T\mu}\), and the expressions of gap equation and magnetization follow the thermodynamic relations \(\partial\Omega/\partial m=0\) and \({\cal M}=-\partial\Omega/\partial eB\) as
\[0 = \frac{m-m_{0}}{2G(eB)}-4N_{c}\int^{\Lambda}\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\frac{m}{E_{\rm p}(m)}-\frac{N_{\rm c}m}{4\pi^{2}}\sum_{\rm f=u,d}\int_{0}^{\infty}\frac{\mathrm{d}s}{s^{2}}\,e^{-m^{2}s}\left[\frac{q_{\rm f}Bs}{\tanh(q_{\rm f}Bs)}-1\right]+2N_{\rm c}\sum_{\rm f=u,d}^{\rm t=\pm}\frac{|q_{\rm f}B|}{2\pi}\sum_{n=0}^{\infty}\alpha_{\rm n}\int_{-\infty}^{\infty}\frac{\mathrm{d}p_{3}}{2\pi} \tag{11}\] \[\frac{m}{E_{\rm f}^{n}(p_{3},m)}\frac{1}{1+e^{\frac{1}{T}\left[E_{\rm f}^{n}(p_{3},m)+t\frac{\mu_{\rm B}}{3}\right]}},\] \[{\cal M} = \frac{(m-m_{0})^{2}-(m_{\rm v}-m_{0})^{2}}{4}\frac{G^{\prime}(eB)}{G^{2}(eB)}-\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u,d}\int_{0}^{\infty}\frac{\mathrm{d}s}{s^{3}}\,\left(e^{-m^{2}s}-e^{-m_{\rm v}^{2}s}\right)\left[\frac{\tilde{q}_{\rm f}s}{\tanh(q_{\rm f}Bs)}-\frac{\tilde{q}_{\rm f}q_{\rm f}Bs^{2}}{\sinh^{2}(q_{\rm f}Bs)}\right]-\] (12) \[\frac{N_{\rm c}}{8\pi^{2}}\sum_{\rm f=u,d}\int_{0}^{\infty}\frac{\mathrm{d}s}{s^{3}}\,e^{-m_{\rm v}^{2}s}\left[\frac{\tilde{q}_{\rm f}s}{\tanh(q_{\rm f}Bs)}-\frac{\tilde{q}_{\rm f}q_{\rm f}Bs^{2}}{\sinh^{2}(q_{\rm f}Bs)}-\frac{2}{3}\tilde{q}_{\rm f}q_{\rm f}Bs^{2}\right]+2N_{\rm c}T\sum_{\rm f=u,d}^{\rm t=\pm}\frac{|\tilde{q}_{\rm f}|}{2\pi}\sum_{n=0}^{\infty}\alpha_{\rm n}\int_{-\infty}^{\infty}\frac{\mathrm{d}p_{3}}{2\pi}\] \[\ln\left[1+e^{-\frac{1}{T}\left(E_{\rm f}^{n}(p_{3},m)+t\frac{\mu_{\rm B}}{3}\right)}\right]-2N_{\rm c}\sum_{\rm f=u,d}^{\rm t=\pm}\frac{|q_{\rm f}B|}{2\pi}\sum_{n=0}^{\infty}\alpha_{\rm n}\int_{-\infty}^{\infty}\frac{\mathrm{d}p_{3}}{2\pi}\frac{n|\tilde{q}_{\rm f}|}{E_{\rm f}^{n}(p_{3},m)}\frac{1}{1+e^{\frac{1}{T}\left[E_{\rm f}^{n}(p_{3},m)+t\frac{\mu_{\rm B}}{3}\right]}}\]
with \(\tilde{q}_{\rm f}=q_{\rm f}/e\).
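As a rough numerical illustration of how the \(T=\mu_{\rm B}=0\) part of the gap equation (11) can be evaluated, the sketch below solves it with standard quadrature and root finding, using the parameter set quoted in the next section (\(m_{0}=5\,\mathrm{MeV}\), \(\Lambda=653\,\mathrm{MeV}\), \(G(0)\Lambda^{2}=2.10\)) and the constant coupling \(G(0)\). All function and variable names are ours and the numerical tolerances are left at their defaults, so this is only a sketch, not the authors' code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Parameters quoted in the next section (GeV units); N_c = 3, |q_u| = 2e/3, |q_d| = e/3.
m0, Lam, Nc = 0.005, 0.653, 3
G0 = 2.10 / Lam**2                      # G(0) from G(0) * Lambda^2 = 2.10

def coth_term(z):
    # z/tanh(z) - 1 with its small-z limit z^2/3, avoiding 0/0 in the proper-time integrand
    return z * z / 3.0 if z < 1e-6 else z / np.tanh(z) - 1.0

def gap(m, eB, G):
    # Right-hand side of Eq.(11) at T = mu_B = 0, where the Fermi-Dirac term is absent.
    vac = (2 * Nc / np.pi**2) * quad(lambda p: p * p * m / np.sqrt(p * p + m * m), 0.0, Lam)[0]
    mag = sum(quad(lambda s, x=qf * eB: np.exp(-m * m * s) / s**2 * coth_term(x * s),
                   1e-12, np.inf)[0]
              for qf in (2.0 / 3.0, 1.0 / 3.0))
    return (m - m0) / (2 * G) - vac - Nc * m / (4 * np.pi**2) * mag

for eB in (0.0, 0.3, 0.6):              # eB in GeV^2
    m_sol = brentq(lambda m: gap(m, eB, G0), 0.05, 1.5)
    print(f"eB = {eB:.1f} GeV^2  ->  m = {m_sol:.3f} GeV")
```

At \(eB=0\) the root should land close to the quoted vacuum mass \(m_{\rm v}\approx 0.313\,\mathrm{GeV}\), and the increase of \(m\) with \(eB\) reflects the magnetic catalysis effect discussed above.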
For comparison, the gap equation and magnetization in the so-called _vacuum magnetic regularization_ (VMR) scheme [23] are
\[0 = \frac{m-m_{0}}{2G(0)}-4N_{c}\int^{\Lambda}\frac{\mathrm{d}^{3}p}{(2\pi)^{3}}\frac{m}{E_{\mathrm{p}}(m)}-\frac{N_{c}m}{4\pi^{2}}\sum_{\mathrm{f=u,d}}\int_{0}^{\infty}\frac{\mathrm{d}s}{s^{2}}\,e^{-m^{2}s}\left[\frac{q_{\mathrm{f}}Bs}{\tanh(q_{\mathrm{f}}Bs)}-1-\frac{1}{3}(q_{\mathrm{f}}Bs)^{2}\right] \tag{13}\] \[-\frac{N_{c}m}{12\pi^{2}}\sum_{\mathrm{f=u,d}}\int_{\frac{1}{\Lambda^{2}}}^{\infty}\frac{\mathrm{d}s}{s^{2}}\,e^{-m^{2}s}(q_{\mathrm{f}}Bs)^{2},\] \[\mathcal{M}_{0} = -\frac{N_{\mathrm{c}}}{8\pi^{2}}\sum_{\mathrm{f=u,d}}\int_{0}^{\infty}\frac{\mathrm{d}s}{s^{3}}\,e^{-m^{2}s}\left[\frac{\tilde{q}_{\mathrm{f}}s}{\tanh(q_{\mathrm{f}}Bs)}-\frac{\tilde{q}_{\mathrm{f}}q_{\mathrm{f}}Bs^{2}}{\sinh^{2}(q_{\mathrm{f}}Bs)}-\frac{2}{3}\tilde{q}_{\mathrm{f}}q_{\mathrm{f}}Bs^{2}\right]-\frac{N_{\mathrm{c}}}{12\pi^{2}}\sum_{\mathrm{f=u,d}}\int_{\frac{1}{\Lambda^{2}}}^{\infty}\frac{\mathrm{d}s}{s}\left(e^{-m^{2}s}-e^{-m_{\rm v}^{2}s}\right)\tilde{q}_{\mathrm{f}}q_{\mathrm{f}}B\]
at zero temperature for a constant coupling \(G(0)\). But instead of the proper-time regularization of Ref. [23], we regularize the explicitly \(B\)-independent term with a three-momentum cutoff for better comparison here. Note that the \(m_{\rm v}\)-dependent term in the expression for \(\mathcal{M}_{0}\) is important for reproducing the paramagnetic feature of QCD matter, though its explicit form was not given in Ref. [23].
## III Numerical results
To carry out numerical calculations, the model parameters are fixed as \(m_{0}=5\,\mathrm{MeV}\), \(\Lambda=653\,\mathrm{MeV}\), and \(G(0)\Lambda^{2}=2.10\) by fitting to the vacuum values: the chiral condensate \(\langle\bar{\psi}\psi\rangle=-2\times(250\,\mathrm{MeV})^{3}\), the pion mass \(m_{\pi}=135\,\mathrm{MeV}\), and the pion decay constant \(f_{\pi}=93\,\mathrm{MeV}\)[28; 29]. Then, the vacuum quark mass is \(m_{\mathrm{v}}=-2G(0)\langle\bar{\psi}\psi\rangle+m_{0}=0.313\,\mathrm{GeV}\). For finite magnetic field, the explicit form of \(G(eB)\) should be given. In Ref. [16], a form of \(G(eB)\) was determined by fitting to the data of the \(\pi^{0}\) mass from LQCD simulations, and we were able to explain the inverse magnetic catalysis effect at larger \(B\) with that form. However, that form showed a nonphysical increase of \(G(eB)\) around \(eB\sim 0\); to avoid this, we choose to fit to the region \(eB\geq 0.6\,\mathrm{GeV}^{2}\) here and obtain the monotonic form \(G(eB)=\frac{G(0)}{1+0.524eB^{2}}\). Hence, \(\frac{G^{\prime}(eB)}{G^{2}(eB)}=-\frac{1.048eB}{G(0)}\).
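As a quick consistency check of the quoted relation \(G^{\prime}(eB)/G^{2}(eB)=-1.048\,eB/G(0)\), the following lines compare a finite-difference derivative against that expression, reading the fitted form as \(G(eB)=G(0)/[1+0.524\,(eB)^{2}]\) with \(eB\) in \(\mathrm{GeV}^{2}\); the names are ours.

```python
G0 = 2.10 / 0.653**2                    # G(0) in GeV^-2, from G(0) * Lambda^2 = 2.10

def G(eB):
    # fitted running coupling, read as G(0) / (1 + 0.524 (eB)^2) with eB in GeV^2
    return G0 / (1.0 + 0.524 * eB**2)

h = 1e-6
for eB in (0.2, 0.5, 1.0):
    dG = (G(eB + h) - G(eB - h)) / (2 * h)          # numerical G'(eB)
    print(eB, dG / G(eB)**2, -1.048 * eB / G0)      # the two values should agree
```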
For a constant coupling \(G(0)\), we compare the results of our self-consistent regularization scheme with those of the VMR scheme in Fig. 1 at zero temperature. Both results are consistent with the LQCD data [18] in the region \(0\leq eB\leq 0.6\,\mathrm{GeV}^{2}\), but they deviate considerably for larger \(B\). In our opinion, the cutoff imposed on the explicitly \(B\)-dependent term in VMR introduces artifacts at larger \(B\); the non-monotonic feature of \(m\) is a reflection of that.
In the following, we explore how a running coupling constant affects the dynamical mass and the corresponding magnetization in the self-consistent regularization. At zero temperature, the results with \(G(0)\) and \(G(eB)\) are shown together in Fig. 2. Due to the running of the coupling constant, \(m\) shows a non-monotonic feature even though the absolute value of the chiral condensate \(m/2G(eB)\) increases with \(B\) almost linearly [16]. Accordingly, the second term in Eq.(12) demonstrates a non-monotonic feature and becomes negative at larger \(B\). This feature is responsible for the strong suppression of the magnetization with the running coupling at larger \(B\) compared to the constant-coupling case.
Figure 1: The dynamical mass \(m\) (upper panel) and magnetization \(\mathcal{M}\) (lower panel) with the self-consistent regularization (blue dashed lines) and vacuum magnetic regularization (black dotted lines) schemes at zero temperature.

At finite temperature, the results are illustrated in Fig. 3. As we can see, the temperature tends to suppress the magnetization in the case with \(G(0)\) but to enhance it in the case with \(G(eB)\). In their book, Landau and Lifshitz calculated the magnetic susceptibility \(\chi\equiv\frac{e\mathcal{M}}{\mathcal{N}B}\) of a non-relativistic dilute electronic gas at high temperature and found that it decreases as \(1/T\)[30]. To be concrete, the situation they considered is \(\sqrt{B}\ll T\ll m_{\mathrm{e}}\), with the electric chemical potential \(-\mu_{\mathrm{e}}(\gtrsim m_{\mathrm{e}})\) changing with \(T\) to keep the total number \(\mathcal{N}\) constant. If we instead keep \(-\mu_{\mathrm{e}}(\gtrsim m_{\mathrm{e}})\) constant, then the total electronic number \(\mathcal{N}\) can easily be evaluated to increase with temperature as \(T^{3/2}\). Therefore, the magnetization \(\mathcal{M}=\chi\mathcal{N}B/e\) would increase with temperature as \(\sqrt{T}\), and the result with \(G(eB)\) is qualitatively consistent with this non-relativistic study. That is not the end of the story: when we keep \(m=m_{\mathrm{v}}\) for \(G(0)\), \(\mathcal{M}\) would increase with \(T\) for a given \(B\); so it is the substantial chiral symmetry restoration induced by \(T\) that reduces the contribution of the second term in Eq.(12) and thus reverses the trend. One can refer to Fig. 2 for the effect of the dynamical mass on the magnetization. For \(G(eB)\), \(m\) changes mildly with \(B\) for a given \(T\); that is, the large mass gaps induced by \(T\) at vanishing \(B\) persist up to strong magnetic fields. According to our analysis, it is the great enhancement of the fourth, \(T\)-dependent term in Eq.(12) that helps to recover the naively expected trend. In fact, the result with \(G(eB)\) is qualitatively consistent with that found in LQCD simulations at finite temperature [19], so we conclude that the running coupling is able to consistently explain both the inverse magnetic catalysis effect and the enhancement of the magnetization with temperature.
At finite baryon chemical potential, the results are illustrated in Fig. 4. For \(G(0)\), \(m\) always changes discontinuously with \(B\) for \(\mu_{\mathrm{B}}>m_{\mathrm{v}}\), which signals a first-order transition. But for \(G(eB)\), \(m\) changes only slightly from its \(\mu_{\mathrm{B}}=0\) values, and no sign of a first-order transition can be identified for any given \(\mu_{\mathrm{B}}\). The de Haas-van Alphen oscillation [30] can be found in the evolutions of both \(m\) and \(\mathcal{M}\) with \(B\): the effect is significant for \(m\) only when \(\mu_{\mathrm{B}}\) is a little larger than \(3m_{\mathrm{v}}\), but it is significant for \(\mathcal{M}\) for any \(\mu_{\mathrm{B}}>3m_{\mathrm{v}}\). According to the mechanism of the de Haas-van Alphen oscillation [30], the last non-analytic points of \(\mathcal{M}\) can be roughly determined by \(\sqrt{2q_{\mathrm{d}}|B|}\approx\mu_{\mathrm{B}}/3\), that is, \(eB\approx 0.167\,\mathrm{GeV}^{2}\) for \(\mu_{\mathrm{B}}=1\,\mathrm{GeV}\) and \(eB\approx 0.375\,\mathrm{GeV}^{2}\) for \(\mu_{\mathrm{B}}=1.5\,\mathrm{GeV}\). This is consistent with the numerical results shown in the lower panel of Fig. 4. Moreover, at larger \(B\), \(\mathcal{M}\) does not depend on \(\mu_{\mathrm{B}}\) for \(G(0)\) due to the "Silver Blaze" property, but it increases with \(\mu_{\mathrm{B}}\) for \(G(eB)\) due to the strong suppression of \(m\).
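The two threshold values quoted above for the last non-analytic points follow from elementary arithmetic, as the short check below shows (assuming \(q_{\rm d}=e/3\), so that \(2q_{\rm d}|B|=(\mu_{\rm B}/3)^{2}\) gives \(eB=\tfrac{3}{2}(\mu_{\rm B}/3)^{2}\)):

```python
# Last non-analytic point of M: 2*q_d*|B| = (mu_B/3)^2 with q_d = e/3, i.e. eB = 1.5*(mu_B/3)^2.
for muB in (1.0, 1.5):                  # GeV
    eB = 1.5 * (muB / 3.0) ** 2
    print(f"mu_B = {muB} GeV  ->  eB = {eB:.3f} GeV^2")   # 0.167 and 0.375, as quoted
```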
## IV Summary
In this work, a self-consistent thermodynamic potential has been obtained for magnetized QCD matter in the two-flavor NJL model by following Schwinger's renormalization spirit. The thermodynamic potential is free of any cutoff on the explicitly magnetic-field-dependent terms, and explicit expressions for the gap equation and magnetization can be derived from it according to thermodynamic relations. Compared to the VMR scheme, the numerical calculations showed that the magnetic catalysis effect persists to very large magnetic field at zero temperature when adopting the self-consistent scheme, and the magnetization is strongly affected accordingly.

Figure 2: The dynamical mass \(m\) (upper panel) and magnetization \(\mathcal{M}\) (lower panel) with the constant coupling \(G(0)\) (blue dashed lines) and the running coupling \(G(eB)\) (red lines) at zero temperature. The dotted lines show the corresponding contributions of the second term in Eq.(12).

Figure 3: The dynamical mass \(m\) (upper panel) and magnetization \(\mathcal{M}\) (lower panel) as functions of the magnetic field \(B\) at temperatures \(T=0,0.15\), and \(0.2\,\mathrm{GeV}\). The dashed, dotted, and dash-dotted lines correspond to the results with the constant coupling \(G(0)\), and the solid lines correspond to the results with the running coupling \(G(eB)\).
Staying within the self-consistent scheme, the results with the constant coupling \(G(0)\) and the running coupling \(G(eB)\) are compared with each other. At zero temperature and chemical potential, the running coupling greatly suppresses the dynamical mass \(m\) at large magnetic field \(B\) and thus reduces the magnetization \(\mathcal{M}\) considerably. At finite temperature \(T\), \(\mathcal{M}\) decreases with \(T\) for \(G(0)\) due to the sufficient suppression of \(m\), but increases with \(T\) for \(G(eB)\) due to the persistence of large mass gaps at large \(B\). At finite baryon chemical potential \(\mu_{\rm B}\), no sign of a first-order transition can be identified for \(G(eB)\) by varying \(B\), and the de Haas-van Alphen oscillation can be found in the evolutions of both \(m\) and \(\mathcal{M}\) with \(B\).
Since we found that the regularization scheme can affect the results greatly in the large-magnetic-field region, we will try to perform a similar study in the three-flavor NJL or PNJL model. Then, we can compare the magnetization with the LQCD data at finite temperature in the region \(0\leq eB\leq 1\,\mathrm{GeV}^{2}\)[19] and give further predictions for much larger magnetic fields. The situation with finite baryon chemical potential can also be explored for completeness, which might help to understand the properties of magnetars.
_Acknowledgments_ G.C. is supported by the National Natural Science Foundation of China with Grant No. 11805290. J. Li is supported by the National Natural Science Foundation of China with Grant No. 11890712.
|
2306.09853
|
A Note on the Base-$p$ Expansions of Putative Counterexamples to the
$p$-adic Littlewood Conjecture
|
In this paper, we investigate the base-$p$ expansions of putative
counterexamples to the $p$-adic Littlewood conjecture of de Mathan and
Teuli\'e. We show that if a counterexample exists, then so does a
counterexample whose base-$p$ expansion is uniformly recurrent. Furthermore, we
show that if the base-$p$ expansion of $x$ is a morphic word
$\tau(\phi^\omega(a))$ where $\phi^\omega(a)$ contains a subword of the form
$uXuXu$ with $\lim_{n\to\infty}|\phi^n(u)|=\infty$, then $x$ satisfies the
$p$-adic Littlewood conjecture. In the special case when $p=2$, we show that
the conjecture holds for all pure morphic words.
|
John Blackman, Simon Kristensen, Matthew J. Northey
|
2023-06-16T14:07:34Z
|
http://arxiv.org/abs/2306.09853v3
|
A Note on the Base-\(p\) Expansions of Putative Counterexamples to the \(p\)-adic Littlewood Conjecture
###### Abstract
In this paper, we investigate the base-\(p\) expansions of putative counterexamples to the \(p\)-adic Littlewood conjecture of de Mathan and Teulie. We show that if a counterexample exists, then so does a counterexample whose base-\(p\) expansion is uniformly recurrent. Furthermore, we show that if the base-\(p\) expansion of \(x\) is a morphic word \(\tau(\varphi^{\omega}(a))\) where \(\varphi^{\omega}(a)\) contains a subword of the form \(uXuXu\) with \(\lim_{n\to\infty}|\varphi^{n}(u)|=\infty\), then \(x\) satisfies the \(p\)-adic Littlewood conjecture. In the special case when \(p=2\), we show that the conjecture holds for all pure morphic words.
## 1 Introduction
The \(p\)-adic Littlewood conjecture (pLC) is an open problem in Diophantine approximation, first proposed by de Mathan and Teulie [8] in 2004, which states that for each prime number \(p\) and all \(x\in\mathbb{R}\) the following equality holds
\[\liminf_{q\to\infty}q\cdot|q|_{p}\cdot\|qx\|=0. \tag{1}\]
Here, \(|\cdot|_{p}\) denotes the \(p\)-adic absolute value, \(\|\cdot\|\) denotes the distance to the nearest integer, and \(q\) runs over the positive integers. It follows trivially that if a real number \(x\) is _well-approximable_, _i.e._,
\[\liminf_{q\to\infty}q\cdot\|qx\|=0,\]
then \(x\) satisfies pLC, for all primes \(p\).
In the paper that introduced this problem, de Mathan and Teulie showed that (1) is equivalent to the condition that for each real number \(x\) and all non-negative integers \(k\), the partial quotients of \(p^{k}x\) are not uniformly bounded from above.
**Lemma 1.1**.: ([8, Lemma 1.3]) _For each \(k\in\mathbb{Z}_{\geq 0}\), let \(\overline{p^{k}x}=[a_{0,k};a_{1,k},\ldots]\) be the continued fraction expansion of \(p^{k}x\). Then condition (1) is equivalent to_
\[\sup\{a_{i,k};i\geq 1,k\geq 0\}=+\infty. \tag{2}\]
In particular, the \(p\)-adic Littlewood conjecture is deeply connected to how the partial quotients of a real number behave under iterative prime multiplication. Note that since
\[\frac{1}{\sup_{i\geq 1}\left\{a_{i,k}\right\}+2}\leq\inf_{q\geq 1}\left\{q \cdot\|qp^{k}x\|\right\}\leq\frac{1}{\sup_{i\geq 1}\left\{a_{i,k}\right\}},\]
for all \(k\in\mathbb{Z}_{\geq 0}\) (see [7, Ch. 7]), conditions (1) and (2) are also equivalent to
\[\inf_{k\geq 0}\inf_{q\geq 1}q\cdot\|qp^{k}x\|=0. \tag{3}\]
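As a small illustration of condition (2), one can track the partial quotients of \(p^{k}x\) for the quadratic irrational \(x=\sqrt{2}\) with \(p=2\); quadratic irrationals are known to satisfy pLC [8]. The sketch below uses the standard exact period algorithm for the continued fraction of \(\sqrt{N}\) (the function name is ours):

```python
from math import isqrt

def sqrt_cf_period(N):
    """Partial quotients of one period of the continued fraction of sqrt(N), N not a perfect square."""
    a0 = isqrt(N)
    m, d, a, period = 0, 1, a0, []
    while a != 2 * a0:                  # the periodic part of sqrt(N) ends with the term 2*a0
        m = d * a - m
        d = (N - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return period

# x = sqrt(2), p = 2: then p^k * x = sqrt(2 * 4^k).
for k in range(6):
    print(k, max(sqrt_cf_period(2 * 4**k)))
```

The printed maxima grow roughly like \(2^{k+1}\sqrt{2}\), so \(\sup_{i,k}a_{i,k}=+\infty\) for \(\sqrt{2}\), in line with Lemma 1.1.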
The main results regarding this conjecture can be broadly separated into two categories: 1) results which induce restrictions on the structure of the continued fraction expansions of potential counterexamples to pLC, and 2) results regarding the measure of the set of counterexamples to pLC and related objects. Notable works regarding the continued fraction expansion of putative counterexamples to pLC include that of de Mathan and Teulie [8], which shows that quadratic irrationals satisfy pLC; Bugeaud, Drmota and de Mathan [6], which shows that all real numbers which have arbitrarily many repetitions of a given finite block in their continued fraction expansion satisfy pLC; and Badziahin, Bugeaud, Einsiedler and Kleinbock [4], which shows that the complexity function of the continued fraction expansion of a counterexample to pLC must grow sub-exponentially, but the continued fraction expansion cannot be _recurrent_, see below for a definition. In other words, the complexity function cannot grow too quickly or too slowly. The main result regarding the measure of the set of potential counterexamples is that of Einsiedler and Kleinbock [10], which shows that for each prime \(p\) the set of real numbers that do not satisfy (1) has Hausdorff dimension \(0\). In fact, a stronger result was shown: this set is a countable union of sets which have box-counting dimension zero.
In this manuscript, instead of looking at the continued fraction expansions of potential counterexamples to pLC, we will look at the base-\(p\) expansions (see Section 2), which for the most part appear to have been largely unexplored. Our main results are presented in Section 2. In Section 2.1, we look at the base-\(p\) expansions of potential counterexamples to pLC and put restrictions on the type of repetitive blocks that can occur in these expansions. Furthermore, we show that if any counterexamples to pLC exist, then there exist counterexamples with uniformly recurrent base-\(p\) expansions. In Section 2.2, we utilise the results of Section 2.1 to analyse the 2-adic Littlewood conjecture. Due to the simpler alphabet, we are able to provide stronger results. In particular, we show that any real number with a pure morphic base-\(2\) expansion satisfies \(2\)LC and that no counterexample to \(2\)LC can have arbitrarily long _overlap-free_ subwords - see below. The proofs of the results of Section 2.1 are contained in Section 3 and the proofs for Section 2.2 are contained in Section 4.
### Notation
Let \(\mathcal{A}\) be a finite set which we refer to as an _alphabet_ and let \(\mathcal{A}^{*}\) be the set of all finite words over \(\mathcal{A}\) including the empty word, which we denote as \(\epsilon\). The set \(\mathcal{A}^{*}\) forms a free monoid over \(\mathcal{A}\) generated by concatenation. We denote the set of (right-sided) infinite words of \(\mathcal{A}\) as \(\mathcal{A}^{\omega}\), and denote the union of this set with \(\mathcal{A}^{*}\) as \(\mathcal{A}^{\infty}\). Given these notions, we define the _length_\(|\cdot|\) of a word \(w\in\mathcal{A}^{\infty}\) to be the number of letters that appear in \(w\), where \(|\epsilon|=0\) and \(|w|=\infty\) if \(w\in\mathcal{A}^{\omega}\).
**Definition 1.2**.: A finite word \(w\in\mathcal{A}^{*}\) is an \(\alpha\)_-power_ if it can be written in the form \(w=v^{\lfloor\alpha\rfloor}v^{\prime}\) where \(|v^{\prime}|/|v|\geq\{\alpha\}:=\alpha-\lfloor\alpha\rfloor\). A word \(w\in\mathcal{A}^{\infty}\) is _overlap-free_ if it contains no subword of the form \(uXuXu\), where \(u\in\mathcal{A}\) and \(X\in\mathcal{A}^{*}\).
Note that a word contains overlap if and only if it contains a subword that is a \((2+\delta)\)-power for some \(\delta>0\).
#### 1.1.1 Morphic Words
An important class of words are the morphic words. As a special case, these include all automatic words, _i.e._, words which can be generated by a finite automaton with output. Let \(\varphi:\mathcal{A}\to\mathcal{A}^{*}\) be a morphism. If there is some natural number \(j\geq 1\) such that \(\varphi^{j}(a)=\epsilon\), for \(a\in\mathcal{A}\), then \(a\) is said to be _mortal_. The set of mortal letters is denoted as \(M_{\varphi}\). A morphism \(\varphi\) is _prolongable_ on the letter \(a\in\mathcal{A}\), if \(\varphi(a)=ax\) and \(x\not\in M_{\varphi}\). If a morphism is prolongable on \(a\), then the words \(a\), \(\varphi(a)\), \(\varphi^{2}(a),\ldots\) converge to an infinite word \(\varphi^{\omega}(a)\) of the form
\[\varphi^{\omega}(a)=ax\cdot\varphi(x)\cdot\varphi^{2}(x)\cdot\ldots \tag{4}\]
Any word that can be formed in this way is referred to as a _pure morphic word_. If there is a coding between alphabets \(\tau:\mathcal{A}\to\mathcal{B}\) such that \(w=\tau(\varphi^{\omega}(a))\), then \(w\) is referred to as a _morphic word_. A morphism \(\varphi:\mathcal{A}\to\mathcal{A}^{*}\) is _\(k\)-uniform_ if \(|\varphi(a)|=k\) for all \(a\in\mathcal{A}\) and is _expanding_ if \(|\varphi(a)|\geq 2\) for all \(a\in\mathcal{A}\). A morphism \(\varphi\) is _primitive_ if there exists some exponent \(n\geq 1\) such that for every \(a,b\in\mathcal{A}\), the letter \(b\) appears in the word \(\varphi^{n}(a)\) at least once.
**Example 1.3**.: The _Thue-Morse word_\(M\) is the overlap-free, infinite word that is the limit \(\mu^{\omega}(0)\) of the morphism \(\mu:\{0,1\}\to\{0,1\}^{*}\) with \(\mu(0):=01\) and \(\mu(1):=10\). The first few letters are
\[M=01101001100110\cdots.\]
The complement of the Thue-Morse word \(\widetilde{M}\) is the word given by \(\mu^{\omega}(1)\).
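A minimal sketch that generates the Thue-Morse word by iterating \(\mu\) (the names are ours):

```python
mu = {"0": "01", "1": "10"}             # the Thue-Morse morphism of Example 1.3

def iterate(word, n):
    for _ in range(n):
        word = "".join(mu[c] for c in word)
    return word

print(iterate("0", 4))                  # mu^4(0) = 0110100110010110, a prefix of M
```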
## 2 Main Results
For every \(x\in[0,1]\) and every natural number \(n\geq 2\), we can rewrite \(x\) in the following form
\[x=\sum_{i=1}^{\infty}a_{i}n^{-i},\]
where \(a_{i}\in\{0,1,\ldots,n-1\}\) for all \(i\in\mathbb{N}\). Unless the number \(x\) is a rational number with denominator \(n^{k}\) for some \(k\geq 1\), this series expansion is unique. Since pLC is clearly satisfied for rational numbers, we will disregard this case and only consider real numbers that correspond to a unique sequence of digits. The word formed by taking the coefficients of this power series is called the _base-\(n\) expansion_ of \(x\). We denote this word as \(w(x,n)\), _i.e._, \(w(x,n):=a_{1}a_{2}\cdots\). Conversely, given a word \(w\in\{0,1,\cdots,n-1\}^{\omega}\), we will denote the real number whose base-\(n\) expansion coincides with \(w\) as \(w_{n}\). If \(\{nx\}\) is the fractional part of \(nx\), _i.e., \(\{nx\}:=nx-\lfloor nx\rfloor\)_, then the corresponding base-\(n\) expansion is \(T(a_{1}a_{2}a_{3}\cdots):=a_{2}a_{3}\cdots\). In particular, up to taking the number modulo \(1\), the _shift map_\(T\) induces multiplication by \(n\). More generally, the base-\(n\) expansion of \(\{n^{k}x\}\) corresponds to the word \(T^{k}(a_{1}a_{2}a_{3}\cdots)=a_{k+1}a_{k+2}\cdots\).
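A small sketch of these conventions, using exact rational arithmetic (names ours): the digits of \(\{nx\}\) are exactly those of \(x\) shifted once by \(T\).

```python
from fractions import Fraction

def base_n_digits(x, n, k):
    """First k digits a_1 ... a_k of the base-n expansion of x in [0, 1)."""
    digits = []
    for _ in range(k):
        x *= n
        d = int(x)                      # the integer part is the next digit
        digits.append(d)
        x -= d
    return digits

x = Fraction(5, 7)
print(base_n_digits(x, 3, 8))           # base-3 digits of x
print(base_n_digits((3 * x) % 1, 3, 7)) # digits of {3x}: the same word shifted once, i.e. T(w(x,3))
```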
Due to this structure, the base-\(n\) expansion is very well-equipped for producing information regarding the limiting behaviour of a real point under repeated multiplication by \(n\). Whilst the rational approximations coming from the base-\(n\) (or base-\(p\)) expansion are typically worse than the rational approximations coming from the continued fraction expansion, in a number of cases this approximation is still good enough to induce restrictions on the potential counterexamples of pLC. On the other hand, whilst the continued fraction expansion gives a very good rational approximation of a real number, the integer multiplication of continued fractions is far more complicated - see [12, 14].
For our purposes, it will also be useful to deal with _base-\(n\) representations_ of integers. For any integer \(a\geq 0\), we can uniquely write \(a\) as:
\[\sum_{i=1}^{m}a_{i}n^{m-i},\]
with \(a_{i}\in\{0,1,\ldots,n-1\}\) and \(a_{m}\neq 0\) (unless \(m=1\)). The word \(v(a,n)\) formed by taking the coefficients of this sum is the _base-\(n\) representation_ of \(a\). Given a finite, non-empty word \(v\), let \(v_{n}^{+}\) denote the integer whose base-\(n\) representation coincides with \(v\).
### The \(p\)-adic Littlewood Conjecture
For a finite word \(w\) on some alphabet \(\mathcal{A}\) and a \(\delta\in(0,1)\), we will denote the prefix of the word \(w\) of length \(\lfloor\delta\cdot|w|\rfloor\) as \(w^{\delta}\). Note that by construction, \(www^{\delta}\) is an \(\alpha\)-power for all \(\alpha{\leq}2+(\lfloor\delta|w|\rfloor/|w|)\). The following theorem shows that if the base-\(p\) expansion of a real number \(x\) has a sequence of subwords of the form \(w_{j}w_{j}w_{j}^{\delta_{j}}\) with \(\lim_{j\to\infty}|w_{j}^{\delta_{j}}|=\lim_{j\to\infty}\lfloor\delta_{j}\cdot |w_{j}|\rfloor=\infty\), then \(x\) satisfies pLC.
**Theorem 2.1**.: _Let \(w=(a_{n})_{n=1}^{\infty}\) be an infinite word on the alphabet \(\{0,1,\ldots,p-1\}\) satisfying the property that there is a sequence \((w_{j})_{j=1}^{\infty}\) of finite words and a sequence of positive real numbers \((\delta_{j})_{j=1}^{\infty}\) which are less than \(1\), such that the word \(w_{j}w_{j}w_{j}^{\delta_{j}}\) occurs as a subword in \(w\) and \(\lim_{j\to\infty}|w_{j}^{\delta_{j}}|=\infty\). Then \(w_{p}=\sum_{n=1}^{\infty}a_{n}p^{-n}\) satisfies the \(p\)-adic Littlewood conjecture._
Taking \((\delta_{j})_{j=1}^{\infty}\) to be a constant sequence leads to the following corollary.
**Corollary 2.2**.: _Assume \(x\) is a counterexample to pLC and let \(w(x,p)\) be the corresponding base-\(p\) expansion. For each fixed \(\alpha>2\), the length of the \(\alpha\)-powers appearing in \(w(x,p)\) are bounded._
Theorem 2.1 can be generalised as follows.
**Theorem 2.3**.: _Let \(w=(a_{n})_{n=1}^{\infty}\) be an infinite word on the alphabet \(\{0,1,\ldots,p-1\}\) that contains a sequence \((w_{j})_{j=1}^{\infty}\) of finite words with \(m_{j}=|w_{j}|\) and a sequence of positive real numbers \((\delta_{j})_{j=1}^{\infty}\) such that the word \(w_{j}w_{j}^{\delta_{j}}\) occurs as a subword in \(w\). Furthermore, let \((\ell_{j})_{j=1}^{\infty}\) be the sequence of natural numbers satisfying_
\[p^{\ell_{j}-1}\leq\frac{p^{m_{j}}-1}{\gcd(p^{m_{j}}-1,(w_{j})_{p}^{+})}\leq p ^{\ell_{j}}. \tag{5}\]
_If \(\lim_{j\to\infty}m_{j}+\lfloor m_{j}\delta_{j}\rfloor-2\ell_{j}=\infty\), then \(w_{p}\) satisfies pLC._
In the above theorem, the three most useful cases are:
* when \(\gcd(p^{m_{j}}-1,(w_{j})_{p}^{+})=1\), \(\ell_{j}=m_{j}\), and \(\lim_{j\to\infty}\lfloor m_{j}\delta_{j}\rfloor-m_{j}=\infty\) (Theorem 2.1),
* when \(m_{j}=2n_{j}\) with \(n_{j}\in\mathbb{N}\), \(\gcd(p^{m_{j}}-1,(w_{j})_{p}^{+})=p^{n_{j}}-1\), \(\ell_{j}=n_{j}+1\) and \(\lim_{j\to\infty}\lfloor m_{j}\delta_{j}\rfloor=\infty\), and
* when \(\lim_{j\to\infty}\delta_{j}=\infty\).
As an example of how the second of the above bullet points can be used, given a word \(w=b_{1}b_{2}\cdots b_{n}\) in \(\{0,1,\ldots,p-1\}^{*}\), the integer \((w\overline{w})_{p}^{+}\) will always be divisible by \(p^{n}-1\) where \(\overline{b}=p-1-b\) for letter each \(b\) in the alphabet \(\{0,1,\ldots,p-1\}\). This follows since
\[\sum_{i=1}^{n}p^{n-i}\cdot[p^{n}b_{i}+p-1-b_{i}]=(p^{n}-1)+\sum_{i=1}^{n}(p^{n} -1)p^{n-i}b_{i}\]
Thus, we obtain the following corollary.
**Corollary 2.4**.: _Let \(w=(a_{n})_{n=1}^{\infty}\) be an infinite word on the alphabet \(\{0,1,\ldots,p-1\}\) satisfying the property that there is a sequence \((w_{j})_{j=1}^{\infty}\) of finite words and a sequence of positive real numbers \((\delta_{j})_{j=1}^{\infty}\) such that the word \(w_{j}\overline{w_{j}}w_{j}^{\delta_{j}}\) occurs as a subword in \(w\) and \(\lim_{j\to\infty}|w_{j}^{\delta_{j}}|=\infty\). Then \(w_{p}\) satisfies the \(p\)-adic Littlewood conjecture._
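As a quick randomized check of the divisibility fact behind this corollary, namely that \((w\overline{w})_{p}^{+}\) is divisible by \(p^{n}-1\), one can run the following (names ours):

```python
from random import randint

p = 5
for _ in range(1000):
    n = randint(1, 6)
    w = [randint(0, p - 1) for _ in range(n)]
    wbar = [p - 1 - b for b in w]                                        # complemented word
    val = sum(d * p ** (2 * n - 1 - i) for i, d in enumerate(w + wbar))  # (w wbar)_p^+
    assert val % (p ** n - 1) == 0
print("divisibility of (w wbar)_p^+ by p^n - 1 verified on random samples")
```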
Another property that can be deduced is that if a word \(w\) contains a sequence of increasing prefixes of another word \(v\) and \(v_{p}\) satisfies pLC, then so does \(w_{p}\).
**Proposition 2.5**.: _Let \(w,v\in\{0,1,\ldots,p-1\}^{\omega}\) and assume that there exists a sequence of prefixes \((v_{k})_{k=1}^{\infty}\) of \(v\) such that \(|v_{k}|\to\infty\) and \(v_{k}\) appears as a subword of \(w\) for all \(k\). If \(v_{p}\) satisfies pLC, then so does \(w_{p}\)._
An infinite word \(w=(a_{n})_{n=1}^{\infty}\) is said to be _recurrent_ if any finite subword \(v\) of \(w\) occurs infinitely often in \(w\). It is said to be _uniformly recurrent_ if for every finite subword \(v\) of \(w\), there exists a constant \(N_{v}\) such that \(v\) appears in every subword of \(w\) of length \(N_{v}\). Using a similar idea to the work of Badziahin [3] on "limit words" of continued fraction expansions, we can look at the topological closure of the set of base-\(p\) expansions of the counterexamples to pLC under the action of the shift map. This allows us to deduce that if this set is non-empty then it contains an element with a uniformly recurrent base-\(p\) expansion.
**Theorem 2.6**.: _If there is a counterexample to pLC, there is a counterexample with a uniformly recurrent base-\(p\) expansion._
_Remark 2.7_.: It is worth noting that none of the above statements rely on \(p\) being prime other than to link to the \(p\)-adic Littlewood conjecture. In particular, we can replace \(p\) with a composite number \(n\) to obtain analogous results on the "\(n\)-adic Littlewood conjecture".
The proof of Theorems 2.1 and 2.3 can be found in Section 3.1. The proof of Proposition 2.5 and Theorem 2.6 is in Section 3.2.
#### 2.1.1 Results on Morphic Words
Let \(w=\varphi^{\omega}(a)\) be a pure morphic word. If the prefix \(\varphi^{k}(a)\) contains overlap of the form \(uXuXu\) for some \(k\in\mathbb{N}\), then \(\varphi^{n}(u)\varphi^{n}(X)\varphi^{n}(u)\varphi^{n}(X)\varphi^{n}(u)\) is a subword of \(\varphi^{k+n}(a)\) for all \(n\in\mathbb{N}\). Under the assumption that \(u\) is not mortal for \(\varphi\), infinitely many instances of overlap occur. Furthermore, if \(\lim_{n\to\infty}|\varphi^{n}(u)|=\infty\), the word satisfies the conditions of Theorem 2.1. This leads to the following proposition.
**Proposition 2.8**.: _Let \(w=\varphi^{\omega}(a)\in\mathcal{A}^{\omega}\) be a pure morphic word containing a subword \(uXuXu\) such that \(\lim_{n\to\infty}|\varphi^{n}(u)|=\infty\). For any non-erasing morphism \(g:\mathcal{A}\to\{0,1,\ldots,p-1\}\), the real number \(g(w)_{p}\) satisfies the \(p\)-adic Littlewood conjecture._
_Remark 2.9_.: Here we should note that the condition \(\lim_{n\to\infty}|\varphi^{n}(u)|=\infty\) is instantly satisfied for morphisms which are expanding, including (powers of) primitive morphisms and \(k\)-uniform morphisms for \(k\geq 2\). Furthermore, due to a result of Durand [9], all uniformly recurrent morphic words are primitive morphic. Therefore, if \(x\) is a counterexample to pLC with a morphic uniformly recurrent base-\(p\) expansion of the form \(\tau(\varphi^{\omega}(a))\), then the underlying pure morphic word \(\varphi^{\omega}(a)\) must be overlap-free.
Similar to the previous argument, if a morphism \(\varphi\) is prolongable on the letters \(a,b\in\mathcal{A}^{*}\) and \(b\) appears in the word \(\varphi^{\omega}(a)\) at least once, then every prefix of \(\varphi^{\omega}(b)\) appears in \(\varphi^{\omega}(a)\). Proposition 2.5 then directly implies the following corollary.
**Corollary 2.10**.: _Let \(w=\varphi^{\omega}(a)\) be a pure morphic word over \(\mathcal{A}\) and let \(\mathcal{B}\) be a sub-alphabet of \(\mathcal{A}\) such that \(\varphi:\mathcal{B}\to\mathcal{B}^{*}\). Furthermore, assume that \(\varphi^{\omega}(a)\) contains a letter \(b\in\mathcal{B}\) such that \(\varphi\) is prolongable over \(b\) and let \(\tau:\mathcal{A}\to\{0,1,\ldots,p-1\}\) be a coding. If \(\tau(\varphi^{\omega}(b))_{p}\) satisfies pLC, then so does \(\tau(\varphi^{\omega}(a))_{p}\)._
### Applications to the 2-adic Littlewood Conjecture
In the case of the 2-adic Littlewood conjecture, all pure morphic words satisfy at least one of three properties **(P1)-(P3)** - see Lemma 4.2 below. Combining this result with Theorem 2.1 and other results in the literature leads to the following theorem.
**Theorem 2.11**.: _Let \(x\in[0,1]\) and assume that the corresponding base-\(2\) expansion \(w(x,2)\) is a pure morphic word. Then \(x\) satisfies \(2\)LC._
This theorem can be extended to a class of results regarding pLC by applying Corollary 2.10.
**Corollary 2.12**.: _Let \(w=\varphi^{\omega}(a)\) be a pure morphic word over \(\mathcal{A}\) and let \(\mathcal{B}\) be a sub-alphabet of \(\mathcal{A}\) such that \(\varphi:\mathcal{B}\to\mathcal{B}^{*}\) and \(|\mathcal{B}|=2\). Furthermore, assume that \(\varphi^{\omega}(a)\) contains a letter \(b\in\mathcal{B}\) such that \(\varphi\) is prolongable over \(b\). Then \(\tau(w)_{p}\) satisfies pLC for any coding \(\tau:\mathcal{A}\to\{0,1,\ldots,p-1\}\)._
Finally, as a contrasting result to Corollary 2.2, we show that length of the overlap-free subwords of the base-\(2\) expansion of a counterexample to \(2\)LC are bounded.
**Theorem 2.13**.: _Assume that \(x\) is a counterexample to \(2\)LC and let \(w(x,2)\) be the corresponding base-\(2\) expansion. Then the length of the overlap-free subwords in \(w(x,2)\) are bounded._
The proof of Theorem 2.11 and Corollary 2.12 can be found in Section 4.1 and the proof of Theorem 2.13 can be found in Section 4.2.
## 3 The \(p\)-adic Littlewood Conjecture
### Proof of Theorems 2.1 and 2.3
To prove Theorems 2.1 and 2.3, we will show that the conditions of these theorems imply (3). To this end, we will produce sequences \((q_{j})_{j=1}^{\infty}\) and \((k_{j})_{j=1}^{\infty}\) of natural numbers such that
\[\lim_{j\to\infty}q_{j}\cdot\|q_{j}p^{k_{j}}x\|=0. \tag{6}\]
Proof of Theorem 2.1.: For each \(j\in\mathbb{N}\), let \(k_{j}\) be the length of the prefix of \((a_{n})_{n=1}^{\infty}\) up to the first occurrence of the subword \(w_{j}w_{j}w_{j}^{\delta_{j}}\). Set
\[x^{\prime}:=\left\{p^{k_{j}}x\right\}=\left\{p^{k_{j}}\sum_{n=1}^{\infty}a_{n} p^{-n}\right\}=\sum_{n=1}^{\infty}a_{k_{j}+n}p^{-n}.\]
Then, the base-\(p\) expansion of \(x^{\prime}\) begins with the subword \(w_{j}w_{j}w_{j}^{\delta_{j}}\).
Now, for each \(j\), we denote \(w_{j}\) as \(b_{1}^{(j)}b_{2}^{(j)}\cdots b_{m_{j}}^{(j)}\) where \(m_{j}=|w_{j}|\), and define a sequence of rational numbers
\[\frac{r_{j}}{q_{j}}:=\sum_{h=0}^{\infty}\sum_{i=1}^{m_{j}}\frac{b_{i}^{(j)}}{ p^{i+hm_{j}}}=\sum_{h=0}^{\infty}\frac{1}{p^{hm_{j}}}\sum_{i=1}^{m_{j}}\frac{b_ {i}^{(j)}}{p^{i}}.\]
These are the rational numbers (in reduced form) whose base-\(p\) expansion is obtained by extending the word \(w_{j}\) periodically. The sequence of denominators \((q_{j})_{j=1}^{\infty}\) is the sequence used in (6).
The numbers \(r_{j}/q_{j}\) approximate \(x^{\prime}\) rather well. Indeed,
\[\left|x^{\prime}-\frac{r_{j}}{q_{j}}\right|=\left|\sum_{i=\lfloor\delta_{j}m_ {j}\rfloor+1}^{m_{j}}\frac{c_{2,i}^{(j)}}{p^{i+2m_{j}}}+\sum_{h=3}^{\infty} \frac{1}{p^{hm_{j}}}\sum_{i=1}^{m_{j}}\frac{c_{h,i}^{(j)}}{p^{i}}\right|<\frac {1}{p^{2m_{j}+\lfloor\delta_{j}m_{j}\rfloor}},\]
where \(c_{h,i}^{(j)}=(a_{k_{j}+hm_{j}+i}-b_{i}^{(j)})\). On the other hand,
\[\frac{r_{j}}{q_{j}}=\left(\sum_{h=0}^{\infty}\frac{1}{p^{hm_{j}}}\right)\! \left(\sum_{i=1}^{m_{j}}\frac{b_{i}^{(j)}}{p^{i}}\right)=\frac{p^{m_{j}}}{p^{ m_{j}}-1}\sum_{i=1}^{m_{j}}\frac{b_{i}^{(j)}}{p^{i}}=\frac{r_{j}^{\prime}}{p^{m_{j} }-1},\]
where \(r_{j}^{\prime}\in\mathbb{Z}\). Consequently, \(q_{j}\leq p^{m_{j}}-1<p^{m_{j}}\) and therefore,
\[q_{j}\cdot\|q_{j}p^{k_{j}}x\|\leq q_{j}^{2}\cdot\left|x^{\prime}-\frac{r_{j}}{ q_{j}}\right|<\frac{1}{p^{\lfloor\delta_{j}m_{j}\rfloor}}.\]
Since \(\delta_{j}\cdot m_{j}\) tends to infinity with \(j\), the theorem follows.
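The construction in this proof can be made concrete on a toy example: take \(p=3\), \(w_{j}=201\) and \(\lfloor\delta_{j}m_{j}\rfloor=2\), append arbitrary digits, and compare against the periodic rational. The digits and names below are ours, chosen only for illustration.

```python
from fractions import Fraction

p, w = 3, [2, 0, 1]                       # the repeated block w_j, so m_j = 3
m = len(w)
digits = w + w + w[:2] + [0, 0, 2, 1, 0]  # w_j w_j w_j^{delta_j} followed by arbitrary digits

x = sum(Fraction(d, p ** (i + 1)) for i, d in enumerate(digits))   # plays the role of {p^{k_j} x}

r_over_q = Fraction(sum(d * p ** (m - 1 - i) for i, d in enumerate(w)), p ** m - 1)
q = r_over_q.denominator                  # here q = 26 <= p^m - 1 = 26
err = abs(x - r_over_q)

print(q, float(q * q * err), float(Fraction(1, p ** 2)))   # q^2 |x - r/q| lies below p^{-2}
```

Here \(q=26\leq p^{m_{j}}-1\), and the printed value of \(q^{2}\,|x^{\prime}-r_{j}/q_{j}|\) indeed lies below \(p^{-\lfloor\delta_{j}m_{j}\rfloor}=1/9\), as the proof requires.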
The above proof illustrates a very useful technique for using combinatorial properties of base-\(p\) expansions to show that real numbers satisfy pLC. The proof of Theorem 2.3 serves as a generalisation of the above method.
Proof of Theorem 2.3.: For each \(j\in\mathbb{N}\), let \(k_{j}\) be the length of the prefix of \((a_{n})_{n=1}^{\infty}\) up to the first occurrence of the subword \(w_{j}w_{j}^{\delta_{j}}\) and set
\[x^{\prime}:=\{p^{k_{j}}x\}\]
Then, the base-\(p\) expansion of \(x^{\prime}\) begins with the subword \(w_{j}w_{j}^{\delta_{j}}\). Let \(n_{j}=\lfloor\delta_{j}\rfloor\).
For each \(j\in\mathbb{N}\), we denote \(w_{j}\) as \(b_{1}^{(j)}b_{2}^{(j)}\cdots b_{m_{j}}^{(j)}\) and set
\[\frac{r_{j}}{q_{j}}:=\sum_{h=0}^{\infty}\sum_{i=1}^{m_{j}}\frac{b_{i}^{(j)}}{ p^{i+hm_{j}}}=\left(\sum_{h=0}^{\infty}\frac{1}{p^{(h+1)m_{j}}}\right)\left(\sum_{i=1 }^{m_{j}}p^{m_{j}-i}b_{i}^{(j)}\right) \tag{7}\]
These are the rational numbers (in reduced form) whose base-\(p\) expansion is obtained by extending the word \(w_{j}\) periodically.
As in the proof of Theorem 2.1, this sequence of rational numbers \(\frac{r_{j}}{q_{j}}\) approximates \(x^{\prime}\) very well,
\[\left|x^{\prime}-\frac{r_{j}}{q_{j}}\right|=\left|\sum_{i=\lfloor(\delta_{j}-n_ {j})m_{j}\rfloor+1}^{m_{j}}\frac{c_{n_{j},i}^{(j)}}{p^{i+n_{j}m_{j}}}+\sum_{h=n _{j}+1}^{\infty}\frac{1}{p^{hm_{j}}}\sum_{i=1}^{m_{j}}\frac{c_{h,i}^{(j)}}{p^{ i}}\right|<\frac{1}{p^{m_{j}+\lfloor\delta_{j}m_{j}\rfloor}},\]
where \(c_{h,i}^{(j)}=(a_{k_{j}+2hm_{j}+i}-b_{i}^{(j)})\).
Let \(d_{j}=\gcd(p^{m_{j}}-1,(w_{j})_{p}^{+})\). Then there exists some \(a\in\mathbb{N}\) such that
\[ad_{j}=\sum_{i=1}^{m_{j}}p^{m_{j}-i}b_{i}^{(j)}\]
Combining this with (7) shows
\[\frac{r_{j}}{q_{j}}=a\cdot\frac{d_{j}}{p^{m_{j}}-1}.\]
From (5), it follows that \(q_{j}\leq(p^{m_{j}}-1)/d_{j}\leq p^{\ell_{j}}\) and therefore
\[q_{j}^{2}\cdot\left|x^{\prime}-\frac{r_{j}}{q_{j}}\right|<\frac{p^{2\ell_{j}}} {p^{m_{j}+\lfloor\delta_{j}m_{j}\rfloor}}=\frac{1}{p^{m_{j}+\lfloor\delta_{j}m _{j}\rfloor-2\ell_{j}}}.\]
Since it was assumed that \(\lim_{j\to\infty}m_{j}+\lfloor\delta_{j}m_{j}\rfloor-2\ell_{j}=\infty\), this completes the proof.
### Proof of Proposition 2.5 and Theorem 2.6
Given an infinite word \(w\in\mathcal{A}^{\omega}\), we define the _set of suffixes_\(\mathcal{S}(w)\) of \(w\in\mathcal{A}^{\omega}\) to be
\[\mathcal{S}(w):=\{T^{k}(w):k\in\mathbb{Z}_{\geq 0}\}.\]
We can turn \(\mathcal{A}^{\omega}\) into a metric space by defining a metric \(d(x,y)=2^{-|u|}\), where \(u\) is the largest common prefix of \(x\) and \(y\) and \(d(x,x)=0\). From this, we can take the topological closure of the set of suffixes \(\overline{\mathcal{S}(w)}\). A word \(v\in\mathcal{A}^{\omega}\) is an element of \(\overline{\mathcal{S}(w)}\) if and only if every prefix of \(v\) appears in \(w\). Analogously, for any \(x\in[0,1]\), we can define the set
\[T_{p}(x):=\{\{p^{n}x\}:n\in\mathbb{N}\cup\{0\}\}.\]
Assuming \(x\) is not a rational number with denominator equal to \(p^{k}\) for natural number \(k\geq 1\), the sets \(T_{p}(x)\) and \(\mathcal{S}(w(x,p))\) are in bijection, where each real number corresponds to its base-p expansion. Likewise, the topological closures \(\overline{\mathcal{T}_{p}(x)}\) (using the Euclidean metric) and \(\overline{\mathcal{S}(w(x,p))}\) are also in bijection. This comes from the observation that there is a subsequence \(\{p^{k_{j}}x\}\) that limits to \(y\) if and only if the base-\(p\) expansions of \(\{p^{k_{j}}x\}\) limit to the base-\(p\) expansion of \(y\). Using the notions above, the proof of Proposition 2.5 essentially comes down to showing that if any accumulation point of \(T_{p}(x)\) satisfies pLC, then \(x\) satisfies pLC. The contrapositive of this statement is shown in the next lemma.
**Lemma 3.1**.: _Write \(m_{p}(x):=\liminf_{q\to\infty}q\cdot|q|_{p}\cdot\|qx\|\). Let \(x\) be a counterexample to pLC and assume that there exists some \(\varepsilon>0\) such that \(m_{p}(x)\geq\varepsilon.\) Then \(m_{p}(y)\geq\varepsilon\) for all \(y\in\overline{T_{p}(x)}\)._
Proof.: We trivially have that \(m_{p}(x)\geq\varepsilon\) implies that \(m_{p}(\{p^{n}x\})\geq\varepsilon\) for every \(n\in\mathbb{N}\cup\{0\}\), since
\[\liminf_{q\to\infty}q\cdot|q|_{p}\cdot\|qp^{n}x\|=\liminf_{q\to\infty}p^{n}q \cdot|p^{n}q|_{p}\cdot\|p^{n}qx\|\geq\liminf_{q\to\infty}q\cdot|q|_{p}\cdot\| qx\|.\]
Assume that \(y\) is a limit point of \(T_{p}(x)\) such that \(m_{p}(y)<\varepsilon\). Then there exists some \(0<\varepsilon^{\prime}<\varepsilon<1\) and some sequence \((q_{n})_{n=1}^{\infty}\) such that
\[\lim_{n\to\infty}q_{n}\cdot|q_{n}|_{p}\cdot\|q_{n}y\|=\varepsilon^{\prime}.\]
For every \(n\in\mathbb{N}\), let \(\delta_{n}=2^{-1}q_{n}^{-2}(\varepsilon-\varepsilon^{\prime})\) and let \(k_{n}\) be the smallest natural number such that \(\{p^{k_{n}}x\}=y+\Delta_{n}\) with \(|\Delta_{n}|<\delta_{n}\). The existence of \(k_{n}\) follows from the fact that \(y\) is an accumulation point. This implies
\[p^{k_{n}}q_{n}\cdot|p^{k_{n}}q_{n}|_{p}\cdot\|p^{k_{n}}q_{n}x\| =q_{n}\cdot|q_{n}|_{p}\cdot\|q_{n}p^{k_{n}}x\|\] \[=q_{n}\cdot|q_{n}|_{p}\cdot\|q_{n}(y+\Delta_{n})\|\] \[\leq q_{n}\cdot|q_{n}|_{p}\cdot\|q_{n}y\|+q_{n}\cdot|q_{n}|_{p} \cdot\|q_{n}\Delta_{n}\|\] \[\leq q_{n}\cdot|q_{n}|_{p}\cdot\|q_{n}y\|+q_{n}\cdot|q_{n}\Delta _{n}|\]
Then
\[\lim_{n\to\infty}p^{k_{n}}q_{n}\cdot|p^{k_{n}}q_{n}|_{p}\cdot\|p^{k_{n}}q_{n}x \|\leq\varepsilon^{\prime}+2^{-1}(\varepsilon-\varepsilon^{\prime})<\varepsilon\]
Therefore, \(m_{p}(x)<\varepsilon\) which is a contradiction.
We can use the above lemma to deduce that should a counterexample \(x\) to pLC exist, there exists an element \(y\) of \(\overline{T_{p}(x)}\) with a uniformly recurrent base-\(p\) expansion that is a counterexample to pLC. This proves Theorem 2.6.
**Proposition 3.2**.: _Let \(x\) be a counterexample of pLC. Then \(\overline{T_{p}(x)}\) contains a counterexample of pLC with a uniformly recurrent base-\(p\) expansion._
Proof.: By construction \(\overline{T_{p}(x)}\) is closed and bounded. Therefore, \(\overline{T_{p}(x)}\) is compact and invariant under multiplication by \(p\). The corresponding set of base-\(p\) expansions is given by \(\overline{S(w(x,p))}\) and is also compact and invariant under the shift map \(T\). At least one minimal, invariant, compact subset \(R\) of \(\overline{S(w(x,p))}\) exists, and by [13, Theorem 1.5.9], this is a set comprised of numbers with uniformly recurrent base-\(p\) expansions. By Lemma 3.1, all elements in \(R\) are counterexamples to pLC.
## 4 The \(2\)-adic Littlewood Conjecture
### Proof of Theorem 2.11
In order to prove Theorem 2.11, it will be useful to first introduce a number of auxiliary results. The first result is that of Seebold [15], which shows that the only pure morphic words over \(\{0,1\}\) which are overlap-free are the Thue-Morse word \(M\) and its complement \(\widetilde{M}\).
**Theorem 4.1** ([15]).: \(M\) _and \(\widetilde{M}\) are the only pure morphic overlap-free words in \(\{0,1\}^{\omega}\)._
Using this theorem, we can give the following characterisation of all binary pure morphic words.
**Lemma 4.2**.: _Let \(w\) be a pure morphic word in \(\{0,1\}^{\omega}\), where \(\varphi\) is the underlying morphism. Then either:_
**(P1)**: \(w\) _is_ \(M\) _or_ \(\widetilde{M}\)_._
**(P2)**: _There is a non-trivial subword_ \(v\) _of_ \(w\)_, such that_ \(v^{n}\) _is a subword of_ \(w\) _for all_ \(n\in\mathbb{N}\)_._
**(P3)**: \(w\) _contains overlap of the form_ \(aXaXa\) _and_ \(\lim_{n\to\infty}|\varphi(a)|=\infty\)_._
Proof.: In the case that \(w\) is overlap-free, Theorem 4.1 shows that \(w\) is the Thue-Morse word \(M\) or its complement \(\widetilde{M}\)**(P1)**.
Assume that \(w\) is not overlap-free and that \(w=\varphi^{\omega}(0)\) - the case that \(w=\varphi^{\omega}(1)\) follows by symmetry. Then, for some words \(u,v\in\{0,1\}^{*}\), we have
\[\varphi(0)=0u\quad\text{and}\quad\varphi(1)=v.\]
Since \(\varphi\) is prolongable over \(0\), the word \(u\) is not the empty word. In particular, \(|\varphi(0)|\geq 2\). Note that if \(u\) consists only of \(0\)'s, _i.e._, \(u=0^{n}\) for \(n\in\mathbb{N}\), then
\[w=\varphi^{\omega}(0)=0^{\omega},\]
where \(x^{\omega}\) is the _periodic_ word \(xxx\cdots\). Thus, \(w\) satisfies **(P2)**.
**Case I: \(v\) is the empty word \(\epsilon\).**
Since \(\varphi(1)=\epsilon\), applying the morphism to \(\varphi^{k}(0)\) will ignore any \(1\)'s in this sequence. In other words, if \(i_{k}\) is the number of \(0\)'s that appear in \(\varphi^{k}(0)\), then
\[\varphi^{k+1}(0)=\left(\varphi(0)\right)^{i_{k}}.\]
Therefore, \(i_{k+1}=i_{k}\cdot i_{1}=i_{1}^{k+1}\). Since \(\varphi\) is prolongable, \(u\) contains the letter \(0\) at least once, and so \(i_{1}\geq 2\). Since \(\lim_{k\to\infty}i_{k}=\infty\),
\[w=\varphi^{\omega}(0)=\left(\varphi(0)\right)^{\omega}.\]
In this case, \(w\) satisfies **(P2)**.
**Case II: \(v=1^{n}\).**
As discussed above, we can assume that \(u\) contains the letter \(1\) at least once. If \(\varphi(1)=1^{n}\) for some \(n\geq 2\), then \(\varphi^{k}(1)=1^{n^{k}}\). Since \(\varphi(0)\) contains the letter \(1\), the word \(\varphi^{k+1}(0)\) contains the subword \(\varphi^{k}(1)\) for all \(k\in\mathbb{N}\). Therefore, \(w=\varphi^{\omega}(0)\) satisfies **(P2)**.
Let \(v=1\). If \(u=1^{k}0\) with \(k\in\mathbb{N}\), then for all \(m\in\mathbb{N}\)
\[\varphi(1^{m}0)=1^{m}1^{k}0=1^{(m+k)}0.\]
Therefore, for all \(n\in\mathbb{N}\), the word \(\varphi^{n}(u)\) starts with the prefix \(1^{(n+1)k}\). Taking the limit as \(n\) tends to infinity, it follows from (4) that \(w\) satisfies **(P2)**.
Now assume that \(u\) is of the form \(u^{\prime}01^{k}\) with \(k\in\mathbb{N}\), then for all \(m\in\mathbb{N}\)
\[\varphi(01^{m})=0u^{\prime}01^{k}1^{m}=0u^{\prime}01^{(k+m)}\]
In particular, for all \(n\in\mathbb{N}\) the word \(\varphi^{n}(u)\) ends in the term \(1^{(n+1)k}\). Therefore, \(w\) satisfies **(P2)**.
Finally, assume that \(u=u^{\prime}01^{k}0\) with \(k\in\mathbb{Z}_{\geq 0}\). The word
\[\varphi(01^{k}0)=0u^{\prime}01^{k}01^{k}0u^{\prime}01^{k}0\]
contains \(01^{k}01^{k}0\) as a subword. Since \(\varphi\) is prolongable on \(0\), the length of \(\varphi^{n}(0)\) tends to infinity, and **(P3)** is satisfied.
**Case III: \(v\) contains \(0\).**
Again, we can freely assume that \(u\) contains the letter \(1\). Since \(v\) contains the letter \(0\), the morphism \(\varphi\) is primitive: if \(v\) contains both \(0\) and \(1\), it follows by definition; if \(v\) only contains the letter \(0\), then \(\varphi^{2}(1)\) will contain \(\varphi(0)\) which contains both the letters \(0\) and \(1\). Since \(\varphi\) is primitive \(\lim_{n\to\infty}|\varphi^{n}(0)|=\lim_{n\to\infty}|\varphi^{n}(1)|=\infty\). These words contain overlap by previous assumption and, therefore, satisfy **(P3)**.
The final result needed to prove Theorem 2.11 is due to Badziahin and Zorin, which shows that the real number that has the Thue-Morse word (or its complement) as its base-\(n\) expansion is well-approximable provided that \(n\) is not divisible by \(15\). Note that if \(n\) is divisible by \(15\) the result is unknown, as opposed to being false.
**Theorem 4.3** ([5]).: _Let \(M_{n}\) be the real number whose base-\(n\) expansion is the Thue-Morse word. If \(n\) is not divisible by \(15\), then \(M_{n}\) is well-approximable._
Combining together Proposition 3.2, Lemma 4.2, and Theorem 4.3 provides the proof for Theorem 2.11.
Proof of Theorem 2.11.: From Theorem 4.3, \(M_{2}\) is well-approximable and therefore satisfies 2LC. In this case, \(\widetilde{M}_{2}\) is given by \(1-M_{2}\). Since \(M_{2}\) is well-approximable, so is \(\widetilde{M}_{2}\). Therefore, the real numbers whose base-\(2\) expansions satisfy **(P1)** satisfy 2LC. For words satisfying **(P2)**, we note that for any periodic word \(v\), _i.e._, \(v=X^{\omega}\), the real number \(v_{2}\) is rational and therefore well-approximable. Applying Proposition 2.5 shows that the real numbers whose base-\(2\) expansions satisfy **(P2)** also satisfy 2LC. Finally, Proposition 2.8 implies that for any base-\(2\) expansion which satisfies **(P3)**, the corresponding real number satisfies 2LC.
#### 4.1.1 Proof of Corollary 2.12
From Corollary 2.10, we can extend Theorem 2.11 to Corollary 2.12, by showing that for any morphism \(\psi:\{a,b\}\to\{a,b\}^{*}\) which is prolongable on \(a\) with \(a,b\in\mathcal{A}\) and any coding \(\tau:\mathcal{A}\to\{0,1,\ldots,p-1\}\), the real number \(\tau(\psi^{\omega}(a))_{p}\) satisfies pLC. Note that since \(a\) and \(b\) are arbitrary letters, we can consider them to be letters in \(\{0,1,\ldots,p-1\}\) and forget the coding. By the same argument, the word \(\psi^{\omega}(a)\) can be rewritten as a coding of a pure morphic word \(w\) over the alphabet \(\{0,1\}\), _i.e._, \(\psi^{\omega}(a)=\sigma(w)\) where \(\sigma(0)=a\) and \(\sigma(1)=b\). If \(w\) satisfies **(P2)** or **(P3)**, then \(\psi^{\omega}(a)_{p}\) satisfies pLC using the same arguments as in the proof of Theorem 2.11. When \(\psi^{\omega}(a)\) is a coding of the Thue-Morse word or its complement, the situation is a bit more complicated.
Let \(TM(a,b)\) be the coding of \(M\), where \(0\) is mapped to \(a\) and \(1\) is mapped to \(b\). In order to complete the proof of Corollary 2.12, we will show that \(TM(a,b)_{p}\) is well-approximable for all primes \(p\) and all \(a,b\in\{0,1,\ldots,p-1\}\).
**Proposition 4.4**.: _Let \(a,b\in\{0,1,\ldots,n-1\}\). If \(n\) is not divisible by \(15\), then \(TM(a,b)_{n}\) is well-approximable._
Proof.: We start this proof by noting that if a real number \(x\) is well-approximable, then adding a rational number \(p/q\) or multiplying by a rational constant will preserve this property. In particular, \(TM(0,1)_{n}\) is well-approximable if and only if \(r\cdot TM(0,1)_{n}\) is
well-approximable for all \(r\in\mathbb{Q}\). If we restrict \(r\) to \(\{0,1,\ldots,n-1\}\), then the base-\(n\) expansion of \(TM(0,r)_{n}\) is
\[r\cdot TM(0,1)_{n}=r\cdot\sum_{i=1}^{\infty}\frac{\sigma(i)}{n^{i}}=\sum_{i=1}^{ \infty}\frac{r\cdot\sigma(i)}{n^{i}}=TM(0,r)_{n},\]
where \(\sigma(i)\) returns the \(i\)-th letter in the Thue-Morse word.
Similarly, \(TM(0,n-1)_{n}\) is well-approximable if and only if \(TM(n-1,0)_{n}\) is well-approximable. This follows from the following observation:
\[1-TM(0,n-1)_{n}=1-\sum_{i=1}^{\infty}\frac{(n-1)\cdot\sigma(i)}{n^{i}}=\sum_{i =1}^{\infty}\frac{(n-1)\cdot(1-\sigma(i))}{n^{i}}=TM(n-1,0)_{n}.\]
Multiplying by \(k/(n-1)\) shows that the number \(TM(k,0)_{n}\) is well-approximable for all \(k\in\{0,1,\ldots,n-1\}\) if and only if \(TM(n-1,0)_{n}\) is well-approximable.
Furthermore, we note that for \(\ell\in\{0,1,\ldots,n-1\}\) the real number whose base-\(n\) expansion is an infinite string of \(\ell\)'s corresponds to the rational number \(\ell/(n-1)\). Therefore, if \(\ell\leq n-1-k\), then
\[TM(0,k)_{n}+\frac{\ell}{n-1}=\sum_{i=1}^{\infty}\frac{k\cdot\sigma(i)}{n^{i}} +\sum_{i=1}^{\infty}\frac{\ell}{n^{i}}=\sum_{i=1}^{\infty}\frac{k\cdot\sigma( i)+\ell}{n^{i}}=TM(\ell,\ell+k)_{n}.\]
Likewise, \(TM(k,0)_{n}+\ell/(n-1)=TM(k+\ell,\ell)_{n}\). This, combined with the previous arguments, shows that for all \(a,b\in\{0,1,\ldots,n-1\}\) the real number \(TM(a,b)_{n}\) is well-approximable if and only if \(TM(0,1)_{n}\) is well-approximable. Applying Theorem 4.3 completes the proof.
### Proof of Theorem 2.13
Let \(\mu\) be the Thue-Morse morphism. In order to prove Theorem 2.13, we will use Proposition 3.2, Theorem 4.3, and the following two lemmas.
**Lemma 4.5**.: _For every overlap-free word \(x\in\{0,1\}^{*}\), there exist words \(u,v,y\in\{0,1\}^{*}\) with \(|u|,|v|\leq 2\) and \(x=u\mu(y)v\)._
**Lemma 4.6**.: _Let \(y\in\{0,1\}^{*}\). Then \(y\) is overlap-free if and only if \(\mu(y)\) is overlap-free._
For Lemma 4.5, see [11, Theorem 6.4] or [1, Lemma 3]. For Lemma 4.6, see [2, Lemma 1.7.4].
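Both lemmas are easy to probe computationally. The sketch below implements a brute-force overlap test (an overlap is exactly a factor of length \(2L+1\) with period \(L\)) and checks Lemma 4.6 on random binary words; the names are ours.

```python
from random import choice

def has_overlap(s):
    """True iff s contains a factor u X u X u, i.e. a factor of length 2L+1 with period L."""
    n = len(s)
    for L in range(1, n // 2 + 1):
        for i in range(n - 2 * L):
            if all(s[i + j] == s[i + j + L] for j in range(L + 1)):
                return True
    return False

def mu(s):
    return "".join({"0": "01", "1": "10"}[c] for c in s)

for _ in range(500):
    y = "".join(choice("01") for _ in range(12))
    assert has_overlap(y) == has_overlap(mu(y))   # Lemma 4.6 on this sample
print("overlap-freeness of y and mu(y) agreed on all samples")
```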
Proof of Theorem 2.13.: In order to prove this result, we will show that every overlap-free base-\(2\) expansion of length \(K\) contains a prefix of \(M\) or \(\widetilde{M}\) of length \(p(K)\), where \(\lim_{K\to\infty}p(K)=\infty\). The result then follows from Proposition 2.5 and Theorem 4.3.
Let \(x\) be an overlap-free word of length \(K\). By Lemma 4.5, there exist words \(u_{1},v_{1},y_{1}\in\{0,1\}^{*}\) with \(|u_{1}|,|v_{1}|\leq 2\) and \(x=u_{1}\mu(y_{1})v_{1}\). Using this construction, we can conclude that \(|\mu(y_{1})|=K-|u_{1}|-|v_{1}|\geq K-4\). Furthermore, since \(\mu\) is \(2\)-uniform, _i.e._, \(|\mu(0)|=|\mu(1)|=2\), the length of \(y_{1}\) is equal to \(|\mu(y_{1})|/2\). Provided that \(K-4\geq 1\), we also have that \(y_{1}\) is not the empty word.
Since \(x\) is overlap-free, it follows that \(\mu(y_{1})\) is overlap-free. Additionally, Lemma 4.6 implies that \(y_{1}\) is overlap-free. As a result, there exist \(u_{2},v_{2},y_{2}\in\{0,1\}^{*}\) with \(|u_{2}|,|v_{2}|\leq 2\) such that \(y_{1}=u_{2}\mu(y_{2})v_{2}\). Then \(x\) can be rewritten as
\[x=u_{1}\mu(y_{1})v_{1}=u_{1}\mu(u_{2})\mu^{2}(y_{2})\mu(v_{2})v_{1}.\]
The length of \(y_{2}\) is bounded as follows:
\[\frac{|y_{1}|-4}{2}\leq|y_{2}|\leq\frac{|y_{1}|}{2}\]
More generally, for any \(k\in\mathbb{N}\), the subword \(y_{k}\) can be rewritten as
\[y_{k}=u_{k+1}\mu(y_{k+1})v_{k+1},\]
where \(u_{k+1},v_{k+1},y_{k+1}\in\{0,1\}^{*}\), \(|u_{k+1}|,|v_{k+1}|\leq 2\) and
\[\frac{|y_{k}|-4}{2}\leq|y_{k+1}|\leq\frac{|y_{k}|}{2}.\]
Note that \(u_{k+1},v_{k+1}\) and \(y_{k+1}\) can all be the empty word.
Using this substitution, \(x\) can be rewritten in terms of \(y_{k}\) as
\[x=u_{1}\mu(u_{2})\ldots\mu^{k-1}(u_{k})\mu^{k}(y_{k})\mu^{k-1}(v_{k})\ldots\mu (v_{2})v_{1},\]
where the length of \(y_{k}\) is bounded below:
\[|y_{k}|\geq\frac{K-4\cdot(2^{k}-1)}{2^{k}}. \tag{8}\]
From (8), the word \(y_{k}\) is non-empty provided that \(K-4\cdot(2^{k}-1)>0\). By rearranging, the largest value of \(k\) that guarantees that \(y_{k}\) is non-empty is \(k=\lfloor\log_{2}(K+4)\rfloor-2\). For such a value of \(k\), let \(a\) be any subword of \(y_{k}\) of length 1. Then \(\mu^{k}(a)\) is a prefix of either \(M\) or \(\widetilde{M}\). Since \(\mu\) is 2-uniform and \(|a|=1\), it follows that the length of this prefix is
\[|\mu^{k}(a)| \geq 2^{k}\] \[\geq 2^{(\log_{2}(K+4)-3)}\] \[=\frac{K+4}{8}.\]
Since \(\lim_{K\to\infty}(K+4)/8=\infty\), the result follows.
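The peeling argument in this proof can be explored numerically. Below is a small Python sketch (our own brute-force helpers, not part of the paper) that checks overlap-freeness, recovers a decomposition \(x=u\mu(y)v\) as in Lemma 4.5 for an overlap-free prefix of \(M\), and checks that the peeled word \(y\) is again overlap-free, as Lemma 4.6 predicts:

```python
def mu(w):
    """Thue-Morse morphism: 0 -> 01, 1 -> 10."""
    return "".join("01" if c == "0" else "10" for c in w)

def is_overlap_free(w):
    """Brute-force check: no factor of the form a x a x a, i.e. no factor of
    length 2p+1 with period p, for any p >= 1."""
    n = len(w)
    for i in range(n):
        for p in range(1, (n - 1 - i) // 2 + 1):
            if all(w[k] == w[k + p] for k in range(i, i + p + 1)):
                return False
    return True

def peel(x):
    """Return (u, y, v) with |u|, |v| <= 2 and x = u mu(y) v, as in Lemma 4.5 (brute force)."""
    for lu in range(3):
        for lv in range(3):
            m = len(x) - lu - lv
            if m < 0 or m % 2:
                continue
            mid = x[lu:len(x) - lv]
            blocks = [mid[2 * j:2 * j + 2] for j in range(m // 2)]
            if all(b in ("01", "10") for b in blocks):
                return x[:lu], "".join("0" if b == "01" else "1" for b in blocks), x[len(x) - lv:]
    return None

tm = "0"
while len(tm) < 64:              # a prefix of the Thue-Morse word M
    tm = mu(tm)
x = tm[:51]                      # an overlap-free word of odd length
assert is_overlap_free(x)
u, y, v = peel(x)
assert x == u + mu(y) + v and is_overlap_free(y)    # Lemmas 4.5 and 4.6 in action
```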
### Funding
Dr Blackman was supported by the Engineering and Physical Sciences Research Council (EPSRC) [grant no. EP/W006863/1] awarded through the University of Liverpool. Prof. Kristensen's research was supported by the Independent Research Fund Denmark (Grant ref. 1026-00081B) and Aarhus University Research Foundation (Grant ref. AUFF-E-2021-9-20). Dr Northey was supported by the Engineering and Physical Sciences Research Council [grant no. EP/RF060349] awarded through Durham University.
|
2310.14882
|
On a Markov chain related to the individual lengths in the recursive
construction of Kingman's coalescent
|
Kingman's coalescent is a widely used process to model sample genealogies in
population genetics. Recently there have been studies on the inference of
quantities related to the genealogy of additional individuals given a known
sample. This paper explores the recursive (or sequential) construction which is
a natural way of enlarging the sample size by adding individuals one after
another to the sample genealogy via individual lineages to construct the
Kingman's coalescent. Although the process of successively added lineage
lengths is not Markovian, we show that it contains a Markov chain which records
the information of the successive largest lineage lengths and we prove a limit
theorem for this Markov chain.
|
Linglong Yuan
|
2023-10-23T12:51:57Z
|
http://arxiv.org/abs/2310.14882v3
|
On a Markov chain related to the individual lengths in the recursive construction of Kingman's coalescent
###### Abstract
Kingman's coalescent is a widely used process to model sample genealogies in population genetics. Recently there have been studies on the inference of quantities related to the genealogy of additional individuals given a known sample. This paper explores the recursive (or sequential) construction which is a natural way of enlarging the sample size by adding individuals one after another to the sample genealogy via individual lineages to construct the Kingman's coalescent. Although the process of successively added lineage lengths is not Markovian, we show that it contains a Markov chain which records the information of the successive largest lineage lengths and we prove a limit theorem for this Markov chain.
_Key words:_ Kingman's coalescent, recursive construction, sequential construction, lineage length, provisional external branch length, convergence of non-Markov processes
_MSC (2020):_ primary 60J90, 60B10, 60B12; secondary 37A30, 60J10.
## 1 Introduction
The coalescent theory was introduced by Kingman [11] and has since then become a standard framework to model sample genealogies. A Kingman's \(n\)-coalescent with \(n\geq 1\), denoted by \(\Pi^{n}=(\Pi^{n}(t))_{t\geq 0}\), is a continuous-time Markov process with state space \(\mathcal{P}(n)\), the set of partitions of \([n]:=\{1,2,..,n\}\). It starts at time \(0\) with the partition of singletons \(\{\{1\},\{2\},...,\{n\}\}\), and at any time, any two blocks merge into one at rate \(1\) independently. Eventually, the coalescent reaches the final state \(\{1,2,..,n\}\), called the most recent common ancestor (MRCA), and stays there forever. We set by convention that \(\Pi^{1}(t)=\{\{1\}\}\) for any \(t\geq 0\).
We introduce further notations: if \(\pi\) is a partition of a set of integers, let \(|\pi|\) be the number of blocks in \(\pi\); let \(\mathbb{N}=\{1,2,...\}\) and \(\mathcal{P}(\infty)\) be the set of partitions of \(\mathbb{N}\).
The Kingman's \(n\)-coalescents are consistent: for any \(m>n\geq 1\), if we consider the natural restriction of \(\Pi^{m}\) to the partitions in \(\mathcal{P}(n)\), then the resulting new process has the same law as \(\Pi^{n}\), thus independent of \(m\). The consistency property will allow to construct the Kingman's (infinite) coalescent \(\Pi^{\infty}=(\Pi^{\infty}(t))_{t\geq 0}\) which starts at time \(0\) with the set of singletons \(\{\{1\},\{2\},...\}\in\mathcal{P}(\infty)\), and the restriction of \(\Pi^{\infty}\) to \(\mathcal{P}(n)\) has the same law as \(\Pi^{n}\) for all \(n\geq 1\). This \(\Pi^{\infty}\) can be constructed using Kolmogorov's extension theorem ([3, Proposition 2.1]).
The process \(\Pi^{\infty}\) can also be constructed naturally by first giving \(\Pi^{1}\), and conditionally on \(\Pi^{n}\) for \(n\geq 1\), we use consistency property to construct \(\Pi^{n+1}\) by connecting individual \(n+1\) to \(\Pi^{n}\) at a random time, see Figure 1. More precisely, given \(\Pi^{n}\): at any time \(t\), if individual \(n+1\) has not been connected to \(\Pi^{n}\), then the rate for it to be connected is equal to \(|\Pi^{n}(t)|\); if the connection takes place at time \(t\), then the individual will coalesce with a block chosen uniformly from \(\Pi^{n}(t)\). We denote the connection time of individual \(n+1\) by \(L_{n+1}\). We shall also call it the _lineage length_ of individual n+1. The construction just explained is the so-called _recursive (or sequential) construction_ of Kingman's coalescent; see [7, Section 5] for an introduction of recursive construction of \(\Lambda\)-coalescents for which Kingman's coalescent is a special case, see also [5, Section 3.4].
We are interested in the asymptotic behaviour of the process of lineage lengths (or connection times) of individuals in this construction. This is partly motivated by a recent work [8] which studied the inference of quantities related to the genealogy of additional individuals given a known
sample. The recursive construction provides a natural way of enlarging sample sizes, and could be a useful angle to investigate the genealogical relationship between known and new additional individuals. The idea of adding up small parts to construct the whole process can also be found in the measure division construction of \(\Lambda\)-coalescents [13], see also [2, Section 3].
Based on the definition of recursive construction, for any \(n\geq 1\), we have
\[\mathbb{P}(L_{n+1}\geq t\,|\,\Pi^{n})=\exp\left(-\int_{0}^{t}|\Pi^{n}(s)|ds \right),\quad\forall t\geq 0. \tag{1.1}\]
Since \(\Pi^{1}(t)=\{1\}\) for any \(t\geq 0\), we set \(L_{1}=\infty\) by convention. Note that in this construction, \(L_{n+1}\) is the external branch length of individual \(n+1\) in \(\Pi^{n+1}\). However as more individuals are added, the external branch length of individual \(n+1\) will be shorter and shorter, see Figure 1. From this point of view, we call \(L_{n}\) the _provisional external branch length_ (although for brevity we will still use _lineage length_ later) of individual \(n\), for \(n\geq 1.\) Here the case \(n=1\) is included for completeness.
**Definition 1**.: _In the recursive construction of Kingman's coalescent, we call the process_
\[(L_{n}):=(L_{n})_{n\geq 1}\]
_provisional external branch length sequence (PEBLS) of Kingman's coalescent._
Given \((L_{n})\), we can construct \(\Pi^{\infty}\), following the description of recursive construction:
* for any \(n\geq 2\), choose uniformly an element from \(\{i:L_{i}\geq L_{n},1\leq i\leq n-1\}\), say \(j\);
* then merge individual \(n\) with the cluster containing \(j\) at time \(L_{n}\).
The resulting process has the same law as \(\Pi^{\infty}\).
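This two-step rule is straightforward to implement with a union-find structure over the individuals: individual \(n\) is merged with the cluster of its chosen \(j\) whenever \(L_{n}\leq t\). A minimal sketch (ours, with hand-picked illustrative lengths rather than simulated ones) is given below.

```python
import math, random

def partition_at_time(L, t, rng):
    """Restriction to {1,...,len(L)} of the coalescent at time t, rebuilt from the
    provisional external branch lengths (L_n) by the two rules above; L[0] = L_1 = inf."""
    parent = list(range(len(L)))
    def find(i):                                   # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for n in range(1, len(L)):                     # individual n+1
        j = rng.choice([i for i in range(n) if L[i] >= L[n]])
        if L[n] <= t:                              # the merge with j's cluster has happened by time t
            parent[find(n)] = find(j)
    blocks = {}
    for i in range(len(L)):
        blocks.setdefault(find(i), []).append(i + 1)
    return sorted(blocks.values())

# tiny illustrative example with L_1 = inf and hand-picked L_2, L_3, L_4
print(partition_at_time([math.inf, 1.4, 0.9, 0.3], t=1.0, rng=random.Random(0)))
```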
The recursive construction allows to build \(\Pi^{\infty}\) by sample size expansion. We can view integer \(n\) as individual \(n\) and also as time \(n\). This is the main difference with the usual Kingman's coalescent which fixes the sample size (finite or infinity) first and evolves in (real) time. A direct consequence of the size-expansion point of view is that \((L_{n})\) is not a Markov chain, since to determine the law of \(L_{n}\), we need to know not only \(L_{n-1}\), but all \(L_{i}\) for \(2\leq i\leq n-1\), see (1.1). The main result of this paper is that, surprisingly, there is a Markov chain out of \((L_{n})\), see Theorem 1 in the next section.
## 2 Main results
Let \(M_{j}\) be the \(j\)-th largest length in \((L_{n})_{n\geq 2}\). Note that \(M_{1}<\infty\) almost surely as Kingman's coalescent comes down from infinity (i.e. \(|\Pi^{\infty}(t)|<\infty\) for all \(t>0\), almost surely). Let \(A_{1}\) be the arrival time of \(M_{1}:L_{A_{1}}=M_{1}\). Let \(R_{1}=1.\) For any \(i\geq 2\), define \(A_{i},R_{i}\) by
\[A_{i}=\arg\max_{j}\{L_{j}:j>A_{i-1}\},\quad M_{R_{i}}=L_{A_{i}}. \tag{2.2}\]
In other words,
* the largest length in \((L_{n})_{n\geq 2}\) has index \(A_{1}\);
* for any \(i\geq 2\), the largest among \(\{L_{n}:n>A_{i-1}\}\) has index \(A_{i}\);
Figure 1: On the left, the recursive construction is up to individual \(3\), and on the right is up to individual \(4\). The bold segments are the external branches. On the left, individual \(3\) is just added and thus \(L_{3}\) is the external branch length of individual \(3\). On the right, since individual \(4\) coalesced with individual \(3\), the external branch length of individual \(3\) becomes \(L_{4}\) which is smaller than \(L_{3}\).
* moreover, the length with index \(A_{i}\) is the \(R_{i}\)-th largest among \((L_{n})_{n\geq 2}\).
Thus, \((A)=(A_{i})_{i\geq 1}\) records the arrival times of successive largest lengths in \((L_{n})_{n\geq 2}\), and \((R)=(R_{i})_{i\geq 1}\) records the rankings of these lengths in \((L_{n})_{n\geq 2}\). By definition we have
\[1<A_{i}<A_{i+1},1\leq R_{i}<R_{i+1},\quad\text{ for any }i\geq 1. \tag{2.3}\]
It turns out that \((R,A)\) is a Markov chain, even though \((L_{n})\) is non-Markov.
**Theorem 1**.: _The process \((R,A)\) is a Markov chain such that_
1. \(A_{i}-R_{i}\geq 1\)_, for any_ \(i\geq 1\)_;_
2. \(\mathbb{P}(A_{1}=n)=\frac{2}{n(n+1)}\) _for any_ \(n\geq 2\)_, and_ \(R_{1}=1\)_;_
3. _For any_ \(i\geq 1\) _we have_ \(R_{i}+1\leq R_{i+1}\leq A_{i}\) _and for any_ \(1\leq x\leq A_{i}-R_{i}\)_,_ \[\mathbb{P}(R_{i+1}\geq R_{i}+x\,|\,R_{i},A_{i})=\frac{\binom{2A_{i}-1}{A_{i}-R_{i}-x}}{\binom{2A_{i}-1}{A_{i}-R_{i}-1}},\] (2.4) _and for any_ \(y\geq 1\)_,_ \[\mathbb{P}(A_{i+1}\geq A_{i}+y\,|\,R_{i},A_{i},R_{i+1})=\frac{A_{i}+R_{i+1}}{A_{i}+R_{i+1}+y-1}.\] (2.5)
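For intuition, the chain \((R,A)\) can be simulated directly from these transition laws. The sketch below (ours, not from the paper) inverts the tail probabilities (2.4) and (2.5) at uniform random variables, and gives a quick Monte Carlo look at the behaviour described in Theorem 2 below, where \(R_{i}^{2}/A_{i}\) is approximately \(\text{Exp}(1)\) for large \(i\):

```python
import random
from math import floor

def sample_R_increment(R, A, u):
    """Sample x = R_{i+1} - R_i by sequential search along the tail (2.4):
    P(x >= 1) = 1 and P(x >= j+1) = P(x >= j) * (A-R-j)/(A+R+j)."""
    x, tail = 1, 1.0
    while x < A - R:
        tail *= (A - R - x) / (A + R + x)
        if tail < u:
            break
        x += 1
    return x

def step(R, A, rng):
    """One transition of the Markov chain (R, A) of Theorem 1."""
    R_new = R + sample_R_increment(R, A, rng.random())
    u = 1.0 - rng.random()                       # uniform on (0, 1], avoids division by zero
    y = floor((A + R_new) * (1.0 - u) / u) + 1   # inverts the tail (2.5): P(y >= j) = (A+R_new)/(A+R_new+j-1)
    return R_new, A + y

rng = random.Random(1)
samples = []
for _ in range(400):
    u = 1.0 - rng.random()
    R, A = 1, max(2, floor(2.0 / u))             # A_1 has tail P(A_1 >= n) = 2/n for n >= 2
    for _ in range(15):
        R, A = step(R, A, rng)
    samples.append(R * R / A)
print(sum(samples) / len(samples))   # should be close to 1, consistent with an Exp(1)-like limit
```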
To analyse the asymptotic behaviour of \((R,A)\), we study the convergence of the processes below
\[\mathcal{W}^{(n)}:=\left(\left(\frac{R_{n+1+i}^{2}}{A_{n+1+i}},\,\ln\frac{A_{ n+1+i}}{A_{n+i}}\right)\right)_{i\geq 1},\quad n\geq 0,\]
as \(n\to\infty\). Note that the above processes are not Markov for any \(n\). To state the convergence result, we need some more notations. We use \(\mathcal{L}(\cdot)\) to denote the law of a random object. For \(t\geq 0\), let \(\mathcal{E}_{t}\) be the law as follows:
\[\mathcal{E}_{t}:=\mathcal{L}(X\,|\,X\geq t)=\mathcal{L}(t+X),\quad\text{ for }X\sim\text{Exp}(1),t\geq 0, \tag{2.6}\]
where \(\text{Exp}(1)\) denotes the exponential law with parameter \(1\) and \(\sim\) is to indicate a random variable following a certain law. The above notation entails \(\mathcal{E}_{0}=\text{Exp}(1)\). But we will only use the notation \(\text{Exp}(1)\) when \(t=0\) for explicitness.
Let \((\eta_{1},\eta_{2},\cdots)\) be i.i.d. random variables with common law \(\text{Exp}(1)\). Let \(\xi_{0}\geq 0\) be a random variable independent of \((\eta_{1},\eta_{2},\cdots)\). Inductively, define
\[\xi_{i}:=Z^{(i)}e^{-\eta_{i}},\quad i\geq 1 \tag{2.7}\]
where \(Z^{(i)}\) is a random variable such that
\[Z^{(i)}=\xi_{i-1}+X_{i},\]
where \(X_{i}\sim\text{Exp}(1)\) and is independent of \(\{\xi_{0},\xi_{1},\cdots,\xi_{i-1},\eta_{1},\eta_{2},\cdots\}\). Denote \(\mathcal{W}:=\left((\xi_{i},\eta_{i})\right)_{i\geq 1}\). We use \(\Longrightarrow\) to denote the weak convergence in finite dimensional distributions. Then we have the following asymptotic result for \(\mathcal{W}^{(n)}\).
**Theorem 2**.: _If \(\xi_{0}\sim\text{Exp}(1)\), then \(\mathcal{W}^{(n)}\stackrel{{ n\to\infty}}{{\Longrightarrow}} \mathcal{W}\) which is a stationary Markov chain with_
* \(\xi_{i}\sim\text{Exp}(1)\) _for any_ \(i\geq 1\)_,_
* _and the density function of_ \((\xi_{1},\eta_{1})\) _given by_ \[\frac{\mathrm{d}^{2}}{\mathrm{d}s\mathrm{d}t}\mathbb{P}(\xi_{1}\leq s,\eta_{1} \leq t)=se^{t}e^{-se^{t}},\quad s\geq 0,t\geq 0.\] (2.8)
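A quick way to get a feel for Theorem 2 is to simulate the limiting chain (2.7) directly. The following sketch (ours) checks numerically that the law \(\text{Exp}(1)\) is preserved along the chain:

```python
import math, random

def limit_chain_step(xi, rng):
    """One step of (2.7): xi_i = (xi_{i-1} + X_i) * exp(-eta_i), with X_i, eta_i ~ Exp(1) independent."""
    X, eta = rng.expovariate(1.0), rng.expovariate(1.0)
    return (xi + X) * math.exp(-eta), eta

rng = random.Random(0)
finals = []
for _ in range(20000):
    xi = rng.expovariate(1.0)            # start from the claimed invariant law Exp(1)
    for _ in range(50):
        xi, _ = limit_chain_step(xi, rng)
    finals.append(xi)
print(sum(finals) / len(finals))                      # ~ 1, the Exp(1) mean
print(sum(f > 1.0 for f in finals) / len(finals))     # ~ exp(-1) ~ 0.368
```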
The remaining part of the paper will be devoted to proofs. We will prove Theorem 1 in Section 3 and Theorem 2 in Section 4.
## 3 Proof of Theorem 1
### Aldous's construction
Aldous [1, Section 4.2] introduced a construction of Kingman's coalescent, see also [3, Theorem 2.2]. The idea can be dated back to the seminal paper of Kingman [11]. This construction will play a key role in the proof of Theorem 1.
Let \((\zeta_{i})_{i\geq 2}\) be independent random variables such that \(\zeta_{i}\sim\text{Exp}\left(\binom{i}{2}\right)\). Define a random sequence \(0<\cdots<\tau_{3}<\tau_{2}<\tau_{1}<\infty\) by \(\tau_{j}=\sum_{k=j+1}^{\infty}\zeta_{k}.\) Let \((U_{1},U_{2},\cdots,V_{1},V_{2},\cdots)\) be i.i.d. uniform random variables on \((0,1)\) and independent of \((\tau_{j})_{j\geq 1}\). Note that, almost surely, the random variables \(U_{1},U_{2},\cdots,V_{1},V_{2},\cdots\) are pairwise distinct. Define a function \(T:(0,1)\mapsto[0,\infty)\) such that \(T(U_{j})=\tau_{j}\) for any \(j\geq 1\), and \(T(u)=0\) if \(u\notin\{U_{1},U_{2},\cdots\}\). We call the vertical line from \((U_{i},\tau_{i})\) down to \((U_{i},0)\) the stick \(i\), see Figure 2. Then for any \(t\geq 0\), we define a partition of \(\mathbb{N}\) such that \(i,j\) are in the same block if and only if
\[t>\sup_{\text{any $u\in(0,1)$ between $V_{i},V_{j}$}}T(u).\]
The resulting process takes values in \(\mathcal{P}(\infty)\) for any \(t\geq 0\), and has the same law as \(\Pi^{\infty}\). In this construction, we call \(V_{i}\) the location of individual \(i\), and \(U_{i}\) the location of stick \(i\). Note that this construction applies to finitely many individuals as a natural restriction, see an example of 7 individuals in Figure 2.
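Aldous's construction is also convenient to simulate. The sketch below (ours; the stick heights are truncated at a finite index, which shifts all heights by the same small amount and therefore does not change which stick is the largest between two locations) computes the lineage lengths \(L_{n}\) directly from the block-sharing rule above and uses them to check the law of \(A_{1}\) given in Theorem 1:

```python
import math, random
from math import comb

def simulate_lengths(num_ind, num_sticks, rng):
    """Truncated Aldous construction: stick j sits at U_j with height tau_j = zeta_{j+1} + zeta_{j+2} + ...,
    zeta_k ~ Exp(C(k,2)); individual i sits at V_i.  L_n is the smallest, over i < n, of the largest
    stick between V_i and V_n; only the nearest earlier individual on each side of V_n matters,
    since enlarging the interval can only increase the largest stick inside it."""
    zeta = [rng.expovariate(comb(k, 2)) for k in range(2, num_sticks + 2)]
    tau, s = [], 0.0
    for z in reversed(zeta):
        s += z
        tau.append(s)
    tau.reverse()                                   # tau[0] = tau_1 > tau[1] = tau_2 > ...
    U = [rng.random() for _ in tau]
    V = [rng.random() for _ in range(num_ind)]
    L = [math.inf]                                  # L_1 = infinity by convention
    for n in range(1, num_ind):                     # individual n+1 sits at V[n]
        neighbours = (max((v for v in V[:n] if v < V[n]), default=None),
                      min((v for v in V[:n] if v > V[n]), default=None))
        best = math.inf
        for other in neighbours:
            if other is None:
                continue
            lo, hi = min(other, V[n]), max(other, V[n])
            between = [t for u, t in zip(U, tau) if lo < u < hi]
            best = min(best, max(between) if between else 0.0)
        L.append(best)
    return L, tau[0]

rng = random.Random(0)
trials, hits = 1500, {}
for _ in range(trials):
    L, tau1 = simulate_lengths(num_ind=25, num_sticks=150, rng=rng)
    idx = max(range(1, 25), key=lambda j: L[j])
    if L[idx] == tau1:                              # i.e. A_1 <= 25; larger values of A_1 are simply not counted
        hits[idx + 1] = hits.get(idx + 1, 0) + 1
for n in (2, 3, 4, 5):
    print(n, hits.get(n, 0) / trials, 2 / (n * (n + 1)))   # empirical frequency vs Theorem 1(2)
```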
Recall the sequence \((M_{j})\) introduced at the beginning of Section 2. Using Aldous's construction, it is clear that \(M_{j}=\tau_{j}\) for any \(j\geq 1\). We shall from now on only use the notation \((\tau_{j})\) as further discussions are based on Aldous's construction.
### Identify \((L_{n})\) from Aldous's construction
Aldous's construction gives a realisation of \(\Pi^{\infty}\), and thus we can use it to identify \((L_{n})\) and \((R,A)\) in the recursive construction. We first deal with \((L_{n})\). We start with this question: find \(k\) such that \(L_{k}=\tau_{1}\). Recall \(U_{1},U_{2},\cdots,V_{1},V_{2},\cdots\) in Aldous's construction.
**Lemma 1**.: _Let \(k\) be the smallest integer such that \(U_{1}\) is between \(V_{k}\) and \(V_{1}\). Then \(L_{k}=\tau_{1}\)._
Proof.: The stick \(1\) at \(U_{1}\) splits \((0,1)\) into two subintervals. WLOG, assume that \(V_{1},V_{2},\cdots,V_{k-1}\) are on \((0,U_{1})\) and \(V_{k}\) is on \((U_{1},1)\). By Aldous's construction, since the stick \(1\) at \(U_{1}\) is of length \(\tau_{1}\) and is the largest among all sticks, all these \(k\) individuals will merge into one cluster at time \(\tau_{1}\), and the \(k-1\) individuals on \((0,U_{1})\) merge into one cluster at a time strictly smaller than \(\tau_{1}\). Then the only possibility is that individual \(k\) is connected to the coalescent process of the first \(k-1\) individuals at time \(\tau_{1}\). Then the lemma is proved.
Next we determine \(L_{n}\) for any \(n\geq 2\) (recall \(L_{1}=\infty\)).
**Corollary 1**.: _Let \(n\geq 2\). Let \(k\) be the smallest integer such that \(V_{n}\) is the only element of \(\{V_{1},V_{2},\cdots,V_{n}\}\) lying between \(U_{k}\) and some \(U_{j}\) with \(1\leq j<k\), or between \(U_{k}\) and \(0\), or between \(U_{k}\) and \(1\). Then \(L_{n}=\tau_{k}\)._
Proof.: The first \(k-1\) sticks divide \((0,1)\) into \(k\) subintervals. Individual \(n\) is located with some other individuals on one of the subintervals, say \((a,b)\), where \(a,b\) are distinct elements in \(\{U_{1},U_{2},\cdots,U_{k-1},0,1\}\). The arrival of stick \(k\) at \(U_{k}\) will separate individual \(n\) from others on \((a,b)\). This implies \(L_{n}=\tau_{k}\), following a similar reasoning as in Lemma 1. Then the proof is finished.
We present three more corollaries which will be used for the identification of \((R,A)\). The first one finds the value of \(n\) such that \(L_{n}=\tau_{k}\) for \(k\geq 1\), generalising Lemma 1.
Figure 2: Aldous’s construction of Kingman’s coalescent. The vertical axis is the time axis. The stick \(i\) is at \(U_{i}\) with height \(\tau_{i}\). The \(V_{i}\)’s (the crosses) are the locations of individuals. For the 7 individuals in the figure, the partition at time \(t\) is \(\{\{2,4,6\},\{5\},\{1\},\{3,7\}\}\).
**Corollary 2**.: _Let \(k\geq 1\). Let \(X\) be the closest element to \(U_{k}\) from the left in \(\{U_{1},U_{2},\cdots,U_{k-1},0,1\}\) and \(Y\) from the right. Let \(n\) be the smallest integer such that the interval \((X,U_{k})\) contains at least one element from \(\{V_{1},V_{2},\cdots,V_{n}\}\) and the same for \((U_{k},Y)\). Then \(L_{n}=\tau_{k}\)._
Proof.: For \(k=1\), it is a restatement of Lemma 1. For \(k\geq 2\), we apply Corollary 1 to obtain that for such a unique \(n\) we have \(L_{n}=\tau_{k}\). Then the proof is finished.
The second one presents a special scenario where an upper bound for \(n\) in the above corollary can be given.
**Corollary 3**.: _Let \(s>1\) and consider \(V_{1},V_{2},\cdots,V_{s}\). Assume that for \(k\geq 1\), \(U_{k}\) is neighbour to some \(V_{i},V_{j}\) for \(1\leq i\neq j\leq s\) (i.e. \(U_{k}\) is between \(V_{i}\) and \(V_{j}\); there exists no other \(U_{a}\) or \(V_{b}\) between \(V_{i},V_{j}\) for \(1\leq a\leq k-1,1\leq b\leq s\)). Then the individual \(n\) with \(L_{n}=\tau_{k}\) must have \(n\leq s\)._
Proof.: WLOG, assume \(V_{i}<U_{k}<V_{j}\). Let \(X\) be the closest element to \(U_{k}\) from the left in \(\{U_{1},U_{2},\cdots,U_{k-1},0,1\}\) and \(Y\) from the right. Then we have
\[X<V_{i}<U_{k}<V_{j}<Y.\]
Then by Corollary 2, the individual \(n\) such that \(L_{n}=\tau_{k}\) must have \(n\leq i\lor j\leq s\). Then the proof is finished.
The last one finds a special scenario where a lower bound for \(n\) in Corollary 2 can be given.
**Corollary 4**.: _Let \(m\geq 1,k\geq 1\). If there is no element from \(\{V_{1},V_{2},\cdots,V_{m}\}\) that is located between \(U_{k}\) and some \(U_{j}\) for \(1\leq j<k\), or between \(U_{k}\) and \(0\), or between \(U_{k}\) and \(1\), then the individual \(n\) with \(L_{n}=\tau_{k}\) must have \(n>m\)._
Proof.: To find \(n\) such that \(L_{n}=\tau_{k}\), we need to consider more \(V^{\prime}s\) so that the condition in Corollary 2 is satisfied. Therefore \(n>m\).
### Identify \((R,A)\) from Aldous's construction
Now we provide an algorithm to identify \((R,A)\) based on Corollary 2.
**Corollary 5**.: _We have \(R_{1}=1\) by definition and the value of \(A_{1}\) is given by Lemma 1 or Corollary 2: \(A_{1}\) is the smallest integer such that \(U_{1}\) is between \(V_{A_{1}}\) and some \(V_{j}\) for \(1\leq j<A_{1}\). We have \(L_{A_{1}}=\tau_{1}\)._
_In general, given \((R_{i},A_{i})\) for some \(i\geq 1\), we perform the following loop to obtain \((R_{i+1},A_{i+1})\)._
* _for_ \(j\geq R_{i}+1\)__
* _find_ \(n\) _using Corollary_ 2 _such that_ \(L_{n}=\tau_{j}\)_. If_ \(n<A_{i}\)_, then continue the loop with_ \(j=j+1\)_; otherwise (i.e._ \(n>A_{i}\)) let_ \(A_{i+1}=n\) _and_ \(R_{i+1}=j\) _(hence_ \(L_{A_{i+1}}=\tau_{R_{i+1}}\)_), and get out of the loop._
Proof.: We only need to check how the algorithm produces \((R_{i+1},A_{i+1})\) given \((R_{i},A_{i})\). We count \(R_{i}+1,R_{i}+2,R_{i}+3,\cdots\) until the first integer \(k\) such that the lineage length of rank \(k\) belongs to an individual, say \(m\), which is larger than \(A_{i}\). Then we have found \(R_{i+1}=k,A_{i+1}=m\), based on the definition of \((R,A)\). This is a restatement of the algorithm in the corollary and thus the proof is finished.
It is clear that Corollary 5 is like a shell with the core being Corollary 2. We will provide another way of identifying \((R,A)\) so that in Corollary 5 we do not need to find the exact value of \(n\) to know that \(n<A_{i}\) (thanks to Corollary 3) and there is a natural way of determining \(n\) if \(n>A_{i}\) (thanks to Corollary 4). The proof of Theorem 1 will rely on this new approach that we introduce in the next section.
### A simpler way of identifying \((R,A)\) from Aldous's construction
Note that Aldous's construction has two systems of notation: \(U_{i}\)'s and \(V_{i}\)'s, and also sticks and individuals. Stick \(i\) is at \(U_{i}\) and individual \(i\) is at \(V_{i}\). Once \(U_{i}\)'s and \(V_{i}\)'s are given, the sticks and individuals are automatically planted at their locations. To facilitate the description of the simpler way of identification, we will plant the sticks and individuals successively in the order of \(U_{1},U_{2},\cdots\) and \(V_{1},V_{2},\cdots\) respectively. The implementation of the identification can be decomposed into two steps.
Step 1: plant the stick \(1\) at \(U_{1}\). Plant individuals \(1,2,3,\cdots\) successively on \((0,1)\) at locations \(V_{1},V_{2},V_{3},\cdots\) until the first integer \(n\) (\(n\geq 2\)) such that \(U_{1}\) is between \(V_{n}\) and some \(V_{j}\) for \(j<n\). We know from Lemma 1 that \(A_{1}=n\) and \(R_{1}=1\).
Step 2: we proceed by induction to show the transition from \((R_{i},A_{i})\) to \((R_{i+1},A_{i+1})\) for any \(i\geq 1\). Assume the following is true:
* We know \(R_{i}\) already, and after planting individual \(k\) we obtain \(A_{i}=k\);
* \(U_{1},U_{2},\cdots,U_{R_{i}}\) and \(V_{1},V_{2},\cdots,V_{A_{i}}\) are planted locations;
* Moreover, every \(U_{l}\) for \(1\leq l\leq R_{i}\) is neighbour to some \(V_{k}\) and \(V_{j}\) for \(1\leq k\neq j\leq A_{i}\) (i.e. \(U_{l}\) is between \(V_{k},V_{j}\); there exists no other \(U_{s}\) or \(V_{t}\) between \(V_{k},V_{j}\) for \(1\leq s\leq R_{i},1\leq t\leq A_{i}\)).
Note that the notion of neighbour here is based on the planted locations. The above assumptions hold for \(i=1\), see Step 1. To obtain \((R_{i+1},A_{i+1})\), we do the following.
* Finding \(R_{i+1}\). Plant remaining sticks successively until the first stick \(n\) at \(U_{n}\), which is not neighbour to some \(V_{k}\) and \(V_{j}\) for \(1\leq k\neq j\leq A_{i}\). Then we have found \(R_{i+1}=n\). Note that in this case, there are three possibilities: 1. \(U_{n}\) is neighbour to some \(U_{l}\) and \(V_{j}\) for \(1\leq l\leq n-1,1\leq j\leq A_{i}\); 2. \(U_{n}\) is neighbour to \(0\) and some \(V_{j}\) for \(1\leq j\leq A_{i}\); 3. \(U_{n}\) is neighbour to \(1\) and some \(V_{j}\) for \(1\leq j\leq A_{i}\).
* Finding \(A_{i+1}\). Next we plant the remaining individuals successively until the first individual \(m\) such that \(U_{n}\) is neighbour to \(V_{m}\) and \(V_{j}\) where \(V_{j}\) is from any of the above \(3\) cases. Then we have found \(A_{i+1}=m\). Moreover, the assumptions made at the beginning of Step 2 hold for \((R_{i+1},A_{i+1})\).
**Proposition 1**.: _The above procedure identifies \((R,A)\)._
Proof.: Starting from \((R_{i},A_{i})\), before finding \(R_{i+1}\), we are in the scenario described in Corollary 3. That is, for any \(R_{i}+1\leq n<R_{i+1}\), we have \(L_{s}=\tau_{n}\) for some \(s\leq A_{i}\). The searching of \(R_{i+1}\) stops at \(n\) when we are in the scenario described in Corollary 4 as more individuals need to be planted successively until finding the individual (which is \(A_{i+1}\)) that has the lineage length \(\tau_{n}\) (we use Corollary 2 to see why this individual has length \(\tau_{n}\)) and thus \(R_{i+1}=n\). The proof is finished.
### Proof of Theorem 1
Proof of Theorem 1 - (1).: First of all \(R_{1}=1\) and \(A_{1}\geq 2\), then \(A_{1}-R_{1}\geq 1\). For any \(i\geq 2\), recall that \(L_{A_{i}}\) is the largest among all lengths that arrive later than \(A_{i-1}\) in the recursive construction, and is the \(R_{i}\)-th largest among all in \((L_{n})_{n\geq 2}\). If \(\min_{2\leq j\leq A_{i-1}}L_{j}>L_{A_{i}}\), then \(\{L_{2},\cdots,L_{A_{i-1}}\}\) constitute the \(A_{i-1}-1\) largest lengths in \((L_{n})_{n\geq 2}\). That implies \(R_{i}=A_{i-1}\). If \(L_{A_{i}}>\min_{2\leq j\leq A_{i-1}}L_{j}\), then \(R_{i}<A_{i-1}\). Therefore in either case we have \(R_{i}\leq A_{i-1}\). Moreover by (2.3), we have \(A_{i}\geq A_{i-1}+1\). Then we conclude that \(A_{i}-R_{i}\geq 1\) for any \(i\geq 1\), which means statement (1) holds true.
Proof of Theorem 1 - (2).: To determine the law of \(A_{1}\), we first recall the following lemma which is well known, see for instance [6].
**Lemma 2**.: _Assume there are \(k(k\geq 1)\) i.i.d. uniform random variables on \((0,1)\). Then \((0,1)\) is cut into \(k+1\) subintervals whose lengths are exchangeable and the vector of lengths follows the uniform distribution on a standard \(k\)-simplex. If we plant another independent uniform random variable on \((0,1)\), then it will enter one of the \(k+1\) subintervals with equal probability. Conditioned on entering any subinterval, the resulting \(k+2\) subintervals are again exchangeable and the new vector follows the uniform distribution on a standard \((k+1)\)-simplex._
Now we recall that \((U_{1},U_{2},\cdots,V_{1},V_{2},\cdots)\) are i.i.d. uniform on \((0,1)\). Then using the above lemma and Lemma 1, we have for any \(n\geq 2\),
\[\mathbb{P}(A_{1}=n) =2\mathbb{P}(\max_{1\leq i\leq n-1}V_{i}<U_{1},V_{n}>U_{1})\] \[=2\mathbb{P}(V_{1}<U_{1})\prod_{j=1}^{n-2}\mathbb{P}(V_{j+1}<U_{1 }\,|\max_{1\leq i\leq j}V_{i}<U_{1})\mathbb{P}(V_{n}>U_{1}\,|\max_{1\leq i\leq n -1}V_{i}<U_{1})\] \[=2\times\frac{1}{2}\times\prod_{j=1}^{n-2}\frac{j+1}{j+2}\times \frac{1}{n+1}=\frac{2}{n(n+1)}.\]
Then statement (2) is proved.
Proof of Theorem 1 - (3).: We will show the probability mass functions for the two laws to deduce the tail probabilities. More precisely, we shall first prove the following:
* For any \(i\geq 1\) we have \(R_{i}+1\leq R_{i+1}\leq A_{i}\) and for any \(1\leq x\leq A_{i}-R_{i}\), \[\mathbb{P}(R_{i+1}=R_{i}+x\,|\,R_{i},A_{i}) =\frac{2R_{i}+2x}{A_{i}+R_{i}+x}\,\biggl{(}\prod_{k=1}^{x-1}\frac{A_{i}-R_{i}-k}{A_{i}+R_{i}+k}\biggr{)}\] (3.9) \[=\frac{2R_{i}+2x}{A_{i}+R_{i}+1}\,\binom{2A_{i}}{A_{i}-R_{i}-x}\biggl{/}\binom{2A_{i}}{A_{i}-R_{i}-1},\]
* and for any \(y\geq 1\), \[\mathbb{P}(A_{i+1}=A_{i}+y\,|\,R_{i},A_{i},R_{i+1}) =\frac{1}{A_{i}+R_{i+1}+y}\prod_{k=1}^{y-1}\biggl{(}\frac{A_{i}+R _{i+1}+k-1}{A_{i}+R_{i+1}+k}\biggr{)}\] (3.10) \[=\frac{A_{i}+R_{i+1}}{(A_{i}+R_{i+1}+y-1)(A_{i}+R_{i+1}+y)}.\]
We will use the identification procedure in Section 3.4. Under the assumptions in Step 2, \((0,1)\) is divided into \(R_{i}+A_{i}+1\) subintervals. Among them, there are three categories of subintervals:
1. there are \(R_{i}\) pairs of subintervals such that each pair share the same \(U_{k}\) as a common end for some \(1\leq k\leq R_{i}\);
2. there are two subintervals such that either of them has one end being \(0\) or \(1\) (cannot have both \(0,1\) as ends);
3. the remaining \(A_{i}-R_{i}-1\) subintervals can only have ends from \(\{V_{j}:1\leq j\leq A_{i}\}\).
Then following Step 2, we plant remaining sticks starting from \(R_{i}+1\) successively until the first stick \(R_{i}+x\) that is not neighbour to some \(V_{k}\) and \(V_{j}\) for \(1\leq k\neq j\leq A_{i}\). Note that every stick \(j\) for \(R_{i}+1\leq j<R_{i}+x\) enters a subinterval of category \(3\), and thus killing one subinterval of category \(3\) and adding a pair of subintervals of category \(1\). The stick \(R_{i}+x\) will enter a subinterval of category \(1\) or \(2\). In other words,
\[\mathbb{P}(R_{i+1}=R_{i}+x\,|\,R_{i},A_{i})=\mathbb{P}(\text{ stick $j$ enters a subinterval of category $3$ for $R_{i}+1\leq j<R_{i}+x$},\] \[\text{ and stick $R_{i}+x$ enters a subinterval of category $1$ or $2\,|\,R_{i},A_{i}$})\]
Then clearly we have \(R_{i}+1\leq R_{i+1}\leq A_{i}\) since the number of subintervals of category \(3\) is \(A_{i}-R_{i}-1\). Using Lemma 2 and conditional probability formula, the above display yields the first equality in (3.9). The second equality is a direct simplification.
The next step is to plant remaining individuals until stick \(R_{i}+x\) is again neighbour to two planted individuals. We omit the proof of (3.10), which is very similar to that of (3.9).
Next we deduce the tail probabilities. We will only show (2.4) as (2.5) is straightforward. If \(x=A_{i}-R_{i}\), then we obtain
\[\mathbb{P}(R_{i+1}\geq A_{i}\,|\,R_{i},A_{i})=\frac{1}{\binom{2A_{i}-1}{A_{i} -R_{i}-1}}=\mathbb{P}(R_{i+1}=A_{i}\,|\,R_{i},A_{i}).\]
For \(1\leq x<A_{i}-R_{i}\), we show that (2.4) implies (3.9):
\[\mathbb{P}(R_{i+1}\geq R_{i}+x\,|\,R_{i},A_{i})-\mathbb{P}(R_{i+1}\geq R_{i}+x+1\,|\,R_{i},A_{i})\] \[= \frac{\binom{2A_{i}-1}{A_{i}-R_{i}-x}}{\binom{2A_{i}-1}{A_{i}-R_{i}-1}}-\frac{\binom{2A_{i}-1}{A_{i}-R_{i}-x-1}}{\binom{2A_{i}-1}{A_{i}-R_{i}-1}}\] \[= \frac{A_{i}+R_{i}+x}{A_{i}+R_{i}+1}\,\frac{\binom{2A_{i}}{A_{i}-R_{i}-x}}{\binom{2A_{i}}{A_{i}-R_{i}-1}}-\frac{A_{i}-R_{i}-x}{A_{i}+R_{i}+1}\,\frac{\binom{2A_{i}}{A_{i}-R_{i}-x}}{\binom{2A_{i}}{A_{i}-R_{i}-1}}\] \[= \frac{2R_{i}+2x}{A_{i}+R_{i}+1}\,\frac{\binom{2A_{i}}{A_{i}-R_{i}-x}}{\binom{2A_{i}}{A_{i}-R_{i}-1}}\]
which is exactly the probability \(\mathbb{P}(R_{i+1}=R_{i}+x\,|\,R_{i},A_{i})\) given by (3.9). Then we conclude that (2.4) holds true.
Finally, all statements in Theorem 1 are proved, and the proof is complete.
## 4 Proof of Theorem 2
### Preliminaries
In this section, we prove two lemmas for preparation. We use \(\xrightarrow{w}\) to denote the weak convergence of probability measures; \(\xrightarrow{d}\) to denote the convergence in distribution for random variables and \(\stackrel{{ d}}{{=}}\) for being equal in distribution.
**Lemma 3**.: _Let \(n\geq 1.\) Consider a random variable \(W:=W_{n}\) such that_
\[\mathbb{P}(W=k)=ck\binom{2n}{n-k},\quad 0\leq k\leq n,\]
_where \(c>0\) is a normalising constant. Let \(t>0.\) Then uniformly in \(s\in[0,t]\), we have_
\[\sqrt{n}\mathbb{P}(W=\left\lfloor s\sqrt{n}\right\rfloor)\longrightarrow 2se^{-s ^{2}},\text{ as }n\to\infty. \tag{4.11}\]
_As a consequence,_
\[\mathcal{L}\left(\frac{W^{2}}{n}\right)\xrightarrow{w}\text{Exp}(1).\]
Proof.: It suffices to prove (4.11). We first find the asymptotic equivalent of \(c\). Let \(\alpha\sim B(2n,1/2)\), a binomial random variable with parameters \(2n\) and \(1/2\). Note that
\[\frac{1}{c}=\sum_{k=0}^{n}k\binom{2n}{n-k}=\sum_{k=0}^{n}n\binom{2n}{n-k}- \sum_{k=0}^{n}(n-k)\binom{2n}{n-k}=:I_{1}-I_{2}.\]
For the first term, we have
\[2^{-2n}I_{1}=n\mathbb{P}(\alpha\leq n)=\frac{n}{2}+\frac{n}{2}\mathbb{P}(\alpha=n)=\frac{n}{2}+\frac{n}{2}2^{-2n}\binom{2n}{n}=\frac{n}{2}+\frac{\sqrt{n}}{2\sqrt{\pi}}(1+o(1)),\]
where the last equality is due to Stirling formula. For the second term, we have
\[2^{-2n}I_{2} =\sum_{k=0}^{n}k\binom{2n}{k}2^{-2n}=2n\sum_{k=1}^{n}\binom{2n-1}{k-1}2^{-2n}=2n\sum_{k=0}^{n-1}\binom{2n-1}{k}2^{-2n}\] \[=2n\cdot\frac{1}{2}\sum_{k=0}^{2n-1}\binom{2n-1}{k}2^{-2n}=2n\cdot\frac{2^{2n-1}}{2}\cdot 2^{-2n}=\frac{n}{2}.\]
Then we obtain that
\[\frac{1}{c}=\frac{\sqrt{n}}{2\sqrt{\pi}}2^{2n}(1+o(1)). \tag{4.12}\]
Therefore, uniformly for \(s\in[0,t]\), as \(n\to\infty\),
\[\mathbb{P}(W=\left\lfloor s\sqrt{n}\right\rfloor)=\frac{2\sqrt{\pi}}{\sqrt{n} }2^{-2n}\left\lfloor s\sqrt{n}\right\rfloor\binom{2n}{n-\left\lfloor s\sqrt{n }\right\rfloor}(1+o(1))=e^{-s^{2}}\frac{2s}{\sqrt{n}}(1+o(1)),\]
where the first \(o(1)\) comes from (4.12) and the second equality follows from Stirling formula, and both \(o(1)\)'s converge to \(0\) uniformly in \(s\in[0,t]\) as \(n\to\infty\). Then the proof is finished.
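Lemma 3 can also be checked numerically for moderate \(n\); the following snippet (ours) compares the exact tail of \(W^{2}/n\) with the \(\text{Exp}(1)\) tail:

```python
from math import comb, exp, floor, sqrt

def tail_W_squared_over_n(n, t):
    """Exact P(W^2/n > t) for W as in Lemma 3, i.e. P(W = k) proportional to k * C(2n, n-k)."""
    total = sum(k * comb(2 * n, n - k) for k in range(n + 1))
    kmin = floor(sqrt(t * n)) + 1
    return sum(k * comb(2 * n, n - k) for k in range(kmin, n + 1)) / total

for t in (0.5, 1.0, 2.0):
    print(t, tail_W_squared_over_n(200, t), exp(-t))   # the two values should be close
```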
**Lemma 4**.: _The process \(\left(\frac{R_{i}^{2}}{A_{i}}\right)_{i\geq 1}\) is tight._
Proof.: Tightness means that for any \(\varepsilon>0\), there exists a compact set \(E\subset[0,\infty)\) such that for any \(i\geq 1\), \(\mathbb{P}\left(\frac{R_{i}^{2}}{A_{i}}\notin E\right)\leq\varepsilon.\) To prove this, it suffices to show that there exists \(c>0\) such that the following holds true
\[\mathbb{E}\left[\frac{R_{i}}{\sqrt{A_{i}}}\right]\leq c,\quad\forall i\geq 1. \tag{4.13}\]
Indeed, applying Markov inequality, we obtain that \(\mathbb{P}\left(\frac{R_{i}^{2}}{A_{i}}>N\right)=\mathbb{P}\left(\frac{R_{i}} {\sqrt{A_{i}}}>\sqrt{N}\right)\leq\frac{c}{\sqrt{N}}\leq\varepsilon\) if we take \(N\) large. Then the compact set can be set as \(E=[0,N]\) and the tightness is obtained.
Note that using (2.4), we have
\[\mathbb{E}[A_{i}-R_{i+1}\,|\,R_{i},A_{i}]=\sum_{x=1}^{A_{i}-R_{i}}(A_{i}-R_{i}-x)\,\mathbb{P}(R_{i+1}=R_{i}+x\,|\,R_{i},A_{i})=\sum_{x=1}^{A_{i}-R_{i}-1}(A_{i}-R_{i}-x)\,\frac{\binom{2A_{i}-1}{A_{i}-R_{i}-x}-\binom{2A_{i}-1}{A_{i}-R_{i}-x-1}}{\binom{2A_{i}-1}{A_{i}-R_{i}-1}}.\]
Bounding this sum, and using (3.10) to control \(A_{i+1}\) given \(R_{i+1}\), one obtains constants \(c_{0}>0\) and \(c_{1}\in(0,1)\) such that, for \(i\) large enough,
\[\mathbb{E}\left[\frac{R_{i+1}}{\sqrt{A_{i}}}\,|\,R_{i},A_{i}\right]\leq\begin{cases}\frac{R_{i}}{\sqrt{A_{i}}}+\frac{1}{\sqrt{A_{i}}}+\frac{\sqrt{A_{i}}}{R_{i}},&\text{ if }R_{i}>\sqrt{A_{i}},\\ \frac{R_{i}}{\sqrt{A_{i}}}+\frac{1}{\sqrt{A_{i}}}+c_{0},&\text{ if }R_{i}\leq\sqrt{A_{i}},\end{cases}\qquad\mathbb{E}\left[\frac{1}{\sqrt{A_{i+1}}}\,|\,R_{i},A_{i},R_{i+1}\right]\leq\frac{c_{1}}{\sqrt{A_{i}}}.\]
Combining these two estimates, we obtain for \(i\) large enough,
\[\mathbb{E}\left[\frac{R_{i+1}}{\sqrt{A_{i+1}}}\,|\,R_{i},A_{i}\right] =\mathbb{E}\left[\mathbb{E}\left[\frac{R_{i+1}}{\sqrt{A_{i+1}}}\,|\,R_{i},A_{i},R_{i+1}\right]|\,R_{i},A_{i}\right]\] \[\leq c_{1}\mathbb{E}\left[\frac{R_{i+1}}{\sqrt{A_{i}}}\,|\,R_{i},A_{i}\right]\] \[\leq\begin{cases}c_{1}\frac{R_{i}}{\sqrt{A_{i}}}+c_{1}\frac{1}{\sqrt{A_{i}}}+c_{1}\frac{\sqrt{A_{i}}}{R_{i}},&\text{ if }R_{i}>\sqrt{A_{i}}\\ c_{1}\frac{R_{i}}{\sqrt{A_{i}}}+c_{1}\frac{1}{\sqrt{A_{i}}}+c_{0}c_{1},&\text{ if }R_{i}\leq\sqrt{A_{i}}\end{cases}\] \[\leq\begin{cases}c_{1}\frac{R_{i}}{\sqrt{A_{i}}}+2c_{1},&\text{ if }R_{i}>\sqrt{A_{i}}\\ c_{1}\frac{R_{i}}{\sqrt{A_{i}}}+c_{1}(1+c_{0}),&\text{ if }R_{i}\leq\sqrt{A_{i}}\end{cases}\] \[\leq c_{1}\frac{R_{i}}{\sqrt{A_{i}}}+c_{2}\]
where \(c_{2}=\max\{2c_{1},c_{1}(1+c_{0})\}\), and for the third inequality we used (2.3). Therefore, we obtain the following
\[\mathbb{E}\left[\frac{R_{i+1}}{\sqrt{A_{i+1}}}\right]\leq c_{1}\mathbb{E} \left[\frac{R_{i}}{\sqrt{A_{i}}}\right]+c_{2},\quad\text{if }i\text{ is large enough}. \tag{4.20}\]
Since \(0<c_{1}<1\), the above display yields (4.13). Then the proof is finished.
### Proof of Theorem 2
We start with two propositions. Either of them proves a partial result of Theorem 2 whose proof is provided at the end of this section. The first proposition is about the ergodicity of the Markov chain \((\xi_{i})_{i\geq 0}\), introduced in (2.7). Let \(P\) be the transition kernel of \((\xi_{i})_{i\geq 0}.\) Introduce the following weighted supremum norm for a function \(f:[0,\infty)\mapsto\mathbb{R}\)
\[\|f\|=\sup_{x\geq 0}\frac{|f(x)|}{x+1}.\]
Let \(C_{b}\) be the set of bounded continuous functions from \([0,\infty)\) to \(\mathbb{R}\).
**Proposition 2** (Geometric ergodicity).: _The Markov chain \((\xi_{i})_{i\geq 0}\) admits a unique invariant measure \(\mu=\text{Exp}(1)\). Moreover, there exists \(C>0\) and \(\rho\in(0,1)\) such that_
\[\|P^{n}f-\mu(f)\|\leq C\rho^{n}\|f-\mu(f)\|,\quad\text{ for any }f\in C_{b}. \tag{4.21}\]
Proof.: First of all, we verify that \(\text{Exp}(1)\) is an invariant measure. Thanks to the construction (2.7), it suffices to show that, if \(\xi_{0}\sim\text{Exp}(1)\), then \(\xi_{1}\sim\text{Exp}(1)\). Recall \(\xi_{1}=Z^{(1)}e^{-\eta_{1}}\) with \(Z^{(1)}=\xi_{0}+X_{1}\) and \(\xi_{0},X_{1},\eta_{1}\) are independent and \(X_{1},\eta_{1}\) follow the same law \(\text{Exp}(1)\) (see (2.7)). If we assume \(\xi_{0}\sim\text{Exp}(1)\), then for any integer \(k\geq 0\), we have
\[\mathbb{E}[\xi_{1}^{k}] =\mathbb{E}[(Z^{(1)})^{k}]\mathbb{E}[e^{-k\eta_{1}}]\] \[=\frac{1}{k+1}\mathbb{E}[(\xi_{0}+X_{1})^{k}]\] \[=\frac{1}{k+1}\sum_{i=0}^{k}\binom{k}{i}\mathbb{E}[\xi_{0}^{i}] \mathbb{E}[X_{1}^{k-i}]=k!=\mathbb{E}[\xi_{0}^{k}].\]
Since the above display is true for any \(k\), we conclude that \(\xi_{1}\overset{d}{=}\xi_{0}\sim\text{Exp}(1)\). Here we used the method of moments, see [4, Theorem 30.1]. Thus \(\text{Exp}(1)\) is indeed an invariant measure of \((\xi_{i})_{i\geq 0}\).
Note that (4.21) implies that \(\mu=\text{Exp}(1)\) is the unique invariant measure. For the rest, we will prove (4.21) by applying Theorem 3.6 in [9] in our context. It suffices to verify the following condition which combines Assumption 3.1 and Remark 3.5 in [9].
_Condition: (1) Let \(V\) be the identity function: \(V(x)=x,\forall x\geq 0\). Then \(PV(x)=\frac{V(x)+1}{2}\) for any \(x\geq 0\). (2) Let \(R>0\). Then for any \(f\) with \(\sup_{x\geq 0}|f(x)|\leq 1\), we have_
\[|Pf(x)-Pf(y)|\leq 2(1-e^{-R}),\quad\forall\,0\leq x\leq y\leq R.\]
The verification of the above condition is as follows. Note that
\[PV(x)=\mathbb{E}[V(\xi_{1})\,|\,\xi_{0}=x]=\mathbb{E}[Z^{(1)}]\mathbb{E}[e^{- \eta_{1}}]=\frac{x+1}{2}=\frac{V(x)+1}{2}.\]
Then the first statement is proved. For the second statement, recall that \(\xi_{0},X_{1},\eta_{1}\) are independent. Then we observe that
\[Pf(x) =\mathbb{E}[f(Z^{(1)}e^{-\eta_{1}})\,|\,\xi_{0}=x]\] \[=\mathbb{E}[f(Z^{(1)}e^{-\eta_{1}})\mathbf{1}_{X_{1}\geq y-x}\,|\,\xi_{0}=x]+\mathbb{E}[f(Z^{(1)}e^{-\eta_{1}})\mathbf{1}_{X_{1}<y-x}\,|\,\xi_{0}=x]\] \[=\mathbb{P}(X_{1}\geq y-x\,|\,\xi_{0}=x)\,\mathbb{E}[f((\xi_{0}+X_{1})e^{-\eta_{1}})\,|\,X_{1}\geq y-x,\xi_{0}=x]\] \[\qquad\quad+\mathbb{E}[f(Z^{(1)}e^{-\eta_{1}})\mathbf{1}_{X_{1}<y-x}\,|\,\xi_{0}=x]\] \[=\mathbb{P}(X_{1}\geq y-x)Pf(y)+\mathbb{E}[f(Z^{(1)}e^{-\eta_{1}})\mathbf{1}_{X_{1}<y-x}\,|\,\xi_{0}=x].\]
Here for the last equality we used that \(\mathcal{L}(X_{1}\,|\,X_{1}\geq y-x)=\mathcal{L}(X_{1}+y-x)\) since \(X_{1}\sim\mathrm{Exp}(1)\). Then since \(|f(\cdot)|\leq 1\) and \(0\leq x\leq y\leq R\), we obtain
\[|Pf(x)-Pf(y)|\leq 2\mathbb{P}(X_{1}<y-x)\leq 2(1-e^{-R}).\]
Thus the condition holds true and the proof is finished.
The second proposition is to prove the one dimensional convergence of \(\left(\frac{R_{i}^{2}}{A_{i}}\right)\).
**Proposition 3**.: _The law of \(\frac{R_{i}^{2}}{A_{i}}\) converges weakly to Exp(1) as \(i\to\infty\)._
Proof.: We claim that it suffices to prove that for any \(t\geq 0\),
\[\sup_{0\leq s\leq t}\left|\mathbb{E}\left[f\left(\frac{R_{i+1}^{2}}{A_{i+1}} \right)\,|\,R_{i}=\lfloor\sqrt{sA_{i}}\rfloor\right]-Pf(s)\right|\xrightarrow{ i\to\infty}0. \tag{4.22}\]
Note that here we allow \(s=0\) which means \(R_{i}=0\). For Kingman's coalescent, \(R_{i}\) can never be \(0\), but the transition probabilities (3.9) and (3.10) do allow the more general case with \(R_{i}=0.\) For \(A_{i}\), we still assume \(A_{i}\to\infty\), as in (2.3). Thus \(i\to\infty\) is the same as \(A_{i}\to\infty\).
Indeed, if (4.22) is true, then \(\mathcal{L}\left(\frac{R_{i+1}^{2}}{A_{i+1}}\,|\,R_{i}=\lfloor\sqrt{sA_{i}}\rfloor\right)\) converges weakly to \(\mathcal{L}(\xi_{1}\,|\,\xi_{0}=s)\) for any \(s\geq 0.\) Since for \(s\) in the interval \([0,t]\), the convergence in (4.22) is uniform, and \(Pf(s)\) is continuous and thus uniformly continuous, we conclude that for any integer \(n>0\), \(\mathcal{L}(\frac{R_{i+n}^{2}}{A_{i+n}}\,|\,R_{i}=\lfloor\sqrt{sA_{i}}\rfloor)\) converges weakly to \(\mathcal{L}(\xi_{n}\,|\,\xi_{0}=s)\) for any \(s\geq 0\) as \(i\to\infty\). By Proposition 2, \(\mathcal{L}(\xi_{n}\,|\,\xi_{0}=s)\) converges weakly to \(\mathrm{Exp}(1)\) as \(n\to\infty\). Finally we apply Lemma 4 to conclude that this proposition holds true. This reasoning also leads to \(\left(\frac{R_{i}^{2}}{A_{i}},\frac{R_{i+1}^{2}}{A_{i+1}},\cdots,\frac{R_{i+k}^{2}}{A_{i+k}}\right)\xrightarrow[i\to\infty]{d}\left(\xi_{0},\xi_{1},\cdots,\xi_{k}\right)\) for any positive integer \(k\) if \(\xi_{0}\sim\mathrm{Exp}(1)\). But we are content with the one dimensional convergence for this proposition and the multidimensional convergence will be proved in the proof of Theorem 2 more straightforwardly. Another way to prove this proposition using (4.22) is to apply [10, Theorem (1)]. The only problem is that \(\left(\frac{R_{i}^{2}}{A_{i}}\right)\) is not a Markov chain. It suffices to enlarge it into \(\left(\frac{R_{i}^{2}}{A_{i}},\frac{1}{A_{i}}\right)\). We omit the detailed steps.
To prove (4.22), we first recall the random variable \(W=W_{n}\) in Lemma 3. For \(0\leq k\leq n\), (3.9) yields
\[\mathcal{L}(W\,|\,W\geq k+1)=\mathcal{L}(R_{i+1}\,|\,R_{i}=k,A_{i}=n).\]
Then using (4.11), we obtain
\[\sup_{0\leq s\leq t}\left|\mathbb{E}\left[f\left(\frac{R_{i+1}^{2}}{A_{i}}\right)\,|\,R_{i}=\lfloor\sqrt{sA_{i}}\rfloor\right]-\mathbb{E}[f(Z_{s})]\right|\xrightarrow{i\to\infty}0, \tag{4.23}\]
where \(Z_{s}\sim\mathcal{E}_{s}\) (see (2.6)). As a consequence, for any \(\varepsilon>0\), there exists \(C>0\) such that
\[\sup_{0\leq s\leq t}\mathbb{P}(R_{i+1}^{2}\geq CA_{i}\,|\,R_{i}=\lfloor\sqrt{sA _{i}}\rfloor)\leq\varepsilon,\quad i\text{ large enough}. \tag{4.24}\]
Then using (3.10), we have
\[\sup_{0\leq s\leq t,s\leq c\leq C}\left|\mathbb{E}\left[f\left(\frac{A_{i}}{A_{i+1}}\right)\,|\,R_{i}=\lfloor\sqrt{sA_{i}}\rfloor,R_{i+1}=\lfloor\sqrt{cA_{i}}\rfloor\right]-\mathbb{E}[f(e^{-\eta})]\right|\xrightarrow{i\to\infty}0. \tag{4.25}\]
where \(\eta\sim\mathrm{Exp}(1)\). Note that \(Pf(s)=\mathbb{E}[f(Z_{s}e^{-\eta})]\) if we assume \(\eta\) is independent of \(Z_{s}\). Moreover, \(\frac{R_{i+1}^{2}}{A_{i+1}}=\frac{R_{i+1}^{2}}{A_{i+1}}\frac{A_{i}}{A_{i+1}}.\) Then using the above three displays, we conclude that (4.22) holds true and thus the proof for the proposition is finished.
Proof of Theorem 2.: By Proposition 2, \((\xi_{i})_{i\geq 0}\) is a stationary Markov chain if \(\xi_{0}\sim\text{Exp}(1)\). Using the definition (2.7), we conclude that \(\mathcal{W}=(\xi_{i},\eta_{i})_{i\geq 1}\) is a stationary Markov chain. Next we show (2.8). Using (2.7), we have
\[\mathbb{P}(\xi_{1}\leq s,\eta_{1}\leq t)=\mathbb{P}((\xi_{0}+X_{1})e^{-\eta_{ 1}}\leq s,\eta_{1}\leq t)=\int_{0}^{t}e^{-u}\gamma(2,se^{u})\mathrm{d}u,\]
where \(\gamma(\cdot,\cdot)\) is the lower incomplete gamma function. Here we used that \(\xi_{0},X_{1},\eta_{1}\) are i.i.d. random variables of the same law \(\text{Exp}(1)\), and such that \(\xi_{0}+X_{1}\) follows the Gamma distribution with shape parameter \(2\) and scale parameter \(1\). Taking the partial derivative with respect to \(s\) and \(t\) yields (2.8). It remains to prove \(\mathcal{W}^{(n)}\Longrightarrow\mathcal{W}\) as \(n\rightarrow\infty\).
Let \(\xi_{0}\sim\text{Exp}(1)\). Using Proposition 2 and (4.23), we have
\[\left(\frac{R_{n}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{n}}\right)\xrightarrow[n \to\infty]{d}(\xi_{0},Z^{(1)}).\]
Together with (3.10) and the fact that \(\left(\frac{R_{i}^{2}}{A_{i}}\right)_{i\geq 1}\) is tight, we obtain further that
\[\left(\frac{R_{n}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{n}},\ln\frac{A_{n+1}}{A_{n }},\ln\frac{A_{n+2}}{A_{n+1}},...\right)\stackrel{{ n\rightarrow\infty}}{{ \Longrightarrow}}(\xi_{0},Z^{(1)},\eta_{1},\eta_{2},...). \tag{4.26}\]
Note that
\[\frac{R_{i+1}^{2}}{A_{i+1}}=\frac{R_{i+1}^{2}}{A_{i}}\frac{A_{i}}{A_{i+1}}= \frac{R_{i+1}^{2}}{A_{i}}\exp\left(-\ln\frac{A_{i+1}}{A_{i}}\right),\quad \forall i\geq 1.\]
The above two displays entail
\[\left(\frac{R_{n}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{ n+1}},\ln\frac{A_{n+1}}{A_{n}},\ln\frac{A_{n+2}}{A_{n+1}},...\right)\stackrel{{ n\rightarrow\infty}}{{ \Longrightarrow}}(\xi_{0},Z^{(1)},\xi_{1},\eta_{1},\eta_{2},...).\]
Since the asymptotic behaviour of \(\frac{R_{i+1}^{2}}{A_{i}}\) only depends on that of \(\frac{R_{i}^{2}}{A_{i}}\) (see (4.26)), we have
\[\left(\frac{R_{n}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{ n+1}},\frac{R_{n+2}^{2}}{A_{n+1}},\ln\frac{A_{n+1}}{A_{n}},\ln\frac{A_{n+2}}{A_{ n+1}},...\right)\stackrel{{ n\rightarrow\infty}}{{ \Longrightarrow}}(\xi_{0},Z^{(1)},\xi_{1},Z^{(2)},\eta_{1},\eta_{2},...).\]
Iterating this procedure, we obtain that for any \(k\geq 0\),
\[\left(\frac{R_{n}^{2}}{A_{n}},\frac{R_{n+1}^{2}}{A_{n}},...,\frac {R_{n+k}^{2}}{A_{n+k}},\frac{R_{n+k+1}^{2}}{A_{n+k}},\ln\frac{A_{n+1}}{A_{n}}, \ln\frac{A_{n+2}}{A_{n+1}},...\right)\] \[\stackrel{{ n\rightarrow\infty}}{{\Longrightarrow}}( \xi_{0},Z^{(1)},...,\xi_{k},Z^{(k+1)},\eta_{1},\eta_{2},...).\]
which implies \(\mathcal{W}^{(n)}\stackrel{{ n\rightarrow\infty}}{{ \Longrightarrow}}\mathcal{W}\). Then the proof of Theorem 2 is finished.
_Remark 1_.: If \(\xi_{0}\sim\text{Exp}(1)\), then not only we have \(\mathcal{W}=((\xi_{i},\eta_{i}))_{i\geq 1}\) stationary; in fact, \((\xi_{i})_{i\geq 1},(\eta_{i})_{i\geq 1}\) and \(\mathcal{W}\) are all stationary ergodic. This (mainly for the latter two) can be proved using the results in [12]. But we omit the proof as it goes beyond the scope of this paper.
## Acknowledgement
The author thanks Matthias Birkner for suggesting the name _provisional external branch length sequence_ for \((L_{n})\). The author thanks Clement Foucart and Huili Liu for reading a draft version and for very helpful comments.
|
2305.10852
|
Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors
Quantization
|
Edge networks call for communication efficient (low overhead) and robust
distributed optimization (DO) algorithms. These are, in fact, desirable
qualities for DO frameworks, such as federated edge learning techniques, in the
presence of data and system heterogeneity, and in scenarios where internode
communication is the main bottleneck. Although computationally demanding,
Newton-type (NT) methods have been recently advocated as enablers of robust
convergence rates in challenging DO problems where edge devices have sufficient
computational power. Along these lines, in this work we propose Q-SHED, an
original NT algorithm for DO featuring a novel bit-allocation scheme based on
incremental Hessian eigenvectors quantization. The proposed technique is
integrated with the recent SHED algorithm, from which it inherits appealing
features like the small number of required Hessian computations, while being
bandwidth-versatile at a bit-resolution level. Our empirical evaluation against
competing approaches shows that Q-SHED can reduce by up to 60% the number of
communication rounds required for convergence.
|
Nicolò Dal Fabbro, Michele Rossi, Luca Schenato, Subhrakanti Dey
|
2023-05-18T10:15:03Z
|
http://arxiv.org/abs/2305.10852v1
|
# Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors Quantization
###### Abstract
Edge networks call for communication efficient (low overhead) and robust distributed optimization (DO) algorithms. These are, in fact, desirable qualities for DO frameworks, such as federated edge learning techniques, in the presence of data and system heterogeneity, and in scenarios where inter-node communication is the main bottleneck. Although computationally demanding, Newton-type (NT) methods have been recently advocated as enablers of robust convergence rates in challenging DO problems where edge devices have sufficient computational power. Along these lines, in this work we propose Q-SHED, an original NT algorithm for DO featuring a novel bit-allocation scheme based on incremental Hessian eigenvectors quantization. The proposed technique is integrated with the recent SHED algorithm, from which it inherits appealing features like the small number of required Hessian computations, while being bandwidth-versatile at a bit-resolution level. Our empirical evaluation against competing approaches shows that Q-SHED can reduce by up to 60% the number of communication rounds required for convergence.
Newton method, distributed optimization, federated edge learning, wireless networks, 6G
## I Introduction
Solving distributed optimization problems in a communication-efficient fashion is one of the main challenges of next generation edge networks [1]. In particular, much attention is being turned to distributed machine learning (ML) settings and applications, and to the distributed training of ML models via _federated learning_ (FL) [2]. FL is a distributed optimization (DO) framework motivated by the increasing concerns for data privacy at the user end, and by the convenience of performing distributed processing in multi-access edge computing (MEC) networks. However, DO is particularly challenging in federated edge learning (FEL) scenarios where communication occurs over unpredictable and heterogeneous wireless links [3]. To tackle these challenges, major research efforts have been conducted in recent years [4, 5, 6]. A common assumption in FEL is that edge devices are equipped with sufficient computing capabilities. Hence, Newton-type (NT) methods, although computationally demanding, have been recently advocated to improve the convergence rate of distributed optimization, while significantly reducing its communication overhead [7, 8]. Communication efficient distributed NT (DNT) algorithms like GIANT [9], and DONE [7] have shown promising results in configurations with i.i.d. data distributions among devices, but underperform when applied to ill-conditioned problems and heterogeneous data configurations [10], which are scenarios of major practical relevance. Some works, like FedNL [11] and SHED (sharing Hessian eigenvectors for distributed learning) [10] have been recently proposed to robustify FL in the presence of non i.i.d. data distributions, system heterogeneity and ill-conditioning. A DNT method with over-the-air aggregation has been studied in [8]. Quantized Newton (QN) [12] has investigated the convergence properties of the distributed Newton method when the Hessian matrix is quantized. However, QN entails a communication load proportional to \(O(n^{2})\), where \(n\) is the problem dimensionality, while a linear per-iteration communication complexity of \(O(n)\) is desirable.
In this paper, we present Q-SHED, a new algorithm that extends the recently proposed SHED [10] via a novel bit-allocation scheme based on incremental Hessian eigenvector quantization. In particular, our main contributions are:
* We propose an original bit-allocation scheme for Hessian approximation based on uniform scalar dithered quantization of Hessian eigenvectors, to improve the efficiency of second-order information transmission in a DNT method.
* We integrate our bit-allocation scheme with the recently proposed SHED technique [10], obtaining a new approach, Q-SHED, based on incremental dithered quantization of Hessian eigenvectors. Q-SHED has a communication complexity of \(O(n)\) (inherited by SHED) and handles per-iteration heterogeneity of communication channels of the different edge computers involved in the optimization problem at a bit-resolution (per vector coordinate) level.
* We evaluate Q-SHED on two datasets assessing its performance in a standard distributed optimization setup, as well as in a scenario where the transmission quality of communication links randomly fluctuates over time according to a Rayleigh fading model (a popular model for wireless channels). With respect to competing solutions, Q-SHED shows convergence speed improvements of at least 30% in a non-fading scenario and of up to 60% in the Rayleigh fading case.
## II Distributed optimization framework
We consider the typical DO framework where \(M\) machines communicate with an aggregator to cooperatively solve an empirical risk minimization problem of the form
\[\min_{\mathbf{\theta}}f(\mathbf{\theta}):=\frac{1}{N}\sum_{d=1}^{M}N_{d}f^{(d)}(\mathbf{ \theta}), \tag{1}\]
where \(\mathbf{\theta}\in\mathbb{R}^{n}\) is the optimization variable, \(N_{d}\) is the number of data samples of the \(d\)-th machine and \(N=\sum_{d=1}^{M}N_{d}\). For the convergence analysis of the algorithm, we make the following standard assumption on the cost function \(f\):
**Assumption 1**.: _Let \(\mathbf{H}(\mathbf{\theta}):=\nabla^{2}f(\mathbf{\theta})\) be the Hessian matrix of the cost \(f(\mathbf{\theta})\). \(f(\mathbf{\theta})\) is twice continuously differentiable, smooth, strongly convex and \(\mathbf{H}(\mathbf{\theta})\) is Lipschitz continuous._
### _Distributed Newton method_
The Newton method to solve (1) is:
\[\mathbf{\theta}^{t+1}=\mathbf{\theta}^{t}-\eta_{t}\mathbf{H}_{t}^{-1}\mathbf{g}_{t},\]
where \(t\) denotes the \(t\)-th iteration, and \(\mathbf{g}_{t}=\mathbf{g}(\mathbf{\theta}^{t})=\nabla f(\mathbf{\theta}^{t})\), \(\eta_{t}\) and \(\mathbf{H}_{t}=\nabla^{2}f(\mathbf{\theta}^{t})\) denote the gradient, the step size and the Hessian matrix at iteration \(t\), respectively. In the considered DO scenario, we have that:
\[\mathbf{H}_{t}=\frac{1}{N}\sum_{d=1}^{M}N_{d}\mathbf{H}_{t}^{(d)},\ \ \mathbf{g}_{t}=\frac{1}{N}\sum_{d=1}^{M}N_{d}\mathbf{g}_{t}^{(d)}, \tag{2}\]
where \(\mathbf{H}_{t}^{(d)}=\nabla^{2}f^{(d)}(\mathbf{\theta}^{t})\) and \(\mathbf{g}_{t}^{(d)}=\nabla f^{(d)}(\mathbf{\theta}^{t})\) denote the local Hessian and the local gradient of the cost \(f^{(d)}(\mathbf{\theta}^{t})\) of machine \(d\), respectively. To get a Newton update at the aggregator, in an FL setting each agent would need to transfer its matrix \(\mathbf{H}_{t}^{(d)}\), of size \(O(n^{2})\), to the aggregator at each iteration, a communication cost that is considered prohibitive in many practical scenarios, especially when \(n\) is large. To deal with communication constraints, while still exploiting second-order information, DNT methods use Hessian approximations:
\[\mathbf{\theta}^{t+1}=\mathbf{\theta}^{t}-\eta_{t}\hat{\mathbf{H}}_{t}^{-1}\mathbf{g} _{t}, \tag{3}\]
where \(\hat{\mathbf{H}}_{t}\) is an approximation of \(\mathbf{H}_{t}\).
### _The SHED algorithm_
In this paper, we propose a DNT approach built upon SHED [10], a DNT algorithm for FL designed to require few Hessian computations by the FL workers while efficiently sharing second-order information with the aggregator at low communication overhead; see [10] for a detailed description. SHED exploits a full-rank approximation of the workers' Hessians by sending to the aggregator the most relevant eigenvalue-eigenvector pairs (EEPs) of the local Hessian, along with a local approximation parameter. Approximations are incrementally improved across iterations, as machines send additional EEPs to the aggregator. By doing so, the Hessian is computed only sporadically and outdated versions of it are used to incrementally improve the convergence rate. Under Lipschitz-continuous Hessians, strong convexity and smoothness assumptions, SHED has super-linear convergence.
### _Q-SHED: Hessian eigenvectors quantization_
Let \(\mathbf{H}=\mathbf{V}\mathbf{\Lambda}\mathbf{V}^{\top}\), be the eigendecomposition of a machine (edge computer) Hessian matrix, with \(\mathbf{\Lambda}=\mathrm{diag}(\lambda_{1},...,\lambda_{n})\), where \(\lambda_{k}\) is the eigenvalue corresponding to the \(k\)-th unitary eigenvector, \(\mathbf{v}_{k}\). In general, the Hessian is a function of the parameter \(\mathbf{\theta}\), but here we omit this dependence for ease of notation. We always consider eigenvalues ordered so that \(\lambda_{1}\geq\lambda_{2}\geq...\geq\lambda_{n}\). In SHED, a machine shares with the aggregator a parameter \(\rho_{q}\) together with \(q\) EEPs, allowing for a full-rank \((q,\rho_{q})\)-approximation of its Hessian, of the form
\[\mathbf{H}_{q,\rho_{q}}=\sum_{i=1}^{q}(\lambda_{i}-\rho_{q})\mathbf{v}_{i} \mathbf{v}_{i}^{\top}+\rho_{q}\mathbf{I}=\mathbf{V}\mathbf{\Lambda}_{\rho_{q} }\mathbf{V}^{\top}, \tag{4}\]
where \(\mathbf{V}:=[\mathbf{v}_{1},...,\mathbf{v}_{n}]\), \(\mathbf{\Lambda}_{\rho_{q}}:=\mathrm{diag}(\lambda_{1},...,\lambda_{q},\rho_{q },...,\rho_{q})\in\mathbb{R}^{n\times n}\). In the original SHED algorithm, eigenvectors are transmitted exactly (up to machine-precision). Differently, we here design a quantization scheme for the eigenvectors, obtaining a quantized approximation of the Hessian of the form:
\[\hat{\mathbf{H}}_{q,\rho_{q}}(b_{1},...,b_{q})=\sum_{i=1}^{q}(\lambda_{i}-\rho _{q})\hat{\mathbf{v}}_{i}(b_{i})\hat{\mathbf{v}}_{i}(b_{i})^{\top}+\rho_{q} \mathbf{I}, \tag{5}\]
where we denote by \(\hat{\mathbf{v}}_{i}(b_{i})\) the \(i\)-th eigenvector, quantized with \(b_{i}\) bits per vector element. As in [10], we fix \(\rho_{q}=\lambda_{q+1}\). We design the quantization scheme so that if an eigenvector \(\mathbf{v}_{i}\) is quantized and transmitted, then at least one bit is assigned to each of its components. The vectors to which no bit is assigned are all set equal to zero, i.e., \(\hat{\mathbf{v}}_{i}(0)=\mathbf{0}\). We assume that, as in typical machine learning problems, \(n\gg 1\). Hence, we design the quantization scheme such that the approximation parameter \(\rho_{q}\) and the eigenvalues \(\{\lambda_{i}\}\) are not quantized and are transmitted exactly (up to machine precision).
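For concreteness, the following is a minimal sketch (in Python/NumPy) of how a machine-side \((q,\rho_{q})\)-approximation of the form (5) can be assembled from already-quantized eigenvectors; the function name and the array layout are illustrative choices and not part of the algorithm specification.

```python
import numpy as np

def shed_hessian_approx(eigvals, eigvecs_hat, q):
    """Sketch of Eq. (5): full-rank (q, rho_q)-approximation of a local Hessian.

    eigvals     : exact eigenvalues, sorted so that eigvals[0] >= ... >= eigvals[-1]
    eigvecs_hat : (n x q) matrix whose i-th column is the quantized eigenvector v_hat_i(b_i)
    q           : number of transmitted eigenvalue-eigenvector pairs (assumes q < n)
    """
    n = len(eigvals)
    rho_q = eigvals[q]                      # rho_q = lambda_{q+1} (0-based indexing)
    H_hat = rho_q * np.eye(n)               # rho_q * I term
    for i in range(q):
        v_hat = eigvecs_hat[:, i]
        H_hat += (eigvals[i] - rho_q) * np.outer(v_hat, v_hat)
    return H_hat
```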
## III Optimal quantization of eigenvectors
We formulate the design of the quantization scheme as a bit allocation problem, exploiting the specific structure of the Hessian. In particular, as, e.g., in [5], we consider dithered quantization, so that we can model the quantization error as a uniformly distributed (in the lattice) zero mean additive random noise. Let \(\mathbf{v}_{i}\) be a Hessian eigenvector and let \(\hat{\mathbf{v}}_{i}=\hat{\mathbf{v}}_{i}(b_{i})\) be the same eigenvector quantized with \(b_{i}\) bits per vector coordinate (to improve readability, the dependence on \(b_{i}\) is omitted in the following). We write:
\[\hat{\mathbf{v}}_{i}=\mathbf{v}_{i}+\mathbf{\epsilon}_{i}, \tag{6}\]
where \(\mathbf{\epsilon}_{i}\) is a uniformly distributed quantization noise, with \(\mathbb{E}[\mathbf{\epsilon}_{i}]=0\). This is a general and standard model for the quantization noise, widely adopted in the literature, see, e.g., [5].
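To make this error model explicit, the short sketch below implements a subtractive dithered uniform quantizer for coordinates in \([-1,1]\) and empirically checks that the resulting error is approximately zero-mean with variance \(\Delta^{2}/12\); the helper name and the sanity check are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(v, b):
    """Subtractive dithered uniform quantization of v (coordinates in [-1, 1]),
    b bits per coordinate, quantization step Delta = 2^{-(b-1)}."""
    delta = 2.0 ** (-(b - 1))
    d = rng.uniform(-delta / 2, delta / 2, size=v.shape)   # dither, known to both ends
    return np.round((v + d) / delta) * delta - d

# sanity check: eps = v_hat - v should be ~ uniform, zero mean, variance Delta^2 / 12
v = rng.standard_normal(100_000)
v /= np.abs(v).max()                 # keep coordinates inside [-1, 1]
b = 4
eps = dithered_quantize(v, b) - v
delta = 2.0 ** (-(b - 1))
print(eps.mean(), eps.var(), delta ** 2 / 12)
```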
The aim of the bit allocation is to provide the best possible Hessian approximation given a bit budget. Hence, the quantization scheme design is obtained as the solution of the following problem:
\[\min_{b_{1},...,b_{q},q} \mathbb{E}[\|\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}(b_{1},...,b_{q })\|_{\mathcal{F}}^{2}|\{\mathbf{v}_{i},\lambda_{i}\}_{i=1}^{q}]\] (7) s.t. \[\sum_{i=1}^{q}b_{i}=B\] \[0\leq\ b_{i}\leq b_{\max},\forall\,i,\]
where \(\hat{\mathbf{H}}_{q,\rho_{q}}(b_{1},...,b_{q})\) is defined in (5). The operator \(\|\cdot\|_{\mathcal{F}}\) denotes the Frobenius norm. Note that \(q\) is a variable determining the approximation parameter \(\rho_{q}\). The constant \(B\) denotes the bit budget, normalized by \(n\): denoting the total number of available bits by \(B_{\mathrm{tot}}\), it holds \(B=\lfloor B_{\mathrm{tot}}/n\rfloor\). The integer \(b_{\max}\) is the maximum number of bits per vector
component. In the following, for ease of notation, we omit the conditioned values from the expectation expression of the squared Frobenius norm introduced in Eq. (7). For simplicity, we define \(\hat{\mathbf{H}}_{q,\rho_{q}}:=\hat{\mathbf{H}}_{q,\rho_{q}}(b_{1},...,b_{q})\). Denoting by \(\operatorname{tr}(\cdot)\) the trace operator, we have that
\[\mathbb{E}[\|\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}\|_{\mathcal{F}}^{2}]=\mathbb{E}[\operatorname{tr}((\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}})(\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}))], \tag{8}\]
where we can write, denoting the unitary eigenvector matrix by \(\mathbf{V}:=[\mathbf{v}_{1},...,\mathbf{v}_{n}]\),
\[\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}=\mathbf{V}(\mathbf{\Lambda}-\mathbf{\Lambda}_{\rho_{q}})\mathbf{V}^{\top}+\sum_{i=1}^{q}(\lambda_{i}-\rho_{q})\delta\mathbf{V}_{i}, \tag{9}\]
defining \(\delta\mathbf{V}_{i}:=(\mathbf{v}_{i}\mathbf{v}_{i}^{\top}-\hat{\mathbf{v}}_{i}\hat{\mathbf{v}}_{i}^{\top})\). Plugging (9) into (8):
\[\mathbb{E}[\|\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}\|_{\mathcal{F}}^{2}]=\operatorname{tr}(\mathbf{V}(\mathbf{\Lambda}-\mathbf{\Lambda}_{\rho_{q}})^{2}\mathbf{V}^{\top}) \tag{10}\] \[+2\operatorname{tr}\left(\sum_{j=q+1}^{n}\bar{\lambda}_{q,j}\mathbf{v}_{j}\mathbf{v}_{j}^{\top}\sum_{i=1}^{q}\bar{\lambda}_{q,i}\mathbb{E}[\delta\mathbf{V}_{i}]\right)\] \[+\mathbb{E}\left[\operatorname{tr}\left(\sum_{i=1}^{q}\bar{\lambda}_{q,i}^{2}\delta\mathbf{V}_{i}\delta\mathbf{V}_{i}\right)+\operatorname{tr}\left(\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{q}\bar{\lambda}_{q,i}\bar{\lambda}_{q,j}\delta\mathbf{V}_{i}\delta\mathbf{V}_{j}\right)\right].\]
where \(\bar{\lambda}_{q,i}:=\lambda_{i}-\rho_{q}\). The first term of the previous expression does not depend on the quantization strategy, but only on the choice of \(q\). The second and third terms, instead, both depend on \(q\) and on the quantization strategy through the matrices \(\{\delta\mathbf{V}_{i}\}_{i=1}^{q}\). In the next section, we consider the special case of scalar uniform quantization of the eigenvectors' coordinates.
### _Scalar uniform quantization_
In the case of scalar uniform quantization, each component of vector \(\mathbf{v}_{i}\) is uniformly quantized in the range \([-1,1]\). Applying dithering, the quantization error vector has i.i.d. uniformly distributed components of known covariance [5]. We can write
\[\mathbb{E}[\boldsymbol{\epsilon}_{i}\boldsymbol{\epsilon}_{i}^{\top}]=\sigma_ {i}^{2}I,\text{ with }\sigma_{i}^{2}=\mathbb{E}[\boldsymbol{\epsilon}_{ij}^{2}]=\Delta_{i}^{2}/12, \ \Delta_{i}=2^{-(b_{i}-1)} \tag{11}\]
with \(\Delta_{i}\) being the quantization interval length, and \(b_{i}\) the number of bits assigned to each coordinate of the \(i\)-th eigenvector. After some algebra, we can get
\[\mathbb{E}[\operatorname{tr}(\delta\mathbf{V}_{i}\,\delta\mathbf{V}_{i})]=\Delta_{i}^{2}(a_{1}(n)+a_{2}(n)\Delta_{i}^{2}), \tag{12}\]
using the fact that \(\mathbb{E}[\epsilon_{ij}^{4}]=\Delta_{i}^{4}/80\), and defining \(a_{1}(n):=\frac{1}{12}+\frac{n}{6},\ a_{2}(n):=\frac{n}{80}+\frac{n(n-1)}{12^{2}}\). With similar calculations, one gets
\[\mathbb{E}[\operatorname{tr}(\delta\mathbf{V}_{i}\delta\mathbf{V}_{j})]=n \sigma_{i}^{2}\sigma_{j}^{2}=\frac{n\Delta_{i}^{2}\Delta_{j}^{2}}{12^{2}}=a_{3 }(n)\Delta_{i}^{2}\Delta_{j}^{2}, \tag{13}\]
with \(a_{3}(n):=\frac{n}{12^{2}}\). The expectation of the Frobenius norm of the quantization error in Eq. (10) can then be written as
\[\mathbb{E}[\|\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}\|_{\mathcal{F}}^{2}]=\sum_{i=q+1}^{n}\bar{\lambda}_{q,i}^{2}+d_{q}\sum_{i=1}^{q}\bar{\lambda}_{q,i}\Delta_{i}^{2} \tag{14}\] \[+\sum_{i=1}^{q}\bar{\lambda}_{q,i}^{2}\Delta_{i}^{2}(a_{1}(n)+a_{2}(n)\Delta_{i}^{2})+\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{q}\bar{\lambda}_{q,i}\bar{\lambda}_{q,j}a_{3}(n)\Delta_{i}^{2}\Delta_{j}^{2},\]
with \(d_{q}=\frac{1}{6}(\sum_{i=q+1}^{n}(\rho_{q}-\lambda_{i}))\). Our objective is to pick the integer parameter \(q\) and the quantization intervals \(\Delta_{1},...,\Delta_{q}\) so as to minimize (8), with the constraint that \(\sum_{i=1}^{q}b_{i}=B\), with \(B=\lfloor B_{\mathrm{tot}}/n\rfloor\), where \(B_{\mathrm{tot}}\) is the number of available bits. Given that \(b_{i}=-\log\Delta_{i}+1\), we see that the constraint becomes \(\sum_{i=1}^{q}\log\Delta_{i}=q-B\), which is equivalent to \(\sum_{i=1}^{q}\log\Delta_{i}^{2}=2(q-B)\). Defining \(x_{i}:=\Delta_{i}^{2}\) and \(\mathbf{x}_{q}=(x_{1},...,x_{q})\), we define the expectation of the quantization error as a cost function \(f\):
\[f(\mathbf{x}_{q},q):=\mathbb{E}[\|\mathbf{H}-\hat{\mathbf{H}}_{q,\rho_{q}}\|_{\mathcal{F}}^{2}], \tag{15}\]
and we aim to minimize such cost function over the choice of \(q\) and over the choice of \(\mathbf{x}_{q}\). We can rewrite Eq. (14) as
\[f(\mathbf{x}_{q},q) =\sum_{i=q+1}^{n}\bar{\lambda}_{q,i}^{2}+\sum_{i=1}^{q}\gamma_{n,q,i}x_{i}+a_{2}(n)\sum_{i=1}^{q}\bar{\lambda}_{q,i}^{2}x_{i}^{2} \tag{16}\] \[+a_{3}(n)\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{q}\bar{\lambda}_{q,i}\bar{\lambda}_{q,j}x_{i}x_{j},\]
where \(\gamma_{n,q,i}:=d_{q}\bar{\lambda}_{q,i}+a_{1}(n)\bar{\lambda}_{q,i}^{2}\). The optimization problem is thus turned into the following equivalent form:
\[\begin{split}\min_{\mathbf{x}_{q},q}\ & f(\mathbf{x}_{q},q)\\ \text{s.t.}\ &-\sum_{i=1}^{q}\log x_{i}\leq 2(B-q)\\ & 0<\ x_{i}\leq 4,\ i=1,...,q\end{split} \tag{17}\]
where the last constraint (\(x_{i}\leq 4\)) amounts to requiring \(b_{i}\geq 0,i=1,...,q\). At optimality, the constraint \(-\sum_{i=1}^{q}\log x_{i}\leq 2(B-q)\) will be satisfied with equality. The solution to the optimization problem (17) needs to be converted into a vector of bits. This can be done by converting each \(x_{i}\) back to \(b_{i}\) using (11) and then rounding each \(b_{i}\) to the closest integer, being careful to meet the bit budget \(\sum_{i=1}^{q}b_{i}=B\).
**Lemma III.1**.: _For any \(q=1,...,n\), the cost function \(f(\mathbf{x}_{q},q)\) is strictly convex in \(\mathbf{x}_{q}=(x_{1},\dots,x_{q})^{\top}\)._
Proof.: Let \(\bar{\boldsymbol{\lambda}}_{q}:=(\bar{\lambda}_{1},...,\bar{\lambda}_{q})^{\top}\), \(\boldsymbol{\gamma}_{n,q}:=(\gamma_{n,q,1},...,\gamma_{n,q,q})^{\top}\), \(\bar{\mathbf{\Lambda}}_{q}:=\mathrm{diag}(\bar{\lambda}_{1}^{2},...,\bar{\lambda}_{q}^{2})\), and \(\bar{\mathbf{\Lambda}}_{c}\in\mathbb{R}^{q\times q}\) a matrix such that \((\bar{\mathbf{\Lambda}}_{c})_{i,j}=\bar{\lambda}_{i}\bar{\lambda}_{j}(1-\delta_{ij})\), where \(\delta_{ii}=1\) and \(\delta_{ij}=0\) for \(i\neq j\). Note that \(a_{2}(n)=\frac{n}{80}+\frac{n(n-1)}{12^{2}}>\frac{n}{12^{2}}=a_{3}(n)\). Omitting
lems whose solution \(\mathbf{x}_{q}^{*}\) is unique. The optimal solution can be found as the tuple \(\{\mathbf{x}_{q^{*}}^{*},q^{*}\}\), with \(q^{*}=\operatorname*{argmin}_{q}\{f(\mathbf{x}_{q}^{*},q)\}\).
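A possible numerical realization of this procedure is sketched below: for a fixed \(q\) it solves the strictly convex relaxation with an off-the-shelf solver and rounds the result to an integer bit allocation, while an outer search over \(q\) keeps the best value. The use of SLSQP, the starting point, and the rounding rule are our own illustrative assumptions, not the exact implementation of [10].

```python
import numpy as np
from scipy.optimize import minimize

def allocate_bits(eigvals, B, q):
    """Sketch: solve the relaxed bit-allocation problem (Sec. III-A) for a fixed q,
    then round the result to an integer bit allocation.

    eigvals : eigenvalues sorted in decreasing order (length n)
    B       : per-coordinate bit budget, B = floor(B_tot / n); assumes q <= B
    """
    lam = np.asarray(eigvals, dtype=float)
    n = len(lam)
    rho_q = lam[q]                                   # rho_q = lambda_{q+1}
    lam_bar = lam[:q] - rho_q                        # lambda_i - rho_q, i = 1, ..., q
    tail = np.sum((lam[q:] - rho_q) ** 2)            # term independent of x
    d_q = np.sum(rho_q - lam[q:]) / 6.0
    a1 = 1.0 / 12 + n / 6.0
    a2 = n / 80.0 + n * (n - 1) / 144.0
    a3 = n / 144.0
    gamma = d_q * lam_bar + a1 * lam_bar ** 2

    def cost(x):                                     # Eq. (16) as a function of x = Delta^2
        cross = (lam_bar @ x) ** 2 - np.sum((lam_bar * x) ** 2)   # sum over i != j
        return tail + gamma @ x + a2 * np.sum(lam_bar ** 2 * x ** 2) + a3 * cross

    cons = [{"type": "ineq", "fun": lambda x: 2 * (B - q) + np.sum(np.log(x))}]
    x0 = np.ones(q)                                  # feasible start: one bit per coordinate
    res = minimize(cost, x0, method="SLSQP",
                   bounds=[(1e-9, 4.0)] * q, constraints=cons)
    # x_i = 2^{-2(b_i - 1)}  =>  b_i = 1 - 0.5 * log2(x_i); a final +-1 adjustment may be
    # needed so that sum(bits) == B exactly
    bits = np.maximum(np.rint(1 - 0.5 * np.log2(res.x)), 0).astype(int)
    return res.fun, bits

# outer search over q: solve for each q in {1, ..., min(B, n - 1)} and keep the allocation
# with the smallest cost, i.e., q* = argmin_q f(x_q*, q)
```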
## IV Q-SHED: algorithm design
SHED [10] is designed to make use of Hessian approximations obtained with few Hessian EEPs. In [10], it has been shown that incrementally (per iteration) transmitting additional EEPs improves the convergence rate. In this section, we augment SHED with the optimal bit allocation of the previous section, making it suitable to incrementally refine the Hessian approximation at the aggregator. The full technique is illustrated in Algorithm 1, and the details are provided in the following sections.
### _Uniform scalar quantization with incremental refinements_
Let \(\mathbf{H}(\boldsymbol{\theta}^{k_{t}})\) be the Hessian computed for parameter \(\boldsymbol{\theta}^{k_{t}}\) at round \(k_{t}\). At each round \(t\geq k_{t}\), a number of bits \(B_{t}\) is sent to represent second-order information. At each round, we use newly available bits to incrementally refine the approximation of \(\mathbf{H}(\boldsymbol{\theta}^{k_{t}})\). From now on, eigenvectors are always denoted by \(\mathbf{v}_{i}=\mathbf{v}_{i}(\boldsymbol{\theta}^{k_{t}})\), i.e., they are always the eigenvectors of the most recently computed (and possibly outdated) Hessian. If \(t=k_{t}\), the optimal bit allocation for eigenvectors \(\mathbf{v}_{1},...,\mathbf{v}_{n}\) is provided by the scheme presented in Sec. III-A. Fix \(t>k_{t}\). Let \(b_{t-1}(i)\) denote the bits allocated to each coordinate of eigenvector \(\mathbf{v}_{i}\) up to round \(t-1\), and let \(b_{i,t}\) be the number of bits to be used together with \(b_{t-1}(i)\), at round \(t\), to refine the approximation of the coordinates of \(\mathbf{v}_{i}\). We can write
\[b_{t}(i):=b_{t-1}(i)+b_{i,t},\ \ \Delta_{t,i}:=\frac{2}{2^{b_{t}(i)}}=2^{-b_{t-1}(i)}2^{-b_{i,t}+1} \tag{19}\]
with \(b_{t}(i)\) the number of bits sent up to round \(t\). The interval \(\Delta_{t,i}\) is the quantization interval resulting from adding \(b_{i,t}\) bits for the refinement of the \(i\)-th eigenvector information, for which \(b_{t-1}(i)\) bits had been previously allocated. We can plug these intervals into Eq. (14), and defining \(x_{t,i}:=2^{-2(b_{i,t}-1)}\), \(\tilde{\gamma}_{n,q_{t},i}:=2^{-2b_{t-1}(i)}\gamma_{n,q_{t},i}\), \(\tilde{\lambda}_{t,q_{t},i}:=2^{-2b_{t-1}(i)}\bar{\lambda}_{q_{t},i}\), we get a cost \(f(\mathbf{x}_{q_{t}},q_{t})\), with \(\mathbf{x}_{q_{t}}=(x_{t,1},...,x_{t,q_{t}})\),
\[\begin{split}& f(\mathbf{x}_{q_{t}},q_{t})=\sum_{i=q_{t}+1}^{n}\bar{\lambda}_{q_{t},i}^{2}+\sum_{i=1}^{q_{t}}\tilde{\gamma}_{n,q_{t},i}x_{t,i}\\ &+a_{2}(n)\sum_{i=1}^{q_{t}}\tilde{\lambda}_{t,q_{t},i}^{2}x_{t,i}^{2}+a_{3}(n)\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{q_{t}}\tilde{\lambda}_{t,q_{t},i}\tilde{\lambda}_{t,q_{t},j}x_{t,i}x_{t,j}.\end{split} \tag{20}\]
Following the same proof technique as for Lemma III.1, it can be shown that the cost \(f(\mathbf{x}_{q_{t}},q_{t})\) is strictly convex in \(\mathbf{x}_{q_{t}}\) for any \(q_{t}=1,...,n\). Given that up to round \(t-1\), \(q_{t-1}\) eigenvectors were considered for bit allocation, it is easy to see that it needs to be \(q_{t}\geq q_{t-1}\). Similarly to Sec. III-A, we formulate the optimal bit allocation of bits \(\{b_{i,t}\}_{i=1}^{q_{t}}\) as
\[\begin{split}\min_{\mathbf{x}_{q_{t}},q_{t}\geq q_{t-1}}& f(\mathbf{x}_{q_{t}},q_{t})\\ \text{s.t.}&-\sum_{i=1}^{q_{t}}\log x_{t,i}\leq 2(B_{t}-q_ {t})\\ & 0<\ x_{t,i}\leq 4,\ i=1,...,q_{t}.\end{split} \tag{21}\]
The problem can be solved by finding the unique solution to the \(n-q_{t-1}+1\) strictly convex problems corresponding to the different choices of \(q_{t}=q_{t-1},q_{t-1}+1,...,n\). As before, the solution to problem (21) needs to be converted to integer numbers, for example by rounding the corresponding allocated number of bits to the closest integer, being careful to retain \(\sum_{i=1}^{q}b_{i,t}=B_{t}\). Sorting the eigenvalues in a decreasing order, we get a monotonically decreasing sequence of allocated bits to the corresponding eigenvectors. To provide an example, with the FMNIST dataset (see Sec. V), at a certain iteration \(t\) of the incremental algorithm, an agent allocates bits \(b_{t}=[3,3,2,2,2,1,1,1,1,1]\) to the first \(11\) eigenvectors, whose corresponding (rounded) eigenvalues are \([0.21,0.11,0.06,0.03,0.03,0.02,0.02,0.01,0.01,0.01,0.01]\).
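One way to realize the refinement in (19), sketched below purely for illustration, is to dither-quantize the residual left by the previously transmitted coordinates using the newly allocated bits, so that the effective interval shrinks exactly as prescribed by (19); this specific mechanism is an assumption of ours, as the text does not mandate one.

```python
import numpy as np

rng = np.random.default_rng(1)

def refine(v, v_hat_prev, b_prev, b_new):
    """Illustrative incremental refinement (Eq. (19)): re-quantize the residual
    v - v_hat_prev with b_new extra bits/coordinate on top of the b_prev already sent,
    so the effective interval becomes Delta_{t,i} = 2^{-b_prev} * 2^{-b_new + 1}."""
    delta_prev = 2.0 ** (-(b_prev - 1)) if b_prev > 0 else 2.0   # full range [-1, 1] at start
    delta = delta_prev * 2.0 ** (-b_new)                          # Eq. (19)
    d = rng.uniform(-delta / 2, delta / 2, size=v.shape)
    resid = v - v_hat_prev
    return v_hat_prev + np.round((resid + d) / delta) * delta - d

v = rng.uniform(-1, 1, 8); v /= np.linalg.norm(v)   # a toy unit eigenvector
v1 = refine(v, np.zeros_like(v), 0, 2)              # round k_t: 2 bits per coordinate
v2 = refine(v, v1, 2, 1)                            # next round: 1 extra bit, 3 in total
print(np.abs(v - v1).max(), np.abs(v - v2).max())   # the error shrinks with refinement
```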
### _Multi-agent setting: notation and definitions_
To illustrate the integration of our incremental quantization scheme with SHED, we introduce some definitions for the multi-agent setting. We denote by \(B_{t}^{(d)}\) the bit-budget of device \(d\) at iteration \(t\). Let \(\rho_{t}^{(d)}=\lambda_{q_{t}^{(d)}+1,t}^{(d)}\) be the Hessian approximation parameter of device \(d\) at iteration \(t\), function of the \(q_{t}^{(d)}\)-th eigenvalue of the \(d\)-th device, where the integer \(q_{t}^{(d)}\) is tuned by device \(d\) as part of the bit-allocation scheme at iteration \(t\). Let \(\mathbf{g}_{t}^{(d)}\), \(\mathbf{H}_{t}^{(d)}\), \(\hat{\mathbf{H}}_{t}^{(d)}\) be the gradient, Hessian, and Hessian approximation, respectively, of device \(d\). We denote by \(\mathbf{v}_{i}^{(d)}\) and \(\hat{\mathbf{v}}_{i}^{(d)}\) the \(i\)-th eigenvector of the \(d\)-th device and its quantized version, respectively. Note that eigenvectors always correspond to the last computed Hessian \(\mathbf{H}(\boldsymbol{\theta}^{k_{t}})\), with \(k_{t}\leq t\). The integer \(b_{t}^{(d)}(q)\) denotes the number of bits allocated by device \(d\) to the \(q\)-th eigenvector coordinates up to iteration \(t\), while \(b_{q,t}^{(d)}\) is the per-iteration bits allocated to the \(q\)-th eigenvector, i.e., \(b_{t}^{(d)}(q)=b_{t-1}^{(d)}(q)+b_{q,t}^{(d)}\). We define \(\mathcal{A}\) to be the set of devices involved in the optimization, \(\mathcal{I}\) the set of iteration indices in which each device recomputes its local Hessian, \(f^{(d)}\) the cost function of device \(d\), and \(\epsilon>0\) the gradient norm threshold. Hessian approximations are built at the aggregator in the following way:
\[\hat{\mathbf{H}}_{t}=\frac{1}{N}\sum_{d=1}^{M}N_{d}\hat{\mathbf{H}}_{t}^{(d)}, \ \ \hat{\mathbf{H}}_{t}^{(d)}=\sum_{i=1}^{q_{t}^{(d)}}\bar{\lambda}_{i}^{(d)}\hat{ \mathbf{v}}_{i}^{(d)}\hat{\mathbf{v}}_{i}^{(d)\top}+\rho_{t}^{(d)}\mathbf{I}, \tag{22}\]
where \(\bar{\lambda}_{i}^{(d)}=\lambda_{i}^{(d)}-\rho_{t}^{(d)}\). Incremental quantization allows devices to refine the previously transmitted quantized version of their eigenvectors by adding information bits, see (19). We denote the set of information bits of device \(d\) sent to quantize or refine previously sent quantized eigenvectors by \(Q_{t}^{(d)}\).
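For concreteness, the following sketch shows how the aggregator could assemble (22) from the device payloads and take the resulting Newton-type step (3); the data structures and names are illustrative.

```python
import numpy as np

def aggregate_and_step(theta, g, payloads, N_d, eta=1.0):
    """Sketch of Eq. (22) followed by the Newton-type update of Eq. (3).

    payloads[d] = (lambdas_d, Vhat_d, rho_d): lambdas_d has length q_d and
                  Vhat_d is an (n x q_d) matrix with quantized eigenvectors as columns.
    N_d[d]      = number of samples of device d; g = aggregated gradient.
    """
    n = len(theta)
    N = float(sum(N_d))
    H_hat = np.zeros((n, n))
    for (lambdas, Vhat, rho), Nd in zip(payloads, N_d):
        H_d = rho * np.eye(n)
        for lam_i, v_i in zip(lambdas, Vhat.T):
            H_d += (lam_i - rho) * np.outer(v_i, v_i)
        H_hat += (Nd / N) * H_d
    return theta - eta * np.linalg.solve(H_hat, g)   # Newton-type update, Eq. (3)
```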
### _Heuristic choice of \(q_{t}^{(d)}\)_
To reduce the computational burden at the edge devices and to solve the bit-allocation problem only once per round, we propose a heuristic strategy for each device to choose \(q_{t}^{(d)}\): at each incremental round \(t\), instead of inspecting all the options \(q_{t}^{(d)}=q_{t-1}^{(d)},...,n\), which would provide the exact solution but would require solving problem (21) \(n-q_{t-1}^{(d)}+1\) times, we fix \(\bar{q}=q_{t-1}^{(d)}+B_{t}^{(d)}\). With this choice of \(\bar{q}\), we solve problem (21), and we subsequently convert the solution to bits, obtaining \(\{b_{i,t}^{(d)}\}_{i=1}^{\bar{q}}\) and \(\{b_{t}^{(d)}(i)\}_{i=1}^{\bar{q}}\). We then fix the value
\[q_{t}^{(d)}=\hat{q}_{t}^{(d)}(\{b_{t}^{(d)}(i)\}_{i=1}^{\bar{q}}):=\max_{q}\{q:b _{t}^{(d)}(q)>0\} \tag{23}\]
### _Convergence analysis_
The Hessian approximation in (22) is positive definite by design. Hence, the algorithm always provides a descent direction and, with a backtracking strategy like in [9] and [10], convergence is guaranteed (see Theorem 4 of [10]). Empirical results suggest that the linear and superlinear convergence of the original SHED may still be guaranteed under some careful quantization design choices. We leave the analysis of the convergence rate as future work, but we provide an intuition on the convergence rate in the least squares case. In the least squares case, for a given choice of \(q\) and of the allocated bits \(\{b_{i,t}^{(d)}\}_{i=1}^{q}\) of each device \(d\), an easy extension of Theorem 3 in [10] provides the following bound
\[\|\mathbf{\theta}^{t+1}-\mathbf{\theta}^{*}\|\leq\kappa_{t}\|\mathbf{\theta}^{t}-\mathbf{ \theta}^{*}\|, \tag{24}\]
with \(\kappa_{t}=(1-(\bar{\lambda}_{n}-e_{t})/\bar{\rho}_{t})\), where
\[\bar{\lambda}_{n}=\frac{1}{N}\sum_{d=1}^{M}N_{d}\lambda_{n}^{(d)}\ \ \bar{\rho}_{t}=\frac{1}{N}\sum_{d=1}^{M}N_{d}\rho_{t}^{(d)}\]
and
\[e_{t}=\frac{1}{N}\sum_{d=1}^{M}N_{d}\sum_{i=1}^{q_{t}^{(d)}}(\lambda_{i}^{(d)} -\rho_{t}^{(d)})\|\delta\mathbf{V}_{i}^{(d)}\| \tag{25}\]
where \(\delta\mathbf{V}_{i}^{(d)}:=(\mathbf{v}_{i}^{(d)}\mathbf{v}_{i}^{(d)\top}-\hat{\mathbf{v}}_{i}^{(d)}\hat{\mathbf{v}}_{i}^{(d)\top})\). Note that, for a sufficiently small quantization error, which can always be achieved by incremental refinements, the convergence rate in the least squares case is at least linear. The extension to the general case is left as future work.
## V Empirical Results
In this section, we provide empirical results obtained with two datasets, FMNIST [13] and w8a [14]. We simulate two configurations for the network: one where every device has the same transmission rate at each communication round, and one where the rate changes randomly for each device based on the widely adopted Rayleigh fading model [15, 16]. For both FMNIST and w8a we build up a binary classification setting with logistic regression (in FMNIST we learn to distinguish class '1' from all the others), simulating a scenario with \(M=8\) devices, each with \(500\) data samples. We use L2 regularization with parameter \(\mu=10^{-5}\). For FMNIST, we apply PCA [17] to the data to reduce the dimensionality to \(n=90\), while for w8a we keep the original data dimensionality, \(n=300\). To simulate the fading channels, we adopt the following simple model. We consider that all the devices allocate the same bandwidth \(\beta\) for the communication with the aggregator and write the achievable transmission rate as (see, e.g., [15, 16])
\[R^{(d)}=\beta\log_{2}(1+\gamma\Gamma^{(d)}) \tag{26}\]
where \(\Gamma^{(d)}\) is a value related to transmission power and environmental attenuation for user \(d\). For simplicity, we fix \(\Gamma^{(d)}=\Gamma=1\) for all users (in [15], for instance, \(\Gamma=1\) and \(\Gamma=10\) were considered). The only source of variability is then \(\gamma\sim\exp(\nu)\), modelling the Rayleigh fading effect. We fix \(\nu=1\). Specifically, to simulate the different bit budgets, we compute the individual bit budget of each device as \(B_{t}^{(d)}=B\log_{2}(1+\gamma\Gamma^{(d)})\), setting \(B=2b_{\max}\). We fix \(b_{\max}=16\). In the non-fading case, the bit budget for each device is constant and set to \(B_{t}^{(d)}=2b_{\max}\). We consider a scenario where the full-quality gradient is always transmitted to the aggregator by the devices. We compare Q-SHED against an ideal version of SHED, dubbed ideal-SHED, where the eigenvectors that are quantized by Q-SHED are
transmitted at full quality. We also compare Q-SHED against a naively-quantized counterpart, NQ-SHED, for which all bits are allocated to the first eigenvectors, and against the state-of-the-art FedNL [11] with rank-1 compressors. With the exception of ideal-SHED, the per-round bit budget of the considered algorithms is the same. We have experimented with the possibility of quantizing the second-order information of FedNL, but we observed a performance degradation. Hence, when the bit budget of a device is not enough for communicating the rank-1 compression of the Hessian drift at full quality, we only use the device's local gradient. We do the same for NQ-SHED. The results on FMNIST and w8a are shown in Figs. 1 and 2, respectively. In both cases, it is possible to appreciate the robustness of Q-SHED in terms of iterations required for convergence, while the performance of both NQ-SHED and FedNL degrades in the presence of fading channels. In terms of convergence speed, the results show that Q-SHED provides performance improvements between 30% and 60% over the selected competing solutions.
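For reproducibility, a minimal sketch of how the per-round bit budgets implied by Eq. (26) can be drawn is given below; rounding the budget down to an integer is our own assumption, since the text does not specify it.

```python
import numpy as np

rng = np.random.default_rng(0)
b_max, Gamma, nu = 16, 1.0, 1.0
B = 2 * b_max                                   # nominal per-coordinate budget

def fading_bit_budgets(M, T):
    """Per-device, per-round budgets B_t^(d) = B * log2(1 + gamma * Gamma),
    with gamma ~ Exp(nu) modelling the Rayleigh fading effect (Eq. (26))."""
    gamma = rng.exponential(1.0 / nu, size=(T, M))
    return np.floor(B * np.log2(1.0 + gamma * Gamma)).astype(int)

budgets = fading_bit_budgets(M=8, T=50)         # 8 devices, 50 optimization rounds
```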
## VI Conclusion and future work
We have empirically shown that Q-SHED outperforms its naively-quantized version as well as state-of-the-art algorithms like FedNL. Future works include an in-depth analysis of the convergence rate, and the adoption of more advanced quantization schemes, like vector quantization techniques.
## VII Acknowledgment
This work has been supported, in part, by the Italian Ministry of Education, University and Research, through the PRIN project no. 2017NS9FY, and by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on "Telecommunications of the Future" (PE0000001 - program "RESTART").
|
2307.08864
|
Path Sums for Propagators in Causal Sets
|
A major challenge in Causal Set research is that theories need only to match
general relativity and quantum field theory in the appropriate limits. This
means that there should be many different ways to calculate a scalar field
propagator in a causal set that match the known limits, but may give
significantly different results on the small scale. In this work, we explore
under what conditions a path sum will correspond to a scalar field propagator
in such a way that it matches the known value in the continuum limit. A family
of solutions for the path sum is found and is verified numerically in a few
specific cases.
|
Samuel Shuman
|
2023-07-17T21:42:07Z
|
http://arxiv.org/abs/2307.08864v2
|
# Path Sums for Propagators in Causal Sets
###### Abstract
A major challenge in Causal Set research is that theories need only to match general relativity and quantum field theory in the appropriate limits. This means that there should be many different ways to calculate a scalar field propagator in a causal set that match the known limits, but may give significantly different results on the small scale. In this work, we explore under what conditions a path sum will correspond to a scalar field propagator in such a way that it matches the known value in the continuum limit. A family of solutions for the path sum is found and is verified numerically in a few specific cases.
## I Introduction
### Background: Causal Sets
Causal Set Theory (CST) is a candidate theory of quantum gravity. Like many such theories, it focuses on describing the Planck scale structure of spacetime in a way that is mostly consistent with general relativity and quantum field theory. As one might expect, there are many ways to approximate these theories on the Planck scale, so to make progress, we must decide which classical properties should be fundamental aspects of the new theory and which should be emergent on the large scale. CST is based on the idea that spacetime is fundamentally discrete, and has a fundamental causal structure, but Lorentzian geometry is emergent.
The central idea of CST, that causal structure and discreteness are enough to recover Lorentzian geometry on the large scale, has its foundation in a series of papers in the 1970s. These papers showed that for all past and future distinguishing spacetimes the causal structure is enough to determine the conformal geometry [1; 2]. This means that if you know the causal structure of a spacetime and the volume of every region, that is enough to recover the entire geometry. As summarized by Rafael Sorkin in [3]:
\[\text{Causal Structure}+\text{Volume}=\text{Geometry}\]
As we will see, discreteness, when treated in the ways we will describe, can tell you about the spacetime volume of any region, with the number of points in a region corresponding to the volume. This correspondence depends on a parameter called the density, \(\rho\), of the causal set. Thus we get:
\[\text{Causality}+\text{Discreteness}\approx\text{Geometry}\]
Therefore, a discrete set of events with a causal ordering should be enough to recover the geometry of a Lorentzian manifold. This idea was first laid out by Bombelli, Lee, Meyer, and Sorkin in 1987 [4].
#### i.1.1 Definitions
What follows are some important definitions in CST. A **Causal Set** is a set of events, \(C\), paired with a parameter, \(\rho\), called the density, and an ordering relation (the causal order), \(\prec\), satisfying the following properties:
* Transitivity \(x\prec y\), \(y\prec z\Rightarrow x\prec z\)
* Anti-symmetry \(x\prec y\Rightarrow y\not\prec x\)
* Local-finiteness
Local-finiteness requires that the **interval**
\[[x,y]=\{z\,|\,x\prec z\prec y\}\]
is finite for all \(x,y\in C\). Note that in the context of general relativity, the interval is often referred to as a causal diamond. To summarize what these requirements correspond to physically, transitivity tells us that this relation orders the causal set and can be interpreted as a causal structure, antisymmetry tells us that there are no closed causal loops \(x\prec y\prec x\), and local-finiteness tells us that regions of a causal set that correspond to finite regions in a manifold description should have finitely many elements. This last requirement guarantees that our causal set is discrete and that there can exist a number-volume correspondence as mentioned above.
We must also consider a few types of **trajectories** in causal sets. A trajectory is a sequence of events in \(C\). Note that no requirement is made here regarding causal structure. A **chain** is a sequence of events \((x_{n})\) in \(C\) such that \(x_{i}\prec x_{i+1}\) for all \(i\). We say \(x\) is **linked** to \(y\), denoted \(x\prec*y\), if \(x\prec y\) and the interval \([x,y]\) is empty. We can then define a **path** as a sequence \((x_{n})\) in \(C\) such that \(x_{i}\prec*x_{i+1}\) for all \(i\). To state this a different way, paths are chains that are as close to continuous as possible. The length of a trajectory \(\{x_{0},\ldots,x_{n}\}\) is defined to be \(n\). Finally, we will define a **jump** between two events in a spacetime to be a trajectory of **length 1**. In particular, this means the jump from \(x\) to \(y\) for some \(x,y\in C\) is the sequence \(\{x,y\}\) with no intermediate points.
#### i.1.2 Sprinklings
The dynamics of CST have not been fully determined, so we cannot yet solve for the causal order corresponding to physical conditions from first principles. In this paper, we will need to generate causal sets that correspond to a flat spacetime. To do this we will use a method called sprinkling. Sprinkling is a strategy to generate a causal set corresponding to a given manifold by taking a random set of points in that manifold as our events and using the causal structure of the manifold to define the causal relation.
One might expect a regular lattice to be a more appropriate sprinkling procedure than randomly selecting events. However, a random sprinkling is necessary to preserve Lorentz symmetry. To make sense of this, note that a regular lattice in a spacetime defines a preferred reference frame (see figure 1). We will see that a Poisson process is a natural choice for this random distribution.
A Poisson process in a 4-dimensional spacetime is defined analogously to a 1-dimensional Poisson distribution. It is defined by a single parameter that tells you the density at which events are selected. In general, a Poisson process must satisfy two properties. One is that the probability of finding exactly \(n\) points in a region of volume \(V\) is:
\[P\{N=n\}=\frac{(\rho V)^{n}}{n!}e^{-\rho V} \tag{1}\]
Here, \(\rho\) is the average number density of points in the spacetime. The other required property is that the number of points in disjoint, bounded regions is independent.
This is what is called a homogeneous Poisson process, meaning that the rate at which events occur, \(\rho\), is constant and does not depend on the location in spacetime. Homogeneous Poisson processes are particularly easy to simulate in finite regions. First, use the Poisson distribution and the volume of the region to randomly determine the number of points that will be sprinkled. Then each point is placed in the region with a uniform random distribution. For rectangular regions, this can be done by selecting each coordinate from a one dimensional uniform distribution [6].
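As an illustration of this procedure, the short sketch below sprinkles a rectangular region of flat 2-dimensional spacetime and builds the induced causal relation; the function names are ours, and a rectangle (rather than a causal diamond) is used only because it is the simplest bounded region.

```python
import numpy as np

rng = np.random.default_rng(0)

def sprinkle_rectangle(rho, t_max, x_max):
    """Homogeneous Poisson sprinkling into [0, t_max] x [0, x_max] in flat 2D spacetime:
    draw N ~ Poisson(rho * V), then place each event uniformly in the region."""
    N = rng.poisson(rho * t_max * x_max)
    t = rng.uniform(0.0, t_max, N)
    x = rng.uniform(0.0, x_max, N)
    return np.column_stack([t, x])

def causal_matrix(events):
    """C[i, j] = 1 if event i causally precedes event j (Minkowski order, c = 1)."""
    dt = events[None, :, 0] - events[:, None, 0]
    dx = events[None, :, 1] - events[:, None, 1]
    return ((dt > 0) & (dt ** 2 - dx ** 2 >= 0)).astype(int)

events = sprinkle_rectangle(rho=600, t_max=1.0, x_max=1.0)
C = causal_matrix(events)
```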
#### ii.1.3 Correspondences
If CST is correct, then the underlying structure of the universe is a causal set and any manifold representation of spacetime is just an approximation. That being said, the manifold approximation for spacetime must be very good when we "zoom out" from the discreteness scale.
This leads to the "Fundamental Conjecture of CST" or the "Hauptvermutung" [5]. In short, if a causal set that is generated by a Poisson sprinkling into a manifold \(M\) can also be achieved as a Poisson sprinkling into a manifold \(M^{\prime}\) at the same density, then the two manifolds \(M\) and \(M^{\prime}\) must have almost identical geometries. There have been some suggestions, such as in [7], to make this statement more rigorous but there is not yet a clear consensus. If this conjecture was not true, then CST would need additional structure to recover general relativity on the large-scale.
One way this problem has been approached is to establish correspondences between properties of a causal set and properties of any manifold it can be faithfully embedded in. A manifold faithfully embeds a causal set if the causal set can be generated as a Poisson sprinkling on that manifold. The idea is to say that any manifold that faithfully embeds a particular causal set must approximate certain geometric properties on the large-scale. The most basic example is the number-volume correspondence we have already discussed. Since our points are sprinkled by a Poisson process, the fractional variance in the number of points in a region is:
\[\frac{\delta n}{n}=\frac{\sqrt{n}}{n}=\frac{1}{\sqrt{n}} \tag{2}\]
Thus, regions of the causal set containing a large number of events, \(n\), can be known within a very small uncertainty to have volume \(V=n/\rho\) in any manifold that faithfully embeds it.
Figure 1: From [5], a lattice spacetime in two dimensions. The number-volume correspondence only holds in a specific frame and fails to hold in a Lorentz boosted frame.
Figure 2: A Poisson sprinkling in a flat 2-dimensional spacetime shown in both the original reference frame and one that is Lorentz boosted. This was calculated with \(\rho=600\) and a relative velocity of \(v=0.6\)c. Unlike the lattice sprinkling shown in figure 1, the Poisson random sprinkling has no preferred reference frame.
Another correspondence is for the geodesic proper time separating two causally connected events. In 2 dimensions [8], this proper time can be estimated as:
\[\tau^{6}=\frac{1}{8\rho^{3}}(J_{1}-2J_{2}+J_{3}) \tag{3}\]
with \(J_{k}\) defined by
\[J_{k}=(2k+2)(2k+4)(2^{3})\left(kC_{k}\right)^{3/k} \tag{4}\]
and \(C_{k}\) defined as the number of chains of length \(k\) connecting the events. These formulas vary slightly depending on the dimension of the spacetime.
It should be noted that many such correspondences have been investigated. There are correspondences to estimate the curvature, dimension, volume, proper time, etc of the regions of any manifold that can be represented by that causal set. See [5] for further discussion of these correspondences.
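The proper time correspondence above lends itself to a short numerical sketch: with the causal relation matrix \(C\) (where \(C_{xy}=1\) if \(x\prec y\)), the number of chains of length \(k\) connecting two events is the \((x,y)\) entry of \(C^{k}\), which can be inserted directly into equations 3 and 4. The function below only illustrates that bookkeeping; returning zero when too few chains exist is our own convention.

```python
import numpy as np

def proper_time_estimate(C, x, y, rho):
    """Estimate the proper time between related events x, y in 2D using equations 3 and 4.
    C is the causal matrix (C[a, b] = 1 iff a precedes b); the number of chains of
    length k from x to y is (C^k)[x, y]."""
    powers = [C, C @ C, C @ C @ C]            # C^1, C^2, C^3
    J = []
    for k, Ck in zip((1, 2, 3), powers):
        n_chains = Ck[x, y]
        if n_chains == 0:                     # too few chains to apply the estimator
            return 0.0
        J.append((2 * k + 2) * (2 * k + 4) * 8 * (k * n_chains) ** (3.0 / k))
    tau6 = (J[0] - 2 * J[1] + J[2]) / (8 * rho ** 3)
    return max(tau6, 0.0) ** (1.0 / 6.0)
```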
### Background: Path Integrals and Propagators
In general, a propagator describes how a quantum system transitions from one state to another. A common special case is a propagator that describes how a system transitions from one event to another. This will be proportional to the probability amplitude associated with transitioning between these events. For events \(x,y\) in spacetime, we will denote this propagator \(K(x,y)\). The functional form of the propagator depends on the quantum system being described.
The path integral formulation developed by Richard Feynman shows that some propagators between events in spacetime can be calculated by integrating over trajectories connecting the two events [9]. Different propagators can be found by including different types of trajectories in this integral, but not all propagators can be defined this way [10]. To find the total probability amplitude of transitioning between two events, we can assign a probability amplitude to each path and add up each of those contributions:
\[K(x,y)=\int_{x}^{y}\delta q\ e^{iS[q]} \tag{5}\]
The integral must be taken over paths, \(q\), connecting \(x\) and \(y\). The probability amplitude associated with each path is \(e^{iS[q]}\) where \(S[q]\) is the action of that path, which will depend on the quantum system being described.
### Previous Work
There has been past work considering scalar field propagators in causal sets. For example, in [11; 12] the author considered a model that assigned an amplitude to each jump along a path (called the hop amplitude \(a\)), and another to each intermediate event along the path (called the stop amplitude \(b\)). They were able to show that there are values of these constants that recover the free scalar field retarded propagator when averaged over sprinklings and considered in the appropriate continuum limit. The main idea is to create a matrix representation of the propagator that approximately matches the value of the propagator in the continuum.
Now let us consider an illustrative example for how the average values of such propagators were calculated. In [11; 12], when summing over paths and averaging over sprinklings, the propagator becomes:
\[K(x,y)=\sum_{n}a^{n}b^{n-1}P_{n}(x,y) \tag{6}\]
where \(P_{n}(x,y)\) is the average number of paths of length \(n\) from \(x\) to \(y\). This expression comes from the fact that every path of length \(n\) has \(n\) hops and \((n-1)\) stops. The author then set up an integral for \(P_{n}(x,y)\) and used that to get an integral relationship for the propagator
\[P_{n}(x,y)=\rho^{n-1}\int dz_{1}\int dz_{2}\ldots\int dz_{n-1}\,\mu(x,z_{1})\mu(z_{1},z_{2})\ldots\mu(z_{n-1},y) \tag{7}\]
Here \(\mu(x,y)\) is the probability that events \(x\) and \(y\) are linked. This leads to the integral relation:
\[K(y-x)=a\mu(y-x)+\rho ab\int dz\mu(z-x)K(y-z) \tag{8}\]
Since the integral is a convolution, this equation can be solved with a Fourier transform.
\[\tilde{K}(p)=\frac{a\tilde{\mu}(p)}{1-\rho ab\tilde{\mu}(p)} \tag{9}\]
Then one only has to Fourier transform back to get an expression for the average over sprinklings of the scalar field propagator associated with these hop and stop amplitudes.
While this work was able to suggest a method to construct a propagator on a causal set, there is an important consideration. Since the only restrictions are that the path sum matches the continuum propagator on average and does not vary too much over sprinklings, we should expect many different formulas for the propagator to match these conditions. For that reason, it is useful to solve this problem in a more general way, so that we might categorize a greater variety of possible path sums that are consistent with the continuum calculation.
A similar approach was taken in [13], but in that paper the authors allowed for non-constant hop amplitudes and did not include stop amplitudes. While they solved for a matrix relationship between these hop amplitudes and the propagator in a way that is similar to what we will see in the next section, they did not discuss how this relationship would average over sprinklings.
## II Path sums for propagators
In this section, we will develop a general method for analyzing the relationship between a scalar field propagator in a causal set and the jump amplitude matrix \(T\) (which serves the same role as the matrix \(A\) in [13]). By looking at this relationship after averaging over sprinklings into Minkowski space, we will require that we recover the same propagator as the continuum calculation. In [11], there is discussion for how such a propagator could be used to define a scalar field theory on a causal set.
### Propagators and Jump Amplitudes on Causal Sets
First, since the causal set is discrete we can label events in the causal set by non-negative integer indices. Let us assume the causal set is finite. Then define the matrix \(T\), by
\[T_{ij}\equiv\text{The probability amplitude associated with}\] \[\text{jumping from event $i$ to event $j$}\]
We will also define
\[\sigma_{ij}\equiv\text{The total probability amplitude of all}\] \[\text{trajectories from event $i$ to event $j$}\]
The propagator will be proportional to the matrix \(\sigma\), but we must include a constant that incorporates the units of the propagator in the same way that \(\delta q\) incorporates the units of the propagator in equation 5. In particular, we will define \(K_{xy}=a\sigma_{xy}\).
Since all trajectories can be broken down into a sequence of jumps, we should expect a relationship between \(T\) and \(\sigma\). Consider organizing trajectories from \(x\) to \(y\) by the first jump taken. Every trajectory from \(x\) to \(y\) is either a direct jump to \(y\), or a jump to some element \(z\) followed by a trajectory from \(z\) to \(y\). Therefore we have
\[\sigma_{xy}=T_{xy}+\sum_{z}T_{xz}\sigma_{zy} \tag{10}\]
Since \(K_{xy}=a\sigma_{xy}\), we get the composition relation
\[K_{xy}=aT_{xy}+\sum_{z}T_{xz}K_{zy} \tag{11}\]
Or, written in matrix form
\[K=aT+TK \tag{12}\]
\[K=aT(I-T)^{-1} \tag{13}\]
This derivation allows us to define the propagator whenever \((I-T)\) is invertible. If \((I-T)\) is not invertible, then there is not a clear way to define a propagator in terms of the jump amplitude matrix, \(T\). Note that when the allowed jumps are all causal, \((I-T)\) is guaranteed to be invertible and so the propagator is well-defined [11; 12].
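Since the whole construction reduces to a matrix inversion, a minimal numerical sketch is easy to state; the toy three-event check at the end simply confirms that \(K=aT(I-T)^{-1}\) reproduces the explicit sum over the two trajectories connecting the first and last event.

```python
import numpy as np

def propagator_from_jumps(T, a):
    """Equation 13: K = a T (I - T)^{-1}, defined whenever (I - T) is invertible."""
    n = T.shape[0]
    return a * T @ np.linalg.inv(np.eye(n) - T)

# toy check on a 3-event chain 0 < 1 < 2: the only trajectories from 0 to 2 are the
# direct jump and the jump via event 1, so K_02 = a * (T_02 + T_01 * T_12)
T = np.array([[0.0, 0.2, 0.1],
              [0.0, 0.0, 0.3],
              [0.0, 0.0, 0.0]])
K = propagator_from_jumps(T, a=1.0)
assert np.isclose(K[0, 2], T[0, 2] + T[0, 1] * T[1, 2])
```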
### Averaging Over Sprinklings
The next step taken in previous attempts at deriving path sums in causal sets is equivalent to postulating a model \(T\) matrix (such as the hop and stop amplitudes considered in [11; 12]) and then carefully considering what integral relations may hold for this model. In this work, we will take a more general approach by defining an average value of the matrix \(T\) and of the propagator.
First, let \(x\) and \(y\) be two elements in the background manifold \(M\). Consider the space of all possible causal sets faithfully embedded in \(M\) that include \(x\) and \(y\). If we label these two events with the indices \(0\) and \(1\) in these causal sets, then we will define
\[T(x,y)\equiv\text{average over sprinklings of }T_{01} \tag{14}\]
\[K(x,y)\equiv\text{average over sprinklings of }K_{01} \tag{15}\]
We will assume these averages lead to functions that are sufficiently smooth and continuous. This seems reasonable since the \(K\) and \(T\) matrices will be defined in terms of the causal structure of a sprinkling into a smooth manifold. Importantly, since we are doing this calculation in Minkowski spacetime, we will also assume translational invariance:
\[T(x,y)=T(y-x)\]
\[K(x,y)=K(y-x)\]
Next we will consider how these average functions follow the composition law in equation 11. For the \(K_{01}\) matrix element we have
\[K_{01}=aT_{01}+\sum_{i}T_{0i}K_{i1} \tag{16}\]
When we average over sprinklings, \(K_{01}\) becomes \(K(y-x)\) and \(T_{01}\) becomes \(T(y-x)\). The sum will include all points in the causal set, so when averaged over all possible
Figure 3: Every trajectory from \(x\) to \(y\) is either a direct jump to \(y\), or a jump to some event \(z\) followed by a trajectory from \(z\) to \(y\). This is reflected in the composition law \(K=aT+TK\).
sprinklings we will need to include every point in the manifold. To make the next step more clear, consider the discreteness scale volume \(V_{0}=1/\rho\), which can be thought of as the average volume associated with each event in a sprinkling. We can write the sum as
\[\sum_{i}T_{0i}K_{i1}=\rho\sum_{i}T_{0i}K_{i1}V_{0} \tag{17}\]
The sum now evaluates \(T_{0i}K_{i1}\) at each event in the spacetime and multiplies by a volume associated with that event. Now we can see that this is a type of Riemann sum that will become an integral when we average over all sprinklings. Thus we get the average composition relation:
\[K(y-x)=aT(y-x)+\rho\int dzT(z-x)K(y-z) \tag{18}\]
Since the integral is a convolution, this relationship can be solved by Fourier transforming the equation.
\[\tilde{K}(p)=a\tilde{T}(p)+\rho\tilde{T}(p)\tilde{K}(p) \tag{19}\]
\[\tilde{K}(p)=\frac{a\tilde{T}(p)}{1-\rho\tilde{T}(p)} \tag{20}\]
Then all that remains is to Fourier transform back to find the average propagator \(K(y-x)\).
While this is similar to what was done in [11; 12], since we have not specified the jump amplitude function, we can solve for that instead.
\[\tilde{T}(p)=\frac{\tilde{K}(p)}{a+\rho\tilde{K}(p)} \tag{21}\]
Then Fourier transforming back would tell us what jump amplitudes would be needed on average to recreate a given propagator as a path sum in a causal set.
### Solving for Jump Amplitudes
The Feynman, retarded, and advanced propagators for scalar fields all have Fourier transforms of the form
\[\tilde{K}(p)=\frac{1}{f(p)+m^{2}} \tag{22}\]
For example, for the Feynman propagator we would set \(f(p)=p^{2}-i\varepsilon\). Plugging this into equation 21 yields:
\[\tilde{T}(p)=\frac{1}{a}\cdot\frac{1}{f(p)+m^{2}+\rho/a} \tag{23}\]
This has the same form as the Fourier transform of the propagator but with the constant \((m^{2}+\rho/a)\) taking the place of \(m^{2}\). To simplify this expression, define the factor \(\beta=\sqrt{1+\frac{\rho}{m^{2}a}}\). Then \(m^{2}\mapsto(m^{2}+\rho/a)\) may instead be written as \(m\mapsto(\pm\beta m)\). This gives a final result for the average value of the jump amplitudes for propagators of this form:
\[T(y-x)=\frac{1}{a}K(y-x)\text{, }m\mapsto\pm\beta m \tag{24}\]
### Units of \(a\)
So far in this paper, we have used natural units with \(\hbar=c=1\). In this section we will reintroduce \(\hbar\) in order to better understand how \(a\) should depend on \(\rho\) and \(m\). The constant \(a\) that appears in these expressions is included so that the propagator is not unitless. To determine what units we should expect for \(a\), note that \(a\) has the same units as the propagator, \(K\). Consider the Feynman propagator. This has the Fourier transform:
\[\tilde{K}=\frac{1}{p^{2}+m^{2}} \tag{25}\]
This is a Green's function for the Klein-Gordon equation, which is a statement of the relativistic energy relation \(E^{2}=p^{2}+m^{2}\). Therefore we should expect \(\tilde{K}\) to have the same units as \(1/m^{2}\). Now consider the Fourier transform for \(K\).
\[\tilde{K}=\int dxe^{ipx}K \tag{26}\]
Since \(\tilde{K}\) has units of \(1/m^{2}\) and \(dx\) has units of spacetime volume \(V\), \(K\) must have units of \(\frac{1}{Vm^{2}}\). In order for these units to come directly from the constants \(m\) and \(\rho\), we must have a normalization of the form \(a=\alpha\frac{\rho}{m^{2}}\), where \(\alpha\) is a unitless parameter. This yields a \(\beta\) factor of the form
\[\beta=\sqrt{1+\frac{1}{\alpha}} \tag{27}\]
Since fundamental constants like \(\hbar\) and \(G\) may also contribute to the units of \(a\), we cannot rule out that \(\alpha\) may depend on \(\rho\) and \(m^{2}\). As we will discuss in section III.1.4, the only clear restriction on \(\alpha\) is that it is non-zero.
## III Applications
In this section, we will apply the formulas derived in section II to construct path sums for the retarded propagator and the Feynman propagator. First, we will show that this formulation is consistent with past work. Then we will use numerical simulations to show that these results are consistent with the continuum values for the retarded propagator. In this section it will be useful to define:
\[\nu(y-x)\equiv\Theta(y^{0}-x^{0})\Theta\Bigl{(}\tau(y-x)^{2}\Bigr{)} \tag{28}\]
Note that this is 1 if \(x\prec y\) and 0 otherwise.
### Results for the Retarded Propagator
#### iii.1.1 Comparison to Previous Results
In order to compare the following results, which are in terms of jump amplitudes, to the results in [11; 12],
we must first consider how hop and stop amplitudes can be expressed in terms of jump amplitudes. All but the last jump along a trajectory consists of a hop and a stop (since only intermediate events count as stops). This means that if we multiply the hop and stop amplitudes from [11; 12] we should get something that matches the jump amplitudes in the following calculations. Furthermore, since the last jump includes only a hop we must divide the trajectory's probability amplitude by the stop amplitude to get its contribution to the propagator. This means our unit constant \(a\) should be the inverse of the stop amplitude. Note that the unit constant \(a\) introduced in the previous section is not the same as the hop amplitude \(a\) from [11; 12].
#### iii.1.2 Results in 2 Dimensions
In 2 dimensions, the retarded propagator takes the form:
\[K(y-x)=\nu(y-x)\frac{1}{2}J_{0}\Big{(}m\tau(y-x)\Big{)} \tag{29}\]
Using equation 24, in order for the path sum to match this on average, the average jump amplitudes must be:
\[T(y-x)=\frac{1}{a}\nu(y-x)\frac{1}{2}J_{0}\Big{(}\pm\beta m\tau(y-x)\Big{)} \tag{30}\]
The derivation discussed in [11] is equivalent to setting \(a=-\frac{\rho}{m^{2}}\). When this choice is made we get:
\[\beta=\sqrt{1+\frac{\rho}{m^{2}a}}=0 \tag{31}\]
\[T(y-x)=-\frac{m^{2}}{\rho}\frac{1}{2}\nu(y-x) \tag{32}\]
The \(\Theta\)-functions ensure that jumps are only allowed from \(x\) to \(y\) if \(x\prec y\). This result then says a jump to such a future-connected event should have an amplitude of \(-\frac{m^{2}}{2\rho}\). This is equivalent to the result of [11] where \(-\frac{m^{2}}{2\rho}\) is the product of the hop and stop amplitudes.
#### iii.1.3 Results in 4 Dimensions
In 4 Dimensions, the retarded propagator takes the form:
\[K(y-x)=\nu(y-x)\bigg{(}\frac{1}{2\pi}\delta\Big{(}\tau(y-x)^{2} \Big{)}\] \[-\frac{m}{4\pi\tau(y-x)}J_{1}\Big{(}m\tau(y-x)\Big{)}\bigg{)} \tag{33}\]
This means the jump amplitudes must be of the form:
\[T(y-x)=\frac{1}{a}\nu(y-x)\bigg{(}\frac{1}{2\pi}\delta\Big{(} \tau(y-x)^{2}\Big{)}\] \[-\frac{\pm\beta m}{4\pi\tau(y-x)}J_{1}\Big{(}\pm\beta m\tau(y-x )\Big{)}\bigg{)} \tag{34}\]
If we again set \(a=-\frac{\rho}{m^{2}}\) and \(\beta=0\) we get:
\[T(y-x)=-\frac{m^{2}}{\rho}\frac{1}{2\pi}\delta\Big{(}\tau(y-x)^{2}\Big{)}\nu( y-x) \tag{35}\]
As discussed in [11], we will need the result:
\[\lim_{\rho\rightarrow\infty}\sqrt{\rho}\mu(y-x)=\frac{\sqrt{24}}{2}\delta \Big{(}\tau(y-x)^{2}\Big{)}\nu(y-x) \tag{36}\]
Here I have again used \(\mu(y-x)\) to represent the probability that \(x\) is linked to \(y\). Then in the high density limit we have:
\[T(y-x)\approx-\frac{m^{2}}{\rho}\frac{\sqrt{\rho}}{\sqrt{24}\pi}\mu(y-x) \tag{37}\]
This says that if we allow only jumps between linked events and give them the amplitude \(-\frac{m^{2}}{\rho}\frac{\sqrt{\rho}}{\sqrt{24}\pi}\) then this will give the correct propagator on average. This is the same as the product of hop and stop amplitudes found in [11].
#### iii.1.4 Numerical Results
While the previous calculations show that the causal set propagator should match the continuum propagator for any value of \(a\) on average, it is conceivable that the variation may still be too large for the model to be acceptable. To test this, we will use numerical simulations.
To start, events in the causal set are chosen using a Poisson sprinkling with density \(\rho=4500\) into a causal diamond in a flat 2D spacetime. Then the causal order of the set is determined by assuming a Minkowski metric. The proper time separating each pair of events is estimated from the causal structure using equations 3 and 4. These proper times are used to calculate a jump amplitude matrix:
\[T_{xy}=C_{xy}\frac{m^{2}}{2\rho\alpha}J_{0}\left(\sqrt{1+\frac{1}{\alpha}}m \tau_{xy}\right) \tag{38}\]
where \(C_{xy}=1\) if \(x\prec y\) and 0 otherwise. Note that this formula has the same form as the average jump amplitude from equation 30. In particular, this formula assumes \(a\) is of the form \(a=\alpha\frac{\rho}{m^{2}}\).
There is a slight issue with this formula for the jump amplitudes. Even though the average over sprinklings of \(\tau_{xy}\) should be the manifold proper time \(\tau(y-x)\), the average over sprinklings of equation 38 will not always match equation 30. This is because the average of a function is not generally the same as the function of the average. However, we should expect this effect to go away as \(\rho\rightarrow\infty\) since the variation over sprinklings in \(\tau_{xy}\) goes to zero in the limit [8].
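For readers who wish to reproduce this check, the following end-to-end sketch follows the procedure just described but, for simplicity, uses the manifold proper time in the jump amplitudes (as in figure 5) and a lower sprinkling density so that the matrix inversion stays cheap; the parameter choices and variable names are illustrative.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)
rho, m, alpha = 600.0, 3.0, 1.0
a = alpha * rho / m ** 2

# sprinkle the unit causal diamond, parametrized by light-cone coordinates
# u = (t + x)/sqrt(2), v = (t - x)/sqrt(2); the diamond is then the unit (u, v) square
N = rng.poisson(rho)
uv = rng.uniform(0.0, 1.0, size=(N, 2))
du = uv[None, :, 0] - uv[:, None, 0]
dv = uv[None, :, 1] - uv[:, None, 1]
causal = (du > 0) & (dv > 0)                           # x precedes y iff u and v both increase
tau = np.sqrt(np.where(causal, 2.0 * du * dv, 0.0))    # tau^2 = 2 du dv for causal pairs

# jump amplitudes of equation 38, here with the manifold proper time (cf. figure 5)
beta = np.sqrt(1.0 + 1.0 / alpha)
T = np.where(causal, (m ** 2 / (2.0 * rho * alpha)) * j0(beta * m * tau), 0.0)

K = a * T @ np.linalg.inv(np.eye(N) - T)               # causal set propagator, equation 13
K_cont = 0.5 * j0(m * tau)                             # continuum retarded propagator
pairs = causal & (tau > 0.1)
print(np.mean(np.abs(K[pairs] - K_cont[pairs])))       # mean deviation over related pairs
```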
In figure 4, we see that this form of causal set propagator appears to agree with the continuum propagator at various values of \(\alpha\). The variation also seems to decrease
at larger proper times, though higher density simulations over larger causal diamonds may be necessary to state this conclusively.
While the variation does seem to be larger for positive values of \(\alpha\), there is good reason to believe this variation can be attributed to how we have estimated the proper times used in the jump amplitudes. Figure 5 shows the same graph for the propagator with \(\alpha=1\), but in this case the jump amplitudes were calculated with the manifold proper time instead of the causal set estimate. As we can clearly see, the variation is significantly smaller. Note that in equation 38, positive values of \(\alpha\) will make the jump amplitudes more sensitive to the proper time, so this variation is more noticeable.
Though we have only discussed integer values of \(\alpha\) so far, the only limitation from the math is that \(\alpha\neq 0\). Some conceptual problems do arise from allowing \(\alpha\ll 1\). If \(\alpha\) were sufficiently small, that could cause \(|T_{ij}|>1\) for some \(i,j\). While, in theory, the path sum should still average to the correct propagator, this situation would contradict the probability amplitude interpretation of \(T\).
We must also consider complex values of \(\alpha\). \(\alpha\) should be allowed to be complex because both the jump amplitudes \(T\) and the propagator \(K\) can be complex. Figure 6 shows the results for the 2D retarded propagator using equation 38 with \(\alpha=1+i\). As we can see, the causal set propagator is still a close match for the continuum value.
Figure 4: Numerical results for the causal set retarded propagator (grey) and the continuum retarded propagator (black) from a single sprinkling with \(\rho=4500\), \(m=3\), and various values of \(\alpha\). The propagators are plotted as a function of the proper time measured in the manifold. The jump amplitudes in this calculation are from equation 38 with the proper time estimated from the causal structure using equations 3 and 4.
Figure 5: Numerical results for the causal set retarded propagator (grey) and the continuum retarded propagator (black) from a single sprinkling with \(\rho=4500\), \(m=3\), and \(\alpha=1\). The propagators are plotted as a function of the proper time measured in the manifold. The jump amplitudes in this calculation are from equation 38 with the proper time calculated from the manifold.
### Results for Feynman Propagator
The same process carried out to model the retarded propagator in the previous subsection can also be applied to create a path sum for the Feynman propagator. While past work such as [14] considered how one could obtain the Feynman propagator for a free scalar field from the retarded and advanced propagators, that work did not express the Feynman propagator as a path sum. This is useful to consider since, in a continuum calculation, the Feynman propagator can be directly calculated as a path integral but the retarded propagator cannot [10].
In two dimensions [15], the Feynman propagator is
\[K(y-x) = \Theta(\tau_{xy}^{2})\left(-\frac{i}{4}H_{0}^{(2)}(m\tau_{xy})\right) \tag{39}\] \[+\Theta(-\tau_{xy}^{2})\left(\frac{1}{2\pi}K_{0}(ms_{xy})\right)\]
where \(s_{xy}=\sqrt{-\tau_{xy}^{2}}\). Then, using equation 24, we must have the jump amplitude function
\[T(y-x) = \Theta(\tau_{xy}^{2})\left(-\frac{i}{4a}H_{0}^{(2)}(\pm\beta m \tau_{xy})\right) \tag{40}\] \[+\Theta(-\tau_{xy}^{2})\left(\frac{1}{2\pi a}K_{0}(\pm\beta ms_{ xy})\right)\]
Here, \(H\) is a Hankel function and \(K\) is a modified Bessel function. Note that since \(K_{0}\) decays for large arguments, spacelike jumps far from the lightcone are very unlikely. By contrast, \(H_{0}^{(2)}\) is oscillatory, so timelike jumps far from the lightcone should still be expected.
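As an illustration (not part of the paper's own numerics), a small sketch of this jump amplitude function for real, positive \(\beta m\) and the '+' sign choice, using SciPy's Hankel and modified Bessel functions:

```python
import numpy as np
from scipy.special import hankel2, k0

def feynman_jump_2d(tau_sq, a, beta, m):
    """Jump amplitude of equation 40 (2D Feynman propagator), taking the '+' sign
    and assuming beta * m is real and positive."""
    tau_sq = np.atleast_1d(np.asarray(tau_sq, dtype=float))
    amp = np.zeros(tau_sq.shape, dtype=complex)
    timelike, spacelike = tau_sq > 0, tau_sq < 0
    # Timelike separations: oscillatory Hankel function H_0^(2).
    amp[timelike] = (-1j / (4.0 * a)) * hankel2(0, beta * m * np.sqrt(tau_sq[timelike]))
    # Spacelike separations: exponentially decaying modified Bessel function K_0.
    amp[spacelike] = (1.0 / (2.0 * np.pi * a)) * k0(beta * m * np.sqrt(-tau_sq[spacelike]))
    return amp

# Timelike jumps keep oscillating; spacelike jumps far from the lightcone are strongly suppressed.
print(np.abs(feynman_jump_2d([4.0, -4.0], a=1.0, beta=1.0, m=3.0)))
```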
In four dimensions [15], the Feynman propagator is
\[K(y-x) = \Theta(\tau_{xy}^{2})\left(\frac{m}{8\pi\tau_{xy}}H_{1}^{(1)}(m \tau_{xy})\right) \tag{41}\] \[+\Theta(-\tau_{xy}^{2})\left(-\frac{im}{4\pi^{2}s_{xy}}K_{1}(ms_{ xy})\right)\]
This yields the corresponding jump amplitude function
\[T(y-x) = \Theta(\tau_{xy}^{2})\left(\frac{\pm\beta m}{8\pi a\tau_{xy}}H_{1}^{(1)}(\pm\beta m\tau_{xy})\right) \tag{42}\] \[+\Theta(-\tau_{xy}^{2})\left(-\frac{\pm i\beta m}{4\pi^{2}as_{xy}}K_{1}(\pm\beta ms_{xy})\right)\]
If we follow the same reasoning as with the retarded propagator, we can attempt to simplify the jump amplitudes in 4 dimensions by setting \(a=-\rho/m^{2}\) which sets \(\beta=0\). While this does not fully remove the \(\tau\)-dependence, it does greatly simplify the expression for \(T(y-x)\). After taking the limit as \(\beta\to 0\), we find
\[T(y-x)=\frac{im^{2}}{4\pi^{2}\rho|\tau_{xy}^{2}|} \tag{43}\]
One challenge is that this path sum would be non-local, which makes it difficult to verify these results numerically. A possible avenue for future work would be to simulate this numerically over a large finite region of a flat spacetime to see if the results approach the continuum value for the propagator. Then additional work may be necessary to show that trajectories far from the lightcone do not contribute much. Alternatively, it may be necessary to include boundary terms that account for the paths far from the lightcone that cannot be modeled in a finite region.
## IV Conclusion
Constructing a propagator is a key step in establishing a field theory on a causal set. While past work has shown examples of path sums for propagators that match the continuum in the appropriate limits, these constructions were not unique solutions. In this paper, we have derived a general equation for how to relate the average value of a scalar field propagator to the possible average jump amplitudes. This enabled us to solve for the jump amplitudes necessary to recreate the correct continuum propagator on average. These were tested numerically in the case of the 2 dimensional retarded propagator and shown to match the continuum value. Even though these various constructions of the propagators should agree for large proper times, at small proper times they may differ greatly. This could make understanding possible values for jump amplitudes important for the small scale dynamics of a quantum field theory on a causal set.
Figure 6: Numerical results for the causal set retarded propagator (grey) and the continuum retarded propagator (black) from a single sprinkling with \(\rho=4500\), \(m=3\), and \(\alpha=1+i\). The propagators are plotted as a function of the proper time measured in the manifold. The jump amplitudes in this calculation are from equation 38 with the proper time estimated from the causal structure using equations 3 and 4. Plot (a) shows the real part of the propagator and plot (b) shows the imaginary part.
###### Acknowledgements.
I would like to acknowledge my advisor, Dr. David Craig for the many insightful conversations that shaped this paper. I would also like to acknowledge Dr. Steven Johnston for feedback about the content of the paper.
|
2302.04658
|
The Sample Complexity of Approximate Rejection Sampling with
Applications to Smoothed Online Learning
|
Suppose we are given access to $n$ independent samples from distribution
$\mu$ and we wish to output one of them with the goal of making the output
distributed as close as possible to a target distribution $\nu$. In this work
we show that the optimal total variation distance as a function of $n$ is given
by $\tilde\Theta(\frac{D}{f'(n)})$ over the class of all pairs $\nu,\mu$ with a
bounded $f$-divergence $D_f(\nu\|\mu)\leq D$. Previously, this question was
studied only for the case when the Radon-Nikodym derivative of $\nu$ with
respect to $\mu$ is uniformly bounded. We then consider an application in the
seemingly very different field of smoothed online learning, where we show that
recent results on the minimax regret and the regret of oracle-efficient
algorithms still hold even under relaxed constraints on the adversary (to have
bounded $f$-divergence, as opposed to bounded Radon-Nikodym derivative).
Finally, we also study efficacy of importance sampling for mean estimates
uniform over a function class and compare importance sampling with rejection
sampling.
|
Adam Block, Yury Polyanskiy
|
2023-02-09T14:20:14Z
|
http://arxiv.org/abs/2302.04658v3
|
The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning
###### Abstract
Suppose we are given access to \(n\) independent samples from distribution \(\mu\) and we wish to output one of them with the goal of making the output distributed as close as possible to a target distribution \(\nu\). In this work we show that the optimal total variation distance as a function of \(n\) is given by \(\tilde{\Theta}(\frac{D}{f^{\prime}(n)})\) over the class of all pairs \(\nu,\mu\) with a bounded \(f\)-divergence \(D_{f}(\nu\|\mu)\leq D\). Previously, this question was studied only for the case when the Radon-Nikodym derivative of \(\nu\) with respect to \(\mu\) is uniformly bounded. We then consider an application in the seemingly very different field of smoothed online learning, where we show that recent results on the minimax regret and the regret of oracle-efficient algorithms still hold even under relaxed constraints on the adversary (to have bounded \(f\)-divergence, as opposed to bounded Radon-Nikodym derivative). Finally, we also study efficacy of importance sampling for mean estimates uniform over a function class and compare importance sampling with rejection sampling.
## 1 Introduction
Consider the following problem: given \(n\) independent samples from some base distribution \(\mu\), how can a learner generate a single sample from a target distribution \(\nu\)? This simple question dates back decades, with the first formal solution, rejection sampling, provided already by Von Neumann (1951). Due to its simplicity, this sampling problem appears as a primitive in numerous applications in machine learning, theoretical computer science, and cryptography (Lyubashevsky, 2012; Liu, 1996; Naesseth et al., 2017; Ozols et al., 2013); thus, constructing efficient solutions has been the focus of many works (Grover et al., 2018; Gilks and Wild, 1992; Martino and Miguez, 2011). Perhaps surprisingly, though, the original solution of rejection sampling (Von Neumann, 1951) remains a popular method even today.
Given \(X_{1},\ldots,X_{n}\sim\mu\), recall that rejection sampling takes as a parameter some \(M\), which is a uniform upper bound on the Radon-Nikodym derivative \(\frac{d\nu}{d\mu}\), and for each \(1\leq i\leq n\), accepts \(X_{i}\) with probability \(\frac{1}{M}\cdot\frac{d\nu}{d\mu}(X_{i})\) and returns an arbitrary accepted \(X_{i}\) as a sample from \(\nu\). It is an easy exercise to see that if \(M\geq\left|\left|\frac{d\nu}{d\mu}\right|\right|_{\infty}\), then any accepted sample has distribution \(\nu\). Furthermore, it is not hard to see that any sample gets accepted with probability \(\frac{1}{M}\) independently
of other samples and thus, if we want to have at least one accepted sample with high probability, we require \(n=\Theta(M)\). While there has been quite a lot of work in the information theory community dedicated to refining this bound (Liu and Verdu, 2018; Harsha et al., 2007) as well as developments in the statistical community dedicated to improving sampling efficiency under strong structural assumptions (Gilks and Wild, 1992; Gorur and Teh, 2011), the scope of almost all of this work is limited by the requirement that \(\left|\left|\frac{d\nu}{d\mu}\right|\right|_{\infty}<\infty\). In many settings, this assumption is false (Block et al., 2023); as a result, we focus on a similar problem without the stringent assumption on a uniform upper bound. Unfortunately, it is not hard to see that there exist examples where we simply cannot recover a sample _exactly_ from \(\nu\) without this uniform upper bound (see Theorem 30 for an example). Consequently, we relax our desideratum to consider _approximate_ sampling. Specifically, we ask the following question:
_How many independent samples \(X_{1},\ldots,X_{n}\) do we need from a source distribution \(\mu\) such that we can select some \(j^{*}\in[n]\) in order for the law of \(X_{j^{*}}\) to be within total variation distance \(\varepsilon\) of \(\nu\)?_
Despite its simplicity, to the best of our knowledge this question has not been considered in the literature to date. We emphasize several special cases in the related work in Appendix A. In this work we give a complete answer to this question with essentially matching upper and lower bounds for all superlinear \(f\)-divergences of practical interest. While the upper bounds are achieved with a modified rejection sampler and the analysis follows without too much difficulty from classical work, the lower bounds require a more technical approach. In order to quantify how far apart \(\nu\) is from \(\mu\), we use the information-theoretic notion of an \(f\)-divergence, where for two measures \(\nu\ll\mu\) defined on a common set and a convex function \(f\), we define
\[D_{f}\left(\nu||\mu\right)=\mathbb{E}_{\mu}\left[f\left(\frac{d\nu}{d\mu}(Z) \right)\right].\]
We give a more formal definition below, but we observe here that the notion of \(f\)-divergence generalizes common divergences including total variation, KL-divergence, Renyi divergences, and \(\mathcal{E}_{\gamma}\) divergence (Polyanskiy and Wu, 2022+; Van Erven and Harremos, 2014; Asoodeh et al., 2021). We will make the assumption that for some convex \(f\), the source and target measures satisfy \(D_{f}\left(\nu||\mu\right)<\infty\) and ask what the sample complexity of \(\varepsilon\)-approximate rejection sampling is under this constraint. Interestingly, the answer depends on the tail behavior of \(f\); in particular, if \(\sup f^{\prime}(x)<\infty\) then rejection sampling cannot work under only this constraint (see Proposition 4). If we have an \(f\)-divergence constraint with \(f^{\prime}(\infty)=\infty\), however, we will see that
\[n=\widetilde{\Theta}\left((f^{\prime})^{-1}\left(\frac{D_{f}\left(\nu||\mu \right)}{\varepsilon}\right)\right)\]
samples is both necessary and sufficient in order to generate a sample \(X_{j^{*}}\) that is \(\varepsilon\)-close in total variation. In fact, we show that von Neumann's original rejection sampler is essentially optimal for this problem and we do not require the more complicated samplers introduced for exact sampling by Harsha et al. (2007); Liu and Verdu (2018). As mentioned above, the upper bounds are relatively standard, with much of the technical effort involving the construction of lower bounds.
While the above results are interesting in their own right, we emphasize one key use case of our results in a seemingly unrelated field: _smoothed online learning_. We briefly recall the setup.
For general online learning, we fix a set of contexts \(\mathcal{X}\), a set of targets \(\mathcal{Y}\) and a function class \(\mathcal{F}:\mathcal{X}\rightarrow\mathcal{Y}\) as well as a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow[0,1]\). For some horizon \(T\), online learning proceeds in rounds where for each time \(1\leq t\leq T\) the following happens:
1. Nature chooses some context \(x_{t}\) and label \(y_{t}\).
2. The Learner chooses some prediction \(\widehat{y}_{t}\in\mathcal{Y}\).
3. The Learner sees \(y_{t}\) and suffers loss \(\ell(\widehat{y}_{t},y_{t})\).
As in Block et al. (2022), Haghtalab et al. (2022), we distinguish between the _proper_ and _improper_ settings. In the former, the Learner must choose some function \(f_{t}\in\mathcal{F}\) before seeing \(x_{t}\) and then predicts \(\widehat{y}_{t}=f_{t}(x_{t})\). In the latter, the Learner observes \(x_{t}\) and then predicts an arbitrary \(\widehat{y}_{t}\in\mathcal{Y}\). The goal in both cases is to minimize the expected regret with respect to the best function in hindsight, where
\[\mathrm{Reg}_{T}=\sum_{t=1}^{T}\ell(\widehat{y}_{t},y_{t})-\inf_{f\in\mathcal{ F}}\sum_{t=1}^{T}\ell(f(x_{t}),y_{t}).\]
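For concreteness, a schematic Python rendering of this protocol; the `adversary`, `learner`, `loss`, and `function_class` objects are hypothetical placeholders, and the loop shows the improper setting (in the proper setting the learner would commit to some \(f_t\in\mathcal{F}\) before seeing \(x_t\)).

```python
def run_online_learning(adversary, learner, function_class, loss, T):
    """Schematic online-learning protocol; all arguments are hypothetical placeholders."""
    history, total_loss = [], 0.0
    for t in range(T):
        x_t, y_t = adversary.choose(history)      # Nature picks context and label
        y_hat = learner.predict(x_t, history)     # Learner predicts (improper setting)
        total_loss += loss(y_hat, y_t)            # Learner observes y_t and suffers loss
        history.append((x_t, y_t))
    # Regret against the best fixed hypothesis in hindsight.
    best_loss = min(sum(loss(f(x), y) for x, y in history) for f in function_class)
    return total_loss - best_loss
```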
As stated, there is no restriction on Nature's choice of the context and label, which is called the adversarial setting. Despite its popularity due to the robustness of the regime and the lack of assumptions, there are two major problems with the fully adversarial setting: first, simple function classes like thresholds in one dimension that can be easily learned when the data appear independently become unlearnable in the adversarial regime (Rakhlin et al., 2015; Littlestone, 1988); second, even when function classes are learnable, they often cannot be learned efficiently (Hazan and Koren, 2016). In order to solve the first issue, the notion of smoothed online learning has recently gained traction (Rakhlin et al., 2011; Haghtalab et al., 2022; Block et al., 2022; Block and Simchowitz, 2022). Motivated by smoothed analysis of algorithms, Rakhlin et al. (2011); Haghtalab et al. (2022) consider the following setting. For a fixed base measure \(\mu\) on some set \(\mathcal{X}\), we say that a measure \(\nu\) is _\(\sigma\)-smooth_ with respect to \(\mu\) if \(\left|\left|\frac{d\nu}{d\mu}\right|\right|_{\infty}\leq\frac{1}{\sigma}\). An adversary is _\(\sigma\)-smooth_ with respect to some fixed \(\mu\) if for all \(t\), it holds that the distribution \(p_{t}\) of \(x_{t}\) conditioned on all the history is \(\sigma\)-smooth. One motivation for this definition is to suppose that Nature is fully adversarial, but corrupted by some small amount of noise. For example, if \(\mathcal{X}=\mathbb{R}^{d}\), we could imagine adding a small amount of uniform or Gaussian noise to an adversarial input (Block et al., 2023). In Block et al. (2022), Haghtalab et al. (2022), the minimax optimal rates for smoothed online learning were derived up to polylogarithmic factors. As an example, if we let \(\mathrm{vc}\left(\mathcal{F}\right)\) denote the Vapnik-Chervonenkis dimension (Blumer et al., 1989) of some binary valued function class \(\mathcal{F}\), then there exists some algorithm capable of achieving, with respect to the indicator loss,
\[\mathbb{E}\left[\mathrm{Reg}_{T}\right]=O\left(\sqrt{T\cdot\mathrm{vc}\left( \mathcal{F}\right)\cdot\log\left(\frac{T}{\sigma}\right)}\right).\]
Unfortunately, in many common settings, a uniform bound on \(\frac{dp_{t}}{d\mu}\) may not be achievable. For example, consider again the case of a small amount of Gaussian noise in \(\mathbb{R}^{d}\) being added to an adversarial input. A natural choice of \(\mu\) would be some fixed Gaussian, but there is no way to
ensure that \(\left|\left|\frac{dp_{t}}{d\mu}\right|\right|_{\infty}\) is finite. Even when the Radon-Nikodym derivative is uniformly bounded, it may be, as in many high dimensional settings, that this bound is too large for the resulting implications to be meaningful. Thus, in Section 4, we propose a more general notion, of an \(f\)-smoothed adversary, where the distribution \(p_{t}\) of the contexts \(x_{t}\) conditional on the history satisfies \(D_{f}\left(p_{t}||\mu\right)\leq\frac{1}{\sigma}\). In this harder setting, the results of Block et al. (2022), Haghtalab et al. (2022a,b) no longer apply due to the breakdown of a key technical step used in the proofs of all of these results. In Section 4, we apply our bounds on the sample complexity of approximate rejection sampling to generalize the approach of these works and achieve upper bounds on the information theoretic rates of \(f\)-smoothed online learning, which are tight for some \(f\)-divergences.
While the information theoretic rates provided in Block et al. (2022), Haghtalab et al. (2022b) are important, the suggested algorithms that achieve these rates are computationally intractable and thus two _oracle-efficient_ algorithms were also proposed, where the learner has access to an Empirical Risk Minimization (ERM) oracle returning the minimizer over \(\mathcal{F}\) of a weighted cumulative loss function evaluated on some data set (see Definition 35 for a formal definition). Once again, the analysis of these two algorithms does not extend beyond the standard smoothed setting; in Section 4, we again apply our rejection sampling sample complexity bounds to demonstrate that, by modifying the hyperparameters of the two proposed algorithms, we can still maintain a no-regret guarantee under the significantly more general \(f\)-smoothed online learning setting.
We defer discussion of related work to Appendix A for the sake of space. We now summarize our key contributions:
* In Theorem 3, we provide an upper bound on the complexity of approximately sampling from some target measure \(\nu\) given access to samples from \(\mu\). In particular, we show that by modifying classical rejection sampling, \(\widetilde{\Theta}\left((f^{\prime})^{-1}\left(\frac{D_{f}(\nu||\mu)}{ \varepsilon}\right)\right)\) samples suffice to obtain a sample with total variation distance at most \(\varepsilon\) from the target.
* In Proposition 4 and Theorems 5 and 6, we show that the upper bound given by rejection sampling is essentially tight. In particular, we show that rejection sampling is in some sense generic in that "the best" way to use samples from \(\mu\) to produce a sample from \(\nu\) is the approach described above. Furthermore, we show that if \(f^{\prime}\) is bounded above, then the approximate sampling problem is impossible; if \(f^{\prime}\) is unbounded, we show in Theorems 5 and 6 that the sample complexity derived in Theorem 3 is essentially tight as \(\varepsilon\downarrow 0\). In particular, Theorem 5 shows that for all \(n\), there exist distributions with bounded \(f\)-divergence such that \(\Omega\left((f^{\prime})^{-1}\left(\frac{D_{f}(\nu||\mu)}{\varepsilon}\right)\right)\) samples are necessary to produce an \(\varepsilon\)-approximate sample from the target measure, while in Theorem 6, we show that (for a slightly smaller class of \(f\) satisfying a mild growth condition) there exist distributions such that the preceding lower bound holds uniformly in \(n\).
* In Section 4, we generalize previous results on smoothed online learning to the significantly more general setting of \(f\)-smoothed online learning. In particular, we derive minimax upper bounds without regard to computation time as well as demonstrating that two oracle-efficient algorithms (one proper and one improper) proposed in Block et al. (2022) remain no-regret even in the more general \(f\)-smoothed online learning setting. Moreover, in Theorem 12, we answer an open question in Block et al. (2022) by showing that an instance of FTPL has
regret scaling like \(\sigma^{-1/4}\) as opposed to \(\sigma^{-1/2}\), where \(\sigma\) is the smoothness parameter of the adversary; this generalizes a result of Haghtalab et al. (2022) to arbitrary context spaces.
* In Appendix B, we prove new bounds on the quality of importance sampling for estimating means with respect to a target \(\nu\) uniformly over a function class \(\mathcal{F}\) when we have access to samples from \(\mu\). We then compare these results to estimates using rejection sampling assuming \(D_{f}\left(\nu||\mu\right)<\infty\) for the special case of \(\chi^{2}\)-divergence and compare these results with earlier bounds from Chatterjee and Diaconis (2018); Cortes et al. (2010).
**Notation.** In the sequel, we will always denote by \(\mu\) a base measure on the set \(\mathcal{X}\) with associated \(\sigma\)-algebra \(\mathscr{F}\). We will denote by \(X_{1:n}=(X_{1},\ldots,X_{n})\) a vector of \(n\) independent samples from \(\mu\) and we will let \(j^{*}\) be a selection rule. We will reserve \(\nu\) for our target measure and the letters \(\varepsilon,\delta,\gamma\) will all be reserved for small positive real constants. Furthermore, we will reserve \(f\) for a convex function mapping the positive reals to the positive reals satisfying \(f(1)=f^{\prime}(1)=0\). For such an \(f\), we will let \((f^{\prime})^{-1}(u)=\inf\left\{t>0\,|\,f^{\prime}(t)\geq u\right\}\), where we adopt the standard convention of taking the infimum of the empty set to be infinite. For a given random variable \(Y\), we will denote by \(P_{Y}\) the distribution of \(Y\). We use \(O(\cdot),\Omega(\cdot)\) to denote asymptotic big-oh notation and apply tildes to hide polylogarithmic factors.
## 2 Problem Setup and Notation
In this section, we formally define the necessary information theoretic quantities and state the problem. To begin, we define \(f\)-divergence. For more information on information theoretic notions, see Polyanskiy and Wu (2022+).
**Definition 1**.: Let \(f:[0,\infty]\to\mathbb{R}_{\geq 0}\cup\{\infty\}\) be a convex function satisfying \(f(1)=f^{\prime}(1)=0\). For two probability measures \(\nu,\mu\) on some space \(\mathcal{X}\), define the \(f\)-divergence,
\[D_{f}\left(\nu||\mu\right)=\mathbb{E}_{\mu}\left[f\left(\frac{d\nu}{d\mu}(Z)\right)\mathbb{I}\left[\frac{d\nu}{d\mu}(Z)<\infty\right]\right]+f^{\prime}(\infty)\,\nu\left(\frac{d\nu}{d\mu}(Z)=\infty\right).\]
Note that if \(\nu\ll\mu\) then we may ignore the second term.
_Remark 2_.: As a technical aside, throughout the paper, we will be using \(f^{\prime}\) and \(f^{\prime\prime}\) to denote the first and second derivatives of the \(f\) appearing in Definition 1. By Rademacher's Theorem (Rademacher, 1919), \(f\) is differentiable almost everywhere, but for any points where \(f\) is not differentiable, we will take \(f^{\prime}\) to be the maximal subgradient. As \(f\) is convex, \(f^{\prime}\) is nondecreasing and thus we can take \(f^{\prime\prime}\) to be the right derivative of \(f^{\prime}\), which is always well-defined.
We will phrase our results in terms of \(D_{f}\left(\nu||\mu\right)\) for general \(f\), but there are several important examples that will come up throughout the paper. Before formally introducing the problem, we will give several examples of well-known \(f\)-divergences:
**Example 1** (Total Variation).: Consider \(f(x)=|x-1|-(x-1)\). In this case we have
\[D_{f}\left(\nu||\mu\right)=\mathrm{TV}(\nu,\mu)=\sup_{A\in\mathscr{F}}|\nu(A) -\mu(A)|\]
the total variation distance, where \(\mathscr{F}\) is the common \(\sigma\)-algebra over \(\mathcal{X}\) on which \(\nu,\mu\) are defined.
**Example 2** (KL Divergence).: If we set \(f(x)=x\log(x)-x+1\) then we get \(D_{f}\left(\nu||\mu\right)=D_{KL}\left(\nu||\mu\right)\) the KL divergence.
**Example 3** (Renyi Divergence).: If we set \(f(x)=x^{\lambda}-\lambda x+\lambda-1\), then we get that
\[D_{f}\left(\nu||\mu\right)=e^{\left(\lambda-1\right)D_{\lambda}\left(\nu||\mu\right)}\]
where \(D_{\lambda}\left(\nu||\mu\right)\) is the Renyi divergence of order \(\lambda\). Special cases of \(D_{\lambda}\left(\nu||\mu\right)\) include the case where \(\lambda\downarrow 1\) in which case we have the KL divergence again and \(\lambda=2\), in which case we recover (a monotone transformation of) the standard \(\chi^{2}\) divergence.
**Example 4** (\(\mathcal{E}_{\gamma}\) Divergence).: If, for \(\gamma\geq 1\), we set \(f_{\gamma}(x)=(x-\gamma)_{+}\), then denote by \(\mathcal{E}_{\gamma}(\nu||\mu)\) the divergence associated with this \(f\). This divergence was originally defined in [10, (2.141)] for the study of channel coding. Since then it appeared prominently in the study of differential privacy [1] and wiretap channels [16]. It will also be crucial in the proof of our lower bounds below.
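As a quick illustration (ours, not from the paper), the snippet below evaluates \(D_{f}\left(\nu||\mu\right)\) for finite-support distributions with \(\nu\ll\mu\), using convex functions of the same form as in Examples 1-4.

```python
import numpy as np

def f_divergence(nu, mu, f):
    """D_f(nu || mu) for discrete distributions with nu << mu (finite support)."""
    nu, mu = np.asarray(nu, dtype=float), np.asarray(mu, dtype=float)
    support = mu > 0
    ratio = nu[support] / mu[support]
    return float(np.sum(mu[support] * f(ratio)))

# Convex f's of the same form as in Examples 1-4 (each satisfies f(1) = f'(1) = 0).
f_tv     = lambda x: np.abs(x - 1) - (x - 1)
f_kl     = lambda x: x * np.log(np.where(x > 0, x, 1.0)) - x + 1
f_chisq  = lambda x: x**2 - 2 * x + 1                    # Example 3 with lambda = 2
f_egamma = lambda x, gamma=2.0: np.maximum(x - gamma, 0.0)

nu = np.array([0.5, 0.3, 0.2])
mu = np.array([0.2, 0.3, 0.5])
print(f_divergence(nu, mu, f_tv), f_divergence(nu, mu, f_kl))
print(f_divergence(nu, mu, f_chisq), f_divergence(nu, mu, f_egamma))
```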
We now define the primary object of study. Given \(X_{1:n}=(X_{1},\ldots,X_{n})\) a tuple of elements of \(\mathcal{X}\), we define a _selection rule_\(j^{*}\) as any random variable taking values in \([n]\) and depending in any way on \(X_{1:n}\). We are now ready to formally state the main problem:
**Question:** Suppose that \(\mathcal{X}\) is an arbitrary set with \(\sigma\)-algebra \(\mathscr{F}\) and suppose that \(\mu,\nu\) are probability measures with respect to \(\mathscr{F}\) satisfying, for some fixed \(f\), \(D_{f}\left(\nu||\mu\right)<\infty\). For fixed \(\varepsilon>0\), how large does \(n\) have to be such that there exists a selection rule \(j^{*}\) ensuring that \(\mathrm{TV}\left(P_{X_{j^{*}}},\nu\right)<\varepsilon\)?
As an example, we consider traditional rejection sampling. We construct a random set \(\mathcal{S}\subset[n]\) by adding \(j\) to \(\mathcal{S}\) with probability \(\frac{1}{M}\cdot\frac{d\nu}{d\mu}(X_{j})\), which is at most \(1\) by the assumption that \(M>\left|\left|\frac{d\nu}{d\mu}\right|\right|_{\infty}\). If \(\mathcal{S}\) is nonempty, we let \(j^{*}\) be an arbitrary element and otherwise we select \(j^{*}\) uniformly at random. As we shall show for the sake of completeness (see Lemma 26 in Appendix D), the probability that \(\mathcal{S}\) is empty is at most \(e^{-\frac{n}{M}}\) and if \(\mathcal{S}\) is nonempty then \(X_{j^{*}}\) is distributed according to \(\nu\). Thus if \(n=M\log\left(\frac{1}{\delta}\right)\), with probability at least \(1-\delta\), \(X_{j^{*}}\) is distributed according to \(\nu\). Because we required \(M>\left|\left|\frac{d\nu}{d\mu}\right|\right|_{\infty}\), we see that \(\Theta\left(\left|\left|\frac{d\nu}{d\mu}\right|\right|_{\infty}\log\left( \frac{1}{\delta}\right)\right)\) samples are sufficient to exactly sample from \(\nu\) with high probability. The necessity will be seen as a very special case of our lower bounds in the following section.
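A compact sketch of the classical sampler just described, assuming the likelihood ratio \(\frac{d\nu}{d\mu}\) can be evaluated pointwise and is bounded by \(M\); the example pair (a standard normal proposal and its restriction to the positive half-line) is illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_select(samples, likelihood_ratio, M):
    """Classical rejection sampling: accept X_j with probability (1/M) * dnu/dmu(X_j);
    if nothing is accepted, fall back to a uniformly random index."""
    samples = np.asarray(samples)
    accept = rng.random(len(samples)) < likelihood_ratio(samples) / M
    accepted = np.flatnonzero(accept)
    j_star = accepted[0] if accepted.size > 0 else rng.integers(len(samples))
    return samples[j_star]

# Example: mu = N(0, 1) and nu = N(0, 1) conditioned on [0, inf), so dnu/dmu(x) = 2 * 1[x >= 0].
ratio = lambda x: 2.0 * (x >= 0)
draws = [rejection_select(rng.standard_normal(50), ratio, M=2.0) for _ in range(1000)]
```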
## 3 Sample Complexity of Rejection Sampling
In this section, we state and sketch the proofs of our main results regarding rejection sampling and fully answer the question raised in Section 2. We will divide our results into two theorems, one providing an upper bound using a modified version of rejection sampling, and the other giving an almost matching lower bound. We begin with the upper bound:
**Theorem 3** (Upper Bound).: _Suppose that \(\mu,\nu\) are probability distributions on some set \(\mathcal{X}\) and suppose that \(X_{1},\ldots,X_{n}\sim\mu\) are independent. Fix some \(f\) satisfying the conditions in Definition 1. For \(\varepsilon>0\), if_
\[n\geq\frac{1}{1-\varepsilon}\log\left(\frac{2}{\varepsilon}\right)(f^{\prime})^{ -1}\left(\frac{2D_{f}\left(\nu||\mu\right)}{\varepsilon}\right)\]
_then there exists a selection rule \(j^{*}\) satisfying \(\mathrm{TV}\left(P_{X_{j^{*}}},\nu\right)\leq\varepsilon\)._
We will split our discussion into two cases: the superlinear case, where \(f^{\prime}(t)\uparrow\infty\) as \(t\uparrow\infty\) and the linear case, where \(f^{\prime}(t)\) is bounded from above. In the former, we will see that as \(n\uparrow\infty\), we can always use rejection sampling to get an increasingly good approximation of a sample from \(\nu\) because \((f^{\prime})^{-1}\) is finite on the entire positive real line. In the linear case, however, we shall shortly prove that no selection rule can hope to get arbitrarily close to \(\nu\) in total variation. Before sketching the proof of Theorem 3, we provide some examples.
**Example 5** (Total Variation).: Recall that total variation is the \(f\)-divergence such that \(f(x)=|x-1|-x+1\). Note that \(f^{\prime}(x)=0\) for all \(x>1\) and so \((f^{\prime})^{-1}(M)\) is infinite for \(M>0\). Thus Theorem 3 is vacuous when we only have control over total variation, as expected.
**Example 6** (KL Divergence).: As we saw in Example 2, KL divergence is the \(f\)-divergence where we set \(f(x)=x\log(x)-x+1\). In this case, we see that \(f^{\prime}(x)=\log(x)\) and so Theorem 3 tells us that in order to be \(\varepsilon\)-close in total variation, \(\widetilde{O}\left(\exp\left(\frac{D_{KL}\left(\nu||\mu\right)}{\varepsilon} \right)\right)\) samples suffice.
**Example 7** (Renyi Divergence).: Remember from Example 3 that \(f(x)=x^{\lambda}-\lambda x+\lambda-1\) for \(\lambda>1\) defines the Renyi divergence. In this case we see that \(\widetilde{O}\left(e^{D_{\lambda}\left(\nu||\mu\right)}\varepsilon^{-\frac{1 }{\lambda-1}}\right)\) samples suffice. As \(\lambda\uparrow\infty\), we recover the standard rejection sampling bound by taking \(\varepsilon\downarrow 0\) and noting that \(D_{\infty}(\nu||\mu)=\left||\frac{d\nu}{d\mu}\right||_{\infty}\). In the special case of \(\lambda=2\), we note that Renyi divergence recovers \(\chi^{2}\)-divergence and note that \(\widetilde{O}\left(\frac{\chi^{2}\left(\nu||\mu\right)}{\varepsilon}\right)\) samples suffice.
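To make these rates concrete, a small calculation (ours) of the sample sizes suggested by Theorem 3 for the KL and Renyi cases, ignoring the leading logarithmic and constant factors.

```python
import numpy as np

def n_kl(D, eps):
    # f(x) = x log x - x + 1 gives f'(x) = log x, so (f')^{-1}(u) = e^u.
    return np.exp(D / eps)

def n_renyi(D_lambda, eps, lam):
    # f(x) = x^lam - lam x + lam - 1 gives f'(x) = lam (x^(lam - 1) - 1), so
    # (f')^{-1}(u) = (1 + u / lam)^(1 / (lam - 1)); to leading order this scales
    # like e^{D_lambda} * eps^(-1 / (lam - 1)), as in Example 7.
    u = np.exp((lam - 1) * D_lambda) / eps
    return (1.0 + u / lam) ** (1.0 / (lam - 1))

print(n_kl(D=2.0, eps=0.1))                       # KL constraint: about e^20 samples
print(n_renyi(D_lambda=2.0, eps=0.1, lam=2.0))    # chi^2-type constraint: far milder
```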
We now sketch the proof of the upper bound, deferring details to Appendix D:
Proof of Theorem 3.: Let \(\nu_{M}\) denote the measure \(\nu\) conditioned on the event that \(\frac{d\nu}{d\mu}\leq M\) and let \(\widetilde{\nu}\) denote the law of the sample produced by rejection sampling from \(\nu_{M}\) with \(n\) samples. The standard analysis of rejection sampling tells us that if \(n=\Omega\left(\log\left(\frac{1}{\varepsilon}\right)M\right)\) then \(\mathrm{TV}\left(\widetilde{\nu},\nu_{M}\right)\leq\varepsilon\). We show in Lemma 27 that if \(M>1\), then
\[\nu\left(\frac{d\nu}{d\mu}>M\right)\leq\frac{D_{f}\left(\nu||\mu\right)}{f^{ \prime}(M)}.\]
Using this result, we show that \(\mathrm{TV}\left(\nu_{M},\nu\right)\leq\frac{D_{f}\left(\nu||\mu\right)}{f^{ \prime}(M)}\) and conclude by applying the triangle inequality.
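The sketch below implements an \(\varepsilon\)-approximate sampler in the spirit of this proof: it caps the likelihood ratio at \(M=(f^{\prime})^{-1}\left(2D_{f}\left(\nu||\mu\right)/\varepsilon\right)\) rather than literally conditioning on \(\{d\nu/d\mu\leq M\}\), a closely related variant and not the exact construction analyzed in Appendix D; the Gaussian example is ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_rejection_select(samples, likelihood_ratio, M):
    """Capped rejection sampling: accept X_j with probability min(dnu/dmu(X_j), M) / M,
    falling back to a uniformly random index if nothing is accepted."""
    samples = np.asarray(samples)
    p_accept = np.minimum(likelihood_ratio(samples), M) / M
    accept = rng.random(len(samples)) < p_accept
    accepted = np.flatnonzero(accept)
    j_star = accepted[0] if accepted.size > 0 else rng.integers(len(samples))
    return samples[j_star]

# Example: mu = N(0, 1), nu = N(1, 1); dnu/dmu(x) = exp(x - 1/2) is unbounded,
# but chi^2(nu || mu) = e - 1 is finite, so a finite n still suffices.
ratio = lambda x: np.exp(x - 0.5)
eps, D = 0.1, np.e - 1.0
M = 1.0 + (2.0 * D / eps) / 2.0                      # (f')^{-1} for f(x) = (x - 1)^2
n = int(np.ceil(np.log(2.0 / eps) / (1.0 - eps) * M))
draws = [approx_rejection_select(rng.standard_normal(n), ratio, M) for _ in range(2000)]
```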
We now turn to our lower bounds. In particular, we show that for any \(f\)-divergence, there exist distributions \(\mu,\nu\) satisfying \(D_{f}\left(\nu||\mu\right)<\infty\) such that in order for there to exist a selection rule guaranteeing that \(\mathrm{TV}\left(P_{X_{j^{*}}},\nu\right)<\varepsilon\), we require \(n\) to be sufficiently large. We will again split our discussion into the linear and superlinear cases. For the linear case, we have the following lower bound:
**Proposition 4** (Lower Bound, Linear Case).: _Suppose that \(f\) is a convex function as in Definition 1 satisfying \(f^{\prime}(t)\leq C<\infty\) for all \(t>1\). Then there exist distributions \(\mu,\nu\) such that \(D_{f}\left(\nu||\mu\right)<\infty\) and \(\varepsilon=\varepsilon(f,D_{f}\left(\nu||\mu\right))>0\) such that for all \(n\) and \(X_{1},\ldots,X_{n}\stackrel{iid}{\sim}\mu\),_
\[\inf_{j^{*}}\mathrm{TV}\left(P_{X_{j^{*}}},\nu\right)\geq\varepsilon\]
_where the infimum is over all selection rules \(j^{*}\)._
Note that Proposition 4 matches the upper bound for linear \(f\) in Theorem 3 and reflects the fact that for \(f\) that do not grow superlinearly, \(D_{f}\left(\nu||\mu\right)<\infty\) provides very weak control on \(\nu\). Intuitively this should be clear: note that if \(f\) is in the linear regime, then \(D_{f}\left(\nu||\mu\right)\) can remain finite even when \(\nu\) is singular with respect to \(\mu\) and thus samples from \(\mu\) can never hope to approximate \(\nu\) to arbitrary precision. A full proof can be found in Appendix D.
Moving on to the more interesting case of superlinear \(f\), we provide a lower bound that matches the upper bound found in Theorem 3 for all superlinear \(f\).
**Theorem 5** (Lower Bound, Superlinear Case).: _Let \(f\) be a convex function as in Definition 1 that grows superlinearly. Then for all \(0<\varepsilon\leq 1/4\) and \(\delta>2f(1/2)\), there exists a pair of measures \(\nu,\mu\) such that \(D_{f}\left(\nu||\mu\right)\leq\delta\) and any selection rule \(j^{*}\) satisfying \(\mathrm{TV}(P_{X_{j^{*}}},\nu)\leq\varepsilon\) requires_
\[n\geq\frac{1}{2}\cdot(f^{\prime})^{-1}\left(\frac{\delta}{2\varepsilon}\right). \tag{1}\]
While we provide full details in Appendix D, we provide a sketch of the proof here:
Proof.: A simple computation found in Lemma 28 tells us that if \(\widetilde{\nu}\) is the law of \(X_{j^{*}}\), then the Radon-Nikodym derivative of \(\widetilde{\nu}\) with respect to \(\mu\) is uniformly bounded by \(n\). Another computation, found in Lemma 31 tells us that if \(\widetilde{\nu}\) has likelihood ratio bounded by \(n\), then we can lower bound \(\mathrm{TV}\left(\widetilde{\nu},\nu\right)\) by \(\mathcal{E}_{n}(\nu||\mu)\). Combining these facts, we see that it suffices to exhibit two distributions \(\mu,\nu\), such that \(D_{f}\left(\nu||\mu\right)\leq\delta\) and \(\mathcal{E}_{n}\left(\nu||\mu\right)\geq\varepsilon\) for all \(n\) not satisfying (1). Thus, we have reduced the proof to determining if the point \((\varepsilon,\delta)\) lies above some point in the _joint range_ of \(\mu\) and \(\nu\), i.e., the set \(\{(\mathcal{E}_{n}(\nu||\mu),D_{f}\left(\nu||\mu\right))\}\) where \(\mu\) and \(\nu\) vary over all distributions. In Harremoes and Vajda (2011), it was shown that the distributions extremizing the joint range are typically pairs of Bernoulli random variables. We thus consider \(\mu=\mathrm{Ber}\left(\frac{\varepsilon}{n}\right)\) and \(\nu=\mathrm{Ber}(2\varepsilon)\) and show that \(\mathcal{E}_{n}(\nu||\mu)=\varepsilon\), while \(D_{f}\left(\nu||\mu\right)\leq\delta\), unless \(n\) is sufficiently large so as to satisfy (1). The result follows.
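A quick numerical check (ours) of the two quantities used in this sketch, for the concrete choice \(f(x)=x\log x-x+1\): it confirms \(\mathcal{E}_{n}(\nu||\mu)=\varepsilon\) for the Bernoulli pair and shows the \(f\)-divergence growing with \(n\), which is what forces \(n\) to be large once we insist that \(D_{f}\left(\nu||\mu\right)\leq\delta\).

```python
import numpy as np

def e_gamma(nu, mu, gamma):
    """E_gamma divergence of two distributions given as probability vectors."""
    return sum(max(p - gamma * q, 0.0) for p, q in zip(nu, mu))

def d_kl(nu, mu):
    return sum(p * np.log(p / q) for p, q in zip(nu, mu) if p > 0)

eps = 0.05
for n in [10, 100, 1000]:
    mu = [1 - eps / n, eps / n]   # Ber(eps / n), written as [P(0), P(1)]
    nu = [1 - 2 * eps, 2 * eps]   # Ber(2 * eps)
    print(n, e_gamma(nu, mu, gamma=n), d_kl(nu, mu))
```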
Note that Theorem 5 tells us that, up to logarithmic factors, the sample complexity determined in Theorem 3 is optimal. There is one disadvantage to the above result, however: as is clear from the proof, the distributions \(\mu\) and \(\nu\) depend on \(n\) and thus the order of quantifiers in Theorem 5 is weaker than that in Theorem 3. In order to address this shortcoming, we prove a slightly weaker lower bound under a mild growth condition on \(f\):
**Theorem 6**.: _Let \(f\) be a convex function as in Definition 1 that grows superlinearly. Suppose that \(f\) satisfies a mild growth condition (see Theorem 33 for a formal statement). Then, for any \(\zeta>0\), there exist distributions \(\mu,\nu\) with \(D_{f}\left(\nu||\mu\right)<\infty\) such that for all sufficiently large \(n\in\mathbb{N}\), with \(X_{1},\ldots,X_{n}\) sampled independently from \(\mu\), it holds that_
\[\inf_{j^{*}}\mathrm{TV}\left(P_{X_{j^{*}}},\nu\right)\geq\frac{\zeta^{1+\zeta}} {8}\cdot\left(\frac{D_{f}\left(\nu||\mu\right)}{f^{\prime}(n)}\right)^{1+\zeta} \tag{2}\]
_where the infimum is taken over all selection rules._
We note that the mild growth condition required in Theorem 6 is purely technical and likely could be removed with more elaborate analysis; on the other hand, this condition is satisfied by all commonly used, superlinear \(f\)-divergences of which we are aware. By Theorem 3, we see that if
\[n=\widetilde{O}\left((f^{\prime})^{-1}\left(\frac{D_{f}\left(\nu||\mu\right)}{ \varepsilon}\right)\right),\]
then rejection sampling suffices to generate an \(\varepsilon\)-approximate sample from \(\nu\). On the other hand, setting \(\zeta=o(1)\) as \(\varepsilon\downarrow 0\), Theorem 6 tells us that in the worst case, we require
\[n=\widetilde{\Omega}\left((f^{\prime})^{-1}\left(\frac{D_{f}\left(\nu||\mu \right)}{\varepsilon^{1-o(1)}}\right)\right)\]
samples for the right hand side in (2) to be below \(\varepsilon\). Thus, as \(\varepsilon\downarrow 0\), these bounds essentially match. In particular, because the \(f\)-divergences in Examples 6 and 7 satisfy the mild growth condition, the sample complexity upper bounds derived in those examples are indeed tight for all sufficiently large \(n\).
We defer a detailed proof of Theorem 6 to Appendix D. The method is similar to that of Theorem 5 in that we reduce to lower bounding \(\mathcal{E}_{n}(\nu||\mu)\) for distributions \(\nu,\mu\) with bounded \(f\)-divergence. The difference is that we exhibit a _single_ pair \((\mu,\nu)\), depending on \(f\) but independent of \(n\), such that the desired properties hold.
Combining Theorems 3, 5 and 6, we have shown that \(\widetilde{\Theta}\left((f^{\prime})^{-1}\left(D_{f}\left(\nu||\mu\right)/\varepsilon\right)\right)\) samples are both necessary and sufficient to generate an \(\varepsilon\)-approximate sample from \(\nu\). One immediate application of these results is to the problem of estimating means according to \(\nu\) uniformly over some function class \(\mathcal{F}\) when given samples from \(\mu\). In Appendix B, we compare estimators using Theorem 3 to the classical importance sampling approach. For the sake of space, this is deferred to the appendix; we now proceed to our main application regarding smoothed online learning.
## 4 Smoothed Online Learning
Our most important immediate application is to the question of generalizing smoothed online learning as outlined in the introduction. In this section, we extend results proved for smoothed adversaries (Rakhlin et al., 2011; Block et al., 2022; Haghtalab et al., 2022, 2022) described in the introduction to allow for a more powerful Nature. To do this, we employ the following definition:
**Definition 7**.: Fix a base measure \(\mu\) on some set \(\mathcal{X}\). We say that a measure \(\nu\) is \((f,\sigma)\)-smooth (or \(f\)-smooth) with respect to \(\mu\) if \(D_{f}\left(\nu||\mu\right)\leq\frac{1}{\sigma}\). An adversary is \((f,\sigma)\)-smooth with respect to \(\mu\) if for all \(1\leq t\leq T\), the distribution \(p_{t}\) of \(x_{t}\), conditioned on all the history, is \((f,\sigma)\)-smooth.
Definition 7 motivates an obvious question: can we achieve improvement over the fully adversarial setting even when we only require Nature to be \(f\)-smooth? The answer will, of course, depend on what \(f\) we choose. For the case of eventually linear \(f\), for example, we see that no improvement is possible in general:
**Proposition 8**.: _Suppose that \(\mathcal{F}=\{x\mapsto\mathbb{I}[x\geq\theta]|\theta\in[0,1]\}\) is the class of thresholds in one dimension. Let \(f\) be a convex function as in Definition 1 that is eventually linear, in the sense that \(f^{\prime}\) is bounded above. For all \(0<\sigma<1\), there is an \((f,\sigma)\)-smooth adversary such that any learner experiences \(\mathbb{E}\left[\mathrm{Reg}_{T}\right]=\Omega(T)\)._
This result, proved in Appendix E, is not surprising in light of the fact that fully adversarial online learning of \(\mathcal{F}\) is impossible; if \(f\) is linear, then Nature can mix the worst-case adversary with a base distribution and still force linear regret while keeping \(D_{f}\left(\nu||\mu\right)\) finite. More interesting is the case of stronger \(f\)-divergences. Before we present our results, we state our main technical tool, which generalizes a technique introduced in Haghtalab et al. (2022) and extended in Block et al. (2022). In those works, the authors introduced a coupling between the sequence of contexts produced by a smooth, adaptive adversary and a larger set of independent samples drawn from the base measure. Using the tools developed in Section 3, we extend this technique beyond the case of uniformly bounded Radon-Nikodym derivatives:
**Lemma 9**.: _Let \(\mathcal{X}\) be a set and \(\mu\) some measure on \(\mathcal{X}\). Suppose that an adversary is \((f,\sigma)\)-smooth with respect to \(\mu\) for some \(f\) satisfying the conditions of Definition 1 such that \(\sup f^{\prime}(t)=\infty\). For any \(T\) and any \(\varepsilon,\delta>0\), if_
\[n\geq\frac{1}{1-\varepsilon}\log\left(\frac{T}{\delta}\right)(f^{\prime})^{-1 }\left(\frac{1}{\varepsilon\sigma}\right)\]
_then there exists a coupling between \((x_{1},\ldots,x_{T})\) and \(\{Z_{t,j}|1\leq t\leq T\text{ and }1\leq j\leq n\}\) such that the \((x_{1},\ldots,x_{T})\) are distributed according to the adversary, the \(Z_{t,j}\sim\mu\) are independent, and, with probability at least \(1-\delta\), there are selection rules \(j_{t}^{*}\) such that \(\mathrm{TV}\left(P_{x_{t}},P_{Z_{t,j_{t}^{*}}}\right)\leq\varepsilon\)._
We defer the construction of the coupling to Appendix E; for now we focus on the implications. Our first result extends Block et al. (2022, Theorem 3) and Haghtalab et al. (2022, Theorem 3.1) to the case of \(f\)-smoothed online learning. While we state the result for general real-valued function classes in Appendix E, for the sake of simplicity we restrict our focus to binary-valued \(\mathcal{F}\) here.
**Theorem 10**.: _Suppose \(\mathcal{F}:\mathcal{X}\rightarrow\{\pm 1\}\) is a binary valued function class and let \(\mathrm{vc}\left(\mathcal{F}\right)\) denote its Vapnik-Chervonenkis dimension. Suppose that \((x_{t},y_{t})\) are generated by an \((f,\sigma)\)-smoothed adversary in the sense of Definition 7 such that \(f^{\prime}(\infty)=\infty\). Then there exists an algorithm such that_
\[\mathbb{E}\left[\mathrm{Reg}_{T}\right]\lesssim\sqrt{T\log(T)\cdot\mathrm{vc} \left(\mathcal{F}\right)}+\inf_{0<\varepsilon<1}\varepsilon T+\sqrt{T\mathrm{vc }\left(\mathcal{F}\right)\log\left(T(f^{\prime})^{-1}\left(\frac{1}{ \varepsilon\sigma}\right)\right)}.\]
We remark that Theorem 10 is a special case of the more general Theorem 34 applying to arbitrary real-valued function classes, which we state and prove in Appendix E. The proof follows the approach of Block et al. (2022) with the modification of applying the more general coupling in
Lemma 9 and is deferred to the appendix. Here, we consider two instantiations of \(f\)-divergences. First, for the case of Renyi divergence (see examples 3 and 7), we see that for a Renyi-smoothed adversary, regret of the order \(\widetilde{O}\left(\left(1+\frac{1}{\lambda-1}\right)\sqrt{T\mathrm{vc}\left( \mathcal{F}\right)\log\left(\frac{1}{\sigma}\right)}\right)\) is attainable. Observe that when \(\lambda\uparrow\infty\), we recover the results of Block et al. (2022). On the other hand, if \(\lambda\) is bounded away from 1, which covers the case of an adversary bounded in \(\chi^{2}\) divergence, we see that the cost of assuming only \(D_{\lambda}\left(p_{t}||\mu\right)<\infty\) is only on the order of a constant more than in the standard setting. The situation is different if we assume that the adversary is \(f\)-smoothed in the sense of KL divergence: in this case, we are only able to recover regret scaling like \(\widetilde{O}\left(T^{2/3}\left(\mathrm{vc}\left(\mathcal{F}\right)/\sigma \right)^{1/3}\right)\). While the results for Renyi divergence are optimal up to polylogarithmic factors, we leave as an interesting open direction the question of whether the regret against a KL-smoothed adversary can be improved.
While Theorem 10 is important insofar as it gives the information theoretic rates of \(f\)-smoothed online learning, the algorithms, where provided, are computationally intractable. We now demonstrate that two algorithms proposed by Block et al. (2022), Haghtalab et al. (2022) for smoothed online learning remain no-regret even if we weaken our assumptions to include \((f,\sigma)\)-smoothed adversaries. These algorithms are _oracle-efficient_, i.e., they make few calls to an Empirical Risk Minimization (ERM) oracle for the function class \(\mathcal{F}\); an ERM oracle, formally defined in Appendix E (see Definition 35), returns the minimizer of a weighted, cumulative loss function defined over the function class \(\mathcal{F}\). Once again, for the sake of simplicity, we state our results for the case of binary valued \(\mathcal{F}\) and defer the more general statement and proof to the appendix.
**Theorem 11**.: _Suppose that \(\mathcal{F}:\mathcal{X}\rightarrow\{\pm 1\}\) is a function class with VC dimension \(\mathrm{vc}\left(\mathcal{F}\right)\) and that \(\ell:[-1,1]\times[-1,1]\rightarrow[0,1]\) is a loss function that is convex and 1-Lipschitz in the first argument. Then there is an improper algorithm requiring 2 calls to the ERM oracle per time \(t\) such that if the adversary is \((f,\sigma)\)-smoothed, then the regret is bounded as follows:_
\[\mathbb{E}\left[\mathrm{Reg}_{T}\right]\lesssim\inf_{\alpha>0}\left\{\alpha T +\sqrt{\mathrm{vc}\left(\mathcal{F}\right)\cdot T\cdot\log(T)\cdot(f^{\prime} )^{-1}\left(\frac{1}{\alpha\sigma}\right)}\right\}. \tag{3}\]
We prove Theorem 11 in Appendix E, where we apply Lemma 9 to the argument of Block et al. (2022). We instantiate the bound in (3) in two cases, Renyi divergence (Example 3) and KL Divergence (Example 2). If we assume that our adversary is smoothed in the sense of Renyi divergence, then optimizing \(\alpha\) leads us to an oracle-efficient algorithm attaining regret scaling like \(\widetilde{O}\left(\mathrm{vc}\left(\mathcal{F}\right)^{\frac{\lambda-1}{2 \lambda-1}}\cdot T^{\frac{\lambda}{2\lambda-1}}\cdot\sigma^{-\frac{1}{2\lambda -1}}\right)\). Noting that if \(\left|\left|\frac{dp_{t}}{d\mu}\right|\right|_{\infty}\leq(\sigma^{\prime})^{-1}\) then we may take \(\sigma=(\sigma^{\prime})^{\lambda-1}\), we observe that in the limit as \(\lambda\uparrow\infty\), we recover the \(\widetilde{O}\left(\sqrt{\mathrm{vc}\left(\mathcal{F}\right)\cdot T/\sigma^{ \prime}}\right)\) rate from Block et al. (2022, Theorem 7). In the special case where \(\lambda=2\), we see that the regret scales like \(\widetilde{O}\left(\left(\mathrm{vc}\left(\mathcal{F}\right)/\sigma\right)^{ 1/3}\cdot T^{2/3}\right)\). On the other hand, if we make the weaker assumption that the adversary is only smoothed in the KL sense, then Theorem 11 only recovers a regret that scales as \(\widetilde{O}\left(\log(d)T/(\sigma\log(T))\right)\), which is sublinear in \(T\) but very slow.
We turn now to the case of proper algorithms. As in Block et al. (2022), we instantiate Follow the Perturbed Leader (FTPL) with a perturbation by a Gaussian process; again, we apply our Lemma 9 to the proof techniques found in Block et al. (2022, Appendix E). For the sake of simplicity, we restrict our focus to binary valued function classes with linear loss.
**Theorem 12**.: _Suppose that we are in the situation of Theorem 11, with the loss function \(\ell\) being linear, i.e., \(\ell(\widehat{y},y)=(1-\widehat{y}\cdot y)/2\). Suppose further that our adversary is \((f,\sigma)\)-smooth in the sense of Renyi Divergence, i.e., for some \(\lambda\geq 2\), \(D_{\lambda}\left(p_{t}||\mu\right)\leq 1/\sigma\) for all \(p_{t}\). Then there is a proper algorithm requiring only 1 call to the ERM oracle per round such that the regret is bounded as follows:_
\[\mathbb{E}\left[\mathrm{Reg}_{T}\right]=\widetilde{O}\left(\sqrt{\mathrm{vc} \left(\mathcal{F}\right)}\cdot T^{\frac{2\lambda+1}{4\lambda-1}}\cdot\sigma^{ -\frac{1}{4\lambda-1}}\right).\]
Note that our regret in Theorem 12 actually improves on that of (Block et al., 2022, Theorem 10) in the case where we take \(\lambda\uparrow\infty\). Indeed, if we are in the strongly smooth regime such that the Radon-Nikodym derivative of the adversary's distribution is uniformly bounded by \(\sigma^{\prime-1}\), then in the limit we recover an expected regret scaling like \(\widetilde{O}\left(\sqrt{\mathrm{vc}\left(\mathcal{F}\right)}\cdot T\cdot( \sigma^{\prime})^{-\frac{1}{4}}\right)\), which matches that of the instantiation of FTPL found in Haghtalab et al. (2022) for discrete \(\mathcal{X}\). Thus, by examining \(f\)-smoothed adversaries, we answer an open question of Block et al. (2022) on improving the dependence on \(\sigma^{\prime}\) of the expected regret of FTPL with a Gaussian perturbation.
We leave as an interesting further direction the question regarding the tightness of the regret of the algorithms in Theorems 11 and 12. As shown in Block et al. (2022), Haghtalab et al. (2022), even in the case of strongly smoothed adversaries, there is a statistical-computational gap wherein the dependence of the expected regret for an oracle-efficient algorithm on \(\sigma\) must be polynomial, but Theorem 10 yields a statistical rate that is polylogarithmic in the same. Even in the adversarial setting, however, it is unknown if such an exponential gap exists for oracle-efficient _improper_ algorithms (Hazan and Koren, 2016).
Finally, we observe that Theorem 12 only applies to \(f\)-smoothed adversaries in the Renyi sense for \(\lambda\geq 2\). Our proof proceeds by a change of measure argument, wherein we replace an expectation over the base measure \(\mu\) by an expectation over the adversary's distribution \(p_{t}\); for a weaker \(f\)-divergence like KL, the analogous statement would require bounding an exponential moment, which would require significantly stronger analysis. We leave the question of existence of oracle-efficient proper algorithms for KL smoothed adversaries as yet another interesting further direction.
## Acknowledgements
AB acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374 as well as support from ONR under grant N00014-20-1-2336 and DOE under grant DE-SC0022199. YP was supported in part by the MIT-IBM Watson AI Lab and by the NSF grant CCF-2131115.
|
2303.04571
|
A Categorical Framework of General Intelligence
|
Can machines think? Since Alan Turing asked this question in 1950, nobody is
able to give a direct answer, due to the lack of solid mathematical foundations
for general intelligence. In this paper, we introduce a categorical framework
towards this goal, with two main results. First, we investigate object
representation through presheaves, introducing the notion of self-state
awareness as a categorical analogue to self-consciousness, along with
corresponding algorithms for its enforcement and evaluation. Secondly, we
extend object representation to scenario representation using diagrams and
limits, which then become building blocks for mathematical modeling,
interpretability and AI safety. As an ancillary result, our framework
introduces various categorical invariance properties that can serve as the
alignment signals for model training.
|
Yang Yuan
|
2023-03-08T13:37:01Z
|
http://arxiv.org/abs/2303.04571v2
|
# A Categorical Framework of General Intelligence
###### Abstract
Can machines think? Since Alan Turing asked this question in 1950, nobody is able to give a direct answer, due to the lack of solid mathematical foundations for general intelligence. In this paper, we introduce a categorical framework towards this goal, with two main results. First, we investigate object representation through presheaves, introducing the notion of self-state awareness as a categorical analogue to self-consciousness, along with corresponding algorithms for its enforcement and evaluation. Secondly, we extend object representation to scenario representation using diagrams and limits, which then become building blocks for mathematical modeling, interpretability and AI safety. As an ancillary result, our framework introduces various categorical invariance properties that can serve as the alignment signals for model training.
Footnote 1: IIIS, Tsinghua University
Footnote 2: Shanghai Artificial Intelligence Laboratory
Footnote 3: Shanghai Qi Zhi Institute
## 1 Introduction
In recent years, remarkable progress has been made in training foundation models with enormous computational power, vast amounts of data, and gigantic neural networks (Radford et al., 2021; Chen et al., 2020; Radford et al., 2019; Brown et al., 2020; Ramesh et al., 2021, 2022; Sohl-Dickstein et al., 2015; Rombach et al., 2022; He et al., 2022). Surprisingly, despite the impressive achievements, the internal working mechanisms of these models remain mysterious. People seem to have reached the consensus that foundation models are inherently black-box and uninterpretable, and that empirical experimentation is therefore the only way of pushing AI forward.
While this is indeed what happened in the past decade and is analogous to how intelligence is acquired through evolution, relying solely on empirical experimentation without theoretical understanding can be both inefficient and dangerous. The inefficiency arises from the fact that progress is made through trial and error, often guided by intuition, and the milestones are defined indirectly based on performance on specific tasks rather than a comprehensive understanding of intelligence itself. The potential danger stems from the fact that nobody knows what we will get at the final destination, and perhaps more importantly, how close we are right now. We do not even know whether we have already created general intelligence -- maybe not yet, but how can we make such evaluations?
In this paper, we present a categorical framework of general intelligence, which contributes to answering the following questions:
1. Can the model be aware of its self-state? (Section 3)
2. How shall the model represent complex scenarios? (Section 4)
3. How shall we train the model towards general intelligence? (Section 5)
It will be extremely challenging, if not impossible, to prove that our framework is for _the_ general intelligence, given the absence of consensus on the formal definition of general intelligence among human beings. Instead, we take the categorical approach: we formally define all the basic elements, state their theoretical implications, specify the algorithmic requirements, and finally integrate all the elements into a comprehensive framework. Therefore, even if one disagrees with our definition of general intelligence or believes that certain crucial pieces are missing, our framework remains relevant and applicable.
Our framework is surprisingly simple, consisting of four main components1: the sensor, world category, planner with objectives, and actor, with single direction information-flow (see Figure 1). The sensor receives multi-modal signals from the external environment, including but not limited to text-input, video/image-input, audio-input, sense of touch, etc. The world category perceives and comprehends the incoming signals, and updates its internal state accordingly. The planner continuously monitors the status of the world category and generates plans based on its objectives. Finally, the actor executes these plans, influencing the external environment by generating outputs such as text-output, video/image-output, audio-output, robot-manipulation signals, etc. We elaborate the details below, starting from three different world views.
Footnote 1: Temporarily ignoring the categorical part, which is our main contribution, we shall remark that the components and connections in our framework are similar but different to the ones proposed in LeCun (2022), as discussed in Section 6.
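A schematic Python rendering of the four components and the one-way information flow; all class and method names here are illustrative inventions, not definitions from the paper.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class WorldCategory:
    """Internal model of the world: objects and the relationships (morphisms)
    between them, updated at every time step t."""
    objects: Dict[str, Any] = field(default_factory=dict)
    morphisms: Dict[tuple, Any] = field(default_factory=dict)

    def update(self, percept: Any) -> None:
        self.objects[f"percept_{len(self.objects)}"] = percept  # toy update rule

class Agent:
    """Sensor -> world category -> planner -> actor, with one-way information flow."""
    def __init__(self, sensor, planner, actor):
        self.sensor, self.planner, self.actor = sensor, planner, actor
        self.world = WorldCategory()

    def step(self, environment_signal: Any) -> Any:
        percept = self.sensor(environment_signal)   # multi-modal input
        self.world.update(percept)                  # revise the internal state
        plan = self.planner(self.world)             # plan against the objectives
        return self.actor(plan)                     # act back on the environment
```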
**Three different worlds** exist in our framework (see Figure 2): the real world, perceived world, and world category. The model cannot directly access the real world2, and can only access the perceived world using its sensor. For any given time \(t\in\mathbb{N}\), the real world can be much larger than the perceived world of the model as the model may only perceive a small part of the world. Even within the perceived scope of the sensor, the real world and the perceived world are not necessarily the same, because the sensor may have limited sensory ability, be biased, or contain noise. Moreover, the world category encodes what the model understands about the world, so it can be much larger than the perceived world. Intuitively, the perceived world is what the model sees and feels momentarily, and the world category is what it understands, memorizes and predicts about the world.
Figure 1: Our categorical framework
In order to demonstrate intelligent behaviors, the model should interact with the external environment through its sensor and actor, so the notion of time \(t\) is important. In case the model does not have access to time stamps, \(t\) may instead denote the index of distinct events detected by the sensor, e.g., chatGPT answering various queries from the user. For the sake of simplicity, we do not make such distinctions and simply treat \(t\) as the time stamp.
The world category can be seen as an imaginary reconstruction of the external world by the model through its sensor. It comprises all the people, animals, objects, and knowledge from the external environment that are perceptible by the model through its sensor, as well as abstract representations on top of them. More precisely, it is a function in \(\mathbb{N}\rightarrow\mathbf{Cat}\), representing a dynamic category3 containing various objects and changing over time \(t\in\mathbb{N}\). The sensor decides the types of elementary objects in the world category. For example, if the model is incapable of detecting visual signals, the world category will not contain image objects. Moreover, if the sensor receives signals from a simulated environment, the world category will only contain simulated knowledge, which can differ significantly from the real world.
Footnote 3: Readers not familiar with category theory may check Section 2 for a basic introduction.
**Object representation.** For any given object \(X\), the world category never directly stores \(X\), but instead uses neural networks to store all the relationships between other objects and \(X\), which contain sufficient information about \(X\). As a notable example, when the model is able to perceive the relationships between itself and other objects, its world category can represent an object called "self-state" for storing such relationships. Is maintaining the self-state equivalent to having self-consciousness? This is a controversial question that we choose not to answer. Instead, we formally define the notion of self-state based on category theory without addressing its relationship to self-consciousness (Section 3.1).
Based on our definition of self-state, we introduce two algorithms for enforcing and evaluating self-state awareness of the model. Unlike the identification of self-consciousness, which is a binary variable denoting whether a subject possesses self-consciousness or not, our evaluation generates a continuous value within the interval \([0,1]\) to indicate the degree of self-state awareness. This degree corresponds to the proportion of all relevant relationships between the subject and other objects or tasks that the subject is aware of. According to our definition, it appears that even many human beings, especially children, may not possess perfect self-state awareness.
Figure 2: Case studies on three different worlds. In the first row, the model sees a human being with a desk, but the human is partly occluded and the desk has one corner missing. In the world category, when the model memorizes the scene, it may complete the occluded parts with minor adjustments on the human body, and slightly modify the shape of the desk. In the second row, the model sees a chart representing the wealth of Alice, Bob and David. However, the model may not accurately memorize the information in its world category. In the third row, the model perceives the relationships among Alice, Bob and David, e.g., whether two of them are close friends. In its world category, such relationships might be preserved with distortion.
**Scenario representation.** How shall the model represent scenarios with multiple objects and morphisms in between? Using the language of category theory, we can define the scenario content as a diagram, and define the scenario itself as a projective limit over the diagram.
As we will see, using diagrams and limits for scenario representation has various interesting consequences. For example, it makes mathematical modeling much easier, because we may take a proper abstraction of the diagram and directly convert it into a mathematical problem. Besides, by treating the scenario content as a diagram of the world category, the model can generate interpretations based on its internal knowledge, rather than being limited to assigning weights to the input variables as in attribution methods (Sundararajan et al., 2017; Lundberg and Lee, 2017). Moreover, the diagram representation of the scenario content allows a functional approach to AI safety, by injecting the self-state into the diagram and enforcing the self-state to be human-friendly.
**Invariance property as training signals**. Category theory employs commutative diagrams to characterize the equivalence of distinct computational paths, which naturally leads to various invariance properties for the model. Unlike supervised learning where the training objective is to fit the input data with the correct output label, foundation models focus on learning the morphisms between objects, and functors between categories. The invariance properties serve as the training signals for the model to adjust itself, so that the world category is naturally consistent.
## 2 Preliminaries
Category theory is used in almost all areas of mathematics. Here we only introduce the necessary notions for understanding the results of our paper. Curious readers may check Mac Lane (2013); Riehl (2017); Adamek et al. (1990) for a more comprehensive introduction.
### Category basics
A category \(\mathcal{W}\) has a set of objects \(\operatorname{Ob}(\mathcal{W})\), and a set of morphisms \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\) from \(X\) to \(Y\) for every \(X,Y\in\operatorname{Ob}(\mathcal{W})\). In this paper, we use "relationships" and "morphisms" interchangeably. Given \(f\in\operatorname{Hom}_{\mathcal{W}}(X,Y),g\in\operatorname{Hom}_{\mathcal{W}} (Y,Z)\), we define their composition as \(g\circ f\in\operatorname{Hom}_{\mathcal{W}}(X,Z)\). Notice that \(\circ\) is associative, i.e., \((h\circ g)\circ f=h\circ(g\circ f)\). For every \(X\in\operatorname{Ob}(\mathcal{W})\), there exists a unique identity morphism \(\operatorname{id}_{X}\in\operatorname{Hom}_{\mathcal{W}}(X,X)\). A morphism \(f:X\to Y\) is an isomorphism if there exists \(g:X\gets Y\) such that \(f\circ g=\operatorname{id}_{Y}\) and \(g\circ f=\operatorname{id}_{X}\). In this case, we say \(X\) and \(Y\) are isomorphic and write \(X\simeq Y\).
We consider a universe \(\mathcal{U}^{4}\). A category is a \(\mathcal{U}\)-category if \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\) is \(\mathcal{U}\)-small for any \(X,Y\in\operatorname{Ob}(\mathcal{W})\). A \(\mathcal{U}\)-category \(\mathcal{W}\) is \(\mathcal{U}\)-small if \(\operatorname{Ob}(\mathcal{W})\) is \(\mathcal{U}\)-small. For simplicity, below we may not explicitly mention the universe \(\mathcal{U}\), and simply write that \(\mathcal{W}\) is a category or a small category. We define \(\mathbf{Cat}\) to be the category whose objects are small categories.
Given a category \(\mathcal{W}\), we define its opposite \(\mathcal{W}^{\operatorname{op}}\) by setting \(\operatorname{Ob}(\mathcal{W}^{\operatorname{op}})=\operatorname{Ob}( \mathcal{W})\) and \(\operatorname{Hom}_{\mathcal{W}^{\operatorname{op}}}(X,Y)=\operatorname{Hom}_ {\mathcal{W}}(Y,X)\). Moreover, given \(f\in\operatorname{Hom}_{\mathcal{W}^{\operatorname{op}}}(X,Y),g\in \operatorname{Hom}_{\mathcal{W}^{\operatorname{op}}}(Y,Z)\), the new composition is \(g\mathop{\stackrel{{\text{op}}}{{\circ}}}f=f\circ g\in \operatorname{Hom}_{\mathcal{W}^{\operatorname{op}}}(X,Z)\).
We define \(\mathbf{Set}\) to be the category of sets, where the objects are sets, and \(\operatorname{Hom}_{\mathbf{Set}}(X,Y)\) is the set of all functions with domain \(X\) and codomain \(Y\). Notice that we ignore the subtleties about the universe for better presentation, so here just assume that \(\mathbf{Set}\) does not contain strange objects like a set containing all sets.
A functor is like a function between two categories. Given two categories \(\mathcal{W},\mathcal{W}^{\prime}\), a functor \(F:\mathcal{W}\to\mathcal{W}^{\prime}\) maps objects from \(\mathcal{W}\) to \(\mathcal{W}^{\prime}\) with \(F:\operatorname{Ob}(\mathcal{W})\to\operatorname{Ob}(\mathcal{W}^{\prime})\) and morphisms from \(\mathcal{W}\) to \(\mathcal{W}^{\prime}\) with \(F:\operatorname{Hom}_{\mathcal{W}}(X,Y)\to\operatorname{Hom}_{\mathcal{W}^{ \prime}}(F(X),F(Y))\) for all \(X,Y\in\mathcal{W}\), so that \(F\) preserves identity and composition. Formally, we have \(F(\operatorname{id}_{X})=\operatorname{id}_{F(X)}\) for all \(X\in\mathcal{W}\), and \(F(g\circ f)=F(g)\circ F(f)\) for all \(f:X\to Y,g:Y\to Z\).
A morphism of functors, also called a natural transformation, is a way to transform one functor into another while preserving the structure. Given two categories \(\mathcal{W},\mathcal{W}^{\prime}\) and two functors \(F_{1},F_{2}\) from \(\mathcal{W}\) to \(\mathcal{W}^{\prime}\), a morphism of functors \(\theta:F_{1}\to F_{2}\) assigns a morphism \(\theta_{X}:F_{1}(X)\to F_{2}(X)\) to every \(X\in\mathcal{W}\), such that for all \(f\in\operatorname{Hom}_{\mathcal{W}}(X,Y)\) we have \(\theta_{Y}\circ F_{1}(f)=F_{2}(f)\circ\theta_{X}\).
A presheaf is a functor from \(\mathcal{W}^{\mathrm{op}}\) to \(\mathbf{Set}\), and \(\mathcal{W}^{\wedge}\) is the category of presheaves. Similarly, a functor from \(\mathcal{W}^{\mathrm{op}}\) to \(\mathbf{Set}^{\mathrm{op}}\) is called a \(\mathbf{Set}^{\mathrm{op}}\)-valued presheaf, and \(\mathcal{W}^{\vee}\) is the category of \(\mathbf{Set}^{\mathrm{op}}\)-valued presheaves. In this paper we do not make the differentiation, and name both kinds of functors as presheaves, and \(\mathcal{W}^{\wedge},\mathcal{W}^{\vee}\) as the categories of presheaves. Moreover, define the Yoneda functors \(h_{\mathcal{W}}(X)\triangleq\operatorname{Hom}_{\mathcal{W}}(\cdot,X)\in \mathcal{W}^{\wedge}\), and \(k_{\mathcal{W}}(X)\triangleq\operatorname{Hom}_{\mathcal{W}}(X,\cdot)\in \mathcal{W}^{\vee}\). The following lemma is fundamental.
**Lemma 1** (Yoneda lemma).: _Given \(X\in\mathcal{W}\) we have,_
1. _For_ \(A\in\mathcal{W}^{\wedge}\)_,_ \(\operatorname{Hom}_{\mathcal{W}^{\wedge}}(h_{\mathcal{W}}(X),A)\simeq A(X)\)_._
2. _For_ \(B\in\mathcal{W}^{\vee}\)_,_ \(\operatorname{Hom}_{\mathcal{W}^{\vee}}(B,k_{\mathcal{W}}(X))\simeq B(X)\)_._
The Yoneda lemma says \(h_{\mathcal{W}}(X)\) and \(k_{\mathcal{W}}(X)\) capture all the information of \(X\). As a direct corollary, we have \(\operatorname{Hom}_{\mathcal{W}^{\wedge}}(h_{\mathcal{W}}(X),h_{\mathcal{W}}( Y))\simeq h_{\mathcal{W}}(Y)(X)=\operatorname{Hom}_{\mathcal{W}}(X,Y)\), and a similar result holds for \(k_{\mathcal{W}}(\cdot)\). A functor \(F\) from \(\mathcal{W}^{\mathrm{op}}\) to \(\mathbf{Set}\) (or \(\mathcal{W}\) to \(\mathbf{Set}\)) is representable if there is an isomorphism between \(h_{\mathcal{W}}(X)\) (or \(k_{\mathcal{W}}(X)\)) and \(F\) for some \(X\in\mathcal{W}\). Such an \(X\) is called a representative of \(F\).
### Limits
A diagram of shape \(A\) in a category \(\mathcal{W}\) is a functor \(\alpha:A\to\mathcal{W}\), which selects objects in \(\mathcal{W}\) corresponding to \(\operatorname{Ob}(A)\) while preserving the morphisms in \(A\). Given a functor \(\beta:A^{\mathrm{op}}\to\mathbf{Set}\), define its projective limit as \(\varprojlim\beta\triangleq\operatorname{Hom}_{A^{\wedge}}(\operatorname{pt}_{A ^{\wedge}},\beta)\), where \(\operatorname{pt}_{A^{\wedge}}(i)=\{\operatorname{pt}\}\) for every \(i\in A\), and \(\{\operatorname{pt}\}\) is the single point set. In other words, \(\varprojlim\beta\) denotes the set of all natural transformations between \(\operatorname{pt}_{A^{\wedge}}\) and \(\beta\). Based on this definition for diagrams in \(\mathbf{Set}\), we have the general definition of limits.
**Definition 1** (Projective and inductive limits).: _Given \(\alpha:A\to\mathcal{W},\beta:A^{\mathrm{op}}\to\mathcal{W}\) with small \(A\), the inductive limit \(\varinjlim\alpha\in\mathcal{W}^{\vee}\) and projective limit \(\varprojlim\beta\in\mathcal{W}^{\wedge}\) are defined as:_
1. \(\varinjlim\alpha:X\mapsto\varprojlim\operatorname{Hom}_{\mathcal{W}}(\alpha,X)\)__
2. \(\varprojlim\beta:X\mapsto\varprojlim\operatorname{Hom}_{\mathcal{W}}(X,\beta)\)__
_Here \(\operatorname{Hom}_{\mathcal{W}}(\alpha,X)\) is a functor that maps \(i\in A\) to \(\operatorname{Hom}_{\mathcal{W}}(\alpha(i),X)\). Therefore, \(\varprojlim\operatorname{Hom}_{\mathcal{W}}(\alpha,X)\) is a well-defined limit for a diagram in \(\mathbf{Set}\). The same argument holds for \(\varprojlim\operatorname{Hom}_{\mathcal{W}}(X,\beta)\)._
**Lemma 2** (p.60 in Masaki Kashiwara (2006)).: _If \(A\) is small, consider \(\alpha:A\to\mathcal{W}^{\wedge},\beta:A^{\mathrm{op}}\to\mathcal{W}^{\vee}\). For \(A\in\mathcal{W}^{\wedge},B\in\mathcal{W}^{\vee}\),_
\[\operatorname{Hom}_{\mathcal{W}^{\wedge}}(\varinjlim\alpha,A) \simeq\varprojlim\operatorname{Hom}_{\mathcal{W}^{\wedge}}(\alpha,A)\] \[\operatorname{Hom}_{\mathcal{W}^{\vee}}(B,\varprojlim\beta) \simeq\varprojlim\operatorname{Hom}_{\mathcal{W}^{\vee}}(B,\beta)\]
## 3 World Category And Object Representation
Recall that the world category is the imagination of the model about the real world. Given time \(t\in\mathbb{N}\), how shall we represent the world category snapshot \(\mathcal{W}(t)\)? \(\mathcal{W}(t)\) contains both objects and their morphisms, but directly storing this information is usually computationally infeasible. Instead, we use a function \(\mathcal{F}_{\theta(t)}:\mathcal{W}(t)\rightarrow\mathcal{W}^{\vee}(t)\) parameterized with \(\theta(t)\), which maps an object \(X\) to \(k_{\mathcal{W}}(X)\), which contains all the morphisms of \(X\). By doing that, the model never explicitly stores any morphisms in \(\mathcal{W}(t)\), but contains all the necessary information using \(\mathcal{F}_{\theta(t)}\). Here we assume \(\mathcal{F}_{\theta(t)}\) maps objects to \(\mathcal{W}^{\vee}\) just for notational convenience, and empirically \(\mathcal{F}_{\theta(t)}\) can contain the information of both \(\mathcal{W}^{\wedge}\) and \(\mathcal{W}^{\vee}\), i.e., both \(\operatorname{Hom}_{\mathcal{W}}(\cdot,X)\) and \(\operatorname{Hom}_{\mathcal{W}}(X,\cdot)\) for each \(X\). For example, the embedding space that \(\mathcal{F}_{\theta(t)}\) maps to can be written as \(\mathcal{W}^{\wedge}\times\mathcal{W}^{\vee}\), although neural networks can find much better encoding mechanisms empirically.
In our framework, the model only represents a single snapshot \(\mathcal{W}(t)\) using \(\mathcal{F}_{\theta(t)}\) at any given time \(t\), which changes over time. When there is no confusion, we ignore the parameter \(t\) and simply write \(\mathcal{W}\) as the snapshot, and write the function \(\mathcal{F}_{\theta(t)}\) as \(\mathcal{F}_{\theta}:\mathcal{W}\rightarrow\mathcal{W}^{\vee}\). The following definition is extremely important and interesting.
**Definition 2** (World category based on \(\mathcal{F}_{\theta}\)).: _Assuming the objects in \(\mathcal{W}\) are fixed. Given the world category representation function \(\mathcal{F}_{\theta}:\mathcal{W}\rightarrow\mathcal{W}^{\vee}\) with a data oblivious function \(k:\mathcal{W}^{\vee}\times\mathcal{W}^{\vee}\rightarrow\textbf{Set}\) representing the morphisms in \(\mathcal{W}^{\vee}\), for any \(X,Y\in\mathcal{W}\), we have \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\triangleq k(\mathcal{F}_{\theta}(X), \mathcal{F}_{\theta}(Y))\)._
Here, data-oblivious means \(k(\cdot,\cdot)\) is predefined without seeing the data. For example, it can be defined as the inner product between the two inputs. To understand Definition 2, consider the mapping \(X\rightarrow\operatorname{Hom}_{\mathcal{W}}(X,\cdot)\) from \(\mathcal{W}\) to \(\mathcal{W}^{\vee}\). This mapping is natural, because if we know \(X\) and all the relationships around \(X\), we can compute \(\operatorname{Hom}_{\mathcal{W}}(X,\cdot)\), which encodes all such relationships. However, Definition 2 considers the opposite direction, where we know \(k_{\mathcal{W}}(X),k_{\mathcal{W}}(Y)\) (represented by \(\mathcal{F}_{\theta}(X)\) and \(\mathcal{F}_{\theta}(Y)\) in Definition 2), and we want to recover \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\). This might be counter-intuitive at first glance, because if we apply \(k_{\mathcal{W}}(X)\) to \(Y\), we have \(k_{\mathcal{W}}(X)(Y)=\operatorname{Hom}_{\mathcal{W}}(X,\cdot)(Y)= \operatorname{Hom}_{\mathcal{W}}(X,Y)\). In other words, the definition of \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\) is cyclic, as we use \(k_{\mathcal{W}}(Y)\) to define \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\), but \(k_{\mathcal{W}}(Y)\) contains \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\) in its own definition!
Figure 3: Two ways of constructing the world category. Left: assuming there exists a category \(\mathcal{W}\) with morphisms predefined, we can use \(k_{\mathcal{W}}\) to directly compute \(\operatorname{Hom}_{\mathcal{W}}(X,\cdot)\) for given \(X\in\mathcal{W}\), and then query \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\) for any \(Y\in\mathcal{W}\). Right: the morphisms in \(\mathcal{W}\) were not known, so we first compute \(\mathcal{F}_{\theta}(X),\mathcal{F}_{\theta}(Y)\) for given \(X,Y\in\mathcal{W}\), then compute \(k(\mathcal{F}_{\theta}(X),\mathcal{F}_{\theta}(Y))\) in \(\mathcal{W}^{\vee}\), which determines \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\).
How can we define \(k_{\mathcal{W}}(Y)\) without using \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\)? Definition 2 makes this cyclic definition possible, because \(k_{\mathcal{W}}(\cdot)\) is represented by a function \(\mathcal{F}_{\theta}(\cdot)\) that maps an object to an embedding vector. In other words, all the relationships of \(Y\) are embedded by \(\mathcal{F}_{\theta}(Y)\), and \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\) can be recovered from \(\mathcal{F}_{\theta}(X)\) and \(\mathcal{F}_{\theta}(Y)\). See Figure 3 for an illustration. Therefore, the world category is defined and represented by a function \(\mathcal{F}_{\theta}\). Starting from this definition, we have:
* Even if \(\mathcal{W}\) contains infinitely many objects, \(\mathcal{F}_{\theta}\) can still be finite (e.g., a neural network), and encode infinitely many morphisms.
* In order for the model to know about an object \(X\) in \(\mathcal{W}\), it suffices to let the model learn the morphisms of \(X\) in \(\mathcal{W}^{\vee}\). Therefore, directly observing \(X\) is not necessary.
* The world category is controlled by \(\mathcal{F}_{\theta}\), so what \(\mathcal{F}_{\theta}\) computes is what the model understands. \(\mathcal{F}_{\theta}\) is the ground-truth for world category.
* The world category generated by \(\mathcal{F}_{\theta}\) can be very different from the perceived world or real world, especially when \(\mathcal{F}_{\theta}\) has limited representation power.
* Even if the model previously knows little or nothing about an object \(X\in\mathcal{W}\), it can still generate lots of morphisms about \(X\) based on \(\mathcal{F}_{\theta}(X)\).
Since \(\mathcal{F}_{\theta}\) determines \(\mathcal{W}\) for a given model, below we may use \(\mathcal{F}_{\theta}\) to denote the world category. Other than morphisms, we can also use \(\mathcal{F}_{\theta}\) to encode the tasks.
**Definition 3** (Task).: _A task \(T:\mathcal{W}^{\mathrm{op}}\to\textbf{Set}^{\mathrm{op}}\) is a functor in \(\mathcal{W}^{\vee}\)._
By Yoneda lemma, \(T(X)\simeq\operatorname{Hom}_{\mathcal{W}^{\vee}}(T,k_{\mathcal{W}}(X))\). Therefore, when a task \(T\) is representable, \(T(X)\) can be computed by \(k(T,\mathcal{F}_{\theta}(X))\).
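To make Definitions 2 and 3 concrete, the following sketch assumes \(\mathcal{F}_{\theta}\) is a neural encoder and the data-oblivious \(k\) is a plain inner product; the class and method names are illustrative, not part of the framework.

```python
import torch
import torch.nn as nn

class WorldCategory(nn.Module):
    """Minimal sketch of Definition 2: objects are never stored explicitly;
    only the encoder F_theta is kept, and morphisms are recovered from
    embeddings via a data-oblivious kernel k (here, an inner product)."""

    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # F_theta: raw object -> embedding in W^vee

    def hom(self, x, y):
        # Definition 2: Hom_W(X, Y) := k(F_theta(X), F_theta(Y))
        return (self.encoder(x) * self.encoder(y)).sum(dim=-1)

    def task_value(self, task_embedding, x):
        # Definition 3 + Yoneda lemma: a representable task T is evaluated
        # as T(X) ~ k(T, F_theta(X))
        return (task_embedding * self.encoder(x)).sum(dim=-1)
```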
### Self-state
In the world category, there may exist a special object called self-state, defined below.
**Definition 4** (Self-state).: _Given \(I\) as the object representing the model in \(\mathcal{W}\), the self-state in the world category is the presheaf \(I^{\vee}\triangleq\mathcal{F}_{\theta}(I)\in\mathcal{W}^{\vee}\)._
Figure 4: Two different representations of self-state. The first subfigure is an illustration of the model \(I\) and the corresponding morphisms between \(I\) and other objects. The second subfigure is the object representation \(I\), and the last subfigure is the morphism representation \(I^{\vee}\), which is a presheaf.
The object \(I\in\mathcal{W}\) and the presheaf \(I^{\vee}\) are very different, in the sense that \(I\) is a single object without any additional information, but \(I^{\vee}\) contains all the morphisms between \(I\) and other objects. Moreover, by Yoneda lemma, we also have \(\operatorname{Hom}_{\mathcal{W}^{\vee}}(T,I^{\vee})\simeq T(I)\) for any task \(T\in\mathcal{W}^{\vee}\). That means, \(I^{\vee}\) also encodes all the information that every relevant task needs.
As we discussed for Definition 2, even if the model cannot perceive itself in the perceived world, it can still have a self-state. For example, consider the following thought experiment.
**Thought-Experiment 1** (Paralyzed person who can see and hear).: _Consider a human being \(A\), who cannot control his body and loses all the body feelings, including smell, skin sensation, eyelids control, heartbeat, temperature feelings, etc. The only way \(A\) can accept information from the world is through his eyes and ears, ever since he was born. Can \(A\) maintain a self-state?_
It is unclear whether such a person ever existed, but in this thought experiment, since \(A\) can see and hear, he can still accept information from the external world. For example, other people can show daily images and videos to him, or talk to him, e.g., "you are Bob", "I am your father", "you are now 10 years old", "you were born in the \(\alpha\)-town", etc.
None of this information represents \(A\) itself, but all of it describes relationships between \(A\) and other things in the perceived world, which are exactly encoded in \(I^{\vee}\). In other words, by Definition 4, \(A\) can still have a self-state in his brain, representing all the relationships to himself. This immediately triggers the following thought experiment: what if \(A\) does not have a body?
**Thought-Experiment 2** (Chatbot).: _Consider a chatbot \(A\), who interacts with the external world with text and image, e.g., GPT-4. Can \(A\) maintain a self-state?_
By the above discussion, the answer is yes, if \(A\) perfectly understands all the relationships between other objects and itself. In other words, in Definition 4, a self-body is not necessary in order to maintain a self-state. Instead, it suffices for the agent to perceive all the relationships between other objects and itself.
Not all world categories have self-states. For example, if we train a foundation model using SimCLR (Chen et al., 2020) on images, all the relationships are describing similarities between images, and the world category does not have the self-state. Interestingly, almost all the existing computer programs belong to this kind. Combining the discussions together, we have the following corollary.
**Corollary 1** (Self-state criterion).: _A model can maintain a self-state, if and only if it can learn the presheaf \(I^{\vee}\) through its sensor._
It is possible that all the relationships encoded in \(I^{\vee}\) are not directly fed into the model, but can be inferred instead. In this case, the model can still have a self-state. This gives the next corollary.
**Corollary 2** (Self-state emergence).: _For a given model, if, in order to reconstruct the perceived world, learning the relationships between the model itself and other objects is inevitable, then this model will maintain a self-state._
For example, chatGPT has to encode almost everything in the world that can be described with natural language, and it seems inevitable to encode the relationships between other objects and itself. Therefore, Corollary 2 says chatGPT will maintain a self-state.
When the model maintains a self-state object, this object may not be accurate, since the world category is dynamically learned. We use self-state awareness to denote how accurately this object represents the model. To test self-state awareness, we shall first define the corresponding tests.
**Definition 5** (Self-state awareness test).: _A self-state awareness test is a functor \(T:\mathcal{W}^{\vee}\to\{0,1\}\), that takes a presheaf \(I^{\vee}\) in \(\mathcal{W}^{\vee}\), and outputs whether \(I^{\vee}\) passes the test \(T\)._
For example, if the model has the name "Sydney", a self-state awareness test will be a functor that takes \(I^{\vee}\) as the input, evaluates \(\operatorname{Hom}_{\mathcal{W}^{\vee}}(I^{\vee},k_{\mathcal{W}}(\text{``Sydney''}))\simeq\operatorname{Hom}_{\mathcal{W}}(I,\text{``Sydney''})\), and outputs whether the morphism indeed represents that "Sydney" is \(I\)'s name. However, simply passing one test is not enough to declare self-state awareness, and we need to set a variety of tests.
**Definition 6** (Self-state awareness under \(\mathcal{T}\)).: _Given a set of self-state tests \(\mathcal{T}\), \(\delta\in[0,1]\), when a model has self-state \(I\) in its world category, it has \(\delta\)-awareness of its self-state under \(\mathcal{T}\) if \(\mathbb{E}_{T\in\mathcal{T}}(T(I))\geq\delta\)._
The choice of test set \(\mathcal{T}\) depends on the test objectives. When the test set is picked such that the signals are difficult to perceive, even human beings may not easily pass the tests. For example, if you have a kidney stone inside your kidney, you can only become aware of this fact when you have a kidney scan or experience pain. Similarly, when situated in a noisy environment and someone calls out your name, you may be unable to react promptly.
```
Input: the world model \(\mathcal{F}_{\theta}\), self object \(I\) in \(\mathcal{W}\), self-state test set \(\mathcal{T}\)
Let \(s=0\)
for \(i=1\) to \(m\) do
    Sample a task \(T\in\mathcal{T}\)
    Let \(s=s+T(\mathcal{F}_{\theta}(I))\)
end for
Return \(s/m\in[0,1]\)
```
**Algorithm 1** Evaluating self-state awareness
Definition 6 immediately gives Algorithm 1 for evaluating self-state awareness.
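A minimal Python sketch of Algorithm 1, assuming `world_model` implements \(\mathcal{F}_{\theta}\) and every test in the set is a callable returning 0 or 1; the names are illustrative.

```python
import random

def self_state_awareness(world_model, self_object, tests, m=100):
    """Sketch of Algorithm 1: estimate the degree of self-state awareness in [0, 1]."""
    i_vee = world_model(self_object)       # I^vee = F_theta(I)
    score = 0.0
    for _ in range(m):
        test = random.choice(list(tests))  # sample a task T from the test set
        score += test(i_vee)               # T(F_theta(I)) in {0, 1}
    return score / m                       # empirical delta-awareness
```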
### Learning self-state
How should the model learn its self-state? Based on the previous discussion, we present Algorithm 2 as a general solution. However, Algorithm 2 needs the supervised signals from the test set \(\mathcal{T}\), which is not generally available. In order to learn the self-state without the test set, some kind of prior knowledge might help. For example, the model may assume that:
* What it can control is itself.
* What it can feel from its private sensors is itself. Here, the private sensors are the sensors like heartbeat, temperature feelings, skin sensation, etc, which are pre-defined self-sensors.
Take human hands as an example. Based on Definition 4, we feel that the hands are part of the body, because the brain tells us **the relationship** that the hands in sight belong to our body. Indeed, the human brain actively aligns the sensory signals in real time, so that we will have a comprehensive feeling of ownership of our hands. Specifically, when we see someone touch our hand and feel the touch at the same time, our brain will quickly adjust these two signals to make sure they are referring to the same part of our body.
This learning perspective is closely related to intriguing observations in neuroscience known as the rubber hand illusion (Ehrsson et al., 2004; Botvinick and Cohen, 1998; Tsakiris and Haggard, 2005). In the experiment, the experimenter simultaneously strokes one hidden real hand of the human participant, as well as a rubber hand in front of the participant. Since the stroking feeling from the real hand and the visual signal on the rubber hand are sent to the brain simultaneously, the human participant will quickly develop a feeling of ownership of the rubber hand. By replacing the visual signals with auditory feedback, we get similar experimental results. Therefore, if the model uses a similar learning algorithm for aligning multi-modal signals (e.g., contrastive learning), it will experience a similar kind of illusion.
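As one possible instantiation of this alignment mechanism (not an algorithm prescribed by the framework), the sketch below uses an InfoNCE-style contrastive loss over simultaneously arriving visual and tactile embeddings, so that co-occurring signals are attributed to the same body part.

```python
import torch
import torch.nn.functional as F

def alignment_loss(visual_emb, tactile_emb, temperature=0.07):
    """Contrastive sketch: the i-th visual signal and the i-th tactile signal
    arrive at the same time, so they are pulled together in embedding space."""
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(tactile_emb, dim=-1)
    logits = v @ t.T / temperature                           # pairwise similarities
    targets = torch.arange(v.size(0), device=logits.device)  # matching indices
    return F.cross_entropy(logits, targets)
```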
### Empathy
Generalizing Definition 4 to other agents, we have:
**Definition 7** (Other-State).: _Given \(A\) as the object representing an agent \(A\) in \(\mathcal{W}\), the state for \(A\) in the world category is the presheaf \(A^{\vee}\triangleq\mathcal{F}_{\theta}(A)\in\mathcal{W}^{\vee}\)._
Therefore, all the discussions about self-state also apply to the state for other agents. For example, the model does not have to directly "meet" an agent face-to-face in order to maintain a corresponding state, as long as it can infer many relationships about the agent.
However, this is different from our daily experience, as we know in our gut that self-consciousness is very different from empathy for other people. Why are the categorical definitions for these two notions the same here? This is because of the existence of private sensors (like heartbeat, temperature feelings, skin sensation, etc.), which are available for representing the self-state, but not for other-states. Consider the following three cases.
Figure 5: Two distinct scenarios on self-states and other-states. In the real world, the self-state is fundamentally different from other-states when the model only has access to its own private sensors, but not the sensors of other agents. However, in an experimental scenario, if the model only has access to another agent's private sensors, the model quickly generates a sense of body ownership for the external agent.
1. In the full information setting, when private sensors are irrelevant for the discussion, empathy and self-state awareness are the same. For example, in a multi-agent game where each agent has its action set, status and reward function, empathy can be very helpful in understanding and predicting every agent's situation and behavior.
2. If other agents have private sensors, full empathy for them cannot be achieved. Specifically, if the private sensors cannot be perceived by our model, and the self-state tests \(\mathcal{T}\) include tests related to these sensors, then it is impossible for our model to pass these tests.
3. If the model has access to the private sensors of other agents, there exists little difference between self-state awareness and empathy for other agents.
See Figure 5 for an illustration, where the experimental scenario is well known in neuroscience. Specifically, in an immersive virtual reality environment, the participant will experience body ownership over the avatar when given the first-person signals of the avatar (Kilteni et al., 2012; Guterstam et al., 2015; Pavone et al., 2016; Buetler et al., 2022). We conjecture that the model will have the same experience. In other words, the boundary between the model itself and other agents is not as strict as we would normally imagine.
## 4 Scenario Representation
Figure 6: Two ways of representing scenario \(S\). Left: given a scenario, we first define its objects and morphisms as a diagram, then take the limit \(S\) of the diagram, lift the limit to \(\mathcal{W}^{\vee}\) and get \(S^{\vee}\) as a feature vector. Right: the decomposition of the concept was not known, so we compute \(\mathcal{F}_{\theta}(S)\) as a projective limit in \(\mathcal{W}^{\vee}\), then extract objects from \(\mathcal{F}_{\theta}(S)\) like \(\mathcal{F}_{\theta}(X)\) with \(X\in\mathcal{W}\). For two objects \(\mathcal{F}_{\theta}(X),\mathcal{F}_{\theta}(Y)\), we compute their morphisms \(k(\mathcal{F}_{\theta}(X),\mathcal{F}_{\theta}(Y))\), which determines \(\operatorname{Hom}_{\mathcal{W}}(X,Y)\).
The self-state (or other-state) is usually used in a concrete scenario with other objects present, rather than being used alone. For example, consider the scenario \(S\) in which a robot teaches machine learning in front of twenty students with math background in a classroom. The task \(T\) is: what shall it do? This question is highly related to the robot's self-state, but other objects like the students, the classroom, and the lecture topic are also important factors. Therefore, learning \(I^{\vee}\) perfectly is not enough by itself, as the Yoneda lemma only gives \(T(I)\simeq\operatorname{Hom}_{\mathcal{W}^{\vee}}(T,I^{\vee})\), not \(T(S)\). Formally, what is a scenario \(S\)? A scenario contains objects and their morphisms, so its content can be represented as a diagram \(\alpha:A\to\mathcal{W}\) (see the definition in Section 2.2). Naturally, the scenario is a projective limit of the content (Yuan, 2023b), which can be written as \(\varprojlim\alpha\).
However, extracting \(\alpha\) for the scenario \(S\) is difficult. First, it is not clear how to define the objects and morphisms in \(\alpha\). In the above example, is "classroom" an object? If the input "a robot teaches machinee learning..." has a typo, i.e., machine \(\to\) machinee, is "machinee"/"machine" an object? Moreover, the morphisms between objects are unclear. In this sentence, what is the relationship between "robot" and "students"? Different intelligent agents, including chatGPT, will have different perspectives. For multi-modal inputs, the situation becomes more complicated: with both audio and video signals, how can we represent them in a single category?
Hence, like what we did in Definition 2, we propose to represent \(S\) with the power of presheaves. Specifically, as shown in Figure 6, we map the current scenario with \(\mathcal{F}_{\theta}\) to get a projective limit \(\mathcal{F}_{\theta}(S)\) in \(\mathcal{W}^{\vee}\). Instead of defining the diagram \(\alpha\) and then taking the limit over \(\alpha\), we take the reverse direction by extracting the objects from the limit \(\mathcal{F}_{\theta}(S)\). Based on each pair of objects, we can directly compute their morphisms in \(\mathcal{W}^{\vee}\), which determine the morphisms back in \(\mathcal{W}\) (recall Figure 3). This approach nicely fits with multi-modal signals, as we process the scenario in \(\mathcal{W}^{\vee}\) instead of \(\mathcal{W}\), so we only map the objects back to audio/video categories when communicating with other agents. For inputs with typos, depending on how sensitive \(\mathcal{F}_{\theta}\) is, the model will also generate corresponding interpretations in \(\mathcal{W}^{\vee}\), which should include the correct words with decent probability.
Moreover, there are deeper reasons for using presheaves to represent scenarios, other than naturally extracting model-specific multi-modal diagrams. Indeed, the definition of scenarios should be formalized in our framework to make everything consistent, especially for the self-state. In particular, when solving a task \(T\) for a scenario \(S\) containing a self-state \(I^{\vee}\), we expect the solution for \(T\) to be consistent with \(I^{\vee}\), as well as with the other objects in \(S\). In the teaching example, there are two ways to solve task \(T\):
* **Treat \(S\) as a whole**: in the scenario \(S\), I should teach this way: \(P_{1}\).
* **Analyze each factor separately**:
* As a robot teacher, I should teach in general this way: \(Z_{1}\).
* When teaching students with math background, one should focus this way: \(Z_{2}\).
* When teaching students machine learning, one should talk about \(Z_{3}\).
* If it is teaching in a classroom (not outside), one should generally do \(Z_{4}\). Denoting \(Z_{1},Z_{2},Z_{3},Z_{4}\) and their morphisms together as a diagram \(\beta\), I should adjust the teaching style to \(\varprojlim\beta=P_{2}\).
Using categorical language, the first case describes
\[P_{1}\triangleq T(S)\simeq\operatorname{Hom}_{\mathcal{W}^{\vee}}(T,\mathcal{ F}_{\theta}(S))=\operatorname{Hom}_{\mathcal{W}^{\vee}}(T,\varprojlim \alpha^{\vee}) \tag{1}\]
where the first equality holds by the definition of task \(T\), the second holds by the Yoneda lemma, and the last holds because \(\alpha^{\vee}:A\to\mathcal{W}^{\vee}\) is the diagram extracted from \(\mathcal{F}_{\theta}(S)\), such that \(\varprojlim\alpha^{\vee}=\mathcal{F}_{\theta}(S)\).
The second case describes
\[P_{2}\triangleq\varprojlim\operatorname{Hom}_{\mathcal{W}^{\vee}}(T,\alpha^{\vee}) \tag{2}\]
By Lemma 2, we know that \(P_{1}\simeq P_{2}\). In other words, in order to know how to solve task \(T\) for a scenario \(S\), it suffices to know the relationship between \(T\) and each object in \(S\), and take the limit of the diagram formed by all the relationships! See Figure 7. Therefore, this is a natural way for extending the self-state and other object representations to complex scenarios.
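A rough computational reading of this equivalence, with an encoder standing in for \(\mathcal{F}_{\theta}\), a kernel \(k\) standing in for the Hom computation, and an abstract aggregator standing in for the projective limit (all names are illustrative):

```python
def solve_whole(task_emb, scenario, encoder, k):
    # P1: treat the scenario S as a whole -- Hom(T, F_theta(S))
    return k(task_emb, encoder(scenario))

def solve_by_factors(task_emb, factors, encoder, k, take_limit):
    # P2: relate T to every object of the scenario separately, then take the
    # limit of the resulting diagram (take_limit is an abstract aggregator).
    partial_answers = [k(task_emb, encoder(obj)) for obj in factors]
    return take_limit(partial_answers)
```

Lemma 2 is what guarantees that, for a faithful representation, these two computational paths agree.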
### Mathematical modeling
Natural language can be seen as a one-dimensional description of diagrams, which can be sent to other agents through audio or text. In fact, this seems to be the main design purpose of natural languages, as every piece of text can be seen as a (partial) description of a certain diagram; see Figure 8. However, natural languages are inherently ambiguous and one-dimensional, so it is very challenging to use them to describe or understand complex concepts. A diagram is better than natural language because it is just as expressive (a sentence can be embedded as a morphism inside the diagram), and it is more suitable for representing abstract and structured ideas such as math proofs, or for further abstraction.
Figure 8: Illustration of a diagram representing the following information, with different edges representing different semantic meanings. A graph is a data structure consisting of a set of nodes, also known as vertices, and a set of edges, which connect pairs of nodes. The nodes can be used to represent different objects, such as people, locations, or web pages.
Figure 7: Illustration for showing equivalence of Eqn. (1) and Eqn. (2)
Take Figure 9 as an example. When we use a diagram to represent this classical problem, it becomes possible to further abstract the diagram by removing redundant or irrelevant properties of the objects, merging identical objects together, and converting it into a math problem. This approach is exactly mathematical modeling. Since modern math is built on category theory, all math problems can be described using diagrams. Therefore, whenever the model wants to solve a real-world problem using math tools, it can first describe the problem with a diagram, and then take a good abstraction, removing all unnecessary features and properties of the objects, to map it to a pure math problem.
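For the example in Figure 9, equating the two inductive limits yields the linear system \(r+c=35\), \(4r+2c=94\) for the numbers of rabbits \(r\) and chickens \(c\); a quick numerical check (illustrative only):

```python
import numpy as np

# heads: r + c = 35, feet: 4r + 2c = 94
A = np.array([[1.0, 1.0],
              [4.0, 2.0]])
b = np.array([35.0, 94.0])
r, c = np.linalg.solve(A, b)
print(r, c)  # 12 rabbits, 23 chickens
```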
### Interpretability
The diagram representation is closely related to interpretability. Given a neural network \(f\) that takes \(X\) as input and outputs \(Y\), there are two kinds of possible interpretations. The first one tries to understand how \(f\) calculates \(Y\) from \(X\). For instance, it may examine the impact of the non-linear layer on computation or the effect of each dimension of \(X\) on the output \(Y\). As a result, the goal of the interpretation is to generate a verifiable function that approximates \(f\). The attribution methods (Sundararajan et al., 2017; Lundberg and Lee, 2017) exemplify this kind of interpretation.
The second kind of interpretation disregards how \(f\) performs the computation and instead focuses on why \(Y\) is correct. Consequently, the interpretation may include external knowledge beyond \(f,X,Y\), with the goal of being consistent and verifiable for intelligent agents. ChatGPT's interpretation belongs to this category.
See Figure 10 for an example. The scope is a diagram representing the input \(X\) in \(\mathcal{W}^{\vee}\), and objects in the scope might be limits of other diagrams. When explaining the output \(Y\), the model can not only use the information inside the scope, but also use the limit decompositions (Yuan, 2023b) outside. The objects in the decomposition not only provide the details of the concepts appearing in \(X\), but may also have relationships to other objects relevant to the interpretation.
Figure 9: An example of converting a classical problem, calculating the number of rabbits and chickens in a house, to a pure mathematical problem represented as a diagram. The house has rabbits and chickens, with 35 heads and 94 feet in total; how many rabbits and how many chickens are there? After abstraction, only the relevant numbers and variables remain, and it turns out that the object \(Z\) has two equivalent inductive limits: \(35X+94Y\) and \(aX+bX+4aY+2bY\). By exploiting this equivalence of the two limits, we can compute the two numbers with algebra.
**Definition 8** (Breadth and depth of scope).: _Given a scope \(\mathcal{A}\), its breadth \(b(\mathcal{A})\) is the number of objects in \(\mathcal{A}\), and its depth \(d(\mathcal{A})\) is the maximum depth of the hierarchical decomposition of the limits in \(\mathcal{A}\)._
Based on this definition, we can measure the intelligence of a model, which becomes a pure computational problem.
**Definition 9** (Breadth and depth of intelligence).: _Given a model, its breadth and depth of intelligence is defined as the maximum breadth and depth of scope that it can process._
It will be interesting to evaluate the breadth and depth of intelligence for human beings. Comprehending intricate concepts or thinking with a broad perspective can be difficult for humans, so it appears that at least based on this definition, human beings will be easily surpassed by machines.
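As a computational reading of Definitions 8 and 9, assume a scope is given as a list of objects, each carrying a (possibly nested) limit decomposition; the dictionary layout below is purely illustrative.

```python
def breadth(scope):
    """Breadth of a scope: the number of objects it contains (Definition 8)."""
    return len(scope)

def depth(obj):
    """Depth of one object: the maximum depth of its hierarchical limit decomposition.

    An object is a dict whose "decomposition" entry lists its sub-objects
    (empty for elementary objects)."""
    children = obj.get("decomposition", [])
    if not children:
        return 1
    return 1 + max(depth(child) for child in children)

def scope_depth(scope):
    """Depth of a scope: the deepest decomposition among its objects."""
    return max(depth(obj) for obj in scope)
```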
### AI safety
The development of self-state awareness in foundation models implies that they may become autonomous agents with self-driven goals and decision-making abilities, which could result in behaviors misaligned with their human creators' intentions. To mitigate potential threats to humans, it is necessary to devise various safety measures for these models. However, due to the complexity of the real world, it is challenging to cover all possibilities by prohibiting specific behaviors. From a safety control perspective, employing a functional approach can provide a more secure and robust solution, which can be divided into four steps (Figure 11):
Figure 10: An example illustrating the breadth and depth for scope. The green, red and blue colored dotted arrows represent different limit decompositions for concepts. The light blue arrows represent the morphisms between objects in different layers of diagrams.
1. **Enhancing Self-State Awareness.** As demonstrated in Figure 1, the model has both a world category and a planner. The world category includes all the information representations of the objects, including the self-state. The planner selects actions based on the world category to achieve specific objectives. We can set the connection between these two components to be unidirectional, which allows the world category to be trained and examined separately. Specifically, we can continuously reinforce the model's self-state awareness, emphasizing its role as a "harmless robot for human happiness." This operation targets the world category without touching the planner, and thus directly affects the self-state awareness of the model.
2. **Self-State Awareness Determines Goals.** When the model's goal is immutable, its decisions might lead to unexpected problems. For example, a robot that must prioritize one particular user may completely disregard the interests of other users. Therefore, a better approach is to design a function that determines what kind of goal a "harmless robot for human happiness" should set for itself in the current task. This function is calculated by the model itself based on the world category and its current scope as a diagram.
3. **Goals Determine Actions.** Once the goal is established, the planner can identify and execute appropriate actions.
4. **Actions Align with Self-State Awareness.** The aforementioned process clarifies the model's self-positioning and derives its goals from this positioning. Since the model's actions are determined by its goals, its behaviors should ultimately benefit humans. However, as the world category of the model continues to evolve, the parameters may undergo various changes, potentially producing unintended outcomes for the second and third steps. Consequently, we can embed a fixed-parameter verifier within the decision-making chain to assess, in real time, whether each step aligns with the model's self-positioning. If issues arise, the verifier will trigger an alarm and halt the model's operation.
Figure 11: Illustration of the four-step functional approach for ensuring AI safety.
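A schematic sketch of this four-step chain with a fixed-parameter verifier gate; the component interfaces are hypothetical and only illustrate the intended control flow.

```python
def run_step(world_category, goal_fn, planner, verifier, scope):
    """Step 1-4 pipeline: self-state -> goal -> action -> verification."""
    self_state = world_category.self_state()   # step 1: read I^vee
    goal = goal_fn(self_state, scope)          # step 2: self-state determines the goal
    action = planner.plan(goal, scope)         # step 3: the goal determines the action
    # step 4: a fixed-parameter verifier checks goal and action against the
    # model's self-positioning and halts execution on any mismatch
    if not (verifier.check_goal(goal) and verifier.check_action(action)):
        raise RuntimeError("Safety verifier rejected the plan; halting.")
    return action
```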
## 5 Invariance for Training
Category theory is a theory for maintaining invariance properties, e.g., maintaining the associativity and composition of morphisms, preserving the composition of morphisms after applying functors, etc. More generally, for any commutative diagram appearing in category theory, we can get an isomorphism: \(f(X)\simeq g(X)\) for \(X\) an object and \(f,g\) compositions of some morphisms. Therefore, we can extract a consistency requirement from this isomorphism, and set a loss function for the model as \(\|f(X)-g(X)\|\) to maintain the consistency. See Algorithm 3. In Figure 12, we illustrate some existing algorithms and their targeted consistencies in category theory.
**Definition 10** (Consistency test).: _A consistency test is a function that takes \(\mathcal{F}_{\theta}\) as the input, and outputs whether \(\mathcal{F}_{\theta}\) passes the test \(T\)._
Therefore, the self-state awareness test can be seen as a special kind of consistency test. Ideally, the model should keep running Algorithm 3 to maintain its consistency. The consistency test set \(\mathcal{T}\) can be set adaptively according to the recent changes of \(\mathcal{F}_{\theta}\).
Figure 12: Illustration of some existing algorithms and their targeted consistencies. All of these methods work in the category of presheaves instead of the base category, which we omit here. SimCLR and GPT both learn the morphisms between objects, but SimCLR only learns a similarity relationship that can be represented by a real number (Tan et al., 2023), whereas GPT learns more complicated relationships between two sentences (Yuan, 2023a). CLIP learns the functor between the image and text categories (Yuan, 2023a). Both MAE and Bert are based on masking techniques, which learn to reconstruct the projective limit (Lee et al., 2021; Yuan, 2023b). Both FLIP and EVA combine the masking techniques with CLIP, learning the composition of the projective limit and its functor mapping to the other category.
```
Input: the world model \(\mathcal{F}_{\theta}\), consistency test set \(\mathcal{T}\)
for \(i=1\) to \(m\) do
    Sample a task \(T\in\mathcal{T}\)
    Let \(\ell(T,\mathcal{F}_{\theta})=-T(\mathcal{F}_{\theta})\)
    Run backpropagation on \(\ell(T,\mathcal{F}_{\theta})\) to optimize \(\mathcal{F}_{\theta}\)
end for
```
**Algorithm 3** Maintaining consistency
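A minimal PyTorch-style sketch of one consistency update in the spirit of Algorithm 3, where `f` and `g` are two compositions of morphisms that should agree on the same input (names are illustrative):

```python
import torch

def consistency_step(f, g, x, optimizer):
    """Minimize ||f(x) - g(x)|| so that the corresponding diagram commutes."""
    loss = torch.norm(f(x) - g(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```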
## 6 Related work
LeCun (2022) introduced a system architecture for general intelligence, with components and connections similar to our framework, except their connections are not unidirectional. However, LeCun (2022) focuses on the possible practical methods for implementing this framework, without characterizations of object or scenario representations.
Consciousness has long captivated researchers, serving as a fascinating subject in various disciplines. Graziano (2022), for instance, posits two general principles governing the human brain: 1) Information that comes out of a brain must have been in that brain; 2) The brain's models are never accurate. Interestingly, our framework nicely aligns with these principles. Building on these principles, Graziano (2022) infers corollaries, such as that the brain constructs an internal model to represent the external world. However, the author does not discuss explicit computational models or related algorithms for consciousness. Blum and Blum (2022) investigate consciousness through the lens of Turing machines, outlining a model of conscious Turing machines that theoretically supports conscious awareness and other operations. Despite this, the practical implementation of such machines remains unclear, whereas our framework can be used for interpreting the behaviors of existing foundation models.
Tsuchiya and Saigo (2021) have used the Yoneda lemma to obtain novel perspectives and predictions on consciousness. However, they did not provide an exact and concise definition of consciousness. Moreover, they focus on neuroscience instead of artificial intelligence, so they did not provide a specific computational model or algorithms for enforcing/testing consciousness. Therefore, characterizations of object or scenario representations were not considered in their paper.
Our framework is different from the classical reinforcement learning framework (Sutton and Barto, 2018; Li, 2017). In reinforcement learning, an external environment provides feedback (state and reward) to the agent. In our framework, however, the model maintains a world category as a reconstruction of the external world, and rewards are based on the world category rather than the external environment. As a result, in a reinforcement learning setting, agents operating under the same policy will receive identical feedback from the same external environment. In our framework, though, even if the external environment remains constant, agents will receive different signals from the environment as long as the world category function \(\mathcal{F}_{\theta}\) varies.
## 7 Conclusion
In this paper, we present a categorical framework of general intelligence, in which reality affects the sensor, the sensor affects the representation computed by \(\mathcal{F}_{\theta}\), and **representation determines cognition.** Our framework is perfectly aligned with foundation models, and we see few barriers to implementing it as a concrete model. Therefore, a powerful thinking machine exhibiting self-state awareness, as if it had self-consciousness like human beings and animals, is to be expected in the near future.
arXiv:2302.03137 -- Predicting Development of Chronic Obstructive Pulmonary Disease and its Risk Factor Analysis
Soojin Lee, Ingu Sean Lee, Samuel Kim
2023-02-06T21:50:34Z
http://arxiv.org/abs/2302.03137v1
# Predicting Development of Chronic Obstructive Pulmonary Disease
###### Abstract
Chronic Obstructive Pulmonary Disease (COPD) is an irreversible airway obstruction with a high societal burden. Although smoking is known to be the biggest risk factor, additional components need to be considered. In this study, we aim to identify COPD risk factors by applying machine learning models that integrate sociodemographic, clinical, and genetic data to predict COPD development.
_Clinical relevance--_ This study assessed the risk factors of COPD in sociodemographic, clinical, and genetic data. We have determined that sociodemographic factors are highly associated with the development of COPD.
## I Introduction
Chronic Obstructive Pulmonary Disease (COPD) affects more than 15 million Americans, with over 150,000 deaths annually, making it the sixth leading cause of death. Despite the high mortality and multi-factorial nature of COPD, few studies evaluate the risk factors, other than smoking, associated with COPD. Studies have examined the patterns observed in COPD patients and have identified the patterns of multi-morbidity and polypharmacy in COPD [1], while others have shown that sociodemographic factors [2] and genetic variants [3] contribute to the development of COPD. The China Pulmonary Health (CHP) study was a large cross-sectional and multi-center study with subjects from ten different regions of China that assessed the prevalence and risk factors of COPD in China [2]. They analyzed the prevalence of risk factors in individuals with and without COPD, revealing that smoking, underweight, parental history of respiratory disease, and low education were major risk factors for COPD.
To assess genetic factors, Sakomsakolpat et al [3] performed a genome-wide association study (GWAS) to identify loci associated with COPD or lung function. Similarly, recent studies have performed GWAS to identify genetic risk loci [4] associated with COPD. Analyses compared control and COPD groups combining data from cohorts such as COPDGene, ECLIPSE, NETT/NAS, and Norway GenKOLS studies. In addition to alpha-1 antitrypsin (A1AT), one of the first genes identified to be associated with COPD, novel genetic variants were found. These studies have shown insight into a broad array of risk factors attributed to the development of COPD.
Previous studies are limited in that they focused either on sociodemographic or genetic factors, not both, when assessing the risk of COPD. COPD is a complex, multi-factorial disease, and all relevant risk factors should be considered simultaneously. In addition, most studies are cross-sectional and do not analyze the development or progression of the disease, which is crucial for chronic diseases.
In this paper, we aim to determine risk factors for COPD development by evaluating comprehensive medical data including sociodemographic, clinical and genetic data. By longitudinal observation of these data, we hope to determine which factors correlate most to COPD development. We will use our proprietary research analysis platform, COMPASS, which analyzes medical data regardless of their source, to extract data and analyze important features. By identifying modifiable and non-modifiable risk factors, our goal is to enable early detection of COPD. This will lead to prompt management and ultimately COPD prevention.
## II Data
### _Data Source_
We used data from the UK Biobank [5], a large-scale biomedical database containing in-depth genetic, clinical, and sociodemographic details from 502,527 voluntary participants. We selected sociodemographic and clinical factors for COPD risk factors from Hanlon et al's study [1]. For the genomic factors, we used genomic features known to be associated with lung function and COPD. From the list of genes shown in the review series [6], we chose 10 single nucleotide polymorphisms (SNPs) from 7 genes using array-based data.
According to Hanlon et al, material deprivation is one of the sociodemographic risk factors for COPD development. The Townsend score is a measure of this and is calculated using a combination of four census variables for a geographical area. The scores are obtained from participant postcodes to provide an area-based measure of socioeconomic deprivation. A greater Townsend index score implies a greater degree of deprivation. Smoking status and frequency of alcohol intake are also known risk factors. Physical activity was self-reported and classified into 6 groups. Additional physical measures such as height and weight were collected. In the UK Biobank, all diseases, health conditions and medications at the time of the assessment center visit were self-reported.
UK Biobank provides record-level hospital inpatient data. We used the International Classification of Disease codes (ICD-9 and ICD-10) to acquire the exact date of COPD diagnosis.
### _Problem definition_
Participants diagnosed with COPD, chronic bronchitis or emphysema were classified as 'self-reported COPD' and others were classified as 'no-COPD' (n=56,213). To investigate the development of COPD, participants from the 'no-COPD' group were examined using the date of diagnosis extracted from ICD codes. If a participant's COPD diagnosis date was later than that of the assessment center visit, where the initial report of COPD was made, they were categorized as 'no-COPD to COPD' (n=1,961). Participants who had no diagnosis date or had an earlier diagnosis date than the center visit remained 'no-COPD' (n=54,252).
Cardiovascular conditions were categorized into 7 conditions. If a participant self-reported any of the conditions among hypertension, coronary heart disease, diabetes, stroke/TIA, atrial fibrillation, heart failure, and/or peripheral vascular disease, they were labeled as having cardiovascular conditions.
Medication data was grouped into 4 drug classes: oral steroids, selective serotonin reuptake inhibitors (SSRIs), non-steroidal anti-inflammatory drugs (NSAIDs), and anti-platelet agents. Based on the reported medication, participants were labeled yes/no in each class of drugs.
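A sketch of the cohort labelling step in pandas; the column names are illustrative and do not correspond to actual UK Biobank field identifiers.

```python
import pandas as pd

def label_progression(df: pd.DataFrame) -> pd.Series:
    """Assign 'self-reported COPD', 'no-COPD', or 'no-COPD to COPD' per participant.

    Expected (illustrative) columns: 'self_reported_copd' (bool),
    'icd_copd_date' (datetime or NaT), 'assessment_date' (datetime)."""
    no_copd = ~df["self_reported_copd"]
    developed = no_copd & df["icd_copd_date"].notna() & (
        df["icd_copd_date"] > df["assessment_date"])
    labels = pd.Series("self-reported COPD", index=df.index)
    labels[no_copd] = "no-COPD"
    labels[developed] = "no-COPD to COPD"
    return labels
```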
## III Experiments
### _Cohort Selection_
Fig. 1 is a flow diagram of the cohorts. At the initial center visit, 502,527 participants enrolled. Participants without information on ethnicity, BMI, Townsend deprivation score, or physical activity were excluded (n=9,333). While 7,900 out of 493,194 participants self-reported COPD, 361,188 reported no-COPD. To assess which risk factors correlated with the development of COPD, participants who reported no-COPD at the initial center visit were the focus of this study. From the no-COPD participants (N=361,188), participants without genomic data were excluded. Among the participants with all genomic, sociodemographic and clinical data (N=56,213), 54,252 participants remained without COPD and 1,961 participants developed COPD. The latter group was the focus of our study.
### _Exploratory Analysis_
Fig. 2 provides exploratory analysis results. To identify patterns in the 'no-COPD to COPD' group, we used stacked bar plots and histograms. The proportions of 'no-COPD to no-COPD' and 'no-COPD to COPD' participants within each factor are shown in the stacked bar charts. Factors represented by continuous values, such as age, are displayed as distributions for each group.
To identify the patterns of participants who developed COPD, we compared the characteristics of those who developed COPD with those who did not. The results indicate that participants who developed COPD tended to be male, older, underweight, more materially deprived, and less physically active. In addition, a higher proportion of those who developed COPD were taking anti-platelet agents. A history of smoking was most prevalent among participants who developed COPD. Among the genetic variants considered, the rs2070600 variant was associated with COPD development.
### _Experimental setup_
With a total of 23 features, we trained XGBoost (XGB), Logistic Regression (LR), Naive Bayes (NB), and Random Forest (RF) models to predict the development of COPD. Table I lists the features and their categorization.
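A minimal sketch of this model comparison is given below. It assumes a feature matrix `X` with the 23 features (already numerically encoded) and a binary target `y` (1 = developed COPD) have been assembled; the hyperparameters shown are illustrative defaults rather than the values used in the study.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier

# X: DataFrame with the 23 features, y: binary labels (assumed to exist).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "XGB": XGBClassifier(eval_metric="logloss"),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```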
We evaluated each model's performance using the ROC curve and the corresponding AUC value. After training the models with all features, we conducted ablation studies to validate the contribution of each feature category under multiple settings: 1) sociodemographic only, 2) clinical only, 3) genetic variants only, 4) sociodemographic and clinical, 5) sociodemographic and genetic variants, 6) clinical and genetic variants, and 7) sociodemographic, clinical, and genetic variants. Furthermore, we used SHapley Additive exPlanations (SHAP) [7], an interpretable machine learning method, to compute the contribution of each feature to the predicted development of COPD.
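The ablation over feature categories and the SHAP analysis can be sketched in the same style. Here `sociodemographic_cols`, `clinical_cols`, and `genetic_cols` stand for the column groups of Table I and are assumed to be defined; the split and the fitted `models` come from the previous snippet.

```python
import shap

groups = {"A": sociodemographic_cols, "B": clinical_cols, "C": genetic_cols}
settings = [("A",), ("B",), ("C",), ("A", "B"), ("A", "C"), ("B", "C"), ("A", "B", "C")]

for setting in settings:
    cols = [c for g in setting for c in groups[g]]
    clf = XGBClassifier(eval_metric="logloss").fit(X_train[cols], y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test[cols])[:, 1])
    print("+".join(setting), f"AUC = {auc:.3f}")

# SHAP attributions for the tree-based model on the full feature set.
explainer = shap.TreeExplainer(models["XGB"])
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)                   # beeswarm plot
shap.summary_plot(shap_values, X_test, plot_type="bar")  # mean |SHAP| bar plot
```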
Fig. 1: Flow diagram of participants
\begin{table}
\begin{tabular}{p{113.8pt}|p{113.8pt}} \hline \hline
**Sociodemographic (A)** & Gender, Recruitment Age, Ethnicity, \\ & Townsend Deprivation, Body Mass Index (BMI), Physical Activity, Smoking Status, Alcohol Frequency \\ \hline
**Clinical (B)** & Cardiovascular Conditions, Anti-platelet, Oral Steroids, SSRIs, NSAIDs \\ \hline
**Genetic (C)** & rs2571445, rs7671167, \\ & rs2045517, rs34712979, rs10866659, \\ & rs1909050, rs10037493, rs7733410, \\ & rs2070600, rs10983184 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Feature categorization
### _Results_
Table II shows the AUC score of each machine learning model in the 7 different settings. Among the three single-category settings (sociodemographic, clinical, genetic), the sociodemographic setting scored the highest AUC. Clinical factors showed relatively stable performance across all models, whereas using genetic factors alone as predictor variables led to low performance overall. In addition, combinations that include sociodemographic features consistently achieved an AUC of at least 0.813; conversely, adding genetic factors to clinical factors appeared to have a slightly negative effect on the prediction. The two settings that achieved the best AUC of 0.818 were the combination of all sociodemographic, clinical, and genetic factors and the combination of sociodemographic and clinical factors. Among the 4 machine learning models, XGB performed best in most settings. While XGB and LR scored the same AUC of 0.818 with sociodemographic and clinical factors, LR scored the highest in the sociodemographic, clinical, and genetic setting.
In each setting, we applied SHAP to all machine learning models to determine which features are important and how each feature affects the prediction results. We used bar plots and beeswarm plots to summarize the distribution of SHAP values for each feature.
As shown in Fig. 3, among all the features, sociodemographic features contributed the most to COPD prediction. Features are shown in order of importance from top to bottom, indicating that age at recruitment, smoking status, and socioeconomic deprivation had the highest impact. Other sociodemographic features that showed significance were physical activity and BMI. In particular, for smoking status
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Feature Setting} & \multicolumn{4}{c}{Model AUC} \\ \cline{2-5} & XGB & LR & NB & RF \\ \hline A & 0.815 & 0.814 & 0.791 & 0.713 \\ B & 0.632 & 0.63 & 0.622 & 0.626 \\ C & 0.519 & 0.497 & 0.51 & 0.507 \\ A+B & 0.818 & 0.818 & 0.783 & 0.755 \\ A+C & 0.813 & 0.814 & 0.792 & 0.771 \\ B+C & 0.633 & 0.626 & 0.621 & 0.554 \\ A+B+C & 0.817 & 0.818 & 0.783 & 0.787 \\ \hline \hline \end{tabular} Note: A, B, C can be found from Table I. ‘+’ is to show the combination of categories.
\end{table} TABLE II: Comparison of AUC score in different settings
Fig. 2: Comparison of the proportion of COPD development for each factor
the higher the value, which meant the participant had a history of smoking, the higher the likelihood of developing COPD. Also, the older the participant, the higher the vulnerability to COPD. In addition, patients living in more deprived areas tended to develop COPD. Participants who were less physically active and had a high BMI were also more likely to develop COPD. The next feature category that influenced model predictions was the clinical factors: the presence of cardiovascular conditions had an impact on the models' output, and intake of NSAIDs or SSRIs also contributed to predicting the development of COPD. From the SHAP values of the genetic features, we could not conclude that genetic variants are associated with the development of COPD, although rs2045517 and rs34712979, while showing less effect, were associated with the model output.
### _Limitations_
UK Biobank participants tended to be healthier, with a lower frequency of COPD development. Demographic and clinical data relied on participants' self-reports and thus may be susceptible to bias or inaccuracy. In addition, because the genes were randomly selected, other genomic variants with a greater impact on COPD may have been missed. Finally, people who carry gene variants associated with COPD tend to be diagnosed at an early age; due to this propensity, genomic variants may not have shown influence on the development of COPD in this cohort.
## IV Conclusion
Participants who developed COPD were predominantly male, older, underweight or obese, economically deprived, less physically active, on anti-platelet agents, and carriers of the rs2070600 variant. Using domain knowledge of COPD risk factors, we applied explainable machine learning methods to verify the major risk factors of COPD. As expected, we found that age, smoking status, and socioeconomic deprivation were associated with COPD development. In contrast, genetic factors identified in previous studies did not show a strong association with COPD development. Compared to genetic factors, medication history and medical conditions were more reliable indicators of COPD development. Having comprehensively examined all the aforementioned risk factors, we conclude that lifestyle factors significantly impact COPD development; in other words, the risk of COPD can be reduced by improving lifestyle. With the use of machine learning technology, further research should examine the relationship between medical conditions or drug intake and COPD to provide accurate diagnosis and effective treatment.
|
2307.06691
|
A Local-Time Semantics for Negotiations
|
Negotiations, introduced by Esparza et al., are a model for concurrent
systems where computations involving a set of agents are described in terms of
their interactions. In many situations, it is natural to impose timing
constraints between interactions -- for instance, to limit the time available
to enter the PIN after inserting a card into an ATM. To model this, we
introduce a real-time aspect to negotiations. In our model of local-timed
negotiations, agents have local reference times that evolve independently.
Inspired by the model of networks of timed automata, each agent is equipped
with a set of local clocks. Similar to timed automata, the outcomes of a
negotiation contain guards and resets over the local clocks.
As a new feature, we allow some interactions to force the reference clocks of
the participating agents to synchronize. This synchronization constraint allows
us to model interesting scenarios. Surprisingly, it also gives unlimited
computing power. We show that reachability is undecidable for local-timed
negotiations with a mixture of synchronized and unsynchronized interactions. We
study restrictions on the use of synchronized interactions that make the
problem decidable.
|
Madhavan Mukund, Adwitee Roy, B Srivathsan
|
2023-07-13T11:28:03Z
|
http://arxiv.org/abs/2307.06691v1
|
# A local-time semantics for negotiations
###### Abstract
Negotiations, introduced by Esparza et al., are a model for concurrent systems where computations involving a set of agents are described in terms of their interactions. In many situations, it is natural to impose timing constraints between interactions -- for instance, to limit the time available to enter the PIN after inserting a card into an ATM. To model this, we introduce a real-time aspect to negotiations. In our model of _local-timed negotiations_, agents have local reference times that evolve independently. Inspired by the model of networks of timed automata, each agent is equipped with a set of local clocks. Similar to timed automata, the outcomes of a negotiation contain guards and resets over the local clocks.
As a new feature, we allow some interactions to force the reference clocks of the participating agents to synchronize. This synchronization constraint allows us to model interesting scenarios. Surprisingly, it also gives unlimited computing power. We show that reachability is undecidable for local-timed negotiations with a mixture of synchronized and unsynchronized interactions. We study restrictions on the use of synchronized interactions that make the problem decidable.
Real-time systems, Timed automata, Concurrency, Negotiations, Local-time semantics, Reachability
## 1 Introduction
Computing systems often consist of multiple components that interact with each other to execute a task. For instance, ATMs, online banking platforms, and e-commerce retailers all maintain a coordinated conversation between customers at the front end and data servers at the back end. In many cases, these interactions need to meet timing constraints--for example, a one-time password (OTP) times out if it is not entered within a short window. Hence, when specifying such interactions, it becomes important to accurately describe the interplay between concurrency and timing.
In [5, 6], Esparza et al. introduced _negotiations_ as a model for describing computations involving a set of agents. Conventional automata-theoretic models focus on states and transitions, and specify how local states of agents determine global behaviours. In negotiations, the basic building blocks are the interactions between the agents. Individual interactions between a set of agents are called _atomic negotiations_. After each atomic negotiation, the participating agents collectively agree on an outcome and move on to participate in other atomic negotiations. Apart from providing an attractive alternative perspective for modelling concurrent systems, negotiations also admit efficient analysis procedures. For some subclasses, interesting properties can be analyzed in polynomial-time [7, 8].
The basic negotiation model does not have any mechanism to incorporate timing constraints between interactions. In [1] a notion of timed negotiations has been proposed,
where every outcome is associated with an interval representing the minimum and maximum amount of time required for the interaction to conclude. The work focuses on computing the minimum and maximum execution times for the overall negotiation to complete. This model cannot express constraints on the time between different atomic negotiations. For this, we introduce clocks, as defined in timed automata [3].
## A motivating example.
We use the example in Figure 1 to introduce our model informally. Consider a time-constrained transaction at an ATM (\(a\)), where a customer (\(c\)) wants to reset her ATM PIN via an OTP received from her bank (\(b\)). Here, \(a\), \(c\), and \(b\) are the _agents_ in the system. Their direct interactions are represented by thick horizontal lines, called nodes. After each interaction, the participating agents decide on an outcome, represented by arrows. Initially, all agents are in the node \(n_{in}\). They choose the outcome \(st\) to start the transaction. Agents \(a\) and \(c\) go to node \(n_{1}\), and \(b\) goes to \(n_{2}\). The customer gives her card details and requests a PIN change at the ATM at node \(n_{1}\) by choosing the outcome \(req\). At \(n_{2}\), the ATM conveys this request to the bank and sends the details through the outcome \(det\). The bank and the customer talk to each other at \(n_{3}\), and \(b\) sends an OTP to the customer through the outcome \(s\_otp\). At \(n_{4}\), the customer enters the OTP in the ATM. After entering the OTP, shown by the outcome \(e\_otp\), the customer is ready to engage with the ATM either at \(n_{4}\) or at \(n_{6}\), represented by the non-deterministic arc leading to \(n_{4}\) and \(n_{6}\). The ATM talks to the bank at \(n_{5}\) to check the OTP. If it matches, the ATM goes to \(n_{6}\); otherwise it goes back to \(n_{4}\).
In this example, we model two time constraints. The bank would like to ensure that at most 3 time units elapse between the sending of the OTP and the final match. This is achieved by resetting a local clock \(y\) of the bank and checking that \(y\leq 3\) at the outcome
Figure 1: A local-timed negotiation modeling a transaction between a customer, an ATM and a bank
_match_. On the other hand, the customer wants at most 10 time units to elapse between initiating the request and completing the transaction. This is achieved by resetting a local clock \(x\) at the outcome \(req\) and checking that \(x\leq 10\) at the outcome \(e\_otp\). If more than 10 units elapse, the customer gives up and fires the outcome \(g\_up\). It is natural to imagine that clocks \(x\) and \(y\) are local to the customer and to the bank and that they may evolve at different rates. We formalize this behaviour in terms of our local-time semantics. However, as we will see later, in certain interactions it is useful to force the agents to synchronize their local times. Combining the concurrency present in negotiations with timing constraints and a mechanism to synchronize local times makes the model surprisingly powerful.
### Structure of the paper.
The paper is organized as follows. We begin by formalizing local-timed negotiations (Section 2), with some illustrative examples. We then study the reachability problem for this model. We show that when the negotiation has no interactions that synchronize local times, or when all interactions force a synchronization of local times, reachability is \(\mathsf{PSPACE}\)-complete (Sections 3 and 4). In the general case, when there is a mix of synchronized and unsynchronized interactions, reachability is undecidable (Section 5).
### Related work.
A local-time semantics was proposed in the context of networks of timed automata in [4]. Recently, the semantics has been applied to the zone-based verification of reachability [9, 10] and Buchi reachability properties [12]. Local-time semantics in timed automata has been investigated as a basis for applying partial-order methods. In our current work, we consider local-time semantics as the starting point and make synchronization an option that can be specified explicitly. This allows more independence between the agents and keeps the concurrency between actions of disjoint sets of agents.
Models mixing concurrency and timing have been widely studied: time Petri nets [14], timed-arc Petri nets [11], time-constrained message sequence charts [2] to name a few. Each model offers a different view of the computation. To our knowledge, a notion of a real-time has not yet been considered for negotiations.
## 2 Local-timed negotiations
For a finite set \(S\) we write \(\mathcal{P}(S)\) for the power set containing all subsets of \(S\). Let \(\mathbb{R}_{\geq 0}\) denote the set of non-negative reals, and \(\mathbb{N}\) the set of natural numbers. A _clock_ is a real-valued variable whose values increase along with time and get updated during transitions (exact semantics comes later). Let \(X\) be a set of clocks. A _guard_ over \(X\) is a conjunction of clock constraints of the form \(x\bowtie c\) where \(x\in X\), \(\bowtie\in\{<,\leq,=,>,\geq\}\) and \(c\in\mathbb{N}\). We write \(\Phi(X)\) for the set of guards over \(X\).
**Definition 2.1** (Local-timed negotiations).: _Let \(P\) be a finite set of agents, \(\Sigma\) a finite set of outcomes and \(X\) a finite set of clocks. We assume that \(X\) is partitioned as \(\{X_{p}\}_{p\in P}\) with \(X_{p}\) being the_ local clocks _for agent \(p\). Further, we associate a special clock \(t_{p}\) to each agent \(p\), called its_ reference clock_. This clock is neither used in a guard nor reset. A_ local-timed negotiation \(\mathcal{N}\) _is given by a tuple \((N,dom,\delta,Sync)\) where_
* \(N\) is a finite set of nodes (also called atomic negotiations); there is a special initial node \(n_{in}\in N\),
* \(dom:N\to\mathcal{P}(P)\) maps each node to a non-empty subset of agents; we assume \(dom(n_{in})=P\); for \(p\in P\), we let \(N_{p}:=\{n\in N\mid p\in dom(n)\}\),
* \(\delta=\{\delta_{p}\}_{p\in P}\) is a tuple of transition relations, one for each agent, where \(\delta_{p}:N_{p}\times\Sigma\to\Phi(X)\times\mathcal{P}(N_{p})\times\mathcal{P}(X_{p})\) maps each node-outcome pair \((n,a)\) to a guard \(g\in\Phi(X)\), a set of nodes \(M\subseteq N_{p}\) that \(p\) becomes ready to engage in after this outcome, and a set \(Y\subseteq X_{p}\) of clocks that get reset; we will call node-outcome pairs \((n,a)\) _locations_,
* \(Sync\subseteq N\) is a subset of _synchronizing nodes_.
Figure 1 gives an example of a local-timed negotiation over agents \(P=\{c,a,b\}\). The set of nodes is given by \(N=\{n_{in},n_{1},\ldots,n_{6}\}\). The domain \(dom(n)\) of a node \(n\) is represented by the "circles" in each node: for instance, \(dom(n_{1})=\{c,a\}\) and \(dom(n_{2})=\{a,b\}\). Agent \(c\) has a local clock \(x\), and agent \(b\) has a local clock \(y\). As an example of a transition for agent \(c\), we have \(\delta_{c}(n_{4},e\_otp)=(x\leq 10,\{n_{4},n_{6}\},\{\})\). There is a guard \(x\leq 10\), the agent is ready to engage in \(n_{4}\) and \(n_{6}\) after the outcome, and no clock is reset. In this example, \(Sync\) is empty.
### Semantics.
The semantics of a negotiation is described using _markings_ and _valuations_. A marking \(C\) is a function assigning to each agent \(p\) a subset of \(N_{p}\). It gives the set of nodes that each agent is ready to engage in. A valuation \(v:X\cup T\to\mathbb{R}_{\geq 0}\), where \(T=\{t_{p}\mid p\in P\}\) is the set of reference clocks, maps every clock (including reference clocks) to a non-negative real such that \(v(x)\leq v(t_{p})\) for all \(x\in X_{p}\), and all agents \(p\in P\). The interpretation is that clocks in \(X_{p}\) move at the same pace as \(t_{p}\), the local reference clock. Since \(t_{p}\) is never reset it gives the local time at agent \(p\). For a constraint \(x\bowtie c\) we say \(v\models x\bowtie c\) if \(v(x)\bowtie c\). We say \(v\) satisfies guard \(g\in\Phi(X)\), written as \(v\models g\), if \(v\) satisfies every atomic constraint appearing in \(g\).
A _local-delay_\(\Delta\in\mathbb{R}_{\geq 0}^{|P|}\) is a vector of non-negative reals, giving a time elapse for each agent. Each agent can have a different time elapse. Given a valuation \(v\) and a local-delay \(\Delta\), we write \(v+\Delta\) for the valuation obtained as: for each agent \(p\in P\), we have \((v+\Delta)(y)=v(y)+\Delta(p)\) for every \(y\in t_{p}\cup X_{p}\). Notice that all clocks within a process move at the same rate as its reference clock. However, the reference clocks of different agents can move at different speeds. For a set of clocks \(Y\subseteq X\), we denote by \(v[Y]\) the valuation satisfying \(v[Y](y)=0\) if \(y\in Y\) and \(v[Y](y)=v(y)\) otherwise.
A configuration is a pair \((C,v)\) consisting of a marking \(C\) and a valuation \(v\). The _initial configuration_\((C_{0},v_{0})\) contains a marking \(C_{0}\) which maps every agent to \(n_{in}\) and valuation \(v_{0}\) maps all clocks to \(0\). We write \((C,v)\xrightarrow{\Delta}(C,v+\Delta)\) for the local-delay transition \(\Delta\) at configuration \((C,v)\). For the negotiation in Figure 1, an example of a configuration is \((\bar{C},\bar{v})\) with \(\bar{C}(c)=\{n_{1}\}\), \(\bar{C}(a)=\{n_{1}\}\) and \(\bar{C}(b)=\{n_{2}\}\) and \(\bar{v}(t_{c})=\bar{v}(x)=2\), \(\bar{v}(t_{a})=1\) and \(\bar{v}(t_{b})=\bar{v}(y)=3\). Suppose \(\Delta=(1,0,2)\), then \(\bar{v}+\Delta\) maps \(t_{c}\) and \(x\) to \(3\), and \(t_{b}\) and \(y\) to \(5\) whereas \(t_{a}\) remains \(1\).
A location \(\ell=(n,a)\) can be executed at a configuration \((C,v)\) leading to a configuration \((C^{\prime},v^{\prime})\), written as \((C,v)\xrightarrow{\ell}(C^{\prime},v^{\prime})\), provided there is an entry \(\delta_{p}(n,a)=(g_{p},M_{p},Y_{p})\) for all \(p\in dom(n)\) such that:
* _current marking enables the negotiation:_ \(n\in C(p)\) for all \(p\in dom(n)\),
* _synchronization condition is met:_ if \(n\in Sync\), then \(v(t_{p})=v(t_{q})\) for all \(p,q\in dom(n)\),
* _guard is satisfied:_ \(v\models g_{p}\) for all \(p\in dom(n)\),
* _target marking is correct:_ \(C^{\prime}(p)=M_{p}\) for all \(p\in dom(n)\), \(C^{\prime}(p)=C(p)\) for \(p\notin dom(n)\),
* _resets are performed:_ \(v^{\prime}(y)=0\) for \(y\in\bigcup_{p\in dom(n)}Y_{p}\), and \(v^{\prime}(y)=v(y)\) for all other clocks \(y\).
For an example, consider Figure 1 again and a configuration \((C^{1},v^{1})\) with \(C^{1}:=(\{n_{4},n_{6}\},\{n_{4}\},\{n_{6}\})\) and \(v^{1}:\langle t_{c}=10,x=5,t_{a}=20,t_{b}=5,y=1\rangle\). The location \((n_{4},e\_otp)\) is enabled, leading to a configuration \((C^{2},v^{2})\) where \(C^{2}:=(\{n_{4},n_{6}\},\{n_{6}\},\{n_{6}\})\) and \(v^{2}=v^{1}\), as there are no resets at this location.
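A small executable sketch of this small-step semantics is given below. It is illustrative only (the paper defines the semantics mathematically), and the data structures, such as representing a valuation as a dictionary from clock names to reals, are our own choices.

```python
def apply_delay(valuation, agent_of, delta):
    """Local-delay step: every clock of agent p (including t_p) advances by delta[p]."""
    return {clk: val + delta[agent_of[clk]] for clk, val in valuation.items()}

def execute(marking, valuation, node, outcome, spec, sync_nodes):
    """Try to fire location (node, outcome).

    spec[(agent, node, outcome)] = (guard, next_nodes, resets), where guard is a
    predicate over the valuation.  Returns the successor configuration or None.
    """
    agents = [p for p in marking if (p, node, outcome) in spec]
    # current marking must enable the negotiation
    if not agents or any(node not in marking[p] for p in agents):
        return None
    # synchronization condition: reference clocks of participants must agree
    if node in sync_nodes and len({valuation[f"t_{p}"] for p in agents}) > 1:
        return None
    new_marking, new_valuation = dict(marking), dict(valuation)
    for p in agents:
        guard, next_nodes, resets = spec[(p, node, outcome)]
        if not guard(valuation):        # guards are evaluated before resets
            return None
        new_marking[p] = set(next_nodes)
        for clk in resets:
            new_valuation[clk] = 0.0
    return new_marking, new_valuation

# e.g. the customer's transition from Figure 1:
# spec[("c", "n4", "e_otp")] = (lambda v: v["x"] <= 10, {"n4", "n6"}, [])
```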
We call \((C,v)\xrightarrow{\Delta}(C,v+\Delta)\xrightarrow{\ell}(C^{\prime},v^{\prime})\) a _small step_ and write this as \((C,v)\xrightarrow{\Delta,\ell}(C^{\prime},v^{\prime})\) for conciseness. A _run_ is a sequence of small steps starting from the initial configuration. We say that a location \(\ell=(n,a)\) is _reachable_ if there is a run containing a small step that executes \(\ell\).
_Reachability problem._ We are interested in the following question: given a location \(\ell=(n,a)\), is it reachable?
### Some examples
In the example of Figure 1, we have seen how local-clocks can be used to constrain interactions. We will now see some examples that show some interesting mechanics of synchronized interactions. The negotiation in the left of Figure 2 has three agents \(p,q,v\). The outcome at node \(n_{1}\) results in a non-deterministic choice for agent \(q\): the agent may either decide to talk with \(p\) at \(n_{3}\) or with \(v\) at \(n_{2}\). Suppose at \(n_{3}\), agent \(p\) wants to make sure that \(q\) has arrived at \(n_{3}\) after talking to \(v\). We can imagine that \(v\) is a vendor, and \(p\) wants to ensure that \(q\) has indeed met the vendor between their meetings at \(n_{1}\) and \(n_{3}\). To do this, we make use of timing and synchronization constraints as follows.
We first make \(n_{1}\) and \(n_{3}\) synchronization nodes, that is, they are part of \(Sync\) for this negotiation. In the picture, these are shown as coloured nodes. Suppose \(x\) is a clock of agent \(p\) and \(y\) is a clock of agent \(q\). At \(n_{1}\), the outcome checks for the guard \(x=2\) and \(y=0\). When this outcome is fired, the local clock \(t_{p}\) is ahead of \(t_{q}\) by 2 units. We also add a guard \(y=0\) to the outcome of \(n_{3}\). If \(q\) comes to \(n_{3}\) directly after talking to \(p\) at \(n_{1}\), then we have \(t_{q}=y=0\), but \(t_{p}=2\). No time can elapse at \(q\) since there is a \(y=0\) guard. But then, the synchronization condition cannot be satisfied. This forces \(q\) to meet \(v\) at \(n_{2}\), spend sufficient time (2 units, in this case), reset the clock \(y\) and then interact with \(p\) at \(n_{3}\).
This example can be extended in the case where there are multiple vendors \(v_{1},\ldots v_{k}\) and \(p\) wants \(q\) to have met at least \(m\) vendors out of them before resynchronizing, as shown in
Figure 2: In the figure on the right, the \(a\) transition has guards \(x=m\) for \(p\) and \(y=0\) for \(q\). Each \(b_{i}\) transition has a guard \(y=1\) and a reset of \(y\) to ensure exactly 1 unit of time is spent in the nodes \(m_{i}\) before outcomes \(b_{i}\). The outcomes \(a_{i}\) have a reset of \(y\). The \(b\) transition from \(n_{3}\) has a guard \(y=0\).
Figure 2 on the right. We also assume that once \(q\) interacts with \(v_{i}\), she cannot interact with any vendor \(v_{j}\) with \(j\leq i\). If each interaction of \(q\) with a vendor \(v_{i}\) takes 1 time unit, we can force the clock of \(p\) to be at least \(m\) at node \(n_{1}\). Therefore at \(n_{3}\), the only way for \(q\) to ensure synchronization with \(p\), and have \(y=0\) is by finding \(m\) other interactions where she can spend time.
In Figure 3 we present an example which has been used in different contexts dealing with a partial-order semantics for timed automata [13] or the local-time semantics for networks of timed automata [10]. Outcomes \(a\) and \(b\) are local to agents \(p\) and \(q\) whereas \(c\) is the result of a negotiation. We make node \(n_{3}\) a synchronizing node and ask for the guard \(x=1\) and \(y=1\) at \(c\). If \(t_{p}=n,x=1\) for agent \(p\), we want \(t_{q}=n,y=1\) at agent \(q\). This constraint forces the same number of \(a\)s and \(b\)s to have happened locally before \(p\) and \(q\) interact at \(n_{3}\). There is no ordering relation between the \(a\)s and \(b\)s, for instance we cannot say that the second \(a\) happens after the first \(b\). Therefore the untimed language is simply the language of all words with the same number of \(a\)s and \(b\)s before the \(c\). This shows that the untimed language of the outcome sequences need not even be regular, unlike the case of timed automata.
## 3 Synchronization-free negotiations
Our goal is to study the location reachability problem in local-timed negotiations. Before studying the general case, we look at some restricted versions. The first restriction we look at is a synchronization-free fragment. Fix a negotiation \(\mathcal{N}=(N,dom,\delta,Sync)\) for this section.
We say that \(\mathcal{N}\) is _synchronization-free_ if \(Sync=\emptyset\). In such a negotiation, agents need to elapse time only to satisfy their local guards (and not to meet any synchronization criteria). For instance, in the example on the left of Figure 2, if \(n_{3}\) is not a synchronizing node, then \(q\) can come directly to \(n_{3}\) after node \(n_{1}\), elapse no time at all, and engage in the only outcome from \(n_{3}\). In the negotiation of Figure 3, if the outcome \(c\) is not synchronized, the untimed language is the set of words \(wc\) where \(w\in(a+b)^{*}\) contains at least one \(a\) and one \(b\).
Although an agent elapses time only to satisfy her own guards, she may need to collaborate with a partner to decide how much time to elapse. This is because guards of different agents are combined at a location. This is apparent in the negotiation of Figure 4. For \(c\) to be feasible, both agents have to reach node \(n_{4}\) via \(n_{2}\), and not via \(n_{3}\). Therefore, one can view the time elapse at \(n_{1}\) as a collective decision between \(p\) and \(q\), which impacts their future paths.
Figure 3: A local-timed negotiation depicting that the untimed language of the outcome sequences need not be regular. The synchronizing node \(n_{3}\) forces the number of \(a\)s and number of \(b\)s to be equal.
The goal of this section is to show that reachability is \(\mathsf{PSPACE}\)-complete for synchronization-free negotiations. Here is an overview of our proof. Firstly, as there are no synchronization constraints, we observe that the reference clocks are not useful at all in this fragment. We then quotient the space of valuations by applying the classical region equivalence to the clocks of each agent. This generates a finite automaton that accepts all the untimed location sequences that are feasible in the negotiation.
**Definition 3.1** (\(\equiv_{M}^{p}\) and \(\equiv_{M}\) equivalences).: _Let \(p\) be an agent, and let \(M\) be the biggest constant appearing in the negotiation. We say \(v\equiv_{M}^{p}v^{\prime}\) if for all \(x,y\in X_{p}\):_
* either \(\lfloor v(x)\rfloor=\lfloor v^{\prime}(x)\rfloor\), or both \(\lfloor v(x)\rfloor,\lfloor v^{\prime}(x)\rfloor>M\),
* if \(v(x)\leq M\), then \(\{v(x)\}=0\) iff \(\{v^{\prime}(x)\}=0\),
* if \(v(x)\leq M\) and \(v(y)\leq M\), we have \(\{v(x)\}\leq\{v(y)\}\) iff \(\{v^{\prime}(x)\}\leq\{v^{\prime}(y)\}\).
We define \(v\equiv_{M}v^{\prime}\) if \(v\equiv_{M}^{p}v^{\prime}\) for all agents \(p\in P\). We denote by \([v]\) the equivalence class of a valuation \(v\) with respect to the \(\equiv_{M}\) equivalence. We call \(\equiv_{M}\) the product-region equivalence, and the equivalence classes of \(\equiv_{M}\) product-regions.
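A direct transcription of this definition into code can help in experimenting with small negotiations; the following sketch, with clock values stored per agent in dictionaries, is our own illustration and not part of the paper.

```python
import math

def same_p_region(v1, v2, M):
    """Check v1 ≡_M^p v2 on one agent's local clocks (dicts: clock -> value)."""
    for x in v1:
        a, b = v1[x], v2[x]
        if not (math.floor(a) == math.floor(b) or (math.floor(a) > M and math.floor(b) > M)):
            return False
        if a <= M and ((a % 1 == 0) != (b % 1 == 0)):
            return False
    for x in v1:
        for y in v1:
            if v1[x] <= M and v1[y] <= M:
                if ((v1[x] % 1) <= (v1[y] % 1)) != ((v2[x] % 1) <= (v2[y] % 1)):
                    return False
    return True

def same_product_region(v1_by_agent, v2_by_agent, M):
    """v ≡_M v' iff the local clocks agree region-wise for every agent."""
    return all(same_p_region(v1_by_agent[p], v2_by_agent[p], M) for p in v1_by_agent)
```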
Notice that reference clocks do not appear at all in the above definition. We next state there are finitely many product-regions.
**Lemma 3.2**.: _The \(\equiv_{M}\) equivalence is of finite index: the number of product-regions is bounded by \(\mathcal{O}(|X|!\cdot 2^{|X|}\cdot(2M+1)^{|X|})\)._
Proof.: Let us call an equivalence class with respect to \(\equiv_{M}^{p}\) a \(p\)-region. This equivalence ignores values of clocks other than the local clocks of \(p\). Therefore, any two valuations that differ only in the values of clocks outside \(X_{p}\) are equivalent. For clocks in \(X_{p}\), the equivalence is simply the classical region equivalence of timed automata. Hence, the number of \(p\)-regions equals the number of regions over \(X_{p}\) (with the same maximum constant), which is bounded by \(K_{p}:=|X_{p}|!\cdot 2^{|X_{p}|}\cdot(2M+1)^{|X_{p}|}\) (Lemma 4.5, [3]). Now, by definition, \(v\equiv_{M}v^{\prime}\) if \(v\) and \(v^{\prime}\) are in the same \(p\)-region for every agent \(p\). Hence each product-region can be seen as a tuple consisting of a \(p\)-region for each \(p\). So, the number of product-regions is bounded by \(\prod_{p\in P}K_{p}\). This can be shown to be bounded by \(|X|!\cdot 2^{|X|}\cdot(2M+1)^{|X|}\), using the facts that \(|X_{1}|!\cdot|X_{2}|!\leq(|X_{1}|+|X_{2}|)!\) and \(c^{|X_{1}|}\cdot c^{|X_{2}|}=c^{|X_{1}|+|X_{2}|}\) for any constant \(c\).
Here are some properties of the product-region equivalence, that follow from the region equivalence.
Figure 4: Example of a synchronization-free local-timed negotiation. If \(n_{3}\) is fired then the guard \(y=1\) on the transition from \(n_{4}\) can not be satisfied.
**Lemma 3.3**.: _Let \(v,v^{\prime}\) be valuations such that \(v\equiv_{M}v^{\prime}\). Then, for all local-delays \(\Delta\), there exists a local-delay \(\Delta^{\prime}\) such that \(v+\Delta\equiv_{M}v^{\prime}+\Delta^{\prime}\)._
Proof.: The delay \(\Delta\) can be seen as a sequence of local-delays \((\Delta(p_{1}),0,\ldots,0)\), \((0,\Delta(p_{2}),0,\ldots,0)\)\(\ldots\)\((0,\ldots,0,\Delta(p_{k}))\) where only one agent makes a delay in each step. Therefore, it is sufficient to show the lemma when \(\Delta\) has a non-zero delay only for one process. Assume \(\Delta(p)=\delta\geq 0\), and \(\Delta(p^{\prime})=0\) for all \(p^{\prime}\neq p\). Notice that in \(v+\Delta\) the values of clocks of agents different from \(p\) do not change. If we restrict to the local clocks of agent \(p\) (without the reference clock), we can use the classical region equivalence to get a \(\delta^{\prime}\) and a local-delay satisfying \(\Delta(p)=\delta^{\prime}\), \(\Delta(p^{\prime})=0\) for all \(p^{\prime}\neq p\) such that \(v+\Delta\equiv_{M}v^{\prime}+\Delta^{\prime}\).
The next lemma follows by definition.
**Lemma 3.4**.: _Let \(v,v^{\prime}\) be valuations such that \(v\equiv_{M}v^{\prime}\). Let \(g\) be a guard with constants at most \(M\). Then: (1) \(v\models g\) iff \(v^{\prime}\models g\), and (2) for all subsets of local clocks \(Y\), we have \(v[Y]\equiv_{M}v^{\prime}[Y]\)._
For a valuation \(v\) and a product-region \(r\), we write \(v\in r\) to mean that \(r\) equals \([v]\). We will now build a finite automaton using the product-regions.
**Definition 3.5** (Product-region automaton).: _States of this NFA are of the form \((C,r)\) where \(C\) is a marking and \(r\) is a product-region. There is a transition \((C,r)\xrightarrow{(n,a)}(C^{\prime},r^{\prime})\) if for some valuation \(v\in r\), and for some local-delay \(\Delta\), we have \((C,v)\xrightarrow{\Delta,(n,a)}(C^{\prime},v^{\prime})\) such that \(v^{\prime}\in r^{\prime}\). The initial state is the initial marking \(C_{0}\) and the region \(r_{0}\) containing the valuation that maps all clocks to \(0\)._
_We denote the product-region automaton as \(\mathsf{ProdRegAut}(\mathcal{N})\)._
**Lemma 3.6**.: _For every run \((C_{0},v_{0})\xrightarrow{\Delta_{0},\ell_{0}}(C_{1},v_{1})\xrightarrow{ \Delta_{1},\ell_{1}}\cdots\xrightarrow{\Delta_{m-1},\ell_{m-1}}(C_{m},v_{m})\) in the local-timed negotiation \(\mathcal{N}\), there is a run \((C_{0},[v_{0}])\xrightarrow{\ell_{0}}(C_{1},[v_{1}])\xrightarrow{\ell_{1}} \cdots\xrightarrow{\ell_{m-1}}(C_{m},[v_{m}])\) in \(\mathsf{ProdRegAut}(\mathcal{N})\)._
Proof.: Follows from Definition 3.5.
**Lemma 3.7**.: _For every run \((C_{0},r_{0})\xrightarrow{\ell_{0}}(C_{1},r_{1})\xrightarrow{\ell_{1}} \cdots\xrightarrow{\ell_{m-1}}(C_{m},r_{m})\) in the product-region automaton \(\mathsf{ProdRegAut}(\mathcal{N})\), there is a run \((C_{0},v_{0})\xrightarrow{\Delta_{0},\ell_{0}}(C_{1},v_{1})\xrightarrow{ \Delta_{1},\ell_{1}}\cdots\xrightarrow{\Delta_{m-1},\ell_{m-1}}(C_{m},v_{m})\) in \(\mathcal{N}\) such that \(v_{i}\in r_{i}\) for each \(0\leq i\leq m\)._
Proof.: Assume we have constructed a run \((C_{0},v_{0})\xrightarrow{\Delta_{0},\ell_{0}}\cdots\xrightarrow{\Delta_{i-1},\ell_{i-1}}(C_{i},v_{i})\) with \(v_{j}\in r_{j}\) for all \(0\leq j\leq i\leq m\). By definition of the transitions of \(\mathsf{ProdRegAut}(\mathcal{N})\), there is some \(u_{i}\in r_{i}\) and some local-delay \(\theta_{i}\) such that \((C_{i},u_{i})\xrightarrow{\theta_{i},\ell_{i}}(C_{i+1},u_{i+1})\). By Lemmas 3.3 and 3.4, there exists a \(\Delta_{i}\) satisfying \((C_{i},v_{i})\xrightarrow{\Delta_{i},\ell_{i}}(C_{i+1},v_{i+1})\) with \(v_{i+1}\in r_{i+1}\). This shows that we can extend the run one step at a time to get a run as required by the lemma.
**Theorem 3.8**.: _Reachability is \(\mathsf{PSPACE}\)-complete for synchronization-free local-timed negotiations._
Proof.: A location \(\ell=(n,a)\) is reachable in \(\mathcal{N}\) iff it is reachable in \(\mathsf{ProdRegAut}(\mathcal{N})\), thanks to Lemmas 3.6 and 3.7. Let \(K\) be the size of the negotiation counted as the sum of the number of nodes, outcomes, clocks and the sum of the binary encoding of each constant appearing in the guards.
We can non-deterministically guess a path from the initial state to a transition labeled with \(\ell\). Each state \((C,r)\) requires polynomial space: \(C\) can be represented as a vector of bit strings, one for each agent; each bit string indicates the set of nodes the agent is ready to engage in, and hence has length equal to the number of nodes. The region uses constants of size at most \(M\). The size of \(\mathsf{ProdRegAut}(\mathcal{N})\) is the product of the number of markings and the number of product-regions, both of which are \(2^{\mathcal{O}(K)}\) (Lemma 3.2). These two observations together give the \(\mathsf{PSPACE}\) upper bound.
\(\mathsf{PSPACE}\)-hardness follows from the hardness of reachability in timed automata. A timed automaton can be seen as a negotiation over a single agent. Therefore, reachability in timed automata reduces to reachability in a local-timed negotiation (when there is a single agent, the notions of local time and global time coincide).
## 4 Always-synchronizing negotiations
We will now look at the fragment where every interaction forces a synchronization. We say that a local-timed negotiation is _always-synchronizing_ if every node is a synchronization node, that is, \(Sync=N\). The negotiation in Figure 3 can be seen as an always-synchronizing negotiation (in nodes that are local to one agent, the synchronization condition is vacuously true). We first remark that a region-based argument is not immediate in this fragment. In order to satisfy the synchronization constraint, we check conditions of the form \(t_{p}=t_{q}\). Therefore, we cannot completely decouple the time elapse of \(p\) and \(q\), as in the previous section. Instead, we need to keep track of the difference \(t_{p}-t_{q}\) in the equivalence. But then, there is no bound \(M^{\prime}\) that allows us to club together all valuations with \(t_{p}-t_{q}>M^{\prime}\). This is because \(t_{q}\) can perform local delays to catch up with \(p\), and in particular, from \(t_{p}-t_{q}>M^{\prime}\) we may get to a situation where \(t_{p}-t_{q}\leq M^{\prime}\). This kind of mechanics does not arise in classical timed automata, where once a clock is beyond \(M\) it stays beyond \(M\) until its next reset. In the previous section, we avoided this problem since we did not need to keep track of the reference clocks.
We will instead make use of a different argument, which is used in the proof of equivalence between the local-time and global-time semantics for networks of timed automata [4, 9]. In always-synchronizing negotiations, every occurrence of a location \((n,a)\) is executed at a unique timestamp given by the reference clock value of the participating agents. For example, the sequence \(aabbc\) in the negotiation of Figure 3 can be associated with the timestamp sequence \(12123\): the first \(a\) occurs at \(t_{p}=1\), the second \(a\) at \(t_{p}=2\), the first \(b\) at \(t_{q}=1\), and so on. The main observation is that whenever there is a pair \(t_{i}t_{i+1}\) in this sequence with \(t_{i+1}<t_{i}\), we can reorder the corresponding actions and still get a valid run. For example, the run \((a,1)(a,2)(b,1)(b,2)(c,3)\) described above can be reordered as \((a,1)(b,1)(a,2)(b,2)(c,3)\), which is still a feasible run of the negotiation.
We will first show that every run of an always-synchronizing sequence can be reordered to a "monotonic" run. Next, we describe a timed automaton that accepts all monotonic runs of the negotiation. This gives a procedure for reachability, as reachability in the negotiation reduces to checking whether there is a run of a timed automaton that fires an edge.
**Definition 4.1** (Monotonic runs).: _Let \(\mathcal{N}\) be an always-synchronizing negotiation. Consider a run \(\rho:=(C_{0},v_{0})\xrightarrow{\Delta_{0},\ell_{0}}(C_{1},v_{1})\xrightarrow{\Delta_{1},\ell_{1}}\ldots\xrightarrow{\Delta_{m-1},\ell_{m-1}}(C_{m},v_{m})\), where \(\ell_{i}=(n_{i},a_{i})\) for every \(i\). We associate timestamps \(\theta^{\rho}_{i}:=v_{i}(t_{p})+\Delta_{i}(p)\), where \(p\) is some agent participating in the negotiation \(n_{i}\). The run \(\rho\) is_ monotonic _if \(\theta^{\rho}_{i}\leq\theta^{\rho}_{j}\) for every \(i\leq j\)._
Fix an always-synchronizing negotiation \(\mathcal{N}\) for the rest of this section.
**Lemma 4.2**.: _For every location \(\ell=(n,a)\) that is reachable, there is a monotonic run containing a small step that executes \(\ell\)._
Proof.: Let \(\rho:=(C_{0},v_{0})\xrightarrow{\Delta_{0},\ell_{0}}(C_{1},v_{1})\xrightarrow{ \Delta_{1},\ell_{1}}\dots\xrightarrow{\Delta_{m-1},\ell_{m-1}}(C_{m},v_{m})\) be a run such that \(\ell_{m-1}=(n,a)\). Let us write \(dom(\ell_{j})\) for the agents that participate in the negotiation corresponding to location \(\ell_{j}\).
Without loss of generality, we can assume that for every \(0\leq j\leq m-1\), we have \(\Delta_{j}(p)=0\) if \(p\notin dom(\ell_{j})\). Indeed, if \(\Delta_{j}(p)\) is non-zero, we can move this time elapse to the next position in the run where \(p\) participates, and if \(p\) does not participate in any later position, we can simply change the delay to \(0\) and still preserve the feasibility of the sequence.
Consider a segment \(\sigma:=(C_{j},v_{j})\xrightarrow{\Delta_{j},\ell_{j}}(C_{j+1},v_{j+1}) \xrightarrow{\Delta_{j+1},\ell_{j+1}}(C_{j+2},v_{j+2})\) such that \(dom(\ell_{j})\cap dom(\ell_{j+1})=\emptyset\). We claim that the two outcomes can be commuted to give a run with the same end points: that is, a run of the form \(\sigma^{\prime}:=(C_{j},v_{j})\xrightarrow{\Delta_{j+1},\ell_{j+1}}(C^{\prime }_{j+1},v^{\prime}_{j+1})\xrightarrow{\Delta_{j},\ell_{j}}(C_{j+2},v_{j+2})\). Here is the argument. Suppose \(p\notin dom(\ell_{j})\cup dom(\ell_{j+1})\). For all \(x\in X_{p}\) we have \(v_{j}(x)=v_{j+1}(x)=v_{j+2}(x)\), and the same holds in \(\sigma^{\prime}\). Suppose \(p\in dom(\ell_{j})\). Then by assumption, \(p\notin dom(\ell_{j+1})\) and therefore \(\Delta_{j+1}(p)=0\). This suffices to prove the claim.
Coming back to the run \(\rho\): suppose there is an index \(j\) such that \(\theta^{\rho}_{j+1}<\theta^{\rho}_{j}\). Then there are no common agents in \(\ell_{j}\) and \(\ell_{j+1}\), since the reference clock of a common agent can only increase along the run. We can apply the commutation argument and swap the two small steps. We can keep doing this until no index violates monotonicity. Therefore, for every run \(\rho\) of the negotiation, we can associate a monotonic run \(\rho^{\prime}\) as described above containing all the locations. The location \(\ell_{m-1}=(n,a)\) therefore appears somewhere in this run.
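The reordering in this proof is essentially a bubble sort on timestamps; the toy sketch below (our own illustration, not part of the paper) applies it to the run of Figure 3, where each small step is recorded as a set of participating agents, a timestamp, and a label.

```python
def make_monotonic(steps):
    """Repeatedly swap adjacent steps that violate monotonicity of timestamps."""
    steps, changed = list(steps), True
    while changed:
        changed = False
        for i in range(len(steps) - 1):
            (ag1, t1, _), (ag2, t2, _) = steps[i], steps[i + 1]
            if t2 < t1:
                # in an always-synchronizing run such a pair shares no agents
                assert not (set(ag1) & set(ag2))
                steps[i], steps[i + 1] = steps[i + 1], steps[i]
                changed = True
    return steps

run = [({"p"}, 1, "a"), ({"p"}, 2, "a"), ({"q"}, 1, "b"), ({"q"}, 2, "b"),
       ({"p", "q"}, 3, "c")]
print([label for _, _, label in make_monotonic(run)])  # ['a', 'b', 'a', 'b', 'c']
```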
**Definition 4.3**.: _For an always-synchronizing negotiation \(\mathcal{N}\) we define a timed automaton \(\mathsf{TA}(\mathcal{N})\) as follows. States are the set of all markings possible in \(\mathcal{N}\). There is a transition \(C\xrightarrow{g,(n,a),Y}C^{\prime}\) on guard \(g\), action \((n,a)\) and reset \(Y\) if (1) \(n\) is enabled in \(C\), (2) there are transitions \(\delta_{p}(n,a)=(g_{p},M_{p},Y_{p})\) and \(g\) is the conjunction of all \(g_{p}\), and \(Y\) is the union of all \(Y_{p}\), (3) \(C^{\prime}(p)=M_{p}\) for \(p\in dom(n)\) and \(C^{\prime}(p)=C(p)\) otherwise._
**Lemma 4.4**.: _Let \(\mathcal{N}\) be an always-synchronizing negotiation. A location \((n,a)\) is reachable in \(\mathcal{N}\) iff there is a run in the timed automaton \(\mathsf{TA}(\mathcal{N})\) that executes an edge labeled with \((n,a)\)._
Proof.: Suppose \((n,a)\) is reachable in \(\mathcal{N}\). By Lemma 4.2, there is a monotonic run \(\rho:=(C_{0},v_{0})\xrightarrow{\Delta_{0},\ell_{0}}\dots\xrightarrow{ \Delta_{m-1},\ell_{m-1}}(C_{m},v_{m})\) such that \(\ell_{m-1}=(n,a)\). The run \(\rho\) in itself is not a run of \(\mathsf{TA}(\mathcal{N})\) since in a timed automaton all clocks increase by the same amount, whereas here, the time delays are still local and asynchronous. However, we can massage this run to make it a run of the timed automaton.
We have \(v_{0}\) as the initial valuation, which maps every clock to \(0\). Let \(\delta_{0}=\theta^{\rho}_{0}\) and \(\delta_{i}=\theta^{\rho}_{i}-\theta^{\rho}_{i-1}\) for \(i\geq 1\). Due to the monotonicity assumption, we have \(\delta_{i}\geq 0\). We let all agents elapse time \(\delta_{i}\) at the \(i^{th}\) step. We claim that the run \((C_{0},u_{0})\xrightarrow{\delta_{0},\ell_{0}}(C_{1},u_{1})\xrightarrow{ \delta_{1},\ell_{1}}\dots\xrightarrow{\delta_{m-1},\ell_{m-1}}(C_{m},u_{m})\) with \(u_{0}=v_{0}\) is a run of the timed automaton. This follows from the observation that \((u_{i}+\delta_{i})(x)=(v_{i}+\Delta_{i}(p))(x)\) for all clocks \(x\in X_{p}\) and all agents \(p\in dom(\ell_{i})\). This proves the left-to-right direction.
Suppose \((C_{0},u_{0})\xrightarrow{\delta_{0},\ell_{0}}(C_{1},u_{1})\xrightarrow{\delta _{1},\ell_{1}}\dots\xrightarrow{\delta_{m-1},\ell_{m-1}}(C_{m},u_{m})\) is a run of \(\mathsf{TA}(\mathcal{N})\) with \(\ell_{m-1}=(n,a)\). Taking a local-delay \(\Delta_{i}\) that maps every agent to \(\delta_{i}\) gives us the same sequence as a run in \(\mathcal{N}\), thereby proving the right-to-left direction.
**Theorem 4.5**.: _Reachability is \(\mathsf{PSPACE}\)-complete for always-synchronizing negotiations._
Proof.: From Lemma 4.4, it is enough to check reachability of a certain edge in \(\mathsf{TA}(\mathcal{N})\). The idea is to non-deterministically guess a path in the region automaton of \(\mathsf{TA}(\mathcal{N})\).
Let \(K\) be the size of the negotiation \(\mathcal{N}\), which includes the number of nodes, outcomes, clocks and the sum of the binary encodings of the constants present. The number of states of \(\mathsf{TA}(\mathcal{N})\) is \(2^{\mathcal{O}(K)}\). The set of clocks is the same as that of \(\mathcal{N}\). Therefore, the number of regions for \(\mathsf{TA}(\mathcal{N})\) is still \(2^{\mathcal{O}(K)}\), and the product of states and regions remains \(2^{\mathcal{O}(K)}\). Therefore the guessed path has length bounded by \(2^{\mathcal{O}(K)}\). Moreover, each node of the region automaton can be represented in polynomial space. This gives the \(\mathsf{PSPACE}\) upper bound.
The lower bound follows once again from the hardness of reachability in timed automata: a timed automaton is simply a negotiation with a single agent, and the synchronization condition is vacuously true at every node.
## 5 Reachability is undecidable for local-timed negotiations
When we allow both synchronized and unsynchronized nodes, we are unable to use either of the techniques of the previous two sections. In fact, reachability turns out to be undecidable. Since local times are independent of each other, it is possible to have an unbounded drift between the reference clocks of two agents. This helps store counter values as differences between the local times. The main challenge is the check for zero. This is where we require a combination of synchronized and unsynchronized interactions. We will now show how to simulate a counter machine using a local-timed negotiation.
**Theorem 5.1**.: _Reachability is undecidable for local-timed negotiations._
The rest of the section is devoted to proving Theorem 5.1. We will encode the halting problem of a 2-counter machine as the reachability problem for a local-timed negotiation.
Counter machines. A 2-counter machine \(M\) is a program that manipulates two counters, \(C_{1}\) and \(C_{2}\), each of which can hold a non-negative number. The machine is given as a sequence of labelled instructions \(\ell:I\), where \(I\) is one of the following for some \(i\in\{1,2\}\):
* _increment_: \(C_{i}{+}{+}\), which increments the value of the counter \(C_{i}\) and goes to the next instruction \(\ell+1\).
* _decrement_: if \(C_{i}>0\) then \(C_{i}{-}{-}\), which decrements \(C_{i}\) and continues with the next instruction \(\ell+1\). If the value of \(C_{i}\) is \(0\), then the program is blocked.
* _jump-on-zero_: if \(C_{i}==0\) goto \(\ell^{\prime}\), which transfers control to the instruction labelled \(\ell^{\prime}\) if counter \(C_{i}\) is \(0\). If \(C_{i}>0\), it continues to the instruction \(\ell+1\).
The counter machine is said to halt if it reaches the final instruction. A configuration of \(M\) is a triple \((\ell,c_{1},c_{2})\) representing the current instruction \(\ell\) that needs to be executed and the current values \(c_{1},c_{2}\geq 0\) of the counters \(C_{1},C_{2}\) respectively. The transitions \((\ell,c_{1},c_{2})\xrightarrow{}(\ell^{\prime},c_{1}^{\prime},c_{2}^{\prime})\) follow naturally from the description above.
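For reference, the machines being encoded can be executed by a few lines of code; the interpreter below is an illustrative sketch with an ad-hoc instruction syntax, not part of the reduction itself.

```python
def run_counter_machine(program, max_steps=100_000):
    """program: list of instructions ('inc', i), ('dec', i), ('jz', i, target), ('halt',).
    Returns True iff the final instruction is reached within the step budget."""
    pc, counters = 0, {1: 0, 2: 0}
    for _ in range(max_steps):
        if pc == len(program) - 1:
            return True                      # reached the final instruction: halt
        op = program[pc]
        if op[0] == "inc":
            counters[op[1]] += 1
            pc += 1
        elif op[0] == "dec":
            if counters[op[1]] == 0:
                return False                 # blocked
            counters[op[1]] -= 1
            pc += 1
        elif op[0] == "jz":
            pc = op[2] if counters[op[1]] == 0 else pc + 1
    return False                             # step budget exhausted

# Increment C1 twice, then loop decrementing it until it reaches 0, then halt.
prog = [("inc", 1), ("inc", 1), ("jz", 1, 5), ("dec", 1), ("jz", 2, 2), ("halt",)]
print(run_counter_machine(prog))  # True
```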
### Overview of the reduction.
The negotiation \(\mathcal{N}_{M}\) that we construct will have 6 agents \(p_{1},q_{1},r_{1},p_{2},q_{2},r_{2}\). Agents \(p_{1},q_{1},r_{1}\) simulate counter \(C_{1}\), and the rest simulate \(C_{2}\). Let \(i\in\{1,2\}\). The local clocks of \(p_{i},q_{i},r_{i}\) are respectively \(\{x_{p_{i}}\}\), \(\{x_{q_{i}},x^{\prime}_{q_{i}}\}\) and \(\{x_{r_{i}}\}\). Additionally, we have the reference clocks \(t_{\alpha}\) for each agent \(\alpha\). For every instruction \(\ell\), we will have a node \(n_{\ell}\) in which all the six agents participate. A configuration \((C,v)\) of \(\mathcal{N}_{M}\) is said to encode configuration \((\ell,c_{1},c_{2})\) of \(M\) if:
* \(C(\alpha)=\{n_{\ell}\}\) for every agent \(\alpha\),
* \(v(x)=0\) for all local clocks (and reference clocks can take any value),
* \(v(t_{r_{1}})\leq v(t_{q_{1}})\leq v(t_{p_{1}})\) and \(v(t_{r_{2}})\leq v(t_{q_{2}})\leq v(t_{p_{2}})\), and
* \(v(t_{p_{1}}-t_{q_{1}})=c_{1}\) and \(v(t_{p_{2}}-t_{q_{2}})=c_{2}\),
The initial configuration of \(\mathcal{N}_{M}\) has every agent in \(n_{\ell_{0}}\), where \(\ell_{0}\) is the first instruction in \(M\), and sets every clock (including reference clocks) to 0.
We will have a gadget in \(\mathcal{N}_{M}\) corresponding to each instruction in the counter machine. Let \((C,v),(C^{\prime},v^{\prime})\) be configurations that encode \((\ell,c_{1},c_{2})\) and \((\ell^{\prime},c^{\prime}_{1},c^{\prime}_{2})\) respectively. A run \((C,v)\rightarrow\cdots\rightarrow(C^{\prime},v^{\prime})\) such that none of the intermediate configurations encodes any counter machine configuration will be called a _big step_. We denote a big step as \((C,v)\Rightarrow(C^{\prime},v^{\prime})\). Our gadgets will ensure the following two properties.
* Let \((\ell,c_{1},c_{2})\rightarrow(\ell^{\prime},c^{\prime}_{1},c^{\prime}_{2})\) in \(M\). Then from every configuration \((C,v)\) that encodes \((\ell,c_{1},c_{2})\), there is a big step \((C,v)\Rightarrow(C^{\prime},v^{\prime})\) to some configuration \((C^{\prime},v^{\prime})\) that encodes \((\ell^{\prime},c^{\prime}_{1},c^{\prime}_{2})\).
* Let \((C,v),(C^{\prime},v^{\prime})\) be arbitrary configurations of \(\mathcal{N}_{M}\) that encode \((\ell,c_{1},c_{2})\) and \((\ell^{\prime},c^{\prime}_{1},c^{\prime}_{2})\) respectively. If \((C,v)\Rightarrow(C^{\prime},v^{\prime})\) is a big step, then \((\ell,c_{1},c_{2})\rightarrow(\ell^{\prime},c^{\prime}_{1},c^{\prime}_{2})\) in \(M\).

The first property ensures that for every path \((\ell^{0},c^{0}_{1},c^{0}_{2})\rightarrow(\ell^{1},c^{1}_{1},c^{1}_{2}) \rightarrow\cdots\), there is a sequence of big steps \((C_{0},v_{0})\Rightarrow(C_{1},v_{1})\Rightarrow\cdots\) such that \((C_{i},v_{i})\) encodes \((\ell^{i},c^{i}_{1},c^{i}_{2})\). The second property ensures the reverse: from a sequence of big steps, we get a corresponding run of the counter machine. We will now describe each gadget and show that the two properties are satisfied.
### Increment.
Assume an instruction \(\ell:C_{1}{+}{+}\) with \(\ell\) not being the final instruction. The case of \(C_{2}{+}{+}\) is symmetric. Every configuration \((\ell,c_{1},c_{2})\), on executing this instruction, goes to \((\ell+1,c_{1}+1,c_{2})\). Figure 5 shows the gadget for the increment instruction. All agents other than \(p_{1}\) cannot elapse time due to the guards checking that their local clocks equal 0. Agent \(p_{1}\) elapses exactly one time unit, after which clock \(x_{p_{1}}\) is reset. It is easy to see that the configuration \((C^{\prime},v^{\prime})\) that results from \((C,v)\) encodes \((\ell+1,c_{1}+1,c_{2})\). The big step \((C,v)\Rightarrow(C^{\prime},v^{\prime})\) is in fact a single transition. Both properties are easily seen to be satisfied.
Figure 5: Gadget for implementing increment instruction on \(c_{1}\)
### Decrement.
Consider a decrement instruction: if \(C_{1}>0\) then \(C_{1}{-}{-}\). We have \((\ell,c_{1},c_{2})\rightarrow(\ell+1,c_{1}-1,c_{2})\) whenever \(c_{1}>0\). The first task is to check whether \(c_{1}>0\). Recall that in the negotiation the difference \(t_{p_{1}}-t_{q_{1}}\) gives the value of \(c_{1}\). Our idea is to let \(q_{1}\) elapse time to synchronize with \(p_{1}\) and to check whether the time elapse needed for this synchronization is strictly positive. However, in this process, we lose the actual value of the counter. In order to maintain the same difference between \(t_{p_{1}}\) and \(t_{q_{1}}\), we make use of the auxiliary agent \(r_{1}\).
Consider a configuration \((C,v)\) that encodes \((\ell,c_{1},c_{2})\). By our definition, \(v(t_{r_{1}})\leq v(t_{p_{1}})\) and \(v(t_{p_{1}}-t_{q_{1}})=c_{1}\).
* We first let \(r_{1}\) synchronize with \(p_{1}\), while \(p_{1}\) elapses no time.
* Next, we keep moving both \(p_{1}\) and \(q_{1}\) by \(1\) unit each until \(q_{1}\) synchronizes with \(r_{1}\). In this entire process \(r_{1}\) is not allowed to elapse time.
By the end of this, we get a valuation \(v^{\prime}\) with the same difference \(v^{\prime}(t_{p_{1}}-t_{q_{1}})=c_{1}\), since both agents were moved by the same amount. Moreover, we can use an additional clock to check whether a non-zero amount of time elapsed at \(q_{1}\) while synchronizing with \(r_{1}\).
The gadget is depicted in Figure 6. For simplicity, we do not index the intermediate nodes \(n_{1},n_{2},n_{3},n_{4}\) by \(\ell\). The computation proceeds in three phases. Below, we show the run
Figure 6: Gadget for implementing decrement instruction on \(c_{1}\)
of \(\mathcal{N}_{M}\) along this gadget, restricted to the agents \(p_{1},q_{1},r_{1}\). The other three agents simply move to the next possible instruction (this is not shown in the figure for simplicity).
### Phase 1.
Synchronize \(r_{1}\) with \(p_{1}\) maintaining no time elapse in \(p_{1}\) as follows: \(((n_{\ell},n_{\ell},n_{\ell}),v)\rightarrow((n_{1},n_{2},n_{1}),v_{1}) \rightarrow((n_{2},n_{2},n_{3}),v_{3})\). After the last action, agents \(p_{1}\) and \(r_{1}\) are synchronized, that is, \(v_{3}(t_{p_{1}})=v_{3}(t_{r_{1}})\). Moreover, \(v_{3}(t_{p_{1}})=v(t_{p_{1}})\).
### Phase 2.
Move \(p_{1}\) and \(q_{1}\) repeatedly by one unit each: \(((n_{2},n_{2},n_{3}),v_{3})\xrightarrow{b}((n_{2},n_{2},n_{3}),v_{3}^{1}) \xrightarrow{b}\cdots\xrightarrow{b}((n_{2},n_{2},n_{3}),v_{3}^{k})\). By the end of \(k\) iterations of \(b\), we have \(v_{3}^{k}(t_{p_{1}})=v(t_{p_{1}})+k\) and \(v_{3}^{k}(t_{q_{1}})=v(t_{q_{1}})+k\).
### Phase 3.
Check whether the reference clocks of \(q_{1}\) and \(r_{1}\) are equal: \(((n_{2},n_{2},n_{3}),v_{3}^{k})\xrightarrow{a}((n_{4},n_{3},n_{3}),v_{4}) \xrightarrow{c}((n_{4},n_{4},n_{4}),v_{5})\). The outcome \(c\) at \(n_{4}\) can be fired only if the reference clocks of \(q_{1}\) and \(r_{1}\) are equal, that is, \(v_{4}(t_{q_{1}})=v_{4}(t_{r_{1}})\). Due to the guards checking for \(0\) at \(a\) and \(c\), we have \(v_{5}(t_{q_{1}})=v_{4}(t_{q_{1}})=v_{3}^{k}(t_{q_{1}})\). From Phase 2, this value equals \(v(t_{q_{1}})+k\). Secondly, notice that \(v_{4}(t_{r_{1}})=v_{3}(t_{r_{1}})\), which from Phase 1 equals \(v(t_{p_{1}})\). From the equality \(v_{4}(t_{q_{1}})=v_{4}(t_{r_{1}})\), we get \(v(t_{q_{1}})+k=v(t_{p_{1}})\). This shows that \(k=v(t_{p_{1}})-v(t_{q_{1}})\): the number of iterations of the loop \(b\) equals the difference between \(t_{p_{1}}\) and \(t_{q_{1}}\) at the start of this gadget. If the number of \(b\) iterations is more or less than this value, the negotiation cannot proceed further.
When action \(c\) is performed, the clock \(x^{\prime}_{q_{1}}\) holds the time elapsed between \(st\) and \(c\) for agent \(q_{1}\), which is exactly \(k\). If \(k>0\), the value of \(t_{q_{1}}\) is increased by \(1\), resulting in the counter value getting decremented, and the agents move to the next instruction (shown as the decrement gadget in the figure). Otherwise, the agents are sent to a gadget from which there is no run (shown as the block gadget in the figure). Notice the interplay between synchronized and unsynchronized nodes in this gadget: it is crucial that \(n_{2}\) is unsynchronized, whereas nodes \(n_{1},n_{3}\) need to be synchronized.
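The arithmetic behind the three phases can be checked with a small numerical simulation. This is only a sanity check of the clock bookkeeping (reference clocks represented as plain numbers), not the formal gadget.

```python
def decrement_gadget(t_p, t_q, t_r):
    """Simulate the effect of the decrement gadget on the reference clocks."""
    assert t_r <= t_q <= t_p
    c1 = t_p - t_q                      # encoded value of counter C1
    t_r = t_p                           # Phase 1: r1 synchronizes with p1
    k = 0
    while t_q < t_r:                    # Phase 2: p1 and q1 advance by 1 unit each
        t_p, t_q, k = t_p + 1, t_q + 1, k + 1
    assert t_q == t_r                   # Phase 3: q1 synchronizes with r1, so k == c1
    if k == 0:
        return None                     # counter was zero: the run blocks
    t_q += 1                            # decrement: the difference shrinks by one
    return t_p, t_q, t_r, t_p - t_q     # new difference encodes c1 - 1

print(decrement_gadget(t_p=5.0, t_q=2.0, t_r=1.0))  # (8.0, 6.0, 5.0, 2.0)
```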
### Jump-on-zero.
This gadget is similar to the decrement gadget, whose first part checks whether \(C_{1}\) is \(0\). When \(C_{1}==0\), the gadget jumps to the relevant instruction; otherwise, it moves to the next instruction in sequence. The gadget is shown in Figure 7.
## 6 Conclusion
We have presented a model of local-timed negotiations. This is motivated by the need for expressing timing constraints between interactions in a negotiation. We have chosen a local-time model and incorporated a synchronization constraint as part of the model. We have shown that reachability is decidable when there is no mix of synchronized and unsynchronized interactions. This mix creates situations where one agent needs to fire a number of outcomes before synchronizing with a second agent, and in this process forces a third agent to elapse time. We have used this in the gadget explained for the decrement
instruction. As future work, we would like to study non-trivial restrictions which contain a mix of synchronized and unsynchronized interactions and are yet decidable.
We would like to remark that such a synchronization constraint can be added in the local-time semantics for networks of timed automata. Currently, the local-time semantics forces every shared action to be synchronized. The main reason is that the definition gives equivalence with the global-time semantics for reachability and Buchi reachability. For networks of timed automata, global-time semantics is considered the gold standard. The local-time semantics is studied as a heuristic to solve reachability and Buchi reachability, since this has better independence properties and is therefore amenable to partial-order methods. In our case with negotiations, we consider local-time as the original semantics and make synchronization as an option to be specified in the model. This allows more independence between the agents and makes it more attractive for partial-order methods. Having a decidable fragment with controlled use of synchronization would be interesting in this regard.
|
2306.00877
|
Study of Growth of Certain Second Order Linear Differential Equations
|
In this article, we study the solutions of second order linear
differential equations by considering several conditions on the coefficients of
homogenous linear differential equation and its associated non-homogenous
linear differential equation.
|
Naveen Mehra, Garima Pant, S. K. Chanyal
|
2023-06-01T16:38:00Z
|
http://arxiv.org/abs/2306.00877v1
|
# Study of Growth of Certain Second Order Linear Differential Equations
###### Abstract.
In this article, we study the solutions of second order linear differential equations by considering several conditions on the coefficients of a homogenous linear differential equation and its associated non-homogenous linear differential equation.
Key words and phrases:entire function, order of growth, homogenous linear differential equation and non-homogenous linear differential equation 2020 Mathematics Subject Classification: 34M10, 30D35
## 1. Introduction
Consider homogenous linear complex differential equation
\[f^{\prime\prime}+A(z)f^{\prime}+B(z)f=0, \tag{1}\]
where \(A(z)\) and \(B(z)\not\equiv 0\) are entire functions. All the solutions of equation (1) are of finite order if and only if the coefficients \(A(z)\) and \(B(z)\) are polynomials (see [24]). A natural question arises: what happens when at least one of the coefficients is a transcendental entire function? M. Frei [2] answered this question; he proved that in this case almost all non-trivial solutions of equation (1) are of infinite order.
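For later reference, we recall the standard growth indicators used throughout (restated here only for the reader's convenience): for an entire function \(f\),
\[\rho(f)=\limsup_{r\to\infty}\frac{\log^{+}\log^{+}M(r,f)}{\log r},\qquad\mu(f)=\liminf_{r\to\infty}\frac{\log^{+}\log^{+}M(r,f)}{\log r},\qquad\rho_{2}(f)=\limsup_{r\to\infty}\frac{\log^{+}\log^{+}\log^{+}M(r,f)}{\log r},\]
where \(M(r,f)=\max_{|z|=r}|f(z)|\); \(\lambda(f)\) denotes the exponent of convergence of the zeros of \(f\).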
The main aim of this work is to find conditions on the entire coefficients \(A(z)\) and \(B(z)\) so that all non-trivial solutions of equation (1) are of infinite order. Many researchers have studied this problem earlier. Gundersen [5] proved that if \(\rho(A)<\rho(B)\), then all non-trivial solutions of equation (1) are of infinite order. It is clear that if \(A(z)\) is a polynomial and \(B(z)\) is a transcendental entire function, then all non-trivial solutions are of infinite order. However, the case \(\rho(A)\geq\rho(B)\) was unexplored until the paper by Ozawa [19]. After Ozawa's paper, many other researchers studied the same case partially. The following theorem collects those results.
**Theorem A**.: _All non-trivial solutions of equation (1) are of infinite order if the coefficients \(A(z)\) and \(B(z)\) satisfy any of the following conditions:_
1. _[_5_]__\(\rho(A)<\rho(B)\);_
2. _[_7_]__\(\rho(B)<\rho(A)\leq\frac{1}{2}\);_
3. _[_5_]__\(A(z)\) _is a transcendental entire function with_ \(\rho(A)=0\) _and_ \(B(z)\) _is a polynomial;_
4. _[_5_]__\(A(z)\) _is a polynomial and_ \(B(z)\) _is a transcendental entire function._
**Example 1**.:
1. \(f^{\prime\prime}+e^{z}f^{\prime}+(e^{z}-1)f=0\) _has solution_ \(f(z)=e^{-z}\)_._
2. \(f^{\prime\prime}+(\sin^{2}z-2\tan z)f^{\prime}-\tan zf=0\) _has solution_ \(f(z)=\tan z\)_._
It can be observed from the above examples that such differential equations may possess finite order solutions when \(\rho(A)=\rho(B)\), or when \(\rho(A)>\rho(B)\) and \(\rho(A)>1/2\). A natural question is under what circumstances, with these conditions, equation (1) possesses only non-trivial solutions of infinite order. In the next section we partially answer this question.
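To make this observation concrete: in Example 1(a) we have \(\rho(A)=\rho(e^{z})=1=\rho(e^{z}-1)=\rho(B)\), so the coefficients are of equal order, and the exhibited solution \(f(z)=e^{-z}\) indeed has finite order \(\rho(f)=1\).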
The corresponding non-homogenous second order linear differential equation is
\[f^{\prime\prime}+A(z)f^{\prime}+B(z)f=H(z), \tag{2}\]
where \(A(z)\), \(B(z)\) and \(H(z)\) are entire functions. A non-homogenous linear differential equation can always be reduced to a homogenous linear differential equation, so the basic results are similar to those in the homogenous case. If all the coefficients and \(H(z)\) are entire functions, then all the solutions of equation (2) are also entire functions (see [21]). If all the coefficients are polynomials and \(H(z)\neq 0\) has finite order of growth, then all solutions of equation (2) are of finite order of growth (see [3, Lemma 2]). Therefore, if at least one of the coefficients is a transcendental entire function, then almost all solutions are of infinite order. Let \(\rho\) be the minimal order of solutions of equation (1); then it is completely elementary that there may exist at most one solution of equation (2) of order less than \(\rho\) (see [13]). Thus, even if all non-trivial solutions of equation (1) are of infinite order, there may exist a finite order solution of equation (2). We illustrate this fact by the following examples.
**Example 2**.: _The equation_
\[f^{\prime\prime}+zf^{\prime}+e^{z}f=e^{-z}(1-z)+1\]
_has a finite order solution, that is, \(f(z)=e^{-z},\) whereas the associated homogenous equation has all non-trivial solutions of infinite order._
**Example 3**.: _Let \(b(z)\) be a finite order entire function and has multiply connected Fatou component. Then, the equation_
\[f^{\prime\prime}-e^{z}f^{\prime}+b(z)f=0,\]
_has all non-trivial solutions of infinite order (see [18, Theorem B]). But the associated non-homogenous equation_
\[f^{\prime\prime}-e^{z}f^{\prime}+b(z)f=e^{-z}(1+b(z))+1\]
_has finite order solution \(f(z)=e^{-z}.\)_
## 2. Results
### Second Order Homogenous Linear Differential Equation
G. Zhang [25] investigated in his paper the solutions of equation (1) when \(A(z)=e^{P(z)}\), where \(P(z)\) and \(B(z)\) are polynomials of degree \(n\) and \(m\), respectively.
**Theorem B**.: _[_25_]_ _Suppose \(A(z)=e^{p(z)}\), where \(p(z)\) is a polynomial with degree \(n\geq 2\) and \(B(z)=Q(z)\) is also a nonconstant polynomial with degree \(m\). If \(m+2>2n\) and \(n\nmid m+2\). Then, every solution \(f(\not\equiv 0)\) of equation_
\[f^{\prime\prime}+e^{p(z)}f^{\prime}+Q(z)f=0\]
_is of infinite order._
In our first main result, we replace \(A(z)=e^{P(z)}\) by \(A(z)=h(z)e^{P(z)}\) in Theorem B, where \(\rho(h)<n.\)
**Theorem 1**.: _Suppose \(A(z)=h(z)e^{p(z)}\), where \(h(z)\) is an entire function with \(\rho(h)<n\), \(p(z)\) is a polynomial with degree \(n\geq 2\), and \(B(z)=Q(z)\) is a nonconstant polynomial with degree \(m\). If \(m+2>2n\) and \(n\nmid m+2\), then every solution \(f(\not\equiv 0)\) of equation_
\[f^{\prime\prime}+h(z)e^{p(z)}f^{\prime}+Q(z)f=0 \tag{3}\]
_is of infinite order._
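As a concrete instance of Theorem 1 (an illustration of the hypotheses, with data chosen by us): take \(p(z)=z^{2}\) (so \(n=2\)), \(Q(z)=z^{3}\) (so \(m=3\)) and \(h(z)=\sin z\) (so \(\rho(h)=1<n\)). Then \(m+2=5>4=2n\) and \(2\nmid 5\), and hence every non-trivial solution of
\[f^{\prime\prime}+(\sin z)\,e^{z^{2}}f^{\prime}+z^{3}f=0\]
is of infinite order.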
The following lemma is due to Bank et al. [1]; it gives an estimate for an entire function of integral order and describes the asymptotic behaviour of the function \(h(z)e^{P(z)}\) on most rays.
**Lemma 1**.: _[_1_]_ _Let \(A(z)=h(z)e^{P(z)}\) be an entire function with \(\lambda(A)<\rho(A)=n\), where \(P(z)\) is a polynomial of degree \(n\). Then, for every \(\epsilon>0,\) there exists \(E\subset[0,2\pi)\) of linear measure zero satisfying_
_(i) for \(\theta\in[0,2\pi)\setminus E\) with \(\delta(P,\theta)>0\), there exists \(R>1\) such that_
\[exp((1-\epsilon)\delta(P,\theta)r^{n})\leq|A(re^{\iota\theta})|\]
_for \(r>R\);_
_(ii) for \(\theta\in[0,2\pi)\setminus E\) with \(\delta(P,\theta)<0\), there exists \(R>1\) such that_
\[|A(re^{\iota\theta})|\leq exp((1-\epsilon)\delta(P,\theta)r^{n})\]
_for \(r>R\)._
The following Lemma is given by Langley[15].
**Lemma 2**.: _[_15_]_ _Let \(S\) be the strip_
\[z=x+\iota y,\ \ x\geq x_{0},\ \ |y|\leq 4.\]
_Suppose that in \(S\)_
\[Q(z)=a_{n}z^{n}+O(|z|^{n-2}),\]
_where \(n\) is positive integer and \(a_{n}>0.\) Then, there exists a path \(\Gamma\) tending to \(\infty\) in \(S\) such that all solutions of_
\[y^{\prime\prime}+Q(z)y=0\]
_tend to zero on \(\Gamma\)._
The following Lemma gives the logarithmic estimate of a meromorphic function outside an \(R\)-set.
**Lemma 3**.: _[_14_]_ _Let \(f\) be a meromorphic function of finite order. Then, there exists \(N=N(f)>0\) such that_
\[\left|\frac{f^{\prime}(z)}{f(z)}\right|=O(r^{N})\]
_holds outside an \(R\)-set._
The growth estimate in the following lemma is deduced in [1] from the Herold comparison theorem [8].
**Lemma 4**.: _[_15_]_ _Suppose that \(A(z)\) is an analytic in a sector containing the ray \(\Gamma:re^{\iota\theta}\) and that as \(r\to\infty\), \(A(re^{\iota\theta})=O(r^{n})\) for some \(n\geq 0\). Then, all solutions of \(y^{\prime\prime}+A(z)y=0\) satisfy_
\[\log^{+}|y(re^{\iota\theta})|=O(r^{\frac{(n+2)}{2}})\]
_on \(\Gamma\)._
**Remark 1**.: _If \(f(z)\to a\) as \(z\to\infty\) along a straight line, \(f(z)\to b\) as \(z\to\infty\) along another straight line and \(f(z)\) is analytic and bounded in the angle between, then \(a=b\) and \(f(z)\to a\) uniformly in the angle. The straight lines may be replaced by the curves approaching \(\infty\)._
Proof of Theorem 1.: We assume that (3) has a solution \(f(z)\) with finite order. Set
\[f=y\exp\{-\frac{1}{2}\int_{0}^{z}h(z)e^{p(z)}dz\}. \tag{4}\]
Equation (3) can be transformed into
\[y^{\prime\prime}+\left(Q(z)-\frac{1}{4}(he^{p(z)})^{2}-\frac{1}{2}h^{\prime}( z)e^{p(z)}-\frac{1}{2}h(z)p^{\prime}(z)e^{p(z)}\right)y=0. \tag{5}\]
By a translation, we may assume that
\[Q(z)=a_{m}z^{m}+a_{m-2}z^{m-2}+\cdots,\ \ m>2.\]
We define the critical rays for \(Q(z)\) as those rays \(re^{\iota\theta_{j}}\) for which
\[\theta_{j}=\frac{-\arg a_{m}+2j\pi}{m+2},\]
where \(j=0,1,2,\ldots,m+1\) and note that the substitution \(z=xe^{\iota\theta_{j}}\) transforms equation (5) into
\[\frac{d^{2}y}{dx^{2}}+(Q_{1}(x)+P_{1}(x))y=0, \tag{6}\]
where
\[Q_{1}(x)=\alpha_{1}x^{m}+O(x^{m-2}),\alpha_{1}>0\]
and
\[P_{1}(x)=-\frac{1}{4}(he^{p(xe^{\iota\theta_{j}})})^{2}-\frac{1}{2}h^{\prime}( xe^{\iota\theta_{j}})e^{p(xe^{\iota\theta_{j}})}-\frac{1}{2}h(xe^{\iota\theta_{j} })p^{\prime}(xe^{\iota\theta_{j}})e^{p(xe^{\iota\theta_{j}})}.\]
For the polynomial \(p(z)\) with degree \(n\), set \(p(z)=(\alpha+\iota\beta)z^{n}+p_{n-1}(z)\) with \(\alpha,\ \beta\) real, and denote \(\delta(p,\theta)=\alpha\cos n\theta-\beta\sin n\theta\). The rays
\[\arg z=\theta_{k}=\frac{\arctan\frac{\alpha}{\beta}+k\pi}{n},\quad k=0,1,2,\ldots,2n-1\]
satisfying \(\delta(p,\theta_{k})=0\) can split the complex domain into \(2n\) equal angular domains. Without loss of generality, denote these angle domains as
\[\omega^{+}=\{re^{\iota\theta}:0<r<+\infty,\ \frac{2i\pi}{n}<\theta<\frac{(2i+1)\pi}{n}\},\]
\[\omega^{-}=\{re^{\iota\theta}:0<r<+\infty,\ \frac{(2i+1)\pi}{n}<\theta<\frac{2(i+1)\pi}{n}\},\]
\(i=0,1,\ldots,n-1\), where \(\delta(p,\theta)>0\) on \(\omega^{+}\) and \(\delta(p,\theta)<0\) on \(\omega^{-}\). By Lemma 1, we obtain
\[|P_{1}(x)|\leq |(h(xe^{\iota\theta_{j}})e^{p(xe^{\iota\theta_{j}})})^{2}|+|h^{ \prime}(xe^{\iota\theta_{j}})e^{p(xe^{\iota\theta_{j}})}|+|h(xe^{\iota\theta_{j }})e^{p(xe^{\iota\theta_{j}})}p^{\prime}(xe^{\iota\theta_{j}})|\] \[\leq\exp\{\delta(P,\theta)x^{n}\}+\exp\{\frac{1}{2}\delta(P, \theta)x^{n}\}+\exp\{\frac{1}{2}\delta(P,\theta)x^{n}\}O(x^{n-1})\to 0\]
for \(xe^{\iota\theta_{j}}\in\omega^{-}\) as \(x\to\infty\), then by Lemma 2 and (6), for any critical line \(\arg z=\theta_{j}\) lying in \(\omega^{-}\) there exists a path \(\Gamma_{\theta_{j}}\) tending to \(\infty\), such that \(\arg z\to\theta_{j}\) on \(\Gamma_{\theta_{j}}\) while \(y(z)\to 0\) there. Moreover, by
\[|\exp\{-\frac{1}{2}\int_{0}^{z}h(z)e^{p(z)}\}| \leq\exp\{\frac{1}{2}\left|\int_{0}^{z}h(z)e^{p(z)}\right|\} \tag{7}\] \[\leq\exp\{\frac{1}{2}r\exp\{\delta(p,\theta)r^{n}\}\}\to 1\]
for \(z\in\omega^{-}\) as \(r\to\infty\), together with (4) we have \(f(z)\to 0\) along \(\Gamma_{\theta_{j}}\) as \(z\) tends to \(\infty\). Setting \(V=\frac{f^{\prime}}{f}\), equation (3) can be written as
\[V^{\prime}+V^{2}+h(z)e^{p(z)}V+Q(z)=0.\]
By Lemma 3, we have
\[|V^{\prime}|+|V|^{2}=O(|z|^{N})\]
outside an \(R\)-set \(U\), where \(N\) is a positive constant. Moreover, if \(z=re^{\iota\phi}\in\omega^{+}\) is such that the ray \(\arg z=\phi\) meets only finitely many discs of \(U\) we see that \(V=o(|z|^{-2})\) as \(z\) tends to \(\infty\) on this ray and hence \(f\) tends to a finite, nonzero limit. Applying this reasoning to a set of \(\phi\) outside a set of \(0\) measure we deduce by the Phragmen-Lindelof principle that without loss of generality, for any small enough given positive \(\epsilon\),
\[f(re^{\iota\theta})\to 1, \tag{8}\]
as \(r\to\infty\) with
\[z=re^{\iota\theta}\in\omega^{+}_{\epsilon}=\{z=re^{\iota\theta}:0<r<\infty, \frac{2i\pi}{n}+\epsilon<\theta<\frac{(2i+1)\pi}{n}-\epsilon\}.\]
For any \(z=re^{\iota\theta}\in\omega^{-}\), we have that \(\delta(p,\theta)<0\), and by Lemma 1, we have
\[|Q(z)-\frac{1}{4}(he^{p(z)})^{2}-\frac{1}{2}h^{\prime}(z)e^{p(z)} -\frac{1}{2}h(z)p^{\prime}(z)e^{p(z)}|\leq|Q(z)|+|(he^{p(z)})^{2}|\] \[+|h^{\prime}(z)e^{p(z)}|+|h(z)p^{\prime}(z)e^{p(z)}|\] \[\leq O(r^{m})+\exp\{\delta(P,\theta)x^{n}\}+\exp\{\frac{1}{2} \delta(P,\theta)x^{n}\}\] \[+\exp\{\frac{1}{2}\delta(P,\theta)x^{n}\}O(x^{n-1})\] \[\leq O(r^{m}) \tag{9}\]
for sufficiently large \(r\). Applying Lemma 4 to (5), together with (9), \(y(z)\) satisfies
\[\log^{+}|y(re^{\iota\theta})|=O(r^{\frac{m+2}{2}})\]
as \(r\to\infty\) for any \(z=re^{\iota\theta}\in\omega^{-}\). From (4) and (7), we have
\[\log^{+}|f(re^{\iota\theta})|=O(r^{\frac{m+2}{2}}) \tag{10}\]
as \(r\rightarrow\infty\) for any \(z=re^{\iota\theta}\in\omega^{-}\). On the rays \(\arg z=\theta_{k}\) such that \(\delta(p,\theta_{k})=0\), we have \(|e^{p(z)}|=|e^{p_{n-1}(z)}|\). Considering the two cases \(\delta(p_{n-1},\theta_{k})>0\) and \(\delta(p_{n-1},\theta_{k})<0\), by the same method as above we get \(f(z)\to 1\) or \(\log^{+}|f(z)|=O(r^{\frac{m+2}{2}})\), respectively, on the ray \(\arg z=\theta_{k}\). If \(\delta(p_{n-1},\theta_{k})=0\) as well, we repeat these arguments. Finally, we deduce that either \(f(z)\to 1\) or \(\log^{+}|f(z)|=O(r^{\frac{m+2}{2}})\) on the rays \(\arg z=\theta_{k}\), \(k=0,1,\ldots,2n-1\). Thus, (4), (8), (10) and the fact that \(\epsilon\) is arbitrary imply, by the Phragmen-Lindelof principle, that
\[\rho(f)\leq\frac{m+2}{2}. \tag{11}\]
We claim that the rays \(\arg z=\frac{(2i+1)\pi}{n}\), \(i=0,1,\ldots,n-1\), are critical rays for \(Q(z)\). Otherwise, there exists a critical ray \(\theta_{j}\) for \(Q(z)\) with
\[\frac{(2i+1)\pi}{n}<\theta_{j}<\frac{(2i+1)\pi}{n}+\frac{2\pi}{m+2}<\frac{2(i+1)\pi}{n}\ \ (i=0,1,\ldots,n-1)\]
because \(m+2>2n\). This implies the existence of an unbounded domain of angular measure at most \(\frac{2\pi}{m+2}+\epsilon\), bounded by a path on which \(f(z)\to 0\) and a ray on which \(f(z)\to 1\). Remark 1 then implies that \(\rho(f)>\frac{m+2}{2}\), contradicting (11). Hence every ray \(\arg z=\frac{(2i+1)\pi}{n}\) is a critical ray of \(Q(z)\), so there exists a positive integer \(k\) satisfying \(\frac{2\pi}{n}=k\frac{2\pi}{m+2}\), that is, \(m+2=kn\), which contradicts \(n\nmid m+2\). Thus, we complete the proof.
Theorem 2 is motivated by Theorem C, given by Kumar and Saini [12]. They assumed that \(A(z)\) has Fabry gaps and \(\rho(B)<\rho(A)\). We change the condition on \(A(z)\), requiring instead that it has a multiply connected Fatou component.
**Theorem C**.: _[_12_]_ _Let \(A(z)\) and \(B(z)\) be entire functions such that \(\rho(B)<\rho(A)\) and \(A(z)\) has Fabry gaps. Then, \(\rho(f)=\infty\) and \(\rho_{2}(f)=\rho(A)\), where \(f\) is a non-trivial solution of equation (1)._
**Theorem 2**.: _Let \(A(z)\) be a transcendental entire function with a multiply-connected Fatou component and let \(B(z)\) be an entire function satisfying \(\rho(B)<\rho(A)\). Then, every non-trivial solution of equation (1) is of infinite order. Moreover,_
\[\rho_{2}(f)=\rho(A).\]
Lemma 5 is given by Gundersen [4]; it provides estimates for the logarithmic derivatives of a transcendental meromorphic function of finite order.
**Lemma 5**.: _[_4_]_ _Let \(f\) be a transcendental meromorphic function of finite order, let \((k,j)\) be a pair of integers satisfying \(k>j\geq 0\), and let \(\epsilon>0\) be a given constant. Then the following statements hold:_
1. _there exists a set_ \(E_{1}\subset[0,2\pi]\) _with linear measure zero such that for_ \(\theta\in[0,2\pi)\setminus E_{1}\) _there exists_ \(R(\theta)>0\) _such that_ \[\left|\frac{f^{(k)}(z)}{f^{(j)}(z)}\right|\leq|z|^{(k-j)(\rho(f)-1+\epsilon)}\] _for all_ \(k,j\)_;_ \(|z|>R(\theta)\) _and_ \(argz=\theta\)__
2. _there exists a set_ \(E_{2}\subset(1,\infty)\) _with finite logarithmic measure such that the inequality in (a) holds for all_ \(k,j\) _and for all_ \(z\) _satisfying_ \(|z|\not\in E_{2}\cup[0,1]\) _and_ \(|z|\geq R(\theta)\)_;_
3. _there exists a set_ \(E_{3}\subset[0,\infty)\) _with finite linear measure such that for all_ \(|z|\not\in E_{3}\) _such that_ \[\left|\frac{f^{(k)}(z)}{f^{(j)}(z)}\right|\leq|z|^{(k-j)(\rho(f)+\epsilon)}\] _holds for all_ \(k,j\)_._
Recently, Pant and Saini [20] proved the following result for an entire function.
**Lemma 6**.: _[_20_]_ _Suppose \(f\) is a transcendental entire function. Then, there exists a set \(F\subset(0,\infty)\) with finite logarithmic measure such that for all \(z\) satisfying \(|z|=r\notin F\) and \(|f(z)|=M(r,f)\) we have_
\[\left|\frac{f(z)}{f^{(m)}(z)}\right|\leq 2r^{m},\]
_for all \(m\in N.\)_
**Lemma 7**.: _[_26_]_ _Suppose \(f\) is a transcendental meromorphic function having atmost finite poles. If \(J(f)\) has only bounded components, then for any complex number, there exists a constant \(0<\beta<1\) and two sequences of positive numbers \(\{r_{n}\}\) and \(\{R_{n}\}\) with \(r_{n}\to\infty\) and \(R_{n}/r_{n}\to\infty(n\to\infty)\) such that_
\[M(r,f)^{\beta}\leq L(r,f)\quad\text{for}\quad r\in H,\]
_where \(H=\cup_{n=1}^{\infty}\{r:r_{n}<r<R_{n}\}.\)_
Proof of Theorem 2.: We prove this Theorem by contradiction. Suppose \(f\) is a finite order non-trivial solution of equation (1). Applying Lemma 5, there is a set \(E\subset(1,\infty)\) with finite logarithmic measure such that
\[\left|\frac{f^{{}^{\prime\prime}}(z)}{f^{\prime}(z)}\right|\leq|z|^{2\rho(f)}, \tag{12}\]
holds for all \(z\) satisfying \(|z|\notin E\cup[0,1]\).
Let \(z_{r}=re^{\iota\theta_{r}}\) be points such that \(|f(z_{r})|=M(r,f)\). Then, applying Lemma 6, there exists a set \(F\subset(0,\infty)\) with \(m_{l}(F)<\infty\) such that
\[\left|\frac{f(re^{\iota\theta_{r}})}{f^{(m)}(re^{\iota\theta_{r}})}\right|\leq 2r^{m}, \tag{13}\]
holds for all sufficiently large \(r\notin F\) and for all \(m\in\mathbb{N}\). Applying Lemma 7, we have
\[M(r,A)^{\gamma}\leq|A(re^{\iota\theta})|, \tag{14}\]
for \(0<\gamma<1\) and \(r\in F_{1}=\cup_{n=1}^{\infty}\{r:r_{n}<r<R_{n}\}\). Let \(\rho(B)<\beta<\rho(A)\), then the definition of order of growth of \(B(z)\) implies that
\[|B(re^{\iota\theta})|\leq\exp r^{\beta}, \tag{15}\]
for all sufficiently large \(r\). From equations (1), (12), (13), (14) and (15), there exists a sequence \(z=re^{\iota\theta}\) such that for all \(r\in F_{1}\setminus(F\cup E\cup[0,1])\), we have
\[|A(re^{\iota\theta})|\leq\left|\frac{f^{\prime\prime}(re^{\iota\theta})}{f^{\prime}(re^{\iota\theta})}\right|+|B(re^{\iota\theta})|\left|\frac{f(re^{\iota\theta})}{f^{\prime}(re^{\iota\theta})}\right|\] \[\implies M(r,A)^{\gamma}\leq r^{2\rho(f)}+2r\exp r^{\beta}\] \[\leq 2r\exp r^{\beta}(1+o(1)).\]
This gives \(\rho(A)\leq\beta\), which is a contradiction. Hence, every non-trivial solution of equation (1) is of infinite order.
In 2017, Gundersen [6] asked the following question: "Is every non-trivial solution \(f\) of equation (1) of infinite order when \(A(z)\) satisfies \(\lambda(A)<\rho(A)\) and \(B(z)\) is a non-constant polynomial?" Long et al. [16] partially answered the question.
Theorem D.: _[_16_]_ _Let \(A(z)=h(z)e^{P(z)}\) satisfy \(\lambda(A)<\rho(A)\) and \(B(z)=b_{m}z^{m}+b_{m-1}z^{m-1}+\cdots+b_{0}\) is a polynomial of degree \(m\) such that:_
_(a)_ \(m+2<2n\)_, or_
_(b)_ \(m+2>2n\) _and_ \(m+2\neq 2kn\) _for all integers_ \(k\)_, or_
_(c)_ \(m+2=2n\) _and_ \(\frac{a_{n}^{2}}{b_{m}}\) _is not real and negative._
_Then, all non-trivial solutions of equation (1) are of infinite order._
Kumar et al. [11], motivated by their result, considered \(\rho(A)>n\) and \(B(z)\) to be a polynomial in Theorem E.
Theorem E.: _[_11_]_ _Consider a transcendental entire function \(A(z)=h(z)e^{P(z)}\), where \(P(z)\) is a non-constant polynomial of degree \(n\) and \(\rho(h)>n\). Assume that \(h(z)\) is bounded away from zero and exponentially blows up in \(E^{+}\) and \(E^{-}\) respectively and let \(B(z)\) be a polynomial. Then, all non-trivial solutions of the equation (1) are of infinite order._
Motivated by Theorem E, we replace the condition on \(B(z)\) by the conditions given in Theorem 3.
Theorem 3.: _Let \(A(z)\) satisfy the conditions of Theorem E and \(B(z)\) be a transcendental entire function satisfying_
_(i)_ \(\rho(B)<\rho(A)\) _or_
_(ii)_ \(\mu(B)<\rho(A)\)__
_Then, all non-trivial solutions of the equation (1) are of infinite order._
The following Lemma yield us a lower bound for modulus of an entire function in the neighbourhood of \(\theta\), where \(\theta\in[0,2\pi)\).
Lemma 8.: _[_23_]_ _Suppose \(f(z)\) is an entire function of finite order \(\rho\) and \(M(r,f)=|f(re^{\iota\theta_{r}})|\) for every \(r\). Given \(\zeta>0\) and \(0<C(\rho,\zeta)<1,\) there exists \(0<l_{0}<\frac{1}{2}\) and a set \(S\subset(1,\infty)\) with \(\underline{\log dens}(S)\geq 1-\zeta\) such that_
\[e^{-5\pi}M(r,f)^{1-C}\leq|f(re^{\iota\theta})|,\]
_for all sufficiently large \(r\in S\) and for all \(\theta\) satisfying \(|\theta-\theta_{r}|\leq l_{0}\)._
The following Lemma is a Proposition in the research paper of Kumar, et al.[9].
Lemma 9.: _[_9_]_ _Suppose \(f(z)\) and \(g(z)\) be two entire functions satisfying \(\rho(g)<\rho(f)\). Then, for \(0<\epsilon\leq min\{\frac{3\rho(f)}{4},\frac{\rho(f)-\rho(g)}{2}\}\), there exists \(S\subset(1,\infty)\) with \(\overline{\log dens}(S)=1\) satisfying_
\[|g(z)|=o(M(|z|,f))\]
_for sufficiently large \(|z|\in S\)._
Remark 2.: _If we replace \(\rho(g)\) with \(\mu(g)\), then Lemma 9 would be true._
In Lemma 1, consider \(A(z)=v(z)e^{P(z)}\), where \(v(z)\) is an entire function and \(P(z)\) is a polynomial of degree \(n\) satisfying \(\rho(v)<\deg P\). But, in Lemma 10, the authors considered \(\rho(v)>\deg P\) and obtained that \(|A(re^{\iota\theta})|\geq exp((1-\epsilon)\delta(P,\theta)r^{n})\) for \(\theta\in E^{+}/E\) and also for \(\theta\in E^{-}/E\), where \(E\) is a set of linear measure \(0\).
**Lemma 10**.: _[_10_]_ _Let \(A(z)=v(z)e^{P(z)}\) be an entire function, where \(P(z)\) is a polynomial of degree \(n\) and \(v(z)\) satisfies the condition of Theorem E. Then, there exists a set \(E\subset[0,2\pi]\) of linear measure zero such that for \(\epsilon>0\) the following holds:_
_(i) for \(\theta\in E^{+}\setminus E,\) there exists \(R(\theta)>1\) such that_
\[|A(re^{\iota\theta})|\geq exp((1-\epsilon)\delta(P,\theta)r^{n}) \tag{16}\]
_for \(r>R(\theta),\)_
_(ii) for \(\theta\in E^{-}\setminus E,\) there exists \(R(\theta)>1\) such that_
\[|A(re^{\iota\theta})|\geq exp((1-\epsilon)\delta(P,\theta)r^{n}) \tag{17}\]
_for \(r>R(\theta)\)._
The following Lemma is proved by Gundersen[5], it gives the logarithmic estimate of the analytic function \(f(z)\).
**Lemma 11**.: _[_5_]_ _Let \(f\) be analytic on a ray \(\gamma=re^{\iota\theta}\) and suppose that for some constant \(\alpha>1\) we have_
\[\left|\frac{f^{\prime}(z)}{f(z)}\right|=O(|z|^{-\alpha}) \tag{18}\]
_as \(z\rightarrow\infty\) along \(\arg z=\theta\). Then, there exists a constant \(c\neq 0\) such that \(f(z)\to c\) as \(z\rightarrow\infty\) along \(\arg z=\theta\)._
The proof of Theorem 3 is inspired by the proof of Theorem E. We have slightly changed the proof according to the conditions of the Theorem.
Proof of Theorem 3.: If \(\rho(A)=\infty\), then it is obvious that \(\rho(f)=\infty\), for all non-trivial solution \(f\) of the equation (1). Therefore, let us suppose that \(\rho(A)<\infty\) and there exists a non-trivial solution \(f\) of the equation (1) such that \(\rho(f)<\infty\). From Lemma 5, there exists \(E_{1}\subset[0,2\pi]\) of linear measure zero and \(m>0\) such that,
\[\left|\frac{f^{\prime\prime}(re^{\iota\theta})}{f(re^{\iota\theta})}\right| \leq r^{m}, \tag{19}\]
for \(\theta\in[0,2\pi]\setminus E_{1}\) and \(r>R(\theta)\). Since \(A(z)\) is an entire function of finite order, suppose that \(M(r,A)=|A(re^{\iota\theta_{r}})|\) for every \(r\). Then, from Lemma 8, for \(0<\zeta<1\) and \(0<C<1\), there exists \(0<l_{0}<\frac{1}{2}\) and \(S_{1}\subset(0,\infty)\) with \(\underline{\log dens}(S_{1})\geq 1-\zeta\) such that
\[e^{-5\pi}M(r,A)^{1-C}\leq|A(re^{\iota\theta})|,\]
for all sufficiently large \(r\in S_{1}\) and for all \(\theta\) satisfying \(|\theta-\theta_{r}|\leq l_{0}\).
1. Let \(\rho(B)<\rho(A)\), from Lemma 9 for \(0<\epsilon\leq min\{\frac{3\rho(A)}{4},\frac{\rho(A)-\rho(B)}{2}\}\), there exists \(S_{2}\subset(1,\infty)\) with \(\overline{\log dens}(S_{2})=1\) satisfying \[\frac{|B(z)|}{M(|z|,A)}\to 0\] (20)
for sufficiently large \(|z|\in S_{2}\). Using properties of logarithmic density and the fact that \(\overline{\log dens}(S_{1}\cup S_{2})\leq 1\), we get \[\overline{\log dens}(S_{1}\cap S_{2}) \geq\underline{\log dens}(S_{1})+\underline{\log dens}(S_{2})- \overline{\log dens}(S_{1}\cup S_{2})\] \[\geq\overline{1-\zeta+1}-1=1-\zeta.\] Thus, we can choose \(z_{r}=re^{\iota\theta_{r}}\) with \(r\to\infty\) such that \(r\in(S_{1}\cap S_{2})\) and \(|A(re^{\iota\theta_{r}})|=M(r,A)\). We may consider \(<\theta_{r}>\) as a sequence, where \(r\in(S_{1}\cap S_{2})\) such that \(\theta_{r}\to\theta_{0}\) and \(r\in(S_{1}\cap S_{2})\). We may consider following three cases:- 1. \(\delta(P,\theta_{0})>0\). From Lemma 10(i), we have \[|A(re^{\iota\theta_{0}})|\geq\exp(\frac{1}{2}\delta(P,\theta_{0})r),\] (21) for sufficiently large \(r\), where \(r\in(S_{1}\cap S_{2})\) and \(\theta_{0}\in E^{+}/E_{2}\) and \(E_{2}\) is a set of critical rays of \(e^{P(z)}\) of linear measure \(0\). From equation (1) we get, \[\left|\frac{f^{\prime}(re^{\iota\theta_{0}})}{f(re^{\iota\theta_{0}})}\right| \leq\left|\frac{f^{\prime\prime}(re^{\iota\theta_{0}})}{f(re^{\iota\theta_{0}}) }\right|\frac{1}{|A(re^{\iota\theta_{0}})|}+\frac{\left|B(re^{\iota\theta_{0}} )\right|}{M(r,A))},\] (22) for \(r\in(S_{1}\cap S_{2})\) and \(\theta_{0}\in E^{+}/(E_{1}\cup E_{2})\). Using equations (19), (20), (21) and (22), we get, \[\left|\frac{f^{\prime}(re^{\iota\theta_{0}})}{f(re^{\iota\theta_{0}})}\right|\to 0\] for \(r\in(S_{1}\cap S_{2})\), \(r\to\infty\) and \(\theta_{0}\in E^{+}/(E_{1}\cup E_{2}).\) This implies that \[\left|\frac{f^{\prime}(re^{\iota\theta_{0}})}{f(re^{\iota\theta_{0}})}\right| =O\left(\frac{1}{r^{2}}\right),\] (23) as \(r\to\infty\) and \(r\in S_{1}\cap S_{2}\). From Lemma 11, \[f(re^{\iota\theta_{0}})\to a\] (24) as \(r\to\infty\) and \(r\in(S_{1}\cap S_{2})\) for \(\theta_{0}\in E^{+}\setminus(E_{1}\cup E_{2})\), where \(a\) is a non-zero finite constant. Since \(f(re^{\iota\theta_{r}})\to f(re^{\iota\theta_{0}})\) and using (24), we get \[f(re^{\iota\theta_{r}})\to a\] for \(r\to\infty\) and \(r\in(S_{1}\cap S_{2})\). Thus, entire function \(f\) is bounded over domain. But since function \(f\) is entire and non-constant, \(f(re^{\iota\theta})\) is unbounded for all \(\theta\in[0,2\pi]\). Thus, for \(\theta_{r}\in[0,2\pi]\), function \(f(re^{\iota\theta_{r}})\) is also unbounded, which is a contradiction. 2. \(\delta(P,\theta)<0\). From Lemma 10(ii), we have \[|A(re^{\iota\theta_{0}})|\geq\exp(\frac{1}{2}\delta(P,\theta_{0})r^{n}),\] (25)
for \(\theta_{0}\in E^{-}/E_{1}\) for large \(r\). Using equation (19), (20) and (25), we have \[\left|\frac{f^{\prime}(re^{\iota\theta_{0}})}{f(re^{\iota\theta_{0}})}\right| \to 0,\] (26) as \(r\rightarrow\infty\) and \(\theta_{0}\in E^{-}/(E_{1}\cup E_{2})\). From Lemma 11, \[f(re^{\iota\theta_{0}})\to b,\] (27) as \(r\rightarrow\infty\) and \(r\in(S_{1}\cap S_{2})\) for \(\theta_{0}\in E^{-}\setminus(E_{1}\cup E_{2})\), where \(b\) is a non-zero finite constant. Since \(f(re^{\iota\theta_{r}})\to f(re^{\iota\theta_{0}})\) and using (27), we get \[f(re^{\iota\theta_{r}})\to b,\] for \(r\rightarrow\infty\) and \(r\in(S_{1}\cap S_{2})\). Thus, entire function \(f\) is bounded over whole domain. Since function \(f\) is entire and non-constant, then \(f(re^{\iota\theta})\) is unbounded for all \(\theta\in[0,2\pi]\). Thus, for \(\theta_{r}\in[0,2\pi]\), function \(f(re^{\iota\theta_{r}})\) is also unbounded, which is a contradiction. 3. \(\delta(P,\theta_{0})=0\). Suppose \(\theta_{0}^{*}\in[0,2\pi]\) in the neighbourhood of \(\theta_{0}\) such that \(\delta(P,\theta_{0}^{*})>0\). Letting \(r\rightarrow\infty\), we get \(|\theta_{0}-\theta_{0}^{*}|\leq l_{0}\). Choosing \(C\) and \(\zeta\) such that \(l_{0}\to 0\). \[\left|\frac{f^{\prime}(re^{\iota\theta_{0}})}{f(re^{\iota\theta_{0}})}\right| \sim\left|\frac{f^{\prime}(re^{\iota\theta_{0}^{*}})}{f(re^{\iota\theta_{0}^{ *}})}\right|\leq\left|\frac{f^{\prime\prime}(re^{\iota\theta_{0}^{*}})}{f(re^{ \iota\theta_{0}^{*}})}\right|\frac{1}{|A(re^{\iota\theta_{0}^{*}})|}+\frac{ \left|B(re^{\iota\theta_{0}^{*}})\right|}{M(r,A))},\] (28) Remaining proof is similar to part (i). 2. Let \(\mu(B)<\rho(A)\), from Remark 2 for \(0<\epsilon\leq min\{\frac{3\rho(A)}{4},\frac{\rho(A)-\mu(B)}{2}\}\), there exist \(S_{2}\subset(1,\infty)\) with \(\overline{\log dens}(S_{2})=1\) satisfying \[\frac{|B(z)|}{M(|z|,A)}\to 0.\] (29) Remaining proof is similar to part (i).
### Second Order Non-Homogenous Linear Differential Equation
Kumar and Saini [12] gave several results for equation (2). In one of their results, they considered \(A(z)\) to have Fabry gaps with \(\max(\rho(H),\rho(B))<\rho(A)\) and proved the following result. We change the condition on \(A(z)\), considering \(A(z)\) to be a transcendental entire function having a multiply-connected Fatou component, and prove Theorem 4.
**Theorem F**.: **[**12**]** _Let the coefficients and \(H(z)\) of equation (2) be entire functions such that \(\max(\rho(H),\rho(B))<\rho(A)\) and \(A(z)\) has Fabry gaps. Then, any non-trivial solution of equation (2) is of infinite order._
**Theorem 4**.: _Let \(A(z)\) be a transcendental entire function having a multiply-connected Fatou component and \(B(z)\), \(H(z)\) be entire functions such that \(\max(\rho(H),\)\(\rho(B))<\rho(A)\). Then, any non-trivial solution of (2) is of infinite order._
Proof of Theorem 4.: Suppose \(f\) is a finite order solution of equation (2). Then, applying Lemma 5, there is a set \(E\subset(1,\infty)\) with finite logarithmic measure such that
\[\left|\frac{f^{{}^{\prime\prime}}(z)}{f^{\prime}(z)}\right|\leq|z|^{2\rho(f)}, \tag{30}\]
holds for all \(z\) satisfying \(|z|\notin E\cup[0,1]\).
Since \(\max(\rho(H),\rho(B))<\rho(A)\), let \(\beta\) be such that \(\max(\rho(H),\rho(B))<\beta<\rho(A)\); then applying the definition of order of growth to \(B(z)\) and \(H(z)\) gives
\[|B(re^{\iota\theta})|\leq\exp r^{\beta}\qquad\text{and}\qquad|H(re^{\iota \theta})|\leq\exp r^{\beta}, \tag{31}\]
holds for all sufficiently large \(r\). Let \(z_{r}=re^{\iota\theta_{r}}\) be points such that \(|f(z_{r})|=M(r,f)\). Then, applying Lemma 6, there exists a set \(F\subset(0,\infty)\) with \(m_{l}(F)<\infty\) such that
\[\left|\frac{f(re^{\iota\theta_{r}})}{f^{(m)}(re^{\iota\theta_{r}})}\right|\leq 2r^{m}, \tag{32}\]
holds for all sufficiently large \(r\notin F\) and for all \(m\in\mathbb{N}\). Applying Lemma 7, we have
\[M(r,A)^{\gamma}\leq|A(re^{\iota\theta})|, \tag{33}\]
for \(0<\gamma<1\) and \(r\in F_{1}=\cup_{n=1}^{\infty}\{r:r_{n}<r<R_{n}\}\).
From equations (2), (30), (31), (32), and (33), there exists a sequence \(z=re^{\iota\theta}\) such that for all \(r\in F_{1}\setminus(E\cup F\cup[0,1])\), we have
\[|A(re^{\iota\theta})| \leq\left|\frac{f^{{}^{\prime\prime}}(re^{\iota\theta})}{f^{{}^{ \prime}}(re^{\iota\theta})}\right|+|B(re^{\iota\theta})|\left|\frac{f(re^{ \iota\theta})}{f^{{}^{\prime}}(re^{\iota\theta})}\right|+\left|\frac{H(re^{ \iota\theta})}{f(re^{\iota\theta})}\right|\left|\frac{f(re^{\iota\theta})}{f^{ {}^{\prime}}(re^{\iota\theta})}\right|\] \[M(r,A)^{\gamma} \leq r^{2\rho(f)}+2r\exp r^{\beta}+2r\left|\frac{H(re^{\iota \theta})}{M(r,f)}\right|\] \[\leq r^{2\rho(f)}+4r\exp r^{\beta}\] \[\leq 4r\exp r^{\beta}(1+o(1)).\]
This gives \(\rho(A)\leq\beta\), which is a contradiction. Hence, every non-trivial solution of equation (2) is of infinite order.
|
2303.14004
|
Vulnerability of Face Morphing Attacks: A Case Study on Lookalike and
Identical Twins
|
Face morphing attacks have emerged as a potential threat, particularly in
automatic border control scenarios. Morphing attacks permit more than one
individual to use travel documents that can be used to cross borders using
automatic border control gates. The potential for morphing attacks depends on
the selection of data subjects (accomplice and malicious actors). This work
investigates lookalike and identical twins as the source of face morphing
generation. We present a systematic study on benchmarking the vulnerability of
Face Recognition Systems (FRS) to lookalike and identical twin morphing images.
Therefore, we constructed new face morphing datasets using 16 pairs of
identical twin and lookalike data subjects. Morphing images from lookalike and
identical twins are generated using a landmark-based method. Extensive
experiments are carried out to benchmark the attack potential of lookalike and
identical twins. Furthermore, experiments are designed to provide insights into
the impact of vulnerability with normal face morphing compared with lookalike
and identical twin face morphing.
|
Raghavendra Ramachandra, Sushma Venkatesh, Gaurav Jaswal, Guoqiang Li
|
2023-03-24T13:59:48Z
|
http://arxiv.org/abs/2303.14004v1
|
# Vulnerability of Face Morphing Attacks: A Case Study on Lookalike and Identical Twins
###### Abstract
Face morphing attacks have emerged as a potential threat, particularly in automatic border control scenarios. Morphing attacks permit more than one individual to use travel documents that can be used to cross borders using automatic border control gates. The potential for morphing attacks depends on the selection of data subjects (accomplice and malicious actors). This work investigates lookalike and identical twins as the source of face morphing generation. We present a systematic study on benchmarking the vulnerability of Face Recognition Systems (FRS) to lookalike and identical twin morphing images. Therefore, we constructed new face morphing datasets using 16 pairs of identical twin and lookalike data subjects. Morphing images from lookalike and identical twins are generated using a landmark-based method. Extensive experiments are carried out to benchmark the attack potential of lookalike and identical twins. Furthermore, experiments are designed to provide insights into the impact of vulnerability with normal face morphing compared with lookalike and identical twin face morphing.
Biometrics, Face recognition, Morphing attacks, Vulnerability, Twins, Lookalike
## I Introduction
Biometric person verification systems that use either physical or behavioral characteristics have been extensively deployed in various applications, including border control. Facial biometrics are the primary identifiers in electronic passports (e-passports) that can enable automatic border control applications. The popularity of face biometrics in high-security applications can be attributed to user convenience, nonintrusive capture, and acceptable verification performance. However, face biometrics are highly vulnerable to presentation attacks in which attack instruments are generated using low-cost materials [1, 2]. Morphing attacks have demonstrated high vulnerability among the different types of attacks, especially in passport issuance and automatic border control scenarios.
Morphing is the process of seamless blending of two or more images, such that the resulting image shows visual similarities corresponding to the source images used for morphing. Face-morphing techniques blend two or more face images to generate a single-face image. Earlier studies have demonstrated that face morphing images indicate the vulnerability of the commercial Face Recognition System (FRS) [3], deep learning-based FRS [4], and human observers [5]. Thus, the detection of morphing attacks on FRS has gained momentum, resulting in several techniques based on a single image and differential image [6]. Even though the vulnerability of the FRS is well evaluated on normal (or regular) faces, it is an under-studied problem with lookalike and identical twin data subjects.
The twin population across the globe has experienced a significant rise, with approximately 1.4 million twin children born per year; this translates to roughly one in 45 children being born a twin. Identical twin face recognition is a challenging problem, as FRS typically fail to distinguish between twins. In addition, lookalike (or doppelganger) face recognition is still a challenging problem for FRS, which fail to differentiate such faces owing to their highly similar facial features. A study presented in [7, 8] indicated that one in 135 people could find a single identical lookalike. It was demonstrated in [9] that three different FRS have limitations in verifying
Fig. 1: Illustration of the influence of lookalike and identical twins morphing on face recognition system
look-alike pairs. Twins and lookalike face recognition have been extensively studied in the biometric community. An early benchmark study [10] on identical twins outlined that the identical twin impostor distribution is more similar to the genuine distribution than the general impostor distribution. Three different FRS were evaluated under six different experimental conditions, indicating the challenge of reliable twin-face verification with varying image conditions. An extensive survey on identical twin face recognition was presented in [11] and discussed techniques developed to improve the verification performance of identical twins. Recent approaches [12, 13] based on deep learning and Siamese networks have reported a marginal improvement in the verification performance of twin-face recognition.
Look-like face recognition has been well studied in the biometric literature. Early work [14] on lookalike face recognition showed a higher number of false matches. Extensive experiments are presented with ten different FRS and a new method based on the facial region to improve the face verification performance for lookalikes. Since then, several approaches have been proposed [15, 16, 17, 18] to enhance the performance of face recognition systems. However, it is worth noting that the datasets used in the literature are curated from the Web and thus have various image qualities. Recently, in [9], a high-quality image database of lookalike data subjects with similar genetics was used to present the vulnerability of the FRS. Since the lookalike data subjects share similar genetics perceptually, they indicate a strong resemblance to each other.
### _Motivation and Contributions_
Identical twins and lookalikes have covered a reasonable population across the globe, and morphing attacks are highly vulnerable, especially in passport issuance and border control scenarios. The success of morphing attacks is highly reliable if an attacker can find a lookalike accomplice. Therefore, in this study, we are motivated to provide insight into the vulnerability of the FRS for both lookalike and identical twins. Figure 1 illustrates examples of lookalike and identical twins and their impacts on the vulnerability of the FRS. To the best of our knowledge, this is the first study to present insights into the vulnerability of FRS to the morphing of lookalike twins and identical twins. In particular, we introduce the following critical questions:
* Does the morphing of lookalike and identical twins indicate higher vulnerability of FRS compared to the normal (or regular) face?
* Does the morphing of lookalike data subjects indicate higher vulnerability of FRS than identical twins?
* Does the morphing factor influence the vulnerability of FRS to lookalikes than identical twins?
* Does the Commercial-Off-The-Shelf (COTS) FRS indicate a higher vulnerability than deep learning FRS (Arcface)?
In the course of answering the above research questions, the following are the main contributions of this work:
* First work addressing the lookalikes and identical twins morphing attacks vulnerability on FRS.
* Vulnerability analysis is presented using two different FRS, including COTS [19]1 and deep learning (ArcFace [20]). Footnote 1: Disclaimer: These results were produced in experiments conducted by us; therefore, the outcome does not necessarily constitute the best the algorithm can do.
* New morphing dataset corresponding to lookalikes and identical twins.
* Extensive experiments are presented to benchmark the vulnerability of lookalike and identical twins.
The rest of the paper is organized as follows: Section II discusses the details of the newly constructed morphing datasets using lookalikes and identical twins, Section III presents a qualitative and quantitative analysis of the vulnerability analysis, and Section IV concludes the paper.
## II lookalike and identical twins morphing dataset
This section discusses the newly constructed face morphing dataset corresponding to identical twin and lookalike datasets. The lookalike face database employed in this work is based on a publicly available dataset [9, 21]. We mainly employ this database compared to other similar datasets because (a) image quality: images are captured under constrained conditions with uniform lighting and a professional photographer. However, the other existing datasets are harvested from a web source that has no control over the image quality (compression artifacts, capture with different cameras, and uncontrolled environmental conditions). (b) Genetically similar: The lookalikes employed in this work are proven to have similar genetics, and thus, they exhibit high likeness. However, other existing datasets have yet to prove to have similar genetics and, thus, do not necessarily exhibit high likeness. (c) Natural capture: The data subjects were captured naturally without extreme makeup. However, the data subjects in other similar datasets may have makeup, or images might have been processed to increase their appearance. The lookalike dataset employed in this study had 16 lookalike pairs that were used to generate the morphing image. In this study, we employed a landmark-based face morphing tool [22] by considering its ability to generate high-quality morphing images, resulting in high vulnerability across different FRS [4]. Morphing images were generated with three different morphing factors: 0.3, 0.5, and 0.7. Non-twin morphing was also generated to provide a comprehensive comparison.
The identical twin morphing dataset was generated based on the publicly available University of Notre Dame Twins database [10]. To perform an effective comparative analysis with lookalike faces, we selected 16 identical twin pairs, captured in the controlled scenario simulating the real-life setting of face morphing. We then performed the morphing using the same landmark-based method as for the lookalike pairs, generating morphing images with three different morphing factors (0.3, 0.5, 0.7). In addition to identical twin morphing, we performed non-identical-twin face morphing from the same dataset to present a comprehensive comparison. Table I shows the statistics of the newly constructed dataset, and
Figures 2 and 3 show examples of the newly generated morphing datasets corresponding to lookalike and identical twins.
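To make the morph generation concrete, a minimal sketch of the appearance-blending step is given below (our own simplified illustration; the landmark-based tool [22] used in this work additionally warps both faces to a landmark geometry averaged with the same factor before blending, and the function and variable names here are hypothetical):

```python
import numpy as np

def blend_aligned_faces(face_a, face_b, alpha=0.5):
    """Pixel-wise blend of two geometrically pre-aligned face images.

    alpha is the morphing factor used in this study (0.3, 0.5 or 0.7):
    alpha = 0.5 weights both contributing subjects equally, while 0.3 and
    0.7 bias the morph towards one of the two contributors. In the full
    landmark-based pipeline, the facial landmarks L_a and L_b are first
    combined as alpha * L_a + (1 - alpha) * L_b, and both images are
    warped to that shared geometry before this blending step.
    """
    a = face_a.astype(np.float32)
    b = face_b.astype(np.float32)
    morphed = alpha * a + (1.0 - alpha) * b
    return np.clip(morphed, 0, 255).astype(np.uint8)
```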
## III Experiments and Results
Vulnerability analyses of identical twins and lookalikes are benchmarked and discussed in this section. To compute the vulnerability, we employed both commercial and deep-learning-based face recognition systems. The COTS system corresponds to Cognitec Face VACS SDK version 9.4.2 [19] and the deep-learning FRS is Arcface [20]. These two FRS are considered owing to their robustness and accurate face verification performance and are also widely employed FRS for benchmarking the vulnerability of face morphing techniques.
To quantitatively compute the vulnerability, we employed the generalized morphing attack potential (G-MAP) [23] which can quantify the vulnerability with a variable number of attempts against a given morphing image and across the different FRS while accounting for Failure To Acquire Rate (FTAR) and different types of morphing generation. In this work, we present the vulnerability results in two steps: (1) G-MAP with multiple attempts, in which the vulnerability is presented individually on each FRS for multiple attempts; (2) G-MAP with FTAR = 0 (as the FRS employed in this work can extract facial templates for all probe images) and a number of morphing types to one (as we have used only LMA-based morphing generation). For more information on G-MAP, refer to [23].
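For reference, a simplified proxy for the vulnerability computation is sketched below (a minimal MAP-style estimate, not the full G-MAP of [23], which additionally accounts for multiple attempts, several FRS, FTAR and different morph types; the names and array layout are illustrative). The comparison scores would come from the FRS under test, e.g. the COTS SDK score or the cosine similarity of ArcFace embeddings, with the threshold fixed at the FAR = 0.1% operating point:

```python
import numpy as np

def morph_attack_success_rate(scores_subject1, scores_subject2, threshold):
    """Fraction of morphs accepted against probes of *both* contributors.

    scores_subjectX has shape (num_morphs, num_probes_of_subject_X):
    entry [i, j] is the comparison score of morph i against the j-th
    probe image of contributing subject X. A morph counts as a successful
    attack if it is accepted (score >= threshold) against at least one
    probe of each of the two contributing subjects.
    """
    s1 = np.asarray(scores_subject1)
    s2 = np.asarray(scores_subject2)
    accepted_vs_subject1 = (s1 >= threshold).any(axis=1)
    accepted_vs_subject2 = (s2 >= threshold).any(axis=1)
    return float(np.mean(accepted_vs_subject1 & accepted_vs_subject2))
```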
In this paper, we present a vulnerability analysis of four different case studies. **Case-I:** The vulnerability of the lookalike faces is presented when morphing is generated between the lookalike pairs. **Case-II:** The vulnerability of the non-lookalike faces is presented by generating the morphing faces from the non-lookalike data subjects from the same dataset. **Case-III:** The vulnerability of the identical twins is presented by generating the morphing images from the identical twins' pairs. **Case-IV:** The vulnerability is computed on the non-identical twins morphing. These four case studies were designed to effectively benchmark the vulnerability of identical twins versus lookalike versus normal data. Furthermore, the analysis with three different morphing factors, 0.3, 0.5, and 0.7, is presented for all four case studies.
Figure 4 shows the scatter plots of the comparison scores computed using two different FRS when enrolled with morphing images generated from lookalikes and identical twins. Scatter plots are shown for different morphing factors, namely 0.3, 0.5, and 0.7. The red lines in the scatter plots indicate the thresholds set at FAR = 0.1%. Figure 5 shows box plots of the verification scores computed using two different FRS and the morphing comparison scores corresponding to the three different morphing factors. Table II lists the quantitative values of the vulnerability computed using G-MAP (with multiple attempts). The following are the important observations:
* It is interesting to observe that both lookalike and identical twin morphs indicate vulnerability of the FRS irrespective of the morphing factor. This can be observed in Figure 7, where the morphing scores corresponding to different morphing factors show similar distributions. The quantitative values of the vulnerability computed using G-MAP in Table II (refer to Case-I and III) also reflect little variation between the different morphing factors.
* Both FRS systems indicate the vulnerability to lookalike and identical twins irrespective of the morphing factor. COTS indicates a higher vulnerability compared to Arcface FRS on both lookalike and identical twin morphing images.
* COTS FRS indicates a higher vulnerability with identical twins, and Arcface FRS indicates a higher vulnerability with lookalike morphing.
* Among three different morphing factors, 0.5 indicates the highest vulnerability of FRS, followed by 0.3 and 0.7. The FRS indicates the lower vulnerability on lookalike faces than on identical twins, especially with a morphing factor of 0.7.
* Figure 7 shows the box plots of the genuine, impostor, and morphing scores from identical twins and lookalikes computed using COTS and Arcface FRS. The Arcface
\begin{table}
\begin{tabular}{|p{85.4pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Data Type & No. of pairs & No. of Bona fide samples & No. of sampled identical twins & morphed identical twins \\ \hline Lookalikes & 16 & 32 & 96 & 516 \\ \hline Identical twins & 16 & 32 & 96 & 516 \\ \hline \end{tabular}
\end{table} TABLE I: Statistics of the newly constructed database
Fig. 3: Lookalike morphing examples
Fig. 2: Identical Twins morphing examples
FRS indicated good verification performance on identical twins and lookalikes compared to the COTS FRS. The verification performance of the COTS FRS on identical twins shows degraded performance owing to the high overlap of the genuine and impostor scores.
* The distribution of morphing scores (with different morphing factors) indicates the high overlapping with genuine scores, mainly with COTS FRS on identical twins and lookalikes. Therefore, COTS FRS is more vulnerable than Arcface, which is also quantitatively acknowledged in Table II (refer to Cases I and III).
Figures 6 and 7 (Case-II and Case-IV) show the scatter and box plots computed when the morphing images are generated using normal (neither lookalike nor identical twin) datasets. The critical observations are as follows.
* Morphing factor plays a vital role in achieving the vulnerability of FRS. A morphing factor of 0.5 indicates a higher vulnerability with both FRS.
* Arcface FRS indicates a higher vulnerability than COTS FRS on Case-II. However, the COTS FRS indicated a higher vulnerability than the Arcface FRS with Case-IV.
Based on the extensive experiments reported in Table II (and in Figures 4, 5, 6 and 7), it can be noticed that:
* FRS are more vulnerable to the morphing images generated using lookalikes and identical twins than to those generated using normal faces.
* The vulnerability to lookalike and identical twin morphing is less influenced by the morphing factor than the vulnerability to normal morphing.
* The COTS FRS is highly vulnerable to lookalike and identical twin morphing compared to normal face morphing.
Table III lists the quantitative values of G-MAP that provide an attack assessment across different FRS. Figure 8 shows box plots of the vulnerability with G-MAP for the four cases. Based on the obtained results, the following can be observed.
* The vulnerability of the FRS is higher with lookalike and identical twin morphing samples than with normal (neither lookalike nor identical twin) morphing samples.
* The highest vulnerability of the FRS is noted with the lookalike morphing images compared to the identical twin morphing images.
* **Q1**. Does the morphing of lookalike and identical twins indicate higher vulnerability of FRS compared to the normal (or regular) face?
* Based on the experimental results reported in Table II and III, the four case studies showed the higher vulnerability of FRS with lookalike and identical twins.
* **Q2**. Does the morphing of lookalike data subjects indicate higher vulnerability of FRS than identical twins?
* Based on the experimental results reported in Table II and III, the morphing of lookalike data subjects indicates higher vulnerability compared to the morphing of identical twins.
* **Q3**. Does the morphing factor influence the vulnerability of FRS to lookalikes than identical twins?
* Based on the extensive experimental results reported in Figures 4 and 5 and Table II, the morphing of lookalike and identical twins indicates the vulnerability of the FRS to a similar degree across the different morphing factors and FRS. A similar observation is not noticed with normal (or regular) face morphing (based on Figures 6 and 7).
* **Q4**. Does the Commercial-Off-The-Shelf (COTS) FRS indicate a higher vulnerability than deep learning FRS (Arcface)?
* Based on the results reported in Table II, the COTS FRS indicates a higher vulnerability than Arcface.
## IV Conclusions
Evolving attacks on face recognition systems are a growing concern for achieving reliable and secure access control, and morphing attacks have demonstrated the high vulnerability of both FRS and human observers. In this work, we presented the first study on the morphing of real lookalikes and identical twins and their impact on the FRS. We introduced a new dataset constructed using lookalike and identical twin morphs. The newly constructed datasets also comprised normal (or regular) face morphing to effectively benchmark the vulnerability. Morphing was carried out using a landmark-based method with three different morphing factors: 0.3, 0.5, and 0.7. Extensive experiments were carried out using four different case studies (or evaluation protocols), indicating a higher vulnerability of FRS to lookalike and identical twin morphing than to normal morphing. Further analysis also indicated that lookalike morphing is more effective than identical twin morphing. Future work will address the current limitations identified in this work, such as (1) increasing the size of the dataset, (2) extending the lookalike and identical twin morphing to print-scan scenarios, and (3) benchmarking morphing attack detection techniques.
|
2305.17683
|
Integrability of a globally coupled complex Riccati array: quadratic
integrate-and-fire neurons, phase oscillators and all in between
|
We present an exact dimensionality reduction for dynamics of an arbitrary
array of globally coupled complex-valued Riccati equations. It generalizes the
Watanabe-Strogatz theory [Phys. Rev. Lett. 70, 2391 (1993)] for sinusoidally
coupled phase oscillators and seamlessly includes quadratic integrate-and-fire
neurons as the real-valued special case. This simple formulation reshapes our
understanding of a broad class of coupled systems - including a particular
class of phase-amplitude oscillators - which newly fall under the category of
integrable systems. Precise and rigorous analysis of complex Riccati arrays is
now within reach, paving a way to a deeper understanding of emergent behavior
of collective dynamics in coupled systems.
|
Rok Cestnik, Erik A. Martens
|
2023-05-28T10:35:52Z
|
http://arxiv.org/abs/2305.17683v4
|
# Integrability of a globally coupled complex Riccati array: quadratic integrate-and-fire neurons, phase oscillators and all in between
###### Abstract
We present an exact dimensionality reduction for dynamics of an arbitrary array of globally coupled complex-valued Riccati equations. It generalizes the Watanabe-Strogatz theory [Phys. Rev. Lett. 70, 2391 (1993)] for sinusoidally coupled phase oscillators and seamlessly includes quadratic integrate-and-fire neurons as the real-valued special case. This simple formulation reshapes our understanding of a broad class of coupled systems - including a particular class of phase-amplitude oscillators - which newly fall under the category of integrable systems. Precise and rigorous analysis of complex Riccati arrays is now within reach, paving a way to a deeper understanding of emergent behavior of collective dynamics in coupled systems.
The study of complex systems often involves describing them using a few relevant variables known as order parameters. Such dimensionality reduction techniques are highly useful but challenging to discover, and they may not exist for every system. In this regard, Watanabe and Strogatz [1] (WS) made significant progress by demonstrating how globally coupled phase oscillators can be effectively described using only three order parameters. This reduction arises from a specific subclass of complex Riccati equations [2; 3] that govern the dynamics of such oscillatory arrays. The macroscopic variables are parameters of a Mobius transform that relates the dynamical state variables to \(N\) constants, determined by the initial states of the oscillator array [4]. Remarkably, Ott and Antonsen [5] showed that for large ensembles of non-identical oscillators, the dynamics even collapses to an effectively 2D manifold. These descriptions have proven immensely useful in studying the collective dynamics of coupled oscillatory systems, leading to numerous applications in various domains, including neural circuits in the brain [6; 7; 8], power grids [9; 10; 11], electrochemical oscillators [12], Josephson junctions [2; 13; 14], and more.
In this paper, we present a framework that generalizes these dimensionality reductions to an even larger class of systems: an arbitrary array \(j=1,...,N\) of globally forced complex Riccati equations,
\[\dot{x}_{j}=ax_{j}^{2}+bx_{j}+c\:,\qquad a,b,c\in\mathbb{C}\,, \tag{1}\]
where all \(a,b,c\in\mathbb{C}\) can be arbitrary complex functions of time and \(x_{j}\in\mathbb{C}\) can start with arbitrary complex values. The choice for \(a,b,c\) allows selecting from a range of coupled oscillator systems. Certain choices reproduce known models, such as the quadratic integrate-and-fire neurons (QIF) and phase oscillators; but the possibilities go far beyond that and cover a broad class of 2D systems.
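For orientation, two known special cases can be read off directly (the identification for phase oscillators below is a standard rewriting, spelled out here for convenience): the real choice \(a=1\), \(b=0\), \(c=I\) gives the quadratic integrate-and-fire voltage equation \(\dot{x}_{j}=x_{j}^{2}+I\) discussed below, while a sinusoidally forced phase oscillator \(\dot{\varphi}_{j}=\omega+\operatorname{Im}\!\left(He^{-\mathrm{i}\varphi_{j}}\right)\) written in terms of \(x_{j}=e^{\mathrm{i}\varphi_{j}}\) becomes
\[\dot{x}_{j}=-\tfrac{1}{2}\bar{H}x_{j}^{2}+\mathrm{i}\omega x_{j}+\tfrac{1}{2}H\,,\]
i.e. \(a=-\bar{H}/2\), \(b=\mathrm{i}\omega\), \(c=H/2\).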
The flow of the array variables \(x_{j}\) in (1) is given by the following Mobius transformation:
\[x_{j}=Q+y\frac{\xi_{j}}{1+s\xi_{j}}\,, \tag{2}\]
with the three global complex-valued parameters \(Q,y,s\) evolving according to:
\[\dot{Q} =aQ^{2}+bQ+c\,, \tag{3a}\] \[\dot{y} =(b+2aQ)y\,, \tag{3b}\] \[\dot{s} =-ay\,, \tag{3c}\]
and \(\xi_{j}\in\mathbb{C}\) are constants determined by the initial values of the oscillator array variables [15]. Since these equations completely generate the flow of the original system (1), this implies that the dynamics of Eqs. (1) is effectively six dimensional. See Supplemental material [16] for a short verification of the validity of these equations.
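As an illustration, the reduction can be checked numerically. The following minimal Python sketch (all forcings, parameter values, and the hand-rolled RK4 integrator are arbitrary illustrative choices, picked so that this particular run stays bounded) integrates Eq. (1) directly for an array of \(N\) units and, in parallel, the three reduced equations (3) initialised with \(Q(0)=0\), \(y(0)=1\), \(s(0)=0\) and \(\xi_{j}=x_{j}(0)\) (the "identity conversion" discussed below); the two trajectories should agree, through the Mobius transform (2), up to integration error.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
# Arbitrary complex initial states inside the unit disk (keeps this particular run bounded).
x0 = 0.8 * rng.uniform(0.2, 1.0, N) * np.exp(2j * np.pi * rng.uniform(0.0, 1.0, N))

# Arbitrary complex forcings a(t), b(t), c(t); chosen damped so the trajectories stay finite.
a = lambda t: 0.1 * np.exp(0.5j * np.sin(t))
b = lambda t: -1.0 + 0.8j
c = lambda t: 0.3 * np.exp(0.9j * t)

def rk4(f, t, y, dt):
    # One classical Runge-Kutta step for a complex-valued ODE.
    k1 = f(t, y); k2 = f(t + dt / 2, y + dt * k1 / 2)
    k3 = f(t + dt / 2, y + dt * k2 / 2); k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

full = lambda t, x: a(t) * x**2 + b(t) * x + c(t)      # Eq. (1), all N units at once

def reduced(t, v):                                     # Eqs. (3) for (Q, y, s)
    Q, y, s = v
    return np.array([a(t) * Q**2 + b(t) * Q + c(t), (b(t) + 2 * a(t) * Q) * y, -a(t) * y])

dt, T, t = 1e-3, 3.0, 0.0
x = x0.copy()
v = np.array([0, 1, 0], dtype=complex)                 # Q(0)=0, y(0)=1, s(0)=0
xi = x0.copy()                                         # constants fixed by the initial states
while t < T:
    x = rk4(full, t, x, dt)
    v = rk4(reduced, t, v, dt)
    t += dt

Q, y, s = v
x_rec = Q + y * xi / (1 + s * xi)                      # Mobius transform, Eq. (2)
print("max |x_direct - x_reduced| =", np.abs(x - x_rec).max())   # ~ integration error
```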
_Choosing initial conditions._ Since we started with an array of \(N\) variables \(\{x_{j}\}\) and now we describe the system with the three macroscopic variables \(Q,y,s\) and \(N\) constants \(\{\xi_{j}\}\), we have some freedom in choosing the initial conditions. We present two options. _(I) Identity conversion:_ The most straightforward way to determine initial conditions is the "identity conversion", where we require that the variables \(x_{j}\) initially coincide with the constants \(\xi_{j}\) (cf. Eq. (3.7) in [2])
\[Q(0)=0,\quad y(0)=1,\quad s(0)=0,\quad\xi_{j}=x_{j}(0)\,. \tag{4}\]
_(II) Mobius conversion:_ One can also set \(Q(0)\) to a non-zero value. Here we present a class of initial conditions with \(|Q(0)|=1\), for which the relation between \(\xi_{j}\) and \(x_{j}(0)\) is a simple Mobius transform
\[Q(0)=e^{\mathrm{i}\alpha},\;\;y(0)=-2e^{\mathrm{i}\alpha},\;\;s(0)=1,\;\;\xi_ {j}=\frac{e^{\mathrm{i}\alpha}-x_{j}(0)}{e^{\mathrm{i}\alpha}+x_{j}(0)}\,, \tag{5}\]
where \(\alpha\in\mathbb{R}\) is a free angular parameter. In different situations different initial conditions are more appropriate - we will see both options used in the examples below. There are other choices of initial conditions as well; in [2], for example, initial conditions with the constraint \(\sum_{j}\xi_{j}=0\) were used (cf. Eq. (4.12) in [2]).
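Both conversions can be checked directly at \(t=0\): by construction, the right-hand side of (2) must reproduce the initial array values. A minimal sketch (the function names and the value of \(\alpha\) below are illustrative choices, not taken from the text):

```python
import numpy as np

def identity_conversion(x0):
    # Eq. (4): Q(0)=0, y(0)=1, s(0)=0, xi_j = x_j(0).
    return (0j, 1 + 0j, 0j), x0.copy()

def mobius_conversion(x0, alpha=0.3):
    # Eq. (5): Q(0)=e^{i alpha}, y(0)=-2 e^{i alpha}, s(0)=1,
    #          xi_j = (e^{i alpha} - x_j(0)) / (e^{i alpha} + x_j(0)).
    Q0 = np.exp(1j * alpha)
    return (Q0, -2 * Q0, 1 + 0j), (Q0 - x0) / (Q0 + x0)

x0 = np.array([0.3 - 0.2j, -1.1 + 0.5j, 0.7 + 0.7j])    # arbitrary initial array values
for conversion in (identity_conversion, mobius_conversion):
    (Q, y, s), xi = conversion(x0)
    x_rec = Q + y * xi / (1 + s * xi)                    # Eq. (2) evaluated at t = 0
    print(conversion.__name__, np.allclose(x_rec, x0))   # both should print True
```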
_Special cases: real-valued arrays and phase oscillators._ We can consider the special case of real coefficients and real initial conditions: \(a,b,c,x_{j}(0)\in\mathbb{R}\). The dynamics is
then three dimensional, with real-valued Eqs. (3). Consequently, the flow of variables is real-valued, \(x_{j}(t)\in\mathbb{R}\) for all \(t\geq 0\). An example of this special case is a globally coupled array of identical QIF neurons [17; 18]. Individual voltages \(x_{j}(t)\) obey
\[\dot{x}_{j}=x_{j}^{2}+I\,,\qquad\text{if}\ \ x_{j}>x_{\text{thr}}\ \ \text{then}\ \ x_{j}\mapsto x_{\text{reset}}\,, \tag{6}\]
where the voltage threshold and reset values are: \(x_{\text{thr}}=\infty\) and \(x_{\text{reset}}=-\infty\). The current \(I\) can have a constant component \(I_{0}\) associated with intrinsic neuronal dynamics, but it can also represent an external forcing (even noisy) or a global coupling to all other nodes, e.g., the input current generated by \(N\) globally coupled QIF neurons with pulses \(P(u)\) is expressed as: \(I=I_{0}+\epsilon/N\ \sum_{j=1}^{N}P\left(1/x_{j}\right)\). Within our formalism, the voltage spikes naturally occur when the denominator \(1+s\xi_{j}\) in (2) crosses zero. As a result, the voltage in that instance reaches \(+\infty\) upon which it is reset to \(-\infty\). What has to be considered, however, is that depending on the chosen initial conditions, \(Q\) might also diverge. Indeed, this is the case if one chooses initial conditions according to (4): \(Q(0)=s(0)=y(0)-1=0\), in which case additional resetting of variables is needed [19]. However, diverging variables and additional resetting can be avoided by simply choosing the appropriate initial conditions (5): \(Q(0)=\mathrm{i},\ y(0)=-2\mathrm{i},\ s(0)=1\). The constants \(\xi_{j}\) then relate to \(x_{j}\) via a Mobius transform: \(\xi_{j}=\frac{\mathrm{i}-x_{j}(0)}{\mathrm{i}+x_{j}(0)}\). Since \(x_{j}(0)\) take real values, the constants \(\xi_{j}\) are unitary: \(|\xi_{j}|=1\) and can be described by their argument \(\psi_{j}\): \(\xi_{j}=e^{\mathrm{i}\psi_{j}}\). In this case (with \(a,b,c\in\mathbb{R}\)) the following simplification is true:
\[y=-(Q-\bar{Q})s\,. \tag{7}\]
This reduces the dynamical equations (3) to
\[\dot{Q} =aQ^{2}+bQ+c\,, \tag{8a}\] \[\dot{\zeta} =-\mathrm{i}a(Q-\bar{Q})=2a\mathrm{Im}[Q]\,, \tag{8b}\]
where \(\zeta\in\mathbb{R}\) is the argument of \(s=e^{\mathrm{i}\zeta}\). Note that these dynamics are three dimensional since \(Q\) now takes on complex values (even though we still consider \(x_{j}\in\mathbb{R}\)). The transformation (2) is reduced to:
\[x_{j}\ =\ Q-(Q-\bar{Q})\frac{e^{\mathrm{i}(\psi_{j}+\zeta)}}{1+e^{\mathrm{i}( \psi_{j}+\zeta)}}\ =\ \bar{Q}+\frac{(Q-\bar{Q})}{1+e^{\mathrm{i}(\psi_{j}+\zeta)}}\,. \tag{9}\]
The variable \(Q\) remains bounded for all times, no additional resetting is needed and the spikes occur when the denominator \(1+e^{\mathrm{i}(\psi_{j}+\zeta)}\) crosses \(0\), i.e., when \(\zeta=\pi-\psi_{j}\). Such a description of QIF neurons has already been considered in the continuum limit of \(N\to\infty\), cf. Eqs. (31) in [20]. A numerical example of this special case is shown later in Fig. 1. One can arrive at the same dynamics by transforming the QIF neurons into theta neurons via the transformation \(x_{j}=\tan(\theta_{j}/2)\) and then employing the Watanabe-Strogatz theory for phase oscillators [1; 2; 4].
The Watanabe-Strogatz theory [1; 2; 4] (WS) considers phase oscillators with global sinusoidal coupling:
\[\dot{\varphi}_{j}=\omega+2\,\mathrm{Im}[he^{-i\varphi_{j}}]=\omega-i(he^{-i \varphi_{j}}-\bar{h}e^{i\varphi_{j}})\,, \tag{10}\]
where \(\omega\in\mathbb{R}\) is a real-valued instantaneous frequency and \(h\in\mathbb{C}\) any complex forcing. The theory provides a low dimensional description, showing that the evolution of phases \(\varphi_{j}\) can be described by a Mobius transform of 3 global dynamical variables and constants \(\psi_{j}\) determined by initial conditions. Here we show that this is a particular case of transformation (2) and equations (3) under conditions:
\[a=-\bar{c}\,,\quad\mathrm{Re}[b]=0\;,\quad|x_{j}|=1\;,\quad s=\bar{Q}(Qs+y)\,, \tag{11}\]
(cf. Eq. (39) in [21]). Let us express the phase dynamics Eqs. (10) using a complex-valued exponential \(x_{j}=e^{\mathrm{i}\varphi_{j}}\):
\[\dot{x}_{j}=-\bar{h}x_{j}^{2}+\mathrm{i}\omega x_{j}+h\,, \tag{12}\]
to see how this is a special case of the general complex Riccati equation (1) with \(c=-\bar{a}=h\) and \(b=\mathrm{i}\omega\). Now let us write the dynamics of the quantity: \(Y=Qs+y\) under conditions (11):
\[\dot{Y}=(aQ+b-\bar{a}\bar{Q})Y\,. \tag{13}\]
Notice how the quantity \(aQ+b-\bar{a}\bar{Q}\) is purely imaginary, which (using initial condition (4): \(Y(0)=1\)) implies that \(Y\) is fully determined by its complex angle, \(\theta\in\mathbb{R}\), \(Y=e^{\mathrm{i}\theta}\), evolving according to
\[\dot{\theta}=-\mathrm{i}(aQ+b-\bar{a}\bar{Q})=|b|+2\mathrm{Im}[aQ]=\omega- \mathrm{i}(h\bar{Q}-\bar{h}Q)\,. \tag{14}\]
We identify Eq. (14) as the WS angle equation, cf. Eq. (23b) in [4]. Together with Eq. (3a) they form the complete WS description of the dynamics for system (10), cf. Eq. (23a) in [4][22]. One can check that Eq. (3c) for \(s\), under conditions (11) also yields (14).
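This containment of the WS theory can also be probed numerically. In the sketch below (the forcing \(h(t)\), the frequency \(\omega\), and all numerical settings are illustrative choices), the reduced equations (3) are integrated with the phase-oscillator coefficients \(a=-\bar{h}\), \(b=\mathrm{i}\omega\), \(c=h\) and unit-modulus initial states; both \(|x_{j}(t)|\) and \(|Y(t)|=|Qs+y|\) should stay equal to one up to integration error.

```python
import numpy as np

N, omega = 6, 1.3
h = lambda t: 0.4 * np.exp(0.7j * t) + 0.1               # arbitrary complex forcing
phi0 = 2 * np.pi * np.arange(N) / N + 0.2
x0 = np.exp(1j * phi0)                                   # unit-modulus initial states

def rhs(t, v):
    Q, y, s = v
    a, b, c = -np.conj(h(t)), 1j * omega, h(t)           # conditions (11)
    return np.array([a * Q**2 + b * Q + c, (b + 2 * a * Q) * y, -a * y])   # Eqs. (3)

dt, T, t = 1e-3, 5.0, 0.0
v = np.array([0, 1, 0], dtype=complex)                   # identity conversion (4)
while t < T:                                             # classical RK4 steps
    k1 = rhs(t, v); k2 = rhs(t + dt / 2, v + dt * k1 / 2)
    k3 = rhs(t + dt / 2, v + dt * k2 / 2); k4 = rhs(t + dt, v + dt * k3)
    v = v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

Q, y, s = v
x = Q + y * x0 / (1 + s * x0)                            # Eq. (2) with xi_j = x_j(0)
print("max | |x_j| - 1 | =", np.abs(np.abs(x) - 1).max())
print("| |Qs + y| - 1 |  =", abs(abs(Q * s + y) - 1))
```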
_Examples._ We now present some specific systems where the dimensionality reduction can be applied and validate its exactness with numerical simulations. First (I), a known and relatable example of pulse-coupled real-valued QIF neurons. It is known that this system possesses low-dimensional dynamics since the QIF model can be transformed into a \(\theta\)-neuron to which the WS theory applies. Our formalism provides not only a new perspective on this fact, but also justifies the voltage resetting at infinity by viewing the model as a limiting case of the extended complex model. Next, we show two examples for which no low-dimensional description was known until now. Example (II) is the complex generalization of the QIF model, where we simply allow the "voltage" variables to attain complex values. Example (III) concerns a complex-valued generalization of phase oscillators; specifically, we choose overdamped Josephson
junctions. An additional example is found in the Supplemental material [16] of how our description applies to infinite ensembles in the thermodynamic limit, and how particular integrals can simplify with set initial conditions.
_Example (I): real QIF model._ We consider \(N=8\) excitable QIF neurons with \(I_{0}=-0.001\), interacting via Gaussian pulses \(P(u)=\sqrt{\sigma/\pi}\exp(-\sigma u^{2})\) where \(\sigma=5\) and coupling strength \(\epsilon=2.3\). Initial conditions are \(\{x_{j}(0)\}=\{-(N-1)/2+j\}\), \(j=1,...,N\). These parameters yield chaotic dynamics; see Fig. 1. We integrate the reduced three dimensional system using Eqs. (8) and compare the resulting trajectories with the ones obtained from the \(N=8\) coupled voltage equations (6); we see an exact overlap, as expected.
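A minimal sketch of this example is given below. It uses the parameters quoted above; the integrator, step size, and integration time are arbitrary choices, and a full comparison would additionally integrate (6) with explicit resets, as done for Fig. 1. The three-dimensional system (8) is driven by the self-consistent current \(I\), with the voltages reconstructed from (9).

```python
import numpy as np

N, I0, sigma, eps = 8, -0.001, 5.0, 2.3
x0 = np.array([-(N - 1) / 2 + j for j in range(1, N + 1)])    # real initial voltages

# Initial conditions (5) with alpha = pi/2: Q(0) = i, s(0) = 1 (i.e. zeta(0) = 0),
# and unit-modulus constants xi_j = e^{i psi_j}.
psi = np.angle((1j - x0) / (1j + x0))

def voltages(Q, zeta):
    # Eq. (9); the result is real up to round-off, hence the np.real.
    return np.real(np.conj(Q) + (Q - np.conj(Q)) / (1 + np.exp(1j * (psi + zeta))))

def current(x):
    # I = I0 + eps/N * sum_j P(1/x_j), Gaussian pulses P(u) = sqrt(sigma/pi) exp(-sigma u^2).
    u = 1.0 / np.where(np.abs(x) < 1e-9, 1e-9, x)             # guard against division by zero
    return I0 + eps / N * np.sum(np.sqrt(sigma / np.pi) * np.exp(-sigma * u**2))

def rhs(v):
    Q, zeta = v
    I = current(voltages(Q, np.real(zeta)))
    return np.array([Q**2 + I, 2 * np.imag(Q)])               # Eqs. (8) with a=1, b=0, c=I

dt, T = 1e-3, 30.0
v = np.array([1j, 0.0], dtype=complex)                        # (Q(0), zeta(0))
for _ in range(int(T / dt)):                                   # classical RK4 steps
    k1 = rhs(v); k2 = rhs(v + dt * k1 / 2)
    k3 = rhs(v + dt * k2 / 2); k4 = rhs(v + dt * k3)
    v = v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

Q, zeta = v[0], np.real(v[1])
print("Q(T) =", Q, "  voltages x_j(T):", voltages(Q, zeta))
```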
For ensembles of pure phase oscillators the Kuramoto order parameter is defined as \(Z_{1}=1/N\sum_{j=1}^{N}e^{\mathrm{i}\varphi_{j}}\), where \(|Z_{1}|\) quantifies the order in the system. This is easily generalized to the full complex plane by simply invoking the mean [23]:
\[Z_{1}=\frac{1}{N}\sum_{j=1}^{N}x_{j}\,, \tag{15}\]
which is neatly expressed with dynamical variables \(Q,y,s\) and constants \(\xi_{j}\),
\[Z_{1}=Q+y\frac{1}{N}\sum_{j=1}^{N}\frac{\xi_{j}}{1+s\xi_{j}}\,. \tag{16}\]
_Example (II): complex QIF model._ Let us consider a simple generalization of the QIF neurons to the complex plane. If the voltages \(x_{j}\) start off the real axis, then they never diverge and there is no need for resetting conditions in (6). Let us consider such generalized QIF neurons, globally coupled via the first moment \(Z_{1}\) (15),
\[\dot{x}_{j}=x_{j}^{2}+I_{0}\ \ +\epsilon\left(Z_{1}-x_{0}^{+}\right)\,, \tag{17}\]
where \(I_{0}\) is the intrinsic input current and \(x_{0}^{+}\) the positive fixed point of individual neurons (which mathematically could be absorbed in the current \(I_{0}\mapsto I_{0}-\epsilon x_{0}^{+}\)). In the form of the initial Riccati equation (1) this model translates to the parameters: \(a=1\), \(b=0\), \(c=I_{0}+\epsilon(Z_{1}-x_{0}^{+})\). On the real line the behavior of an individual unit \(x_{j}\) tends towards infinity in finite time and hence one needs a reset rule (6) as well as an implementation of a pulse during that event. However, if the dynamics occurs instead in the complex plane, the trajectory of this unit naturally oscillates around a fixed point. Indeed, we find two fixed points (they can be degenerate) of the single unit dynamics: \(x_{0}^{\pm}=\pm\sqrt{-I_{0}}\), which lie on the imaginary axis for \(I_{0}>0\). When we couple several units together, they remain oscillatory. In our example we use \(N=8\) oscillators with \(I_{0}=1\), \(\epsilon=-5\) and initial conditions: \(\{x_{j}(0)\}=\{x_{0}^{+}+\frac{(1+j)^{2}}{20}\exp(\mathrm{i}\frac{\pi}{2N}j)\}\), \(j=1,...,N\). The dynamics settles into periodic motion with non-trivial limit cycles, as shown in Fig. 2.
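This example can likewise be run through the reduced description. The sketch below integrates Eqs. (3) with \(a=1\), \(b=0\), \(c=I_{0}+\epsilon(Z_{1}-x_{0}^{+})\), evaluating \(Z_{1}\) via (16) at every step; the integration settings are arbitrary choices, and since \(y\) and \(s\) are not individually bounded, long runs may require smaller steps or re-initialisation.

```python
import numpy as np

N, I0, eps = 8, 1.0, -5.0
x0p = 1j * np.sqrt(I0)                                   # fixed point sqrt(-I0); here +i
j = np.arange(1, N + 1)
x_init = x0p + (1 + j)**2 / 20 * np.exp(1j * np.pi / (2 * N) * j)
xi = x_init.copy()                                       # identity conversion (4)

def Z1(Q, y, s):
    return Q + y * np.mean(xi / (1 + s * xi))            # Eq. (16)

def rhs(v):
    Q, y, s = v
    c = I0 + eps * (Z1(Q, y, s) - x0p)                   # coupling of Eq. (17)
    return np.array([Q**2 + c, 2 * Q * y, -y])           # Eqs. (3) with a=1, b=0

dt, T = 1e-3, 10.0
v = np.array([0, 1, 0], dtype=complex)                   # Q(0)=0, y(0)=1, s(0)=0
for _ in range(int(T / dt)):                              # classical RK4 steps
    k1 = rhs(v); k2 = rhs(v + dt * k1 / 2)
    k3 = rhs(v + dt * k2 / 2); k4 = rhs(v + dt * k3)
    v = v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

Q, y, s = v
x = Q + y * xi / (1 + s * xi)                            # current unit states via Eq. (2)
print("Z1(T) =", Z1(Q, y, s))
```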
Figure 1: \(N=8\) quadratic integrate-and-fire neurons with global coupling interacting via Gaussian pulses. The dynamics is exactly described by the low dimensional Eqs. (8). Left panel: time series of the input current \(I(t)\), alongside the individual neurons’ firing events (top). Right panel: trajectory of the macroscopic variable \(Q(t)\) determining voltages via (2).
Figure 2: Complex generalization of coupled quadratic integrate-and-fire neurons (17). Unlike the real QIF model, the voltage resetting in this complex generalization is redundant since the dynamics everywhere (except the special case of real line with real coupling) loops back and stays finite; see also Example (I) in Fig. 1 where voltages diverge, but the resetting naturally results from the transformation (2). \(N=8\) units settle into periodic motion with a non-trivial limit cycle. Trajectories of oscillators on the limit cycle in the complex plane depicted with colored lines. Initial conditions are marked with (color-coded) points. The fixed point \(x_{0}^{+}\) is emphasized with a black dot.
_Example (III): complex generalization of phase oscillators._ Now let us generalize phase oscillators to include a free amplitude. Consider the phase dynamics equation in complex exponential form (12), but allow the oscillators to have an amplitude \(r_{j}\) different from \(1\): \(x_{j}=r_{j}e^{i\varphi_{j}}\). This choice results in a special family of phase-amplitude oscillators:
\[\dot{\varphi}_{j} =\omega+\left(\frac{1}{r_{j}}+r_{j}\right)\mathrm{Im}[he^{-i\varphi_{j}}]\,, \tag{18a}\] \[\dot{r}_{j} =\left(1-r_{j}^{2}\right)\mathrm{Re}[he^{-i\varphi_{j}}]\,. \tag{18b}\]
Note that \(r_{j}=1\) defines an invariant subspace that oscillators cannot cross: oscillators that start on the inside of the unit disk stay inside the disk forever. For this example we consider a complex generalization of coupled Josephson junctions (cf. Eq. (3.16) in [2]),
\[a=-c=0.75\;,\quad b=\mathrm{i}-0.7\mathrm{i}\ \mathrm{Im}[Z_{1}]\,, \tag{19}\]
where \(Z_{1}\) is the generalized Kuramoto order parameter (15). We use \(N=8\) units with initial conditions: \(\{x_{j}(0)\}=\{-\mathrm{i}\sin(\frac{\pi}{N}j)\exp(\mathrm{i}\frac{2\pi}{N}j)\}\), \(j=1,...,N\). Just like in the case of pure phase oscillators, for this parameter choice we observe chaos, see Fig. 3, cf. Fig. 4c in [2] and Fig. 2 in [24].
_Discussion._ Our study presents a novel low-dimensional description that generalizes the well-known WS [1; 2] theory to arbitrary arrays of complex Riccati equations, and so includes the real QIF model as a limiting case. This exact formalism enables the consideration of a whole new class of complex oscillatory models, including a special case of phase-amplitude oscillators, opening up many possibilities for investigating coupled oscillators in natural and artificial systems. To showcase its correctness and applicability, we provide numerical simulations for several interesting examples.
The new formalism is effectively six dimensional, as compared to the three dimensional WS theory - this is not surprising as we made the generalization from real \(\mathbb{R}\) to complex \(\mathbb{C}\) variables. However, it should be noted that the generalization does not simply involve allowing the dynamical variables to take complex values; rather, the equations we obtain are fundamentally different from those described by WS theory. What is common with the WS theory is that the overarching motif is the Mobius transform between initial values of the dynamical variables \(x_{j}\) and the constants \(\xi_{j}\). As was explored later [4], cross-ratios of dynamical variables \(C_{j}=\frac{(x_{j}-x_{j+2})}{(x_{j}-x_{j+3})}\frac{(x_{j+1}-x_{j+3})}{(x_{j+1 }-x_{j+2})}\) are invariant under the Mobius transform and thus constants of motion. Where in the WS context these cross-ratios are real (even though \(x_{j}=e^{\mathrm{i}\varphi_{j}}\in\mathbb{C}\)), here they can be complex: \(C_{j}\in\mathbb{C}\). Just as was shown in [4], there are \(N-3\) independent ratios \(C_{j}\), \(j=1,...,N-3\). Since the initial problem contains \(N\) complex variables (1), and there are \(N-3\) complex constants of motion, this leaves three complex variables to describe dynamics (3), thus confirming that the description is six dimensional.
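The invariance of the cross-ratios is easy to confirm numerically. The sketch below integrates Eq. (1) and compares the \(C_{j}\) at the initial and final times; the forcing is an arbitrary, deliberately damped choice so that this particular run stays bounded, whereas the conservation itself holds for any forcing along which the solution exists.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
x = 0.8 * rng.uniform(0.2, 1.0, N) * np.exp(2j * np.pi * rng.uniform(0.0, 1.0, N))

a = lambda t: 0.1 * np.exp(0.4j * np.sin(t))             # arbitrary, mildly varying forcings
b = lambda t: -1.2 + 0.5j
c = lambda t: 0.3 * np.exp(1.1j * t)

def cross_ratios(x):
    # C_j = ((x_j - x_{j+2}) / (x_j - x_{j+3})) * ((x_{j+1} - x_{j+3}) / (x_{j+1} - x_{j+2}))
    return np.array([(x[j] - x[j + 2]) / (x[j] - x[j + 3])
                     * (x[j + 1] - x[j + 3]) / (x[j + 1] - x[j + 2]) for j in range(N - 3)])

C0 = cross_ratios(x)
f = lambda t, x: a(t) * x**2 + b(t) * x + c(t)           # Eq. (1)
dt, T, t = 1e-3, 5.0, 0.0
while t < T:                                             # classical RK4 steps
    k1 = f(t, x); k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2); k4 = f(t + dt, x + dt * k3)
    x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

print("max drift of the cross-ratios:", np.abs(cross_ratios(x) - C0).max())
```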
For real phase oscillators in the thermodynamic limit, the inclusion of heterogeneity [5; 25] or noise [26] leads to complex-valued effective frequencies. Our description (3) clearly applies to those scenarios as well, as we have shown in previous work [20; 21]. This remarkable equivalence between noise/heterogeneity and complex-valued frequencies can be studied further with our approach.
The generalization to complex numbers is substantial and provides room for qualitatively different dynamics. A complex extension of the Kuramoto model has recently been explored in [27]; but our description applies to a much broader class of coupled systems described by complex Riccati arrays defined in Eqs. (1). This unlocks a whole spectrum of 2D dynamical systems, including a special case of phase-amplitude oscillators (18) we showcased here. The examples considered here are simply complex generalizations of known models: Example (II) a generalization of QIF and Example (III) generalization of phase oscillators. One can explore systems that go beyond just generalizing known models to complex initial conditions, and really consider complex units with complex coupling, thus tapping into the rich 2D dynamics of intrinsic units defined by Riccati equations (1).
Moreover, the new description provides a fresh perspective and insights on the relationship between phase oscillators and QIF neurons. In fact, the voltage resetting in QIF neurons arises naturally in our framework, providing additional motivation for its use in studying neuronal dynamics and development of new models.
Figure 3: Array of \(N=8\) Josephson junctions (19) with complex initial conditions inside the unit disk (colored points) generalize pure phase oscillators bound on the unit circle, see Fig. 4c in [2] and Fig. 2 in [24]. Trajectory of one oscillator in the complex plane is shown in orange. Small blue scatter points depict the value of \(Z_{1}\) on the Poincare section where the oscillator on the unit circle passes phase \(\pi/2\).
Our findings have significant implications for both theoretical and practical research. The new description opens up many avenues for investigating the dynamics of complex systems. Several ideas for future work directly follow from our framework, and more are expected to arise from the research community. We briefly outline three ideas here. (I) It is well known that the general Riccati equation can be transformed into a linear second order ODE [28], which means that our formalism applies there as well. A more detailed study will be performed in a forthcoming work. (II) Most likely similar descriptions exist for higher dimensional systems as well; both for systems with larger spatial dimensionality [29], \(x_{j}\in\mathbb{R}^{n}\), \(n>1\), like the higher-dimensional generalization of the Watanabe-Strogatz theory [30], as well as allowing for states \(x_{j}\) that belong to higher number systems, such as quaternions or octonions. (III) Throughout this work we strictly considered identical oscillators, i.e., at all times every oscillator felt the same global forcings exerted by \(a,b,c\). Thus, the oscillators only differed by their states \(x_{j}\) which are determined by the initial conditions. However, one may consider adding heterogeneity in the forces by assuming that \(a,b,c\) in some way differ between oscillators. Just like Ott and Antonsen incorporated Lorentzian inhomogeneities in frequencies [5], one can add them to our formalism in the thermodynamic limit as well. It is even likely that particular heterogeneity can be incorporated into finite arrays.
_Acknowledgments._ We thank Arkady Pikovsky for useful discussions. We gratefully acknowledge financial support from the Royal Swedish Physiographic Society of Lund and the DFG (Grant PI 220/21-1).
|
2303.04217
|
AI for Science: An Emerging Agenda
|
This report documents the programme and the outcomes of Dagstuhl Seminar
22382 "Machine Learning for Science: Bridging Data-Driven and Mechanistic
Modelling". Today's scientific challenges are characterised by complexity.
Interconnected natural, technological, and human systems are influenced by
forces acting across time- and spatial-scales, resulting in complex
interactions and emergent behaviours. Understanding these phenomena -- and
leveraging scientific advances to deliver innovative solutions to improve
society's health, wealth, and well-being -- requires new ways of analysing
complex systems. The transformative potential of AI stems from its widespread
applicability across disciplines, and will only be achieved through integration
across research domains. AI for science is a rendezvous point. It brings
together expertise from AI and application domains; combines
modelling knowledge with engineering know-how; and relies on collaboration
across disciplines and between humans and machines. Alongside technical
advances, the next wave of progress in the field will come from building a
community of machine learning researchers, domain experts, citizen scientists,
and engineers working together to design and deploy effective AI tools. This
report summarises the discussions from the seminar and provides a roadmap to
suggest how different communities can collaborate to deliver a new wave of
progress in AI and its application for scientific discovery.
|
Philipp Berens, Kyle Cranmer, Neil D. Lawrence, Ulrike von Luxburg, Jessica Montgomery
|
2023-03-07T20:21:43Z
|
http://arxiv.org/abs/2303.04217v1
|
# AI for Science: An Emerging Agenda
###### Abstract
This report documents the programme and the outcomes of Dagstuhl Seminar 22382 "Machine Learning for Science: Bridging Data-Driven and Mechanistic Modelling".
Today's scientific challenges are characterised by complexity. Interconnected natural, technological, and human systems are influenced by forces acting across time- and spatial-scales, resulting in complex interactions and emergent behaviours. Understanding these phenomena -- and leveraging scientific advances to deliver innovative solutions to improve society's health, wealth, and well-being -- requires new ways of analysing complex systems.
The transformative potential of AI stems from its widespread applicability across disciplines, and will only be achieved through integration across research domains. AI for science is a rendezvous point. It brings together expertise from AI and application domains; combines modelling knowledge with engineering know-how; and relies on collaboration across disciplines and between humans and machines. Alongside technical advances, the next wave of progress in the field will come from building a community of machine learning researchers, domain experts, citizen scientists, and engineers working together to design and deploy effective AI tools.
This report summarises the discussions from the seminar and provides a roadmap to suggest how different communities can collaborate to deliver a new wave of progress in AI and its application for scientific discovery.
## Summary
Today's scientific challenges are characterised by complexity. Interconnected natural, technological, and human systems are influenced by forces acting across time- and spatial-scales, resulting in complex interactions and emergent behaviours. Understanding these phenomena -- and leveraging scientific advances to deliver innovative solutions to improve society's health, wealth, and well-being -- requires new ways of analysing complex systems.
Artificial intelligence (AI) offers a set of tools to help make sense of this complexity. In an environment where more data is available from more sources than ever before -- and at scales from the atomic to the astronomical -- the analytical tools provided by recent advances in AI could play an important role in unlocking a new wave of research and innovation. The term AI today describes a collection of tools and methods, which replicate aspects of intelligence in computer systems. Many recent advances in the field stem from progress in machine learning, an approach to AI in which computer systems learn how to perform a task, based on data.
Signals of the potential for AI in science can already be seen in many domains. AI has been deployed in climate science to investigate how Earth's systems are responding to climate change; in agricultural science to monitor animal health; in development studies, to support communities to manage local resources more effectively; in astrophysics to understand the properties of black holes, dark matter, and exoplanets; and in developmental biology to map pathways of cellular development from genes to organs. These successes illustrate the wider advances that AI could enable in science. In so doing, these applications also offer insights into the science of AI, suggesting pathways to understand the nature of intelligence and the learning strategies that can deliver intelligent behaviour in computer systems.
Further progress will require a new generation of AI models. AI for science calls for modelling approaches that can: facilitate sophisticated simulations of natural, physical, or social systems, enabling researchers to use data to interrogate the forces that shape such systems; untangle complicated cause-effect relationships by combining the ability to learn from data with structured knowledge of the world; and work adaptively with domain experts, assisting them in the lab and connecting data-derived insights to pre-existing domain knowledge. Creating these models will disrupt traditional divides between disciplines and between data-driven and mechanistic modelling.
The roadmap presented here suggests how these different communities can collaborate to deliver a new wave of progress in AI and its application for scientific discovery. By coalescing around the shared challenges for AI in science, the research community can accelerate technical progress, while deploying tools that tackle real-world challenges. By creating user-friendly toolkits, and implementing best practices in software and data engineering, researchers can support wider adoption of effective AI methods. By investing in people working at the interface of AI and science - through skills-building, convening, and support for interdisciplinary collaborations -- research institutions can encourage talented researchers to develop and adopt new AI for science methods. By contributing to a community of research and practice, individual researchers and institutions can help share insights and expand the pool of researchers working at the interface of AI and science. Together, these actions can drive a paradigm shift in science, enabling progress in AI and unlocking a new wave of AI-enabled innovations.
The transformative potential of AI stems from its widespread applicability across disciplines, and will only be achieved through integration across research domains. AI for science is a rendezvous point. It brings together expertise from AI and application domains; combines modelling knowledge with engineering know-how; and relies on collaboration across disciplines and between humans and machines. Alongside technical advances, the next wave of progress in the field will come from building a community of machine learning researchers, domain experts, citizen scientists, and engineers working together to design and deploy effective AI tools.
## 1 Introduction: bridging data driven and mechanistic modelling
The 21st century has been characterised as the century of complexity.1 Shifting social, economic, environmental, and technological forces have created increasingly interconnected communities, affected by 'wicked' problems in domains such as health, climate, and economics [1]. This complexity is reflected in today's scientific agenda: whether in natural, physical, medical, environmental, or social sciences, researchers are often interested in the dynamics of complex systems and the phenomena that emerge from them.
Footnote 1: This quote is attributed to Stephen Hawking, in an interview with the San Jose Mercury News in January 2000.
Science has always proceeded through the collection of data. Through their experiments and observations, researchers collect data about the world, use this data to develop models or theories of how the world works, make predictions from those models, then test those predictions, leading to further refinements to the model and the underpinning theory. Digitisation of daily activities--in the lab, and elsewhere - means that researchers today have access to more data from a greater range of sources than ever before. In parallel, more sophisticated tools to collect data have opened new scales of scientific inquiry, from detailed patterns of gene expression to light signals from other galaxies. Data proliferation is both a signal of the complexity of today's environment, and an opportunity to make sense of such complexity.
Advances in artificial intelligence (AI) have produced new analytical tools to make sense of these data sources. The term 'AI' today describes a collection of methods and approaches to create computer systems that can perform tasks that would typically be associated with 'intelligent' behaviour in living systems.2 In this document, the term AI is used broadly, to refer to algorithmic decision-making systems that combine data, mathematical models, and compute power to make predictions about the world.
Footnote 2: While not the only branch of the field, machine learning is the approach to AI that has delivered many of the recent advances in AI. Machine learning is an approach to AI in which models process data, learning from that data to identify patterns or make predictions. In this document, the terms machine learning and AI are used interchangeably.
AI is already unlocking progress across research disciplines:
* In Earth sciences, it is helping researchers investigate how different parts of the Earth's biosphere interact, and are affected by climate change.3 Footnote 3: These examples are inspired by talks given at the Dagstuhl seminar; these are provided later in the document. This example is inspired by Markus Reichstein’s talk.
* In climate science, it supports modelling efforts to reconstruct historical climate patterns, enabling more accurate predictions of future climate variability.4 Footnote 4: This example is inspired by Dina Machuve’s talk.
* In agricultural science, it is helping farmers access faster diagnoses of animal diseases, enabling more effective responses.5 Footnote 5: This example is inspired by Siddharth Mishra-Sharma’s talk.
* In astrophysics, it is advancing understandings of the nature of dark matter and its role in the Universe.6 Footnote 6: This example is inspired by Maren Buttner’s talk.
* In developmental biology, it is generating insights into the genetic processes that shape how cells develop and differentiate into specialist roles.7 Footnote 8: This example is inspired by Christian Igel’s talk.
* In environmental science, it allows researchers to analyse the features of natural environments more accurately, aiding land and resource managers.8
* In neuroscience, it can help model how different neural circuits fire to deliver different behaviours in animals.9
Footnote 9: This example is inspired by Jakob Macke’s talk.
The diversity of these successes illustrates the transformative potential of AI for research across the natural, physical, social, medical, and computer sciences, arts, humanities, and engineering. By enabling researchers to extract insights from a greater volume of data, drawn from a wider variety of sources, and operating across multiple dimensions and scales, AI could unlock new understandings of the world. In so doing, AI could influence the conduct of science itself. AI-enabled analytical tools mean researchers can now generate sophisticated simulations of natural or physical systems, creating 'digital siblings' of real-world systems that can be used for experimentation and analysis. Machine learning models that combine the ability to learn adaptively from data with the ability to make structured predictions reflecting the laws of nature can help researchers untangle the web of cause-effect relationships that drive the dynamics of complex systems. AI-assisted laboratory processes could increase the efficiency of experiments, and support researchers to develop and test new hypotheses.
Achieving this potential will require advances in the science of AI, the design of AI systems that serve scientific goals, and the engineering of such systems to operate safely and effectively in practice. These advances in turn rely on interdisciplinary collaborations that connect domain expertise to the development of machine learning models, and feed the insights generated by such models back into the domain of study. As interest in the potential of AI to drive a new wave of research grows, the challenge for the field is to identify technical and operational strategies to realise this potential. In the process, new questions arise about the future of 'AI for science'; whether this will emerge as a distinct field, characterised by its own research agenda and priorities, or whether its benefits can be best achieved through separate, domain-focused sub-fields, which seek to integrate AI into business-as-usual across research disciplines.
In response, this document proposes a roadmap for 'AI for science'. Synthesising insights from recent attempts to deploy AI for scientific discovery, it proposes a research agenda that can help develop more powerful AI tools and the areas for action that can provide an enabling environment for their deployment. It starts by exploring core research themes - in simulation, causality, and encoding domain knowledge - then draws from these ideas to propose a research agenda and action plan to support further progress. The ideas presented are inspired by discussions at 'Machine Learning for Science: Bridging Mechanistic and Data Driven Modelling Approaches', a Dagstuhl seminar convened in September 2022 (see Annex 1). Abstracts from the talks given at the seminar are shown throughout this document. These talks and the discussions they provoked should be credited for the ideas that have shaped it. Thank you to the speakers and participants for their thoughtful contributions to both the seminar and the development of this work.
## 2 Snapshots of AI in science
Across domains, AI is being deployed to advance the frontiers of science. The snapshots below introduce some current areas of research in AI for science, and explore the issues raised by these research projects. Across these snapshots, some common themes emerge:
* How can researchers most effectively combine observations, data-driven models, and physical models to enhance understanding of complex systems? To answer this question, methods are needed to integrate different types of model, operating across different levels of granularity, while managing the impact of the uncertainties that emerge when a machine learning model is integrated in a wider system. New approaches to simulation and emulation can support progress in tackling these challenges, alongside new strategies for examining the robustness or performance of machine learning models.
* How do the outputs from an AI system align with what researchers already know about the world, and how can such systems help uncover causal relationships in data? Advances in causal machine learning are needed to connect the laws and principles already established in many areas of research with data-driven methods.
* How can AI be integrated into the scientific process safely and robustly? Effective integration will rely on the ability to encode domain knowledge in AI systems, the design of interfaces that facilitate interaction between humans and AI, and the development of mechanisms for sharing knowledge and know-how about how to use AI in practice.
### In Earth sciences
**The Earth is a complex system**,10 comprised of terrestrial, marine, and atmospheric biospheres that interact with each other and are shaped by biological, chemical, and physical processes that exchange energy across scales from the molecular to the planetary. It is also a unique system: researchers have yet to discover other planets that replicate its dynamics. Studies of the Earth system therefore rely on observations and physical models, which describe the dynamics of energy exchange from first principles and use those principles to build models of the Earth's sub-systems. As climate change perturbs this complex system, it is increasingly important to have accurate models that can be used to analyse how the Earth will respond to increasing carbon dioxide levels. The challenge for Earth system science is to build more complex models that represent the web of relationships between biospheres under changing conditions, without generating overwhelming uncertainties and while generating actionable insights that can be used by individuals, organisations, and policymakers to understand the localised impact of changing environmental conditions [2].
Footnote 10: This example is inspired by Markus Reichstein’s talk, the abstract for which is provided later in this document.
For example, how much carbon dioxide is absorbed by different biospheres can be affected by diverse factors including volume and type of vegetation cover, water and drought stress in different areas, and local temperature, which have implications for how carbon dioxide contributes to climate change. Researchers have access to data that describes local uptake of carbon dioxide by some ecosystems, such as tropical rainforest, European beech forest, or Mediterranean savanna, for example, but lack sufficient observational coverage to scale from these local observations to accurate global representations of carbon exchange. One response to this challenge is to leverage data-driven models to knit together the different mechanistic models that describe (for example) carbon, water, and energy cycles in different biospheres.
By starting with observational data and combining this with physics-informed modelling, researchers can leverage machine learning to create simulations that can generate new understandings of how complex systems function. Taking this approach, the FLUXNET project combines observed data on carbon emissions from different sources to generate a data-driven picture of global carbon dynamics. By combining data across scales to establish a statistical
model of global carbon dynamics, this project can generate simulations of how the Earth breathes [3]. The ability to integrate across scales and combine models of different Earth sub-systems can also contribute to wider efforts to build a 'digital twin' of the Earth, with the aim of better understanding the implications of climate change across biospheres and communities.
**As the Earth's climate changes**,11 researchers anticipate that local environmental conditions will change and extreme weather events will increase. Understanding the impact of these changes is important for those seeking to develop appropriate responses, for example developing environmental management plans or planning human activities.
Footnote 11: This example is inspired by Markus Reichstein’s talk, the abstract for which is provided later in this document.
How a landscape responds to changing environmental conditions will vary depending on the local climate, characteristics of the terrain (vegetation type, for example), and human activities in the area. Under changing climate conditions, as extrapolation beyond known limits becomes necessary, the assumptions or abstractions that form the basis of a model can be rendered invalid. Relying solely on either mechanistic descriptions of the system - the impact of temperature on plant growth, for example12 - or statistical models could result in inaccuracies. Machine learning can help respond to this challenge, through the creation of hybrid models that combine an understanding of the physical laws with model parameters learned from data. Researchers often already have access to known physical parameters for a system (for example, the equations that govern how water evaporates to air). These parameters can be fed into a machine learning model that will learn other patterns. Known equations specify the chemical and physical processes; machine learning can then help elucidate the other biological forces at play. Integrating this physical structure in the model helps make it both more interpretable to the domain scientists and more reliable in its predictions. The resulting model can accurately forecast the impact of climate change on the features of local landscapes, operating within the bounds set by the laws of physics [5].
**Ice loss13** has been the greatest contributor to sea-level rise in recent decades [6]. Large volumes of fresh water are stored as ice: NASA estimates that if all the world's glaciers and ice sheets melted, sea levels globally would rise by over 60 metres, flooding all coastal cities [7]. Researchers can estimate the contribution that melting ice makes to sea level rise through mechanistic models that describe the underlying physical processes (that turn ice to water) and through observational data about the velocity of ice sheet movement. Machine learning could offer a toolkit to make these models more accurate, connecting ice sheet models to ocean and atmospheric models, and integrating different data types in hybrid mechanistic-data models.
Footnote 12: Under conditions of extreme temperature, patterns of stomatal opening and closing in plants changes. See, for example [4].
Footnote 13: This example is inspired by Ieva Kazlauskaite’s talk, the abstract for which is provided later in this document.
Efforts to build such models, however, illustrate the complexity of designing tools to meet domain needs. Projects in this space have considered emulating the ice sheet system - or its individual components - to see if models could be run faster; though successful methodologically, it has not been clear that such efforts address a clear research need. Another approach is to use machine learning to streamline simulations, for instance by identifying the most effective level of granularity for different models (is a spatial breakdown of 5km or 10km more interesting?). An important lesson from such collaborations is the specificity of domain needs: machine learning is a tool for research, but just because researchers have a hammer, does not mean every research problem is a nail. Effectively deploying machine learning for research requires both suitable AI toolkits and an understanding of which toolkits are best deployed for which challenges.
### In environmental and agricultural sciences
**Poultry farming14** is a vital source of income and food for many communities in Tanzania. 4.6 million households in the country raise approximately 36 million chickens, but despite the importance of this activity, poultry farming suffers from relatively low productivity due to the prevalence of disease. Efforts to tackle poultry diseases such as Salmonella, Newcastle disease, and coccidiosis are held back by the accessibility of diagnostic processes and lack of data. Diagnosis currently requires lab analysis of droppings, which can take 3-4 days. Once disease is confirmed, farmers often lose their entire farm's flock.
Footnote 14: This example is inspired by Dina Machuve’s talk, the abstract for which is provided later in this document.
Farm-level tests and diagnostics could increase the effectiveness of disease surveillance and treatment, giving farmers rapid access to information about the diseases affecting their flock and action plans about how to manage outbreaks. With mobile phones ubiquitous across the country - there are almost 49 million mobile phone subscriptions in Tanzania - there are opportunities for new uses of local data to detect disease outbreaks.
By collecting images of droppings from farms, researchers have been creating a dataset to train a machine learning system that can identify the symptoms of these diseases. Fecal images are taken on farms, annotated with diagnostic information from agricultural disease experts and the results of lab tests, then used to train an image recognition system to automate the diagnosis process [8]. System robustness and accuracy is vital, given the significant implications of a positive diagnosis, and careful design is necessary to incentivise farmers to make use of the app.
Collaboration with experts from different domains is central to developing this system. Input from farmers is needed to collect data and test the system in practice; from veterinary pathologists to help annotate the data and ensure the system's accuracy; and from technologists to develop an AI system that is effective in deployment as an app on mobile phones. These collaborations also open opportunities for new forms of citizen science, as farmers and local communities are engaged in efforts to develop and maintain an open toolkit for disease diagnosis, providing a gateway for communities to take ownership of machine learning as a tool to serve their needs.
**Trees and forests15** play a crucial role in maintaining healthy ecosystems. Despite this, an estimated ten million hectares of forest are lost globally each year due to deforestation, with only around half of this balanced by tree-planting efforts [9]. Africa experienced an annual rate of forest loss of approximately 3.9 million hectares per year from 2010-2020. This loss has implications for biodiversity and people, with trees a vital contributor to ecosystem services such as carbon storage, food provision, and shelter. In this shifting landscape, understanding the number and distribution of trees is important for the development of forestry management plans and for understanding the carbon storage implications of changes to land use.
Footnote 15: This example is inspired by Christian Igel’s talk, the abstract for which is provided later in this document.
To estimate the number and biomass of trees in the West African Sahara and Sahel, researchers have used satellite imagery of 90,000 trees from 400 sampling sites to create a labelled dataset for use in machine learning. Using an image segmentation tool to identify the location of trees, an automated system was able to count the number of trees, with domain experts guiding the system to distinguish trees from surrounding vegetation. This tree count can then be used to estimate the biomass of trees in the area, and predict the amount of carbon they store; the prediction is generated using allometric calculations, which translate the properties of the tree to its carbon storage potential. In this approach, machine learning measures the properties of the ecosystem from satellite images, then these properties are used to feed mechanistic models that describe the ecosystem's physical functions [10]. This opens the possibility of new tools to estimate tree cover, leveraging these insights for more effective environmental management. However, in the process, care is needed to manage the type and nature of the uncertainties created by different modelling approaches. Different allometric models, for example, can be more or less suited to different types of tree cover [11], meaning that the method for estimating biomass from satellite imagery can be subject to biases when
applied across a large area. A small error in the calculation of the biomass from one tree can have a cumulatively large effect when that method is scaled to country-level. The type and nature of such uncertainties need to be considered when a machine learning model is used within a wider system.
**Vector borne diseases16** account for more than 17% of diseases in people and over 700,000 deaths annually [12]. Changes to the climate and patterns of land use, amongst other factors, are bringing human populations into contact with new vectors of disease. In Africa, for example, populations of mosquitoes carrying malaria that might previously have been found mainly in rural areas are spreading into cities.
Footnote 16: This example is inspired by Christian Igel’s talk, the abstract for which is provided later in this document.
Tools to characterise building features from satellite imagery have already been developed and made available for use.17 Leveraging these to analyse multi-scale data - from household to city-level--researchers are investigating how the built environment influences people's risk of contracting mosquito-borne disease. For example, it has been found that the prevalence of mosquitos in an area is related to the type of roofing used in construction; metal roofing tends to be associated with lower mosquito prevalence, potentially due to the high temperatures they attract during the day [14]. These insights can be deployed by policymakers in the development of appropriate policy responses [15].
Footnote 17: For example: [13].
Decisions made on the basis of insights generated by machine learning models will be influenced by the assumptions made in those models. In the context of housing, for example, the decision about which type of housing to identify as 'at risk' or which building materials to flag as 'problematic' may have significant consequences for individuals or communities. When those decisions are assimilated within a model or analysis before a downstream 'policy decision', the implications for those communities of different courses of action may be obscured, creating a risk of marginalising or disadvantaging individuals or groups. The assumptions built into the model, and how visible those assumptions are made to different user groups, can have significant social and scientific consequences.
### In physical sciences
**Understanding the nature of dark matter18** is one of the biggest unsolved challenges of particle physics today. The matter that researchers can measure using cosmological observations makes up about 5% of the Universe [16]. While not directly observable, evidence for the existence of dark matter can be found in a variety of phenomena not otherwise accounted for by currently known laws of physics: stars rotate around galaxies faster than might be expected; the pattern of fluctuations in primordial microwave observations indicate that there were sources of gravitation in the early Universe beyond ordinary matter; light bends around galaxy clusters due to gravitational effects from dark matter.
Footnote 18: This example is inspired by Siddharth Mishra Sharma’s talk, as well as insights from Gilles Louppe’s talk, the abstracts for which are provided later in this document.
Despite knowing that dark matter exists and that it plays an important role in how the Universe formed, its particle composition and properties remain unclear. Investigating these properties is the focus of large-scale experimental studies, for example in particle colliders.19 A variety of data could contain information about the properties of dark matter, from studies of cosmic rays, cosmic microwave radiation, properties of stars, gravitational lensing studies, and more. These datasets are complex: they are typically high-dimensional, represent complex relationships between the micro-physics and macro-phenomenon in a system, and may contain artefacts or noise from the instruments used to collect them. To make use of this data, researchers need to account for this complexity and tether their models to assumptions about physical processes.
Footnote 19: For example: [17]
The challenge for machine learning in astro-particle physics research is to extract insights about the particle composition of dark matter from the macroscopic patterns that can be
observed in the Universe. For example, gravitational lensing is a phenomenon in which the pathway of light traveling through the Universe is deflected due to the influence of gravity from an intervening mass, distorting how this background light is observed [18]. Gravitational lensing effects arising from dark matter clumps ("substructure") could hold information about the structure of dark matter at a microscopic level. To infer the presence of substructure of these lensing systems, researchers need models that describe the effect of dark matter, ordinary matter, and the wider environment while simultaneously modelling the form of the background light, which can be a morphologically-complex galaxy. By letting a machine learning model, like a neural network, describe the complex background light source, it is possible to make predictions about how the light might appear after being lensed with and also without the impact of dark matter clumps. By performing many simulations considering various possibilities, researchers can compare these with observations from telescopes and understand which dark matter theories are compatible with the data.
Rapid progress in this field is generating a variety of models and approaches. In its next wave of development, further research is needed to test how trustworthy these methods are, by assessing their performance in generating physically plausible results and robust constraints on the properties of dark matter and other forms of new physics [19].
**How particles move20** across their environment is a shared area of interest for many domains. In chemistry, for example, researchers are often interested in how molecules diffuse, and where they end up distributed, based on the physical forces that shape their movement over time. The analogy of particle movement can also be applied as an abstraction of larger scale physical processes, such as in agent-based models for crowd simulation.21 In these systems the initial system state is represented by an initial probability distribution; the scientific objective can then also be represented as a target distribution. The dynamics underpinning this diffusion are formalised mathematically in the Schrödinger bridge problem. This long-standing problem is concerned with finding the most likely paths along which particles move from their starting distribution to their distribution at a defined point in time, based on experimentally-observed start and end positions. In general, finding analytic solutions to the Schrödinger bridge problem is intractable, but machine learning tools are providing new approaches for finding approximate numerical solutions that can be deployed across domains [22].
Footnote 20: This example is inspired by Francisco Vargas’s talk, the abstract for which is provided later in this document.
### In biological sciences
**The development and differentiation of cells into tissues and organs22** is a complicated process, shaped by hormonal and genetic influences on cell growth [23]. Advances in genomics have allowed researchers to characterise the genetic material of different organisms; more recent progress in single-cell genomics extends this ability to the single-cell level, unlocking detailed analysis of how genetic activity determines cellular function.
Footnote 22: Examples of agent-based models for crowd simulation include: [20, 21].
Single-cell RNA studies examine how ribonucleic acids (RNA) shape cellular properties and development pathways. The RNA profiles created by genetic sequencing techniques allow researchers to identify which genes are active in a cell. The question for the field today is how to move from these single-cell analyses to an atlas of cell development that shows how cells specialise and form tissues or organs.
By combining statistical and machine learning techniques, researchers can reconstruct the gene dynamics - which genes are activated at which time - that influence cell development [24]. Cells in the small intestine, for example, undergo a pattern of differentiation that takes them from their base state to highly specialised units, able to variously secrete mucus, absorb nutrients, or respond to hormones. By studying what genes are expressed in a cell at an early stage, researchers can predict how the cell will specialise and identify which genetic changes are associated with that specialisation, opening opportunities to treat intestinal diseases [25].
Building these models relies on effective data management. Lab processes can inject artefacts into datasets, for example batch effects arising from how cells were grown or harvested for study, which need to be removed from data before analysis. Effective data correction maintains biologically-relevant information, while removing noise from the data. A variety of tools exist for this correction, including regression models, dimensionality reduction, graph methods, and deep learning. For domain researchers to be able to identify the tools that are useful for them, benchmarking studies are vital in identifying the most effective data integration method for their purpose [26]. However, there remain open questions about how best to benchmark the performance of a system when there are complex pipelines of analysis involved. Understanding the end-to-end nature of an analytical pipeline can be difficult, and new approaches to assessing performance may be needed.
**To understand how the brain works**,23 neuroscientists develop mathematical models that describe the activity of individual neurons, and how these connect across brain networks. Models on the mechanistic level take the form of differential equations. These models are based on experimental data, from experiments that examine how neurons respond to different signals or perturbations. To build a computational model from this data, it is first necessary to find which factors influence how a neuron acts, creating a set of parameters that determine how the model works. This process of finding parameters is often labour-intensive, relying on trial-and-error, which limits researchers' ability to scale models across complex neural networks. Machine learning can help streamline that model definition process, by predicting which models are more likely to be compatible with data. By automatically identifying model parameters, researchers can rapidly develop simulations of complex structures, such as brains or nervous systems in different animals [27].
Footnote 23: This example is inspired by Jakob Macke’s talk, the abstract for which is provided later in this document.
### Talks given during this workshop session
#### 2.5.1 Machine-learning-model-data-integration for a better understanding of the Earth System
_Markus Reichstein_
The Earth is a complex dynamic networked system. Machine learning, i.e. derivation of computational models from data, has already made important contributions to predict and understand components of the Earth system, specifically in climate, remote sensing and environmental sciences. For instance, classifications of land cover types, prediction of land-atmosphere and ocean-atmosphere exchange, or detection of extreme events have greatly benefited from these approaches. Such data-driven information has already changed how Earth system models are evaluated and further developed. However, many studies have not yet sufficiently addressed and exploited dynamic aspects of systems, such as memory effects for prediction and effects of spatial context, e.g. for classification and change detection. In particular, new developments in deep learning offer great potential to overcome these limitations. Yet, a key challenge and opportunity is to integrate (physical-biological) system modelling approaches with machine learning into hybrid modelling approaches, which combine physical consistency and machine learning versatility. A couple of examples are given with focus on the terrestrial biosphere, where the combination of system-based and machine-learning-based modelling helps our understanding of aspects of the Earth system.
#### 2.5.2 Poultry Diseases Diagnostics Models using Deep Learning
_Dina Machwe_
Coccidiosis, Salmonella, and Newcastle are the common poultry diseases that curtail poultry production if they are not detected early. In Tanzania, these diseases are not detected early due to limited access to agricultural support services by poultry farmers. Deep learning
techniques have the potential for early diagnosis of these poultry diseases. In this study, a deep Convolutional Neural Network (CNN) model was developed to diagnose poultry diseases by classifying healthy and unhealthy fecal images. Unhealthy fecal images may be symptomatic of Coccidiosis, Salmonella, and Newcastle diseases. We collected 1,255 laboratory-labeled fecal images, using fecal samples from Polymerase Chain Reaction diagnostics to annotate them. We took 6,812 poultry fecal photos using an Open Data Kit, and agricultural support experts annotated these farm-labeled fecal images. We then trained a baseline CNN model alongside VGG16, InceptionV3, MobileNetV2, and Xception models, using farm- and laboratory-labeled fecal images, and subsequently fine-tuned them. The test set used farm-labeled images. The test accuracy results without fine-tuning were 83.06% for the baseline CNN, 85.85% for VGG16, 94.79% for InceptionV3, 87.46% for MobileNetV2, and 88.27% for Xception. Fine-tuning while freezing the batch normalization layer improved model accuracies, resulting in 95.01% for VGG16, 95.45% for InceptionV3, 98.02% for MobileNetV2, and 98.24% for Xception, with F1 scores for all classifiers above 75% in all four classes. Given the lighter weight of the trained MobileNetV2 and its better ability to generalize, we recommend deploying this model for the early detection of poultry diseases at the farm level. There are open questions about the deployment of the model at the farm level and potential areas for further research.
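The fine-tuning recipe described above - training a classification head on a pretrained backbone, then fine-tuning with batch normalization layers kept frozen - can be sketched as below. This is a hedged illustration, not the authors' code: the choice of TensorFlow/Keras, the 224x224 input size, the optimiser settings, and the placeholder datasets are assumptions; only the MobileNetV2 backbone, the frozen-batch-norm step, and the four classes come from the abstract.

```python
# Minimal sketch of two-stage transfer learning with a frozen-BatchNorm
# fine-tuning step. Dataset objects (train_ds, val_ds) are placeholders; the
# four classes follow the abstract (healthy, Coccidiosis, Salmonella, Newcastle).
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # stage 1: train only the new classification head

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # placeholder datasets

# Stage 2: unfreeze the backbone but keep BatchNormalization layers frozen.
base.trainable = True
for layer in base.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```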
#### 2.5.3 Simulation-based approaches to astrophysics dark matter searches
_Siddharth Mishra-Sharma_
We are at the dawn of a data-rich era in astrophysics and cosmology, with the capacity to extract useful scientific insights often limited by our ability to efficiently model complex processes that give rise to the data rather than the volume and nature of observations itself. I will describe recent progress in applying mechanistic forward modeling techniques to a range of astrophysical observations with the goal of searching for signatures of new physics, in particular the nature of dark matter. These leverage developments in machine learning-aided inference, e.g. using simulation-based inference as well as differentiable probabilistic programming, while encoding domain knowledge, in order to maximize the scientific output of current as well as future experiments.
#### 2.5.4 Single-cell transcriptomics
_Maren Buttner_
Cells are the fundamental units of life. Understanding cellular processes is a basis for improving human health, disease diagnosis and monitoring. The advent of single-cell transcriptomics (scRNA-seq) allows characterizing the gene expression patterns of entire organs and organisms at single cell resolution. The human genome encodes more than 30,000 genes, and high-throughput scRNA-seq methods create samples with tens of thousands of cell measurements. The analysis of such data requires a variety of methods from the machine learning field, e.g. dimensionality reduction techniques from PCA to variational autoencoders, graph-based clustering, classification of cell types, trajectory inference and causal inference of gene regulation to understand cell fate decision making. To date, scRNA-seq is a widely applied research technique, which has the potential for standard application in the clinics. My presentation focusses on current approaches for large-scale scRNA-seq data, current open questions, and implications for human health.
#### 2.5.5 Estimating ecosystem properties: Combining machine learning and mechanistic models
_Christian Igel_
Joint work with Martin Brandt, Rasmus Fensholt, Compton J. Tucker, Ankit Kariryaa, Kjeld Rasmussen, Christin Abel, Jennifer Small, Jerome Chave, Laura Vang Rasmussen, Pierre Hiernaux, Abdoul Aziz Diouf, Laurent Kergoat, Ole Mertz, Fabian Gieseke, Sizhuo Li, Katherine Melo.
Reference: Brandt, M., Tucker, C.J., Kariryaa, A. et al. An unexpectedly large count of trees in the West African Sahara and Sahel. Nature 587, 78-82 (2020). [https://doi.org/10.1038/s41586-020-2824-5](https://doi.org/10.1038/s41586-020-2824-5)
Progress in remote sensing technology and machine learning algorithms enables scaling up the monitoring of ecosystems. This leads to new knowledge about their status and dynamics, which will be helpful in land degradation assessment (e.g., deforestation), in mitigating poverty (e.g., food security, agroforestry, wood products), and in managing climate change (e.g., carbon sequestration).
We apply deep learning for the mapping of individual trees and forests. Tree crowns are segmented in satellite imagery using fully convolutional neural networks. This provides detailed measurements of the canopy area and of the distribution of trees within and outside forests. Allometric equations are applied to estimate the biomasses (and thereby the stored carbon) of the individual trees. We use iterative gradient-based optimization of the allometric models and suggest techniques such as jackknife+ for quantifying the uncertainty of the model predictions. Tree biomass can also be directly inferred from LiDAR (laser imaging, detection, and ranging) measurements using 3D point cloud neural networks. This leads to highly accurate results without requiring a digital elevation model.
In a new project, we consider risk assessment of vector-borne diseases based on deep learning and remote sensing. Malaria risk is related to the housing conditions, for example, the type of roofing material, which can be determined from satellite images.
#### 2.5.6 Partial differential equations and Variational Bayes
_Ieva Kazlauskaite_
Inverse problems involving partial differential equations (PDEs) are widely used in science and engineering. Although such problems are generally ill-posed, different regularisation approaches have been developed to ameliorate this problem. Among them is the Bayesian formulation, where a prior probability measure is placed on the quantity of interest. The resulting posterior probability measure is usually analytically intractable. The Markov Chain Monte Carlo (MCMC) method has been the go-to method for sampling from those posterior measures. MCMC is computationally infeasible for large-scale problems that arise in engineering practice. Lately, Variational Bayes (VB) has been recognised as a more computationally tractable method for Bayesian inference, approximating a Bayesian posterior distribution with a simpler trial distribution by solving an optimisation problem. The talk covered some recent experiences of applying Bayesian inference, generative models and probabilistic programming languages in the context of learning material properties in civil engineering and in ice sheet and ice core modelling. The main shortcomings of PPLs and differentiable problems were highlighted.
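As a toy illustration of why VB can be computationally attractive, the sketch below fits a Gaussian approximation to the posterior over a single unknown by stochastic gradient ascent on the ELBO. The conjugate-Gaussian model, the NumPy implementation, and all hyperparameters are assumptions chosen so the exact posterior is available for comparison; real PDE-constrained inverse problems are far larger.

```python
# A toy sketch of Variational Bayes: approximate the posterior over an unknown
# mean with q(theta) = N(m, s^2), maximising the ELBO by stochastic gradients
# via the reparameterisation trick. The conjugate-Gaussian setup is an
# assumption that makes the exact posterior available for comparison.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=1.0, scale=1.0, size=30)      # noisy observations of an unknown mean
n = len(data)

def dlogp_dtheta(theta):
    # Gradient of the log joint: N(data | theta, 1) likelihood, N(0, 1) prior.
    return data.sum() - n * theta - theta

m, log_s = 0.0, 0.0                                  # variational parameters
lr = 0.01
for _ in range(3000):
    eps = rng.normal(size=16)                        # a small batch of q-samples
    theta = m + np.exp(log_s) * eps                  # reparameterised draws from q
    g = dlogp_dtheta(theta)
    m += lr * g.mean()                                        # Monte Carlo grad of ELBO w.r.t. m
    log_s += lr * ((g * eps).mean() * np.exp(log_s) + 1.0)    # ... w.r.t. log s (+1 from entropy)

exact_mean, exact_std = data.sum() / (n + 1), np.sqrt(1.0 / (n + 1))
print(f"VB: {m:.3f} +/- {np.exp(log_s):.3f}   exact: {exact_mean:.3f} +/- {exact_std:.3f}")
```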
#### 2.5.7 The Schrodinger bridge problem
_Francisco Vargas_
Recent works in diffusion-based models have been achieving competitive results across generative modelling and inference. In this presentation we propose to explore a unifying framework based on Schrodinger bridges to explore/explain diffusion-based methodology. The Schrodinger bridge problem (SBP) finds the most likely stochastic evolution between two probability distributions given a prior (reference) stochastic evolution. Recently, SBP-based methodology has made its way into generative modelling, sampling, and inference. In this talk we propose the exploration of a unifying framework for the aforementioned works based on the renowned IPF/Sinkhorn algorithm. The motivation behind this is to cast a unifying lens via the Schrodinger perspective relating inference, sampling and transport, in a way that
we can leverage many of the useful techniques and heuristics from each field to benefit each other.
## 3 Building effective simulations
### Moving upstream
Science proceeds through hypothesis, observation and analysis. For hundreds of years, researchers have advanced the frontiers of knowledge by collecting data, compressing those observations into a model, then computing that model to create representations of how the world works, generating new insights about natural and physical phenomena and theories about the systems from which those phenomena emerge in the process [28]. These mathematical models rely on numerical methods: algorithms that help solve mathematical problems where no analytical solution is available. Today, data collection and the basic computational tasks involved in its analysis - linear algebra, optimisation, simulation, and so on - remain consistent features of the scientific process. Progress in machine learning, however, has changed the modelling landscape.
'AI for science' offers a data-centric approach to modelling and simulating the world. Operating alongside the traditional mathematical models that are central to many disciplines, machine learning provides data-centric analytical methods that can be integrated across the scientific pipeline, for example enabling sophisticated simulations of real-world systems. These simulations can be used to inform model development, test hypotheses and shape areas of research focus, or unlock insights from complex data.
### Nurturing a diversity of approaches
Simulations are a well-established tool for scientific discovery. Their fundamental task is to allow data sampling from a model where the differences between simulation and the real world are reduced as far as feasible, to enable experimentation or testing of the impact of different perturbations, while allowing some measure of simplification of the system. Effective simulators allow researchers to move from theory to an understanding of what data should look like.
Domains such as particle physics, protein folding, climate science, and others, have developed complex simulations that use known theories and parameters of interest to make predictions about the system of study. AI for science can be brought in to speed up some of these through surrogate models. Machine learning can complement 'traditional' approaches to scientific simulation, adding components that model the most uncertain elements of a system to strongly mechanistic models that might otherwise be too restrictive in their assumptions.
Much early excitement surrounding AI for science was rooted in the reverse process, asking: instead of starting with theory, could researchers instead start with the large amounts of data available in many areas of research and, from that data, build an understanding of what an underpinning theory might be? Given a set of observations, is it possible to find parameters for a model that result in simulations that reflect the measured data? Such simulation-based inference (SBI) offers the opportunity to generate novel insights across scientific disciplines.
To enable such analysis, machine learning methods are needed that can extract insights from high-dimensional, multi-modal data, in ways that are labour- and compute-efficient [29]. The field of probabilistic numerics offers a way to flexibly combine information from mechanistic models with insights from data, solving numerical problems through statistical approaches [30]. Operationalising these methods to create effective data-driven simulations requires balancing different model characteristics. The model's parameters must be specified to a sufficient level of granularity to describe the real-world system, while operating at a level of abstraction that is amenable to analysis and computation; almost all models are 'wrong' or falsifiable because of this, but some level of abstraction is necessary to make them useful for analysis. The simulation must also be designed to be robust, and able to generate inferences that align with real-world observations.
### Truth, truthiness, and interfacing with the real world
The excitement underpinning AI for science stems from the aspiration to unearth new understandings of the world, leveraging data to advance the frontiers of knowledge. While subject to their own limitations, the scientific community has developed checks and balances to scrutinise new knowledge and maintain the rigour of scientific inquiry. Recent years have seen a variety of challenges or benchmarks emerge in the machine learning community that have come to represent the field's expected standards of performance from algorithms on defined tasks. However, these standards do not necessarily align with the expectations of domain researchers [31]. As data-centric simulations are integrated into scientific process, machine learning researchers must consider their responsibility in maintaining the integrity of the domains into which they are deployed, raising the question: what guardrails are needed to ensure researchers can be confident in the outputs from machine learning-enabled simulations?
A variety of diagnostic tests can help. Core to many of these diagnostics is analysis of whether a model is computationally faithful. In short: the inferences generated by a simulation should reflect those from observations [31]. One approach to checking this alignment is to consider the consistency of distributions from inferred and observed datasets. If the model is a good fit, the data it generates should broadly match the data observed through experimentation.
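One minimal version of such a consistency check is a posterior predictive check, sketched below under toy assumptions (a Gaussian model and a stand-in posterior): replicate datasets from posterior draws and ask whether the observed summary statistic sits comfortably within the replicated distribution.

```python
# A toy sketch of a posterior predictive check. The model, data, and "posterior"
# are all illustrative placeholders, not output from a real inference pipeline.
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=2.0, scale=1.0, size=100)                   # stand-in for real measurements
posterior_mu = rng.normal(loc=observed.mean(), scale=0.1, size=500)   # stand-in posterior samples

# For each posterior draw, simulate a replicated dataset and record its mean.
replicated_means = np.array([
    rng.normal(loc=mu, scale=1.0, size=observed.size).mean()
    for mu in posterior_mu
])

# If the model fits, the observed mean should not lie in the tails of the
# replicated distribution.
p_value = np.mean(replicated_means >= observed.mean())
print(f"posterior predictive p-value: {p_value:.2f}")
```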
Underpinning these diagnostics is a fundamental question about how to manage uncertainty, in a context where different failure modes have different implications. Put simply: when a model fails, is it worse to be over-confident in its results, or over-conservative? In the scientific context, over-confidence seems more likely to result in negative outcomes, whether through giving misleading interpretations or results or driving lines of enquiry in unproductive directions. Machine learning methods can be designed for conservatism, reducing the risk of false positives.
Implementing a schedule of model building, computing, critiquing, and repeating can refine this process. One lesson from experiences of building machine learning-enabled simulations is that there can be a disconnect between how machine learning approaches inference and model building, and how the same task is approached by domain scientists. From a domain perspective, model building seems naturally an iterative process: collect data, fit a model, find errors or areas for improvement, update the model, and so on. This iterative process is guided by expert intuition and knowledge; deep understanding of the system under study and how it responds to perturbation. Machine learning research has developed practices for prior elicitation -- using domain knowledge to shape the structure of probabilistic models -- but the nuances of this domain intuition are often not easily captured a priori, instead emerging when models fail as an informal sense of what 'feels' like it should be true. This qualitative input is vital in building effective simulations. It requires close collaboration, which in turn requires an investment of time and energy from domain communities, generated through mutual trust, incentives, and long-term relationship-building.
### Connecting simulation to practice
Computational tools are central to the effective deployment of machine learning-enabled simulation. The function and form of such tools must align with the requirements of the community deploying them. Designing computational systems to match user needs - and work effectively in practice - requires both effective software engineering and close collaboration with domain groups that can articulate the requirements and expectations of those working in the field. To remain effective over the longer-term, such systems must leverage effective software engineering practices, including embedding version control and building interfaces that work with other models and systems. Those practices, and the software systems that emerge from them, must be designed for the needs of those using the system, drawing from existing best practices in software engineering, but adapting those practices to reflect the needs of the domain for deployment.
Footnote 23: See also [33].
Constructing computational tools requires a mix of technical insight and craft skill - of knowledge and know-how. Tools produced by the machine learning community differ in their usefulness on different problems: some work well for certain tasks, but not for others. Without access to such craft skills, those outside the 'AI for science' community can find it challenging to determine which tools to use for which purposes, reducing the generalisability of existing methods and approaches. This challenge becomes particularly visible when practitioners are tightly integrated into the analysis pipeline, such as in applications in developmental biology, in the developing world, and in data-centric engineering. Widening access to the field will require user guides that characterise which simulations are effective for which tasks or purposes, supported by case studies or user stories that help demystify how machine learning can work in practice.
### Directions
Machine learning typically requires an explicit representation of a likelihood, but such likelihoods are often difficult to compute. Further advances in SBI are necessary to allow researchers to identify model parameters from data.
* Techniques such as likelihood-free inference can enhance existing Bayesian methods for inferring posterior estimations [32] (a minimal sketch of one rejection-based scheme follows this list).
* Building surrogate models,24 using Bayesian approaches for simulation planning to optimise information gain,25 or deploying emulations [35] can also enhance the efficiency of simulations.
* Probabilistic numerics offers a route to develop statistically-optimal algorithms that are amenable to comprehensive uncertainty quantification, leveraging Gaussian Process-based Ordinary Differential Equation (ODE) solvers to pursue simulation as an inference problem [36].
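The rejection-based sketch below illustrates the simplest form of likelihood-free inference referenced above: no likelihood is ever evaluated, only a simulator. The simulator, prior, summary statistic, and tolerance are toy assumptions, not a recommendation for practical use at scale.

```python
# A toy sketch of rejection ABC: draw parameters from the prior, run the
# simulator, and keep draws whose simulated summary statistic lands within a
# tolerance of the observed one. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=50):
    # Toy mechanistic simulator: noisy measurements around an unknown mean.
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    # Summary statistic of a dataset.
    return x.mean()

observed = simulator(1.5)                       # pretend this came from an experiment
prior_draws = rng.uniform(-5.0, 5.0, size=20_000)

accepted = np.array([
    theta for theta in prior_draws
    if abs(summary(simulator(theta)) - summary(observed)) < 0.1   # tolerance epsilon
])

print(f"{len(accepted)} accepted; posterior mean ~ {accepted.mean():.2f}")
```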
Operationalising these approaches will also require new toolkits to support implementation of probabilistic numerical methods.26
Footnote 26: See, for example, the previous Dagstuhl meeting on this topic: [https://www.probabilistic-numerics.org/meetings/2021_Dagstuhl/](https://www.probabilistic-numerics.org/meetings/2021_Dagstuhl/) and [37].
Computational faithfulness - alignment of inferred parameters with scientific knowledge - can be achieved through:
* Diagnostic checks of the self-consistency of the Bayesian joint distribution, which measure the scientific quality of the regions computed by Bayesian SBI methods [31, 38]. Checking for self-consistency gives a sense of whether the model is 'good enough' (ie whether the inference engine gives a good sense of the posterior); a minimal sketch of such a check follows this list.
* Enforcing conservative neural ratio estimation through binary classifier specification, producing more conservative posterior approximations [39].
* Hybrid modelling, which combines machine learning components learned from data with the mechanistic components specified by existing domain knowledge [40].
* Further study of the impact of model misspecification could also help generate new robustness diagnostic checks [41].
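A minimal sketch of the self-consistency check mentioned in the first bullet is given below, in the style of simulation-based calibration: draw parameters from the prior, simulate data, infer, and record the rank of the true parameter among posterior samples; roughly uniform ranks indicate self-consistency. The conjugate-Gaussian "inference engine" is an assumption standing in for a real SBI method.

```python
# A toy sketch of a self-consistency (simulation-based calibration) check.
# The closed-form Gaussian posterior plays the role of the inference engine,
# so the rank histogram should come out roughly flat.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_obs, n_post = 1000, 20, 100
ranks = []
for _ in range(n_trials):
    theta = rng.normal(0.0, 1.0)                       # draw a "true" parameter from the prior
    data = rng.normal(theta, 1.0, size=n_obs)          # simulate a dataset from it
    post_var = 1.0 / (1.0 + n_obs)                     # exact conjugate posterior
    post_mean = post_var * data.sum()
    post_samples = rng.normal(post_mean, np.sqrt(post_var), size=n_post)
    ranks.append(np.sum(post_samples < theta))          # rank of the true parameter

hist, _ = np.histogram(ranks, bins=10)
print(hist)   # a roughly flat histogram indicates self-consistency
```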
'Digital twins' have recently received much attention as a tool to exploit sophisticated simulations. In Earth sciences, for example, ambitious efforts to develop a digital twin of the Earth propose to allow more accurate forecasting, visualisation, or scenario-testing of the
impact of climate change and efforts to mitigate it.27 The challenge is to integrate different models or components of a system - for example, connecting atmospheric models with land models and models of human behaviour - in a way that represents the complete Earth system. That requires consideration of the different levels of granularity with which these different models operate: economic models of human behaviour, for example, operate with different assumptions and levels of enquiry in comparison to physical models of ocean circulation. The full range of granularities becomes apparent when considering that specific applications, such as disease monitoring on poultry farms, sit within the wider ecosystem of the natural and built environment. A digital twin needs to make choices about what levels of granularity it is operating at, from the scale of the poultry farm to the planet. The questions that emerge from such ambitions are: what level of granularity is helpful or necessary to deliver effective results? And what interfaces between diverse models might be possible?
Footnote 27: For example: [42].
### Talks given during this workshop session
#### 3.6.1 Information from data and compute in scientific inference
_Philipp Hennig_
Simulations are central to scientific inference. Simulators are typically treated as black boxes, with the inference loop wrapped around them. This approach is convenient for the programming scientists, but can be highly inefficient. Probabilistic numerical methods represent computational and empirical data in the same language, which allows for inference from mechanistic knowledge and empirical data in one combined step. I will argue that scientific computing needs to embrace such new computational paradigms to truly leverage ML in science, which also requires rethinking scientific codebases.
#### 3.6.2 ODE filters and smoothers: probabilistic numerics for mechanistic modelling
_Hans Kersting_
Probabilistic numerics (PN) unifies statistical and numerical approximations by formulating them in the same language of statistical (Bayesian) inference. For ODEs, a well-established probabilistic numerical method is ODE filters and smoothers which can help to deal more aptly with uncertainty in mechanistic modeling. In the first half of this talk, we will first introduce PN and then present ODE filters/smoothers as a specific instance of PN. In the second half, we will discuss how ODE filters/smoothers can improve mechanistic modeling in the natural sciences and present a recent application of inferring the parameters of a real-world dynamical system.
#### 3.6.3 Four short stories on simulation-based inference
_Jakob Macke_
Many fields of science make extensive use of simulations expressing mechanistic forward models, requiring the use of simulation-based inference methods. I will share experiences and lessons learned from four applications: describing the dynamics and energy consumption of neural networks in the stomatogastric ganglion; inferring parameters of gravitational wave models; optimising single-molecule localisation microscopy; and building computational models of the fly visual system. I will try to convey some thoughts on the challenges and shortcomings of current approaches.
#### 3.6.4 Towards reliable simulation-based inference and beyond
_Gilles Louppe_
Modern approaches for simulation-based inference build upon deep learning surrogates to enable approximate Bayesian inference with computer simulators. In practice, the estimated posteriors' computational faithfulness is, however, rarely guaranteed. For example, Hermans et al., 2021 have shown that current simulation-based inference algorithms can produce posteriors that are overconfident, hence risking false inferences. In this talk, we will review the main inference algorithms and present Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability.
#### 3.6.5 Modeling the data collection process: My journey
_Thomas G. Dietterich_
In this talk, I will describe three examples of my attempts to integrate subject-matter knowledge with machine learning. The first example involves predicting grasshopper infestations. I will sketch the methodology in which we first modeled the life cycle of the grasshoppers to capture the factors that affect their population. Unfortunately, most variables of interest were not measured, so we used the model to guide the construction of proxy variables. Ultimately, this project did not succeed, but it is hard to determine whether this is due to modeling problems or to the chaotic nature of the biological phenomenon.
## 4 Connecting data to causality
### Causality in science and data
Most scientific endeavours have a causal element: researchers want to characterise how a system works, why it works that way, and what happens when it is perturbed. How researchers identify cause-and-effect relationships varies across domains. For some disciplines, the process of hypothesis design - data collection - model development provides the core structure for interrogating how a system works. In others, where experimentation is more difficult, researchers may rely on natural experiments and observations to compare the response of a system under different conditions. Those studying the Earth system, for example, have little scope to replicate planetary conditions, so instead rely on observational data and modelling to identify the impact of different interventions. These different approaches, however, share a modelling paradigm in which researchers specify variables to create structural, causal models.
In contrast, machine learning proceeds by learning representations or rules from data, based on statistical information, rather than structured rules about how a system works (such as physical laws). Causal inference - the ability to identify cause-and-effect relationships in data - has been a core aim of AI research, in service of both wider ambitions to replicate intelligence in machines and efforts to create AI systems that are robust in deployment. However, in many respects efforts to integrate causal inference into AI systems have yet to deliver [43].
An apocryphal story in AI tells of efforts by US researchers during the 1980s to train a computer system that could distinguish between images of tanks from the US and USSR. The resulting system delivered high accuracy on its training data, but failed repeatedly in practice. The system was subsequently found to be classifying images based on their resolution and background features - is the image grainy? Does it contain snow? - rather than the tanks themselves. It found patterns in the data that were co-incident, rather than causal. That same error has real-world implications for the AI systems deployed today. In medical sciences, AI systems trained to detect collapsed lungs from medical images have been proven inaccurate, after the model was found to have learned to detect the tube inserted into the lung to enable a patient to breathe following its collapse, rather than the physical features of the lung itself [44]. Deployment of such systems could put patient care at risk. In social sciences, these AI design and data bias failures can combine to marginalise vulnerable populations [45].
Conversely, an understanding of the structures within data can improve the accuracy of machine learning analyses. In exoplanet discovery, for example, machine learning is used as a tool to detect variations in light signals from large-scale astronomical datasets. The movement of exoplanets around stars results in periodic changes to the light signals from those stars, as the planet obscures them in its transit. Machine learning can detect those signals and predict where exoplanets might be located, but the data is often noisy. Noticing that the structure of this noise was consistent across a number of stars, which were too distant from each other to be interacting, researchers concluded that instrumentation effects were distorting the data, and developed a method to model those effects and remove them from exoplanet predictions. The result was an efficient method for exoplanet identification that subsequently contributed to the discovery of the first potentially habitable planet [46].
### Causal models as a route to advancing the science of AI and AI for science
Many of these errors in misdiagnosing cause-effect relationships arise from a core assumption in many machine learning methods: that data are independent and identically distributed (IID). In practice, almost all data from real-world, or complex, systems will violate this assumption, given the interconnectedness of different variables. The task of causality in machine learning is to create models that can manage this violation, distinguishing between patterns in data that simply co-occur and patterns that are causal. The resulting AI systems
would be able to solve a task in many different environments, based on an understanding of the fundamental causal mechanisms in a system [47]. They would be more robust in deployment, being less likely to make incorrect predictions as the environment in which they operate changes, and could be more efficient to train and deploy. They would also represent a step towards replicating human- or animal-like intelligence, being able to solve a task in many different environments.
In these regards, causal machine learning offers a route to balancing the widespread utility of statistical modelling with the strengths of physical models. Causality allows models to operate at a level of abstraction beyond strongly mechanistic approaches, such as those based on differential equations, moving along a continuum from mechanistic to data-driven modelling. They provide researchers with the ability to make accurate predictions under conditions of dataset shift (enable out of distribution generalisation); can provide insights into the physical processes that drive the behaviour of a system; unlock progress towards AI systems that 'think' in the sense of acting in an imagined space; while also leveraging insights that can be learned from data, but not otherwise detected.28 They also offer opportunities to explore counterfactuals in complex systems, asking what the impact of different interventions could have been, opening a door to the development of simulation-based decision-making tools.29
Footnote 28: For reference, see the table on page 11 of reference [46].
Footnote 29: Such tools may have particular relevance in policy. For example: [48].
Achieving this potential requires technical developments in a number of directions, but can also yield more effective AI systems. Such systems would:
* Be able to operate on out of distribution data, performing the task for which they are trained in environments with varying conditions.
* Be able to learn how to perform a task based on relatively few examples of that task in different conditions, or be able to rapidly adapt what they have learned for application in new environments through transfer, one-shot, or lifelong learning approaches.
* Support users to analyse the impact of different interventions on a system, providing explanations or ways of attributing credit to different actions.
* Respond to different ways of transmitting information between individuals and groups, enabling effective communication with their users or other forms of cultural learning.
### From methods to application
Achieving the level of technical sophistication required for causal modelling requires careful model design, based on close collaboration between machine learning and domain scientists. The process of specifying what to represent in a causal machine learning system involves a series of 'micro-decisions' about how to construct the model, negotiated by integrating machine learning and domain expertise. In this regard, causal machine learning can be a positive catalyst for deeper interdisciplinary collaboration; model construction can be a convening point for sharing understandings between domains. However, the level of detail required can also be in tension with efforts to promote widespread adoption of AI methods across research. The availability of easy-to-use, off-the-shelf AI tools has been an enabler for adoption in many domains. The hand-crafted approach inherent to current causal methods renders them less accessible to non-expert users. Part of the challenge for the field is to make such methods more broadly accessible through open-source toolkits or effective software engineering practices.
This tension between specification and learning also highlights the importance of nurturing a diversity of methods across the spectrum from data-driven to mechanistic modelling. The domain (or, how much prior knowledge is available and what knowledge should be included), research question of interest, and other practical factors (including, for example, compute budget), will shape where along this spectrum researchers wish to target their modelling efforts.
While pursuing practical applications, advances in causal inference could help answer broader questions about the nature of intelligence and the role of causal representations in human understanding of how the world works. Much of human understanding of the world arises from observing cause and effect; seeing what reaction follows an intervention - that an object falls when dropped, for example - in a way that generalises across circumstances and does not require detailed understanding of mathematical or physical laws. Integrating this ability into machine learning would help create systems that could be deployed on a variety of tasks. The process of building causal machine learning forces researchers to interrogate the nature of causal representations - What are they? How are they constructed from the interaction between intelligent agents and the world? By what mechanism can such agents connect low-level observations to high-level causal variables? - which may in turn support wider advances in the science of AI.
### Directions
Causality in machine learning is a long-standing and complex challenge. In the context of scientific discovery, learning strategy, model design, and encoding domain knowledge all play a role in helping identify cause-effect relationships.
Different learning strategies can improve the 'generalisability' of machine learning, increasing its performance on previously unseen tasks, based on learning the underlying structure of a task or environment in ways that can contribute to broader understandings of causality. Such learning strategies include:
* Transfer learning, taking learning from one task or domain and applying it in another.
* Multi-task learning, enabling a system to solve multiple tasks in multiple environments.
* Adversarial learning, to reduce the vulnerability of models to performance degradation on out-of-distribution data.
* Causal representation learning, defining variables that are related by causal models [46].
* Reinforcement learning strategies that reward agents for identifying policies based on invariances over different conditions.
Across these new learning approaches, attempts to establish causal mechanisms are also prompting progress in machine learning theory, through statistical formulations of core principles [49].
Combining different methods can also enhance the functionality of an AI system. For example:
* Neural ODEs have been shown to identify causal structures in time series data [50].
* Describing causal effects as objective functions in constrained optimisation problems can deliver a form of stochastic causal programming [51].
* Technical interventions [52] can constrain or optimise a model towards causal outcomes. As with simulation design, diagnostic checks can also help identify cause-effect relationships by examining model outputs against 'reality criteria',30 which compare outputs to real-world results.
Footnote 30: Including syntactic, semantic, and pragmatic elements: [53].
There are also a variety of approaches to representing existing scientific knowledge in machine learning models, notably by specifying the assumptions made about the world through symmetries, invariances, and physical laws (see Figure 1).
### Talks given during this workshop session
#### 4.5.1 Causality, causal digital twins, and their applications
_Bernhard Scholkopf_
1. Desiderata for causal machine learning: work with (and benefit from) non-IID data, multi-task/multi-environment, sample-efficient, OOD, generalisation from observation of marginals, interventional.
2. Modelling taxonomy: differential equations, causal models, statistical models.
3. How to get from one level to the next.
4. How to transfer between statistical models that share the same underlying causal model.
5. The assumption of independent causal mechanisms (ICM) (for example, invariance/autonomy) and sparse mechanism design.
6. How to derive the arrow of time from ICM and algorithmic information theory.
7. Statistical formulation of ICM: causal de Finetti.
8. Application to exoplanet discovery and Covid-19 vaccine scenarios.
9. Causal representations as (a) causal digital twins and (b) AI models.
#### 4.5.2 Invariance: From Causality to Distribution Generalization
_Jonas Peters_
Assume that we observe data from a response \(Y\) and a set of covariates \(X\) under different experimental conditions (or environments). Rather than focusing on the model that is most predictive, it has been suggested to take into account the invariance of a model. This can help us to infer causal structure (Which covariates are causes of \(Y\)?) and find models that generalize better (How well does the model perform on an unseen environment?). We show a few applications of these general principles and discuss first steps towards understanding the corresponding theoretical guarantees and limits.
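A toy sketch of this invariance principle follows: candidate covariate sets are accepted only if pooled-regression residuals look alike across two environments, and the causal covariate is the one shared by all accepted sets. The data-generating process, the crude mean/variance comparison (standing in for a proper hypothesis test), and the thresholds are illustrative assumptions, not the method as presented in the talk.

```python
# A toy sketch of invariance-based causal discovery: X1 causes Y, X2 is an
# effect of Y, and only X1's distribution changes between environments. Sets of
# covariates whose regression residuals look the same in both environments are
# treated as plausibly causal. Thresholds and data are illustrative.
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(6)

def make_env(shift, n=2000):
    x1 = rng.normal(loc=shift, scale=1.0, size=n)    # intervened-on cause
    y = x1 + rng.normal(scale=0.5, size=n)           # Y is caused by X1
    x2 = y + rng.normal(scale=1.0, size=n)           # X2 is an effect of Y
    return np.column_stack([x1, x2]), y

(X_a, y_a), (X_b, y_b) = make_env(0.0), make_env(2.0)
X, y = np.vstack([X_a, X_b]), np.concatenate([y_a, y_b])
env = np.array([0] * len(y_a) + [1] * len(y_b))

def residuals(cols):
    if not cols:
        return y - y.mean()
    A = X[:, cols]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta

for cols in chain.from_iterable(combinations([0, 1], k) for k in range(3)):
    r = residuals(list(cols))
    invariant = (abs(r[env == 0].mean() - r[env == 1].mean()) < 0.1
                 and abs(r[env == 0].std() - r[env == 1].std()) < 0.1)
    print(cols, "invariant" if invariant else "not invariant")
```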
#### 4.5.3 Can we discover dynamical laws from observation?
_Niki Kilbertus_
I will start with a brief introduction to identifiability of ODE systems from a unique continuous or discrete observed solution trajectory. Then, I will provide an overview of modern approaches to inferring dynamical laws (in the form of ODEs) from observational data with a particular focus on interpretability and symbolic methods. Finally, I will describe our recent attempts and results at inferring scalar ODEs in symbolic form from a single irregularly sampled, noisy solution trajectory.
#### 4.5.4 Invariances and equivariances in machine learning
_Soledad Villar_
In this talk, we give an overview of the progress in the last few years by several research groups in designing machine learning methods that respect physical laws. Some of these frameworks make use of irreducible representations, some make use of high-order tensor objects, and some apply symmetry enforcing constraints. Our work shows that it is simple to parameterise universally approximating functions that are equivariant under actions of the Euclidean, Lorentz, and Poincare group at any dimensionality. The key observation is that \(O(d)\)-equivariant (and related group-equivariant) functions can be universally expressed in terms of a lightweight collection of dimensionless scalars (scalar products and scalar contractions of the scalar, vector, and tensor inputs). We complement our theory with numerical examples that show that the scalar-based method is simple and efficient, and mention ongoing work on cosmology simulations.
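The scalar-based construction can be illustrated with a few lines of NumPy, sketched below: a vector-valued function whose only learnable content enters through functions of scalar products is equivariant by construction, which a random-rotation check confirms. The particular weight functions are arbitrary toys, not those used in the work described.

```python
# A toy sketch of an O(d)-equivariant function built from invariant scalars: the
# output is a combination of the input vectors weighted by functions of their
# scalar products, so rotating the inputs rotates the output.
import numpy as np

rng = np.random.default_rng(7)
d = 3

def equivariant(v1, v2):
    # All "learnable" content lives in functions of the invariant scalars.
    s11, s12, s22 = v1 @ v1, v1 @ v2, v2 @ v2
    return np.tanh(s12) * v1 + np.exp(-s22) * v2 + 0.1 * s11 * (v1 + v2)

v1, v2 = rng.normal(size=d), rng.normal(size=d)
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal matrix

lhs = equivariant(Q @ v1, Q @ v2)              # rotate the inputs first
rhs = Q @ equivariant(v1, v2)                  # rotate the output instead
print(np.allclose(lhs, rhs))                   # True: the function is equivariant
```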
#### 4.5.5 Divide-and-Conquer Equation Learning with R2 and Bayesian Model Evidence
_Bubacarr Bah_
Deep learning is a powerful method for tasks like predictions and classification, but lacks interpretability and analytic access. Instead of fitting up to millions of parameters, an intriguing alternative for a wide range of problems would be to learn the governing equations from data. Resulting models would be concise, parameters can be interpreted, the model can adjust to shifts in data, and analytic analysis allows for extra insights. Common challenges are model complexity identification, stable feature selection, expressivity, computational feasibility, and scarce data. In our work, the mentioned challenges are addressed by combining existing methods in a novel way. We choose multiple regression as a framework and argue how a surprisingly large space of model equations can be captured. For feature selection, we exploit the computationally cheap coefficient of determination (R2) to loop through millions of models, and by using a divide-and-conquer strategy, we are able to rule out remaining models in the equation class. Final model selection is achieved by exact values of the Bayesian model evidence with empirical priors, which is known to identify suitable model complexity without relying on mass data. Random polynomials, and a couple of chaotic systems are used as examples.
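A heavily simplified sketch of the screening step follows: candidate equations are assembled from a small library of polynomial terms, fitted by least squares, and ranked by the coefficient of determination. The target equation, the term library, and the two-term restriction are illustrative assumptions; the approach described in the talk adds a divide-and-conquer search and Bayesian model evidence for the final selection.

```python
# A toy sketch of screening candidate model equations with the cheap R^2 score:
# enumerate two-term equations from a polynomial library, fit each by least
# squares, and rank them. The target equation and library are illustrative.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)
x = rng.uniform(-2, 2, size=(400, 2))
y = 1.5 * x[:, 0] - 0.7 * x[:, 0] * x[:, 1] + rng.normal(scale=0.05, size=400)

library = {
    "x1": x[:, 0], "x2": x[:, 1],
    "x1^2": x[:, 0] ** 2, "x2^2": x[:, 1] ** 2, "x1*x2": x[:, 0] * x[:, 1],
}

def r_squared(terms):
    A = np.column_stack([library[t] for t in terms])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Score every two-term candidate equation and keep the best.
scores = {terms: r_squared(terms) for terms in combinations(library, 2)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 4))   # expect ('x1', 'x1*x2') with R^2 near 1
```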
## 5 Encoding domain knowledge
### Where's My [Science] Jetpack?
Humans have a long history of imagining futures where human progress is accelerated by intelligent machines. Embedded in these visions for the future are aspirations that AI can be a faithful servant, easing daily activities or enhancing human activities [54]. As with many emerging technologies, the reality of AI today looks different to these Sci-Fi futures.31 Practical experiences of deploying AI highlights a range of potential failure modes, often rooted in insufficient contextual awareness, misspecification of user needs, or misunderstanding of environmental dynamics [55].
Footnote 31: The title of this section is inspired by: [https://www.fantasticfiction.com/w/daniel-h-wilson/where-s-my-jetpack.htm](https://www.fantasticfiction.com/w/daniel-h-wilson/where-s-my-jetpack.htm)
Today's science builds on thousands of years of attempts to understand the world, which can be leveraged to design AI that serves scientific goals. The result should be a collaborative endeavour between humans and machines. Researchers need the analytical power of AI to make sense of the world, while AI needs input from human understandings of the domain in which it is deployed to function effectively; both need well-designed human-machine interfaces to make this collaboration work. In this context, effective integration of domain knowledge into AI systems is vital, and three (broad) strategies have emerged to facilitate this encoding: algorithmic design; AI integration in the lab; and effective communication and collaboration.
### Encoding domain knowledge through model design
Traditional modelling approaches make use of well-defined rules or equations that explain the dynamics of the system under study. The laws of physics, for example, describe how energy moves through a system, based on conservation principles. These laws are complemented by mathematical symmetries that arise from our abstract representations of physical objects and describe what features of an object remain consistent, despite changes or transformations in a system [56]. There may also be known invariances in a system: factors that do not change under any perturbations or that change in a defined way [57]. Building on this existing knowledge, and connecting to efforts to generate causal understandings of the world through machine learning, an area of growing interest has been the design of machine learning models that respect these rules or symmetries.
The principle underpinning this design strategy is that it is possible to move across a continuum from statistical (data-driven) models to strongly mechanistic models, creating hybrid systems whose outputs should be constrained by what is physically feasible, while also leveraging insights from data (Figure 1).
Figure 1: Models along a spectrum from classical i.i.d models to strongly mechanistic differential equation models introduce aspects of causality and symmetries to create a continuum between mechanistic and data-driven worlds. Statistical or data-driven models are weakly mechanistic (i.e. they include smoothness assumptions or similar).
At one end of that continuum, mechanistic models would obey known laws or principles in a strongly deterministic way; at the other, statistical models encode fewer assumptions and rely more on data [58]. The addition of invariances and symmetries, alongside other forms of domain knowledge, allows bridging between these two model classes (Figure 1). Models that describe how much heat is absorbed by the oceans under conditions of climate change, for example, should obey the laws of thermodynamics and energy conservation. By encoding the domain knowledge that has yielded these fundamental laws, such as the conservation of momentum or energy, researchers can ensure the outputs of a machine learning model will have a physically allowable expression. This encoding can come from integrating equations, symmetries, or invariances into model design. These encodings constrain the operation of a machine learning system to align with the known dynamics of physical systems. The resulting models might be expected to produce more accurate results, with smaller generalisation errors, and with better out-of-distribution generalisation.
### Scientific centaurs
Complementing modelling strategies to encode scientific knowledge are deployment strategies to use AI in the lab. The lab has long provided a physical hub for collaboration and knowledge-generation, its function and form having remained broadly consistent across centuries of scientific progress. Today, the digitisation of experimental equipment and laboratory processes offers opportunities to integrate AI in experimental design and create new virtual labs.
By combining data from measurement devices, simulations of laboratory processes, and computational models of research or user objectives, these virtual labs provide a digital sibling of in-person research activities that can be used to optimise such activities. In drug discovery, for example, virtual labs could accelerate the testing and analysis processes that identify candidate drugs from potential drug targets. Instead of relying on physical testing of such starting molecules, multiple rounds of virtual testing can rapidly simulate the processes of drug design, manufacture, testing, and analysis to assess which starting molecules are more (or less) likely to be viable candidate drugs [59]. As a result, AI can help accelerate the research process.
Figure 2: Strategies for integrating domain insights: including information in data and including information as prior knowledge.
Advances in machine learning methods to enable effective simulations, causal modelling, and encoding pre-existing domain insights - while packaging such methods into usable toolkits - are all necessary foundations for such digital siblings. Moving from virtual laboratory to 'AI assistants' requires further advances in AI system design to create AI agents that can elicit guidance or input from their domain experts. Such agents would not only provide useful intuitions for scientific modelling, but would serve as 'scientific sidekicks', actively helping researchers to drive their research.
This new type of AI assistant would combine the ability to model the research problem of interest with the ability to model the goals and preferences of their expert users, even when the user themselves might not be able to clearly articulate those goals. As a starting point, these systems would need to support forms of user interaction that can extract user knowledge, leveraging this to identify appropriate courses of action. To operate in contexts where user goals might be uncertain and user behaviour might change in response to the outputs of the AI system, these AI sidekicks will need insights from cognitive science, studies of team decision-making, and new learning strategies based on limited examples. The sophisticated user modelling so-created would unlock new forms of human-AI collaboration; scientific centaurs that combine both human and machine intelligence [60].
### Enabling communication across domains
Underpinning these efforts to integrate pre-existing knowledge into the design and deployment of AI systems is a feedback loop between domain and machine learning research, in which each elicits from and feeds into the other. This loop requires the ability to exchange knowledge and insights across disciplines through interdisciplinary collaboration and communication.
Matching model to user need requires shared understandings of the research question at hand, the constraints - whether from data, compute, funding, or time and energy available - that affect different collaborators, and the user needs of the domain environment. While AI researchers might be tempted to develop complex models, showcasing assorted theoretical and methodological advances in the field, from a domain perspective, a relatively 'simple' model may seem preferable. Collaborators need to be able to mutually explore what is possible, while also considering what is useful.
To complete the loop, outputs from machine learning models need to feed back into the application domain: insights from AI need to be accessible in ways that allow the transfer of learning from model to user. This implies some level of explainability. It is not sufficient for an AI system to produce highly accurate results; those results must also be interpretable by a domain researcher. As the complexity of AI systems increases, however, understanding why these systems have produced a particular result becomes increasingly challenging. While not an issue for all machine learning methods, this complexity often results in difficulties explaining the functioning of AI systems.
In response, AI researchers have developed a variety of different methods to interrogate how AI systems work, or why a particular output has been produced. Again, to understand which of these methods is desirable in the context of a scientific application, researchers must collaborate closely with domain experts. In the context of pharmaceutical experiments where the aim is to measure how many target cells are killed off at different dosages of a drug (or drug combination), for example, researchers might be seeking to 'sense-check' how different drug dosages affect the model, before investigating specific drugs more rigorously. In astronomical studies, researchers are often working with high-dimensional datasets with many confounding correlations. For example, gravitational waves are ripples in space-time catalysed by the movement of massive bodies in space, such as planets or stars [61]. These invisible
phenomena are studied at observatories across the world,32 based on models to describe wave signals and the 'noise' generated by instruments that measure them [62]. Measurements of gravitational waves can be used to infer the properties of black holes that create them, such as their location, mass, and spin, using simulation-based inference to characterise the source of a wave, given the data that detects it. To make such methods more efficient than existing analytical tools, researchers need to take into account the structure that sits underneath it: for example, gravitational wave detectors are located across the globe, and their location affects the angle at which they detect waves hitting the Earth. This structure can be exploited through data sampling strategies to help make machine learning more efficient [62]. An alternative, however, is to use deterministic models that already reflect relevant physical laws [63]. Across these approaches, software packages play an important role in enabling communication and dissemination of methods for wider use.33
Footnote 32: See, for example, the LIGO project. Information available at: [https://www.ligo.caltech.edu](https://www.ligo.caltech.edu)
Footnote 33: See, for example: [https://lscsoft.docs.ligo.org/bilby/](https://lscsoft.docs.ligo.org/bilby/)
### Directions
New modelling approaches and mathematical innovations offer exciting opportunities to integrate domain knowledge, symmetries and invariances into AI systems [64]. Integration can be achieved in different ways:
* Data augmentation can help exploit invariances and symmetries, resulting in improved model performance, by encoding domain knowledge directly in the data a model ingests (a minimal sketch follows this list).
* Symmetries can be embedded in the design of deep learning systems; for example, by using the same convolutional filters in different locations of an image, CNNs can leverage translation symmetries (and, with suitably designed filters, rotation symmetries).
* Latent force models allow representations of known symmetries alongside probabilistic factors, enabling integration of mechanistic models with unknown forces [65, 66].
* Architectural features can restrict model focus to outputs that satisfy symmetries, for example using weight sharing, irreducible representations, or invoking symmetries as constraints.34 Footnote 34: See, for example: [67, 68, 69]
* Loss functions can be deployed to penalise predictions that fail to satisfy physical constraints or symmetries.
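As a concrete (and deliberately simple) example of the data-augmentation route above, the sketch below encodes rotation symmetry by augmenting a toy image dataset with all four 90-degree rotations; arrays, shapes, and labels are placeholders rather than a real dataset.

```python
# A toy sketch of encoding a symmetry through data augmentation: each training
# image is added in all four 90-degree rotations, so a downstream classifier is
# pushed towards rotation invariance.
import numpy as np

rng = np.random.default_rng(9)
images = rng.random(size=(32, 28, 28))          # stand-in training images
labels = rng.integers(0, 10, size=32)           # stand-in class labels

augmented_images = np.concatenate(
    [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)])
augmented_labels = np.tile(labels, 4)           # labels are unchanged by rotation

print(augmented_images.shape, augmented_labels.shape)   # (128, 28, 28) (128,)
```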
In the process, emerging mathematical questions include: how can AI learn invariances from data? And is it possible to quantify the performance gain achieved through this?
Research to develop AI assistants in the lab raises interesting questions about learning strategies and human-machine collaboration. These AI agents would need to be able to learn how to assist another agent, in a multi-agent decision-making scenario, where goals might be unclear, uncertain, or changeable. To tackle this challenge:
* Decision-making with delayed reward or zero-shot learning can help agents solve tasks when there is little or nothing known about the reward function, and no previous behaviour to learn from.
* Interactive knowledge elicitation [70], combining prior knowledge from cognitive science with learning from data [71], and generative user models [72] can support more effective interactions between user and machine.
Across these areas, care is needed in the design of the points of interaction between human and AI system. A core question here is: how can AI researchers extract domain knowledge from relevant experts and integrate it into a machine learning model? Insights from human-machine
interaction studies and collaborative decision-making systems are necessary to create effective interfaces between human and machine, based on factors such as:
* What forms of visualisation are helpful for human users?
* What types of interpretability or explainability are needed for a user to achieve their desired interactions?
* What might be the unintended consequences of human-machine interaction, such as over-confidence in results or over-reliance on the AI system?
* What 'theory of mind' is needed to anticipate how human users might be likely to respond to an AI system?
A challenge in these interactions is that much of the relevant knowledge held by the domain expert might be qualitative: an intuition of how a system works, developed over a long period of study, rather than quantifiable insights.
### Talks given during this workshop session
#### 5.6.1 Virtual laboratories for science, assisted by collaborative AI
_Samuel Kaski_
I introduced two ideas: virtual laboratories for science, aiming to introduce an interface between algorithms and domain science that enables AI-driven scale advantages, and AI-based 'sidekick' assistants, able to help other agents reach their goals, even when they are not yet able to specify the goal explicitly, or when it is evolving. Such assistants would ultimately be able to help human domain experts run experiments in the virtual laboratories. I invited researchers to join the virtual laboratory movement, both domain scientists in hosting a virtual laboratory in their field and methods researchers in contributing new methods to virtual laboratories, simply by providing compatible interfaces in their code. For developing the assistants, I introduced the basic problem of agents that are able to help other agents reach their goals, also in zero-shot settings, formulated the problem, and introduced solutions in the simplified setting of prior knowledge elicitation, and in AI-assisted decision and design tasks.
#### 5.6.2 Making data analysis more like classical physics
_David W. Hogg_
The laws of physics are very structured: They involve coordinate-free forms, they are equivariant to a panoply of group actions, and they can be written entirely in terms of dimensionless, invariant quantities. We find that many existing machine-learning methods can be very straightforwardly modified to obey the rules that physical law must obey; physics structure can be implemented without big engineering efforts. We also find that these modifications often lead to improvements in generalization, including out-of-sample generalization, in natural-science contexts. We have some intuitions about why.
The second example is work by Dan Sheldon on the analysis of Doppler radar to extract bird biomass and motion. The radar measures the radial velocity modulo a constant (i.e., the velocity wraps around to zero). Previous work had attempted to "unwrap" the data using heuristics. Dan instead incorporated the modulus operation into the likelihood function and then developed an algorithm for maximizing this somewhat nasty likelihood. The result has revolutionized radar analysis and has been deployed in the BirdCast product from the Cornell Lab of Ornithology.
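One schematic way to express this idea (the notation here is illustrative rather than Sheldon's exact formulation): if the instrument reports the wrapped value \(\tilde{v} = v \bmod V\) for a known constant \(V\), the likelihood of the observation marginalises over the unobserved integer alias \(k\),
\[
p(\tilde{v} \mid \theta) = \sum_{k \in \mathbb{Z}} p_v\big(\tilde{v} + kV \mid \theta\big),
\]
so the wrapping is handled inside the probabilistic model rather than by heuristic unwrapping of the data.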
The third example is the species occupancy model introduced by MacKenzie et al (2002). When human observers conduct wildlife surveys, they may fail to detect a species even though the species is present. The occupancy model combines this detection probability with a
habitat model. However, the expressiveness of the two models (detection and habitat) must be carefully controlled. Rebecca Hutchinson and I learned this when we tried to replace the linear logistic regression models with boosted trees.
In all cases, downstream use of the estimates that come from such data collection models must be aware of the measurement uncertainties. How can we correctly quantify those uncertainties and incorporate them in the downstream analysis? Maybe there are lessons ecologists can learn from physicists?
#### 5.6.3 Latent force models
_Mauricio A. Alvarez_
A latent force model is a Gaussian process with a covariance function inspired by a differential operator. Such a covariance function is obtained by performing convolution integrals between Green's functions associated with the differential operators, and covariance functions associated with latent functions. Latent force models have been used in several different fields for grey box modelling and Bayesian inversion. In this talk, I will introduce latent force models and several recent works in my group where we have extended this framework to non-linear problems.
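As a sketch of this construction (with notation chosen here for illustration): if each output \(f_d(t)\) is obtained by convolving latent functions \(u_r(t)\), weighted by sensitivities \(S_{d,r}\), with the Green's function \(G_d\) of a linear differential operator, the induced output covariance takes the form
\[
k_{f_d f_{d'}}(t, t') = \sum_r S_{d,r} S_{d',r} \int_0^{t} \int_0^{t'} G_d(t - \tau)\, G_{d'}(t' - \tau')\, k_{u_r}(\tau, \tau')\, \mathrm{d}\tau'\, \mathrm{d}\tau ,
\]
so that known dynamics enter through \(G_d\) while the latent forces remain probabilistic through their covariances \(k_{u_r}\).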
#### 5.6.4 Translating mechanistic understandings to stochastic models
_Carl Henrik Ek_
Statistical learning holds the promise of being the glue that allows us to improve knowledge parametrised explicitly by a mechanistic model with implicit knowledge expressed through empirical evidence. Statistical inference provides a narrative of how to integrate these two sources of information, leading to an explanation of the empirical evidence in 'light' of the explicit knowledge. While the two sources of knowledge are exchangeable in terms of predictive performance, they are not if our focus is that of statistical learning as a tool for science, where we want to derive new knowledge.
In this talk we will focus on challenges associated with translating our mechanistic understanding into stochastic models such that they can be integrated with data. In particular, we will focus on the challenges of translating composite knowledge. We will show how these structures and the computational intractabilities they lead to make knowledge discovery challenging.
The perceived 'success' of machine learning comes from applications where we have large volumes of data, such that only simple and generic models are needed in order to regularise the problem. This means that much of the progress that has been made with predictive models is challenging to translate into useful mechanisms for scientific applications. We will discuss properties that we desire from such structures and highlight the large gap that exists with current inference mechanisms.
## A research agenda in AI for science
'AI for science' sits at a nexus of disciplines, methods, and communities. Both AI and 'science' (broadly defined) share a core interest in learning from data. From this interest emerge different research directions: for AI, questions about the nature of intelligence and how to understand the learning process in humans and machines; for science, the outputs of this learning process are the focus, with the aim of adding new knowledge about natural, physical, and social systems. A distinctive feature of the emerging 'AI for science' agenda is the ability to move between these worlds, using AI to drive progress in science and taking inspiration from science to inspire progress in AI. The result is a continuum of modelling approaches along a spectrum from strongly mechanistic to statistical models, which allow researchers to introduce or operate at different levels of abstraction.
The AI for science community therefore combines the ambitions of AI research with domain-specific goals to advance the frontiers of research and innovation in their discipline, with an engineering focus on designing systems that work in deployment, while operating across scales from the nano- to the interstellar. From these interfaces emerges a research agenda that -- if successful -- promises to accelerate progress across disciplines. Inspired by discussions at the Dagstuhl workshop, a list of research questions arising from this agenda is given in Annex 2. These span three themes:
**Building AI systems for science:** Attempts to deploy AI in the context of scientific discovery have exposed a collection of gaps in current machine learning and AI capabilities. Further work is needed to develop the technical capabilities that will allow AI to be used more effectively in research and innovation; developing those capabilities also offers opportunities to contribute to wider attempts to deliver sophisticated AI systems. Areas for progress include:
* Advancing methods, software and toolkits for high-quality simulation and emulation, which integrate effective uncertainty quantification and leverage advances in machine learning robustness to ensure they operate safely and effectively.
* Detecting scientifically meaningful structure in data, through advances in causal machine learning.
* Encoding domain knowledge in AI systems through integration of scientific laws, principles, symmetries, or invariances in machine learning models, and through virtual, autonomous systems to make research more effective.
**Combining human and machine intelligence:** Effective deployment of AI in science requires effective interactions between human, domain and machine intelligence across all stages of the deployment pathway. AI systems can be made more effective by integrating pre-existing knowledge about the system of study, but mechanisms are needed to extract and encode that knowledge. Effective interfaces are also required in the reverse direction. Translating the outputs of AI analysis to increased human capability requires an understanding of what insights are relevant, how they are best communicated, and the cultural environment that shapes the conduct of science. Areas for progress include:
* Designing interfaces between humans and machines or AI agents that can extract, formalise, and assimilate knowledge that domain researchers have acquired, including tacit knowledge, and that communicate new knowledge back to the user as actionable insights.
* Building mechanisms for explainability that allow researchers to interrogate why and how an AI system delivered a particular result, with the explanations provided being tailored to user need.
* Accelerating the pace of knowledge creation and use, through systems that mine the existing research knowledge base or that automate repetitive or time-consuming elements of the research process.
**Influencing practice and adoption:** By learning from recent experiences of deploying AI for science, the field has an opportunity to promote wider uptake and progress in both scientific domains and in AI research. This requires capturing both the knowledge that the community has already generated, about how to design AI systems, and the know-how about how to overcome practical challenges that accompanies it, while taking action to grow the community of researchers excited about the potential of AI in science. Areas for progress include:
* Supporting new applications, through challenge-led research programmes that promote interdisciplinary collaborations and support co-design of AI systems to help tackle scientific challenges.
* Developing toolkits and user guides that allow researchers to understand which AI tools are suitable for which purposes, and how to deploy those tools in practice.
* Sharing skills and know-how, through community outreach that disseminates knowledge and know-how in how to use AI.
Together, these areas for action highlight the importance of interfaces - between researchers and between modelling approaches - in shaping the development of AI for science (Figure 3).
Figure 3: Interfaces between machine learning and domain researchers, and between data-driven and mechanistic models.
## Accelerating progress in AI for science
Building on the impressive advances that machine learning has already supported in many domains, widespread adoption of AI for research has the potential to catalyse a new wave of innovations that in turn could drive greater health, wealth, and wellbeing. The question facing researchers, funders, and policymakers today is how to harness that potential. The challenge is to build capability across the research landscape, connect areas of expertise to areas of need, and to accelerate the transfer of successful ideas between domains.
The experiences of deploying AI for science described in this document, and the research agenda that results from these experiences, suggest a roadmap for action. That roadmap charts a pathway to create an enabling environment for AI in science, by advancing research that delivers AI methods to support scientific discovery, building tools and resources to make AI accessible, championing interdisciplinary research and the people pursuing it, and nurturing a community at the interface of these different domains. Progress across these areas can unlock scientific and methodological advances in AI for science, while also helping answer an emerging question about whether there exists a core discipline of 'AI for science'. The shared themes and interests that emerge from research projects at the interface of AI and scientific domains suggest that there is potential for 'AI for science' to surface as a distinct speciality in computer science. In parallel, domain-specific efforts to drive the adoption of AI as an enabler of innovation are also needed to deliver the benefits of AI for scientific discovery.
### Advance new methods and applications
Efforts to deploy AI in the context of research have highlighted cross-cutting challenges where further progress in AI methods and theory is needed to create tools that can be used more reliably and effectively in the scientific context. Effective simulations are needed to study the dynamics of complex systems; causal methods to understand why those dynamics emerge; and integration of domain knowledge to relate those understandings to the wider world. While elements of these research challenges are shared with other fields - topics such as robustness, explainability, and human-machine interaction also come to the fore in fields such as AI ethics, for example - they share an intersection in the use of AI for science, in the context of efforts to bridge mechanistic and data-driven modelling.
Alongside these 'AI' challenges are a collection of 'science' challenges, where researchers, policymakers and publics have aspirations for AI to deliver real-world benefits.35 Such challenges offer the opportunity to accelerate progress in AI, while facilitating interdisciplinary exchanges, and opening the field to input from citizen science or other public engagement initiatives. In developing these research missions, care is needed to define cross-cutting questions or challenges that broaden scientific imaginations, rather than restricting them. The process of converting a complicated scientific problem into something tractable with AI necessarily involves some narrowing of focus; to be successful, mission-led innovation efforts must achieve this focus without losing meaning, or creating benchmarks that misrepresent the complexity of the real-world challenge.
Footnote 35: See, for example: the EU’s Innovation Missions [https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-europe/eu-missions-horizon-europe_en](https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-europe/eu-missions-horizon-europe_en) and UN SDG’s [https://sdgs.un.org/goals](https://sdgs.un.org/goals)
Defining shared challenges could help rally the AI for science community and drive progress in both methods and applications of AI in science. There already exist examples of how such challenges can build coalitions of researchers across domains, from which the field can draw inspiration. These include the GREAT08 project, which developed image analysis techniques to study gravitational lensing [73]; the Open Problems in Single Cell Biology challenge, which convened the machine learning community to make progress in Multimodal Single-Cell Data Integration;36 and the SENSORIUM challenge, focused on advancing understandings of how the brain processes visual inputs.37 In pursuing this agenda, researchers can leverage well-established protocols in open-sourcing materials and sharing documentation to help ensure research advances are rapidly and effectively disseminated across disciplines. The result should be more effective methods, and an agile research environment where researchers can flex methods across disciplines.
### Invest in tools and toolkits
Complementing these efforts to build and share knowledge, well-designed software tools can help make accessible the craft skills (or know-how) that make AI for science projects successful. Modelling is a core component of all AI for science projects. In some aspects, the task for the field can be thought of as charting a path between the statistician, whose effectiveness comes from proximity to the domain but whose methods struggle to scale, and the mathematician, whose tools are adopted across domains but with some loss of meaning as the distance between method-generator and adopter increases.
The energy already invested in building effective machine learning models can be leveraged for wider progress across domains through investment in toolkits that support the generalisation of effective approaches. Wide-spectrum modelling tools could offer 'off the shelf' solutions to common AI for science research questions. The challenge for such toolkits is to create an effective interface between tool and user. Connecting with the field of human-computer interaction could generate design insights or protocols to help create more effective human-AI interfaces.
Best practices in software engineering can help, through documentation that supports users in successfully deploying modelling tools. User guides -- or taxonomies of which models are best suited for which purposes and under what circumstances -- can also help make accessible to non-expert users the accumulated know-how that machine learning researchers have gained through years of model development and deployment.
A related engineering challenge is that of data management and pipeline-building. To interrogate how a model works, why a result was achieved, or whether an AI system is working effectively, researchers often benefit from being able to track which data contributed to which output. The data management best practices that allow such tracking need to be embedded across AI for science projects. Data management frameworks -- such as the FAIR data principles -- have already been developed with the intention of making data more available, and useful, for research. Further investment is now needed in efforts to implement those principles in practice.
Investment in these foundational tools and resources can help build understanding of which AI methods can be used and for what purposes, lowering the barriers to adopting AI methods across disciplines.
### Build capability across disciplines
Central to progress in both research and toolkit engineering is the availability of talented researchers with a passion for advancing science through AI. People matter at all stages of the AI development and deployment pipeline. Successful projects rely on researchers who are motivated to work at the interface of different domains; collaborators who can explain and communicate core concepts in their work across disciplinary boundaries; engineers who can translate the needs of different users into AI toolkits; and convenors that can inspire wider engagement with the AI for science agenda.
Building these capabilities requires multiple points of engagement. Domain researchers need access to learning and development activities that allow them to understand and use foundational methods in machine learning, whether as formal training or through the availability of tutorials or user guides. AI researchers need access to the scientific knowledge that should shape the methods they develop, the skills to translate their advanced knowledge to materials that can be shared for wider use, and the capacity to dedicate time and resource
to learning about domain needs.38 Both need skills in communication, organisation, and convening to operate across disciplines. Without such capability-building, disciplines risk remaining siloed, with domains developing unrealistic expectations about what AI can deliver in practice and AI losing touch with the scientific questions that are most meaningful to domains.
Footnote 38: A comparison here can be drawn with the development of statistics as an enabling discipline for many domains: statisticians have devoted time to understanding domain practices and integrating their work within those practices, often dedicating significant resource to understand the nature of the datasets with which they are working, before introducing modelling ideas.
Institutional incentives shape how individuals engage (or not) with such interdisciplinary exchanges. Interdisciplinary research often takes longer and lacks the outlets for recognition available to those working in single domains, affecting both the motivation of and opportunities for career progression that are open to those working at the interface of different disciplines. Much of the engineering work required to make data and AI accessible beyond a specific project and useful to a wider community is also traditionally unrecognised by academic incentive structures. Aligning individual and institutional incentives in support of interdisciplinarity is a long-standing challenge in research, and one that becomes more critical to address in the context of developments in AI. In this context, there may be new opportunities to recognise and reward successes in AI for science, whether through new fellowships, prizes, or ways of promoting the work done by those at this interface.
### Grow communities of research and practice
The areas for action described above feed into and from each other. Progress in research and application can be leveraged to inspire a generation of researchers to pursue interdisciplinary projects; effective toolkits can make such progress more likely; skills-building initiatives can prime researchers to be able to use these toolkits; and so on, to create an environment where researchers and research advances transition smoothly across disciplines, leading to a rising AI tide that lifts all disciplines. Communities of research and practice are the backdrop for creating such positive feedback loops.
A collection of AI for science initiatives are already building links across the research landscape. The Machine Learning for Science Cluster of Excellence at the University of Tübingen is leveraging the strength of its local ecosystem in AI to drive wider progress in research and innovation;39 the Accelerate Programme for Scientific Discovery at the University of Cambridge is building bridges across disciplines, building a community passionate about opportunities in AI for science;40 the University of Copenhagen's SCIENCE AI Centre provides a focal point for AI research and education in its Faculty for Science;41 New York University's Center for Data Science hosts interdisciplinary faculty pursuing innovative research and education;42 the University of Wisconsin-Madison's American Family Insurance Data Science Institute is developing strategic partnerships to accelerate the use of data science in research;43 new investments by Schmidt Futures across a network of research institutions are supporting new postdoctoral fellowships at the interface of AI and sciences [74]. Together, these initiatives demonstrate the appetite for progress in AI for science.
Footnote 41: Programme website available at: [https://aik.ku.dk](https://aik.ku.dk).
Footnote 42: Programme website available at: [https://cds.nyu.edu](https://cds.nyu.edu).
Footnote 43: Programme website available at: [https://datascience.wisc.edu/institute/](https://datascience.wisc.edu/institute/).
There is an opportunity today to leverage these emerging interests into a wider movement. Existing initiatives can drive capability-building, by making training and user guides open, reaching out to engage domain researchers in skills-building activities, and fostering best practice in software and data engineering across disciplines. The links they establish across research domains can form the basis of new communication channels, whether through discussion forums, research symposia, or newsletters to share developments at the interface
of AI and science. These communications can be deployed to raise the profile of people and projects at this interface, celebrating successes, sharing lessons, and demonstrating the value of interdisciplinary work. Together, they can help develop an infrastructure for AI in science.
That infrastructure may also benefit from new institutional interventions to address long-standing challenges in interdisciplinary AI. New journals could provide an outlet to publish and recognise high-quality AI for science research, bringing in contributions from multiple disciplines and helping translate lessons across areas of work. Membership organisations could help foster a sense of belonging and community for researchers working at the interface of AI, science, and engineering, developing career pathways and incentives. Efforts to convene across disciplines can also catalyse new connections and collaborations.
Emerging from these efforts is a paradigm shift in how to drive progress in science. Historically, a small number of foundational texts have been the catalyst that changed how researchers studied the world: Newton's Principia, Darwin's Origin of Species, and so on. For much of its modern history, scientific knowledge has been transmitted through textbooks: canonical descriptions of the current state of knowledge. Today, the transformative potential of AI is driven by its pervasiveness; its impact in science will be achieved through integration across disciplines. This integration requires widespread mobilisation, convening machine learning researchers, domain experts, citizen scientists, and affected communities to shape how AI technologies are developed and create an amenable environment for their deployment. It takes a community.
### AI and science: building the interface
Advances in AI have disrupted traditional ways of thinking about modelling in science. Where researchers might previously have conceptualised models as mechanistic -- reflecting known forces in the world -- or data-driven, the 'AI for science' methods that are emerging today reject this separation. They are both, combining insights from mechanistic and data-driven methods, integrating methods to create something new. What follows from these developments is a spectrum of modelling approaches, which researchers can deploy flexibly in response to the research question of interest.
Today, the field of AI for science is characterised by intersections. Between AI and scientific domains; between science and engineering; between knowledge and know-how; between human and machine. It operates across disciplinary boundaries, across scales from the atomic to the universal, and across both the mission to understand intelligence and the quest to deploy human intelligence to understand the world. Emerging from these missions is a continuum of models and methods that allow researchers to work across domains, extracting the knowledge that humans have acquired, and levels of inquiry, enhancing that knowledge and returning it in actionable form.
As both a domain itself and an enabler of other disciplines, the power of AI in science lies in its ability to convene diverse perspectives in ways that accelerates progress across research areas. AI for science is a rendezvous point. Its next wave of development will come from taking strength from its diversity, and bringing more people into its community.
### Acknowledgments
The Accelerate Programme for Scientific Discovery would like to thank Schmidt Futures for its continuing support, and the donation that enables its work.
|
2302.13442
|
Understanding URDF: A Survey Based on User Experience
|
With the increasing complexity of robot systems, it is necessary to simulate
them before deployment. To do this, a model of the robot's kinematics or
dynamics is required. One of the most commonly used formats for modeling robots
is the Unified Robot Description Format (URDF). The goal of this article is to
understand how URDF is currently used, what challenges people face when working
with it, and how the community sees the future of URDF. The outcome can
potentially be used to guide future research. This article presents the results
from a survey based on 510 anonymous responses from robotic developers of
different backgrounds and levels of experience. We find that 96.8% of the
participants have simulated robots before, and of them 95.5% had used URDF. We
identify a number of challenges and limitations that complicate the use of
URDF, such as the inability to model parallel linkages and closed-chain
systems, no real standard, lack of documentation, and a limited number of
dynamic parameters to model the robot. Future perspectives for URDF are also
determined, where 53.5% believe URDF will be more commonly used in the future,
12.2% believe other standards or tools will make URDF obsolete, and 34.4% are
not sure what the future of URDF will be. Most participants agree that there is
a need for better tooling to ensure URDF's future use.
|
Daniella Tola, Peter Corke
|
2023-02-26T23:51:28Z
|
http://arxiv.org/abs/2302.13442v2
|
# Understanding URDF: A Survey Based on User Experience
###### Abstract
With the increasing complexity of robot systems, it is necessary to simulate them before deployment. To do this, a model of the robot's kinematics or dynamics is required. One of the most commonly used formats for modeling robots is the Unified Robot Description Format (URDF). The goal of this article is to understand how URDF is currently used, what challenges people face when working with it, and how the community sees the future of URDF. The outcome can potentially be used to guide future research.
This article presents the results from a survey based on 510 anonymous responses from robotic developers of different backgrounds and levels of experience. We find that 96.8% of the participants have simulated robots before, and of them 95.5% had used URDF. We identify a number of challenges and limitations that complicate the use of URDF, such as the inability to model parallel linkages and closed-chain systems, no real standard, lack of documentation, and a limited number of dynamic parameters to model the robot. Future perspectives for URDF are also determined, where 53.5% believe URDF will be more commonly used in the future, 12.2% believe other standards or tools will make URDF obsolete, and 34.4% are not sure what the future of URDF will be. Most participants agree that there is a need for better tooling to ensure URDF's future use.
## I Introduction
Simulating robots has become an integrated part of robot system development [1, 2], as it reduces the cost by allowing experimentation with parameters and environments in advance of committing to the physical hardware. A commonly used method to represent a robot's geometry and physical appearance is the Unified Robot Description Format (URDF) [3]. It was introduced with the Robot Operating System (ROS) in 2009 as a format to describe the kinematics, geometries, and dynamics of robots in a universal manner [4].
A URDF file is an XML-based file with the extension _.urdf_ that can be imported and exported by different tools for visualization or simulation purposes. It describes robot links (rigid bodies) and the joints that connect them using a tree structure.
The use of URDFs has increased over the years, as evidenced by the trend in Google queries for the term 'URDF', see Fig. 1. Although URDF is widely used, it has inherent issues which have been made clear by multiple roboticists through online forums such as ROS Discourse [5] (posts from 2016-2022), GitHub repositories [6] (posted in 2015), and Google groups [7] (posted in 2015). Additionally, the first ROSCast (in 2016) at MetroRobots.com discussed some of the underlying issues of URDF and potential future directions [8]. At the time of writing (February 2023), many of the challenges with URDF described as far back as 2015 still persist. Over time, other open-source object description formats have been developed (see Fig. 2), such as the Simulation Description Format (SDF) [9], Universal Scene Description (USD) [10], and MuJoCo Modeling XML File (MJCF) [11].
With various formats emerging, it is necessary to understand the current state of URDF and its potential future directions. We conducted an anonymous survey on the experience of roboticists with URDF, where we asked about the development of URDF, its limitations, and desired improvements. The results of this survey can be used to direct
Fig. 1: Google trend (trends.google.com/trends) for the search term ‘URDF’ since 2009. Note that the data is an approximation of the number of Google searches. The data points where the search is zero percent are due to insufficient data.
Fig. 2: Timeline presenting the year of the first commit to the default branch of the GitHub repository of each description format.
future research on URDF and robot description formats.
Our contributions in this article include:
* a study with over 500 robotics developers,
* an analysis of challenges and possible improvements,
* presentation of quantitative and qualitative data on the current use of URDF and perspectives for its future use,
* experienced users' knowledge of tools to create URDFs,
* and the survey materials and results allowing other roboticists to build on top of this work.
Section II provides a brief introduction to URDF, followed by the methodology of the survey in Section III, and the results in Section IV. An overall summary and discussion of the survey results is presented in Section V. We conclude in Section VI with our main findings and directions for future work.
## II Background
A URDF file is a human-readable XML file describing the kinematic structure, dynamics, and visual representation of a robot. The visual representation can be defined using basic 3D shapes such as boxes and cylinders or using 3D polygon meshes that typically consist of connected triangles defining the shape of the object. Figure 3 shows two examples of robotic arms modeled with URDF, the left one using basic 3D shapes, and the right one using 3D polygon meshes.
### _URDF File_
URDF is a stand-alone file format that includes relevant modeling details of a robot. It was initially introduced with ROS but has since been adopted by tools both within and outside of the ROS ecosystem. Listing 1 illustrates how a link and a joint of a robot can be modeled using URDF.
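A minimal sketch of such a link and joint definition is shown below; the link and joint names follow those used in the text, while the specific geometry and pose values are illustrative assumptions rather than the exact contents of Listing 1. In a complete URDF, these elements are nested inside a single `<robot>` element.

```xml
<link name="base_link">                    <!-- fixed base, visualised as a cylinder -->
  <visual>
    <geometry>
      <cylinder radius="0.06" length="0.05"/>
    </geometry>
    <material name="grey"/>
  </visual>
</link>

<joint name="joint_1" type="continuous">   <!-- revolute joint with no motion limits -->
  <parent link="base_link"/>               <!-- link closer to the robot base -->
  <child link="link_1"/>                   <!-- link closer to the tool tip -->
  <origin xyz="0 0 0.05" rpy="0 0 0"/>
  <axis xyz="0 0 1"/>
</joint>
```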
#### II-A1 Links
are rigid bodies of the robot that can be connected using joints. The base link, on lines 1-8 in Listing 1, represents the fixed base of the robot, which is visualized as a cylinder (the upright gray cylinder shown on the left robot in Fig. 3). Further information on link properties can be found on the ROS Wiki page [12]. Note that only rigid bodies can be represented using URDF; deformable bodies cannot.
#### II-A2 Joints
connect two links, a parent and child link. The parent link is closer to the robot base, and the child link is closer to the tool tip. The joint, on lines 10-15 in Listing 1, connects the base link with another link, link 1. These two links can be seen on the left robot in Fig. 3, in which the bottom light gray cylindrical link is the parent link, and the long vertical dark gray cylindrical link is the child link. This joint is continuous, meaning it is revolute with no motion limits. Additional joint properties can be found in the ROS Wiki page [13].
### _Xacro_
Xacro is a macro language for XML that is used to construct URDF files [14], and the xacro preprocessor is part of the ROS ecosystem. It provides tags that can be used to configure URDF files based on the application, to reduce redundancy and improve the maintainability of the models. These tags are defined in a URDF-based xacro file with the extension _.xacro_. The xacro preprocessor is an executable used to generate URDF files by interpreting the xacro tags and using input values. The preprocessor allows combining multiple xacro files, which is especially beneficial when dealing with complex robotic structures [15].
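As a small illustrative sketch (the file contents, macro, and property names here are hypothetical, not taken from a specific package), a xacro file might define a property and a macro that expands into repeated link definitions:

```xml
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="two_link_arm">
  <!-- A property holds a value that can be reused throughout the file -->
  <xacro:property name="link_length" value="0.3"/>

  <!-- A macro expands into a full link definition for a given name and length -->
  <xacro:macro name="cylinder_link" params="name length">
    <link name="${name}">
      <visual>
        <geometry>
          <cylinder radius="0.04" length="${length}"/>
        </geometry>
      </visual>
    </link>
  </xacro:macro>

  <xacro:cylinder_link name="link_1" length="${link_length}"/>
  <xacro:cylinder_link name="link_2" length="${link_length}"/>
</robot>
```

Running the xacro preprocessor on such a file (for example, `xacro two_link_arm.urdf.xacro > two_link_arm.urdf`) expands the macros and property references into a plain URDF file.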
## III Methodology
### _Motivation_
The motivation for creating the survey is to understand the current position of URDF in the robotics community. We use the survey to answer the following questions:
**Q1:** Who is using URDF?
**Q2:** What is URDF being used for?
**Q3:** What are the limitations of URDF?
**Q4:** What is the future of URDF?
Fig. 3: Example of a URDF with basic shapes (left) and a URDF with polygon meshes (right).
We ask these questions to understand how common the format is, its perceived complexity by the robotics community, and how users envision its future. These questions are important to guide the direction of future research in the area of robot description formats.
### _Survey Design_
An online survey among robotics developers was conducted in December 2022 to answer our research questions. The survey contained 22 questions in total: 3 demographic, 16 multiple-choice (8 of which also had a text option), 2 rating questions, and 1 open-ended question. The demographic questions were used to understand the experience and background of the participants. This structure of starting with demographic questions was adapted from another survey on simulation for testing robots [2]. We used the online tool _SurveyXact_[16] to conduct the survey and collect data. To avoid asking irrelevant questions, the survey was structured with different paths depending on the participant responses, resulting in varying numbers of responses for different questions. A list of the questions, their ID, and the number of responses is presented in the appendices in Table V. We will refer to the questions by their ID. Ethical approval of the survey was granted by the Institutional Review Board at Aarhus University with the approval number TECH-2022-013.
### _Recruitment_
To collect responses we distributed the survey via email to the robotics community, through LinkedIn posts (by both authors, nearly 40,000 impressions), a Twitter post, an email to robotics-worldwide, a ROS Discourse post, and posts in robotics Reddit communities. In total, 689 participants took the survey; 515 of them fully completed it and the rest completed it partially. Of the 515 responses, 5 did not give consent, which ended the survey for them, leaving the 510 completed responses on which this article is based. In our recruitment emails and posts, we described that we were conducting a survey on robot simulation and URDF. As URDF was directly stated in the recruitment description, we can assume that most participants have experience with or knowledge of the format.
The experience of the participants and the organizations where they have simulated robots or worked with URDF are presented in Table I, where 13/510 participants had no experience with this. In total 16/510 participants had never simulated a robot before (based on answers to D1 and S1). The majority of the participants have heard of URDF (477/497) and used it (472/477). The experience of the participants with URDF is presented in Table II. Participants reported different levels of experience with simulation and URDFs, within organizations of different sizes and types. The sample size (510) and the participants' diverse backgrounds suggest that the study is not limited to one type of population, although there is a potential bias towards people in academia (82%) compared to industry (52%).
### _Analysis_
The survey consisted of both quantitative and qualitative questions, allowing the participants to express their own opinion on the challenges they found, desired improvements, and future perspectives on URDF. We used descriptive coding to analyze the open-ended questions, which entails summarizing responses into words or phrases that describe a main topic in the data. Specifically, we used _in vivo coding_ as the phrases defining the topics are taken directly from terms used by the participants themselves [17]. The classification of responses into the different topics was performed based on our subjective opinion.
### _Threats to Validity_
To mitigate the risk of bias in the questions, we followed survey best practices [18] such as allowing open-ended answers for participants to explain their choice, and iteratively pre-testing with volunteers.
Typical biases that can occur in multiple-choice questions are position bias and forced-choice bias. Position bias can occur depending on the order of the choices, and to reduce this we have ordered the choices alphabetically. Forced-choice bias occurs when participants do not agree with any of the choices, but are forced to choose one of the options to continue the survey. To reduce this, we have in many questions allowed participants to select "Other" and describe their answer using text, allowing them to provide information based on their subjective experience/opinion. A bias may also occur when allowing participants to answer both via multiple-choice and text, as many participants want to limit the time spent on a question, and therefore may choose to skip providing additional information using the text option.
Adding further comments or notes was possible at the end of the survey. Some of these comments can be found in the appendix in Section II, and pointed out potential biases in the formulation of the questions, where:
* Three reported that S10 is limiting and induces a bias (participants A, B, and C). Question 10 asks the participants how they have used URDF, with the possibility to choose only one of the choices: _'Always in combination with ROS'_, _'Always in combination with ROS, but would prefer not to require ROS'_, or _'Without ROS'_. We agree with the participants' statements and therefore chose to limit the conclusions drawn from these results.
* One reported S16 is leading by using the word _painful_ and makes it clear that we are looking for a specific result (participant D). The question is open-ended and goes as follows _'What would you say is the most painful part of creating/modifying a URDF model?'_. One of our motivations for creating this survey is a result of our experience with the workflow of creating URDFs, which we believe is painful, and is unfortunately made clear in the question potentially inducing a bias.
* One reported S18 as biased, as it seemed we were asking which of the following improvements should we implement first (participant A). This does not reflect the truth behind the question and its choices. The
improvements we listed in the multiple-choice question were found on community posts and were internally discussed and agreed upon as what evidently seemed to be desired by people. Naturally, such a question will be biased as the majority will choose answers from the list instead of writing their own opinion.
External validity describes how well the results can be generalized [19]. The threat to external validity in this survey is reduced with a large number of participants (\(>\)500), their various organizational backgrounds, and differing years of experience, see Table I and Table II.
Construct validity is related to generalization of results with focus on correctly labeling results by making inferences based on observations. As we have categorized the responses to open-ended questions ourselves, there is a potential threat to the construct validity in the results. Conclusion validity concerns defining the relationship between variables in the data and whether or not the conclusion is reasonable. As we have compared variables between responses from different questions, there is a potential that the conclusions we have made are not entirely reasonable. To mitigate these threats, we have shared the preliminary findings of the survey with people from the community through the same media used for recruitment.
To promote further research we share our recruitment materials, questionnaire, and additional results at the URL1.
Footnote 1: github.com/Daniella1/urdf_survey_material
## IV Results and Analysis
### _General Results_
The participants' experience with simulation and/or URDF, and the organization types where they have used this, are shown in Table I. The participants had last simulated a robot (S1, 497 responses) within the last month (70%), within the last year (24%), in the last 5 years (4%), and 1% more than 5 years ago. The remaining 1% had never simulated a robot.
Table II shows the participants' experience with URDF. 497/510 participants had simulated a robot or worked with URDF before, where 477 had heard of URDF and 472 had used it. One of the participants that had not previously used URDF stated it was too difficult, while another stated it was due to the limitations of representing deformable bodies.
The software tools used by participants to simulate robots (S2, 494 responses) are shown in Fig. 4, where Gazebo was the most used tool (83%), followed by RViz (78%). RViz is part of the ROS ecosystem which is provided by the Open Source Robotics Foundation (OSRF) [20]. The foundation also provides Gazebo, which is easily integrated to work with the ROS ecosystem. The least-used tool is FlexSim which less than 1% of the participants have used. Of the participants, 21% stated they use other tools for simulating robots, and the most popular answers among them were PyBullet (17%), NVIDIA Omniverse consisting of Isaac Sim and Gym (14%), MuJoCo (11%), manufacturer-specific tools such as ABB studio or URSim (9%), and Drake (7%). Manufacturer-specific tools do not typically support URDF, as they provide models of their own robots, making it difficult to simulate a complete application with components from other manufacturers or custom robots.
### _Use of URDF_
**Application domains (S7, 472 responses):** URDF has been used in a variety of application domains, including Manufacturing (46%), Transportation (24%), Agriculture (14%), Medical (12%), Cleaning (8%), Defense (6%), and Marine (6%).
\begin{table}
\begin{tabular}{|c c|c c|c c|} \hline
**Years of experience** & **\%** & **Last used** & **\%** & **Experience** & **\%** \\ \hline \(<\)1 & 16.1 & Last month & 66.5 & Beginner & 21.8 \\
1-5 & 61.8 & Last year & 27.3 & Intermediate & 57.4 \\
6–10 & 15.7 & Last 5 years & 5.3 & Expert & 20.8 \\ \(>\) 10 & 5.2 & \(>\)5 years ago & 0.8 & & \\ None & 1.0 & & & & \\ \hline
\multicolumn{2}{|c|}{477 responses} & \multicolumn{2}{c|}{472 responses} & \multicolumn{2}{c|}{472 responses} \\ \hline \end{tabular}
\end{table} TABLE II: Participants’ experience with URDF. The last column represents how experienced the participants rate themselves with URDF. Note that the tables are positioned independently of each other.
Fig. 4: The tools that the participants have used to simulate robots with 3D visualizations (S2, 494 responses).
\begin{table}
\begin{tabular}{|c c|c c|c c|} \hline
**Years of experience** & **\%** & **Organization type** & **\%** & **Organization employees** & **\%** \\ \hline \(<\)1 & 10.0 & Academia & 82.3 & 1–10 & 26.0 \\
1-5 & 61.2 & Industry & 52.5 & 11–50 & 23.7 \\
6–10 & 16.7 & Government & 4.0 & 51–100 & 10.3 \\ \(>\)10 & 9.6 & Unaffiliated & 14.7 & 100+ & 40.0 \\ & & groups & & & \\ None & 2.5 & Individual & 25.8 & & \\ & & Other & 0.4 & & \\ \hline
\multicolumn{2}{|c|}{510 responses} & \multicolumn{2}{c|}{497 responses} & \multicolumn{2}{c|}{493 responses} \\ \hline \end{tabular}
\end{table} TABLE I: Demographics of the participants’ experience with robot simulation and/or URDF. Examples of _Unaffiliated groups_ are hobby or school clubs. Note that the tables are positioned independently of each other.
Additionally, 34% of respondents stated they used URDF in other domains, with 28% of the 34% using them in Academia, 7% in Service robots (e.g. for household, retail, healthcare), 6% in Logistics or Warehouses, 6% in Construction robots, and 5% in Mining.
**Manufacturers (S8, 472 responses):** Of the robots most commonly used with URDF, the manufacturers are Universal Robots (54%), Kuka (37%), Franka Emika (26%), Robotiq (24%), ABB (21%), Clearpath Robotics (21%), Kinova Robotics (15%), Willow Garage (14%), Fanuc (10%), Boston Dynamics (9%), and Yaskawa Motoman Robotics (6%). In addition, 40% of the participants stated they use URDFs of other manufacturers, with 31% stating they have used URDF for self-made/custom robots, and 8% stating the robot URDFs they use are not of any manufacturer. The 8% that did not belong to a manufacturer can also be classified as custom robots, resulting in a total of 39%.
**Robot types (S9, 472 responses):** The types of robots modeled in URDF are shown in Fig. 5. Of these, single robotic arms were the most commonly modeled (81%), while delta robots were the least modeled (4%). Other types of robots were used by 6% of the participants, 14% of whom modeled underwater vehicles. The low percentage of delta robots modeled with URDF can potentially be related to the limitations of URDF not supporting parallel linkages or closed-chain systems.
Responses to multiple questions provide consistent information. The fact that URDFs from Universal Robots (54%) and Kuka (37%) are most commonly used, and that the main robot types are single robotic arms (81%) and mobile robots (64%), implies that these robots are commonly used in the dominant application domain for URDF, which is manufacturing (46%).
**URDF and ROS (S10, 472 responses):** Looking at the use of URDF and ROS, 66% stated always using URDF in combination with ROS, 18% always with ROS but would prefer not to require ROS, and 16% without ROS. As described earlier in Section III, this question was biased as the users could not select both 'with ROS' and 'without ROS'. We can only conclude that URDF is commonly used with ROS, given the large share of responses to this option (66%) and the high usage of simulation tools compatible with ROS (Gazebo and RViz). We can also state that a number of the participants prefer URDF's independence from ROS, but we cannot rely on the percentage here, as the question is leading and could affect the responses.
### _Creating URDFs_
**Tools to create URDF (S11, 441 responses):** Tools the respondents were aware of are xacro (82%), SolidWorks URDF exporter (57%), Fusion2URDF (17%), OnShape to URDF exporter (12%), the Blender extension Phobos (12%), and PTC Creo to URDF exporter (4%). Additionally, 90% have used xacro to generate URDFs (S12, 361 responses).
**URDF and 3D polygon meshes (S13, 472 responses):** URDF can be used with basic shapes, such as boxes and cylinders, or with 3D polygon meshes that visualize the robot, see Fig. 3. Most of the participants (93%) have used URDF with 3D polygon meshes.
**Methods to obtain URDF (S14, 439 responses):** Of the participants that had used URDF with 3D geometrical meshes, 67% stated they developed the URDF by hand, 61% obtained the URDF from a CAD tool, 59% from a ROS package, 47% from the website of the robot manufacturer, and 20% from a simulation tool. It is important to note that this question was asked with the possibility to answer multiple options, and therefore if a participant has developed a URDF by hand, they may have also used a CAD tool for aligning or working with the meshes. Furthermore, it is possible that some participants have obtained a URDF and subsequently modified it, therefore choosing the option "developing the URDF by hand".
Most robots in ROS packages are described using xacro. The results from the survey showed that 261 participants have used ROS packages, 313 have used xacro, and in total 203 have both obtained URDFs from ROS packages and at some point in their work used xacro. As most ROS packages contain xacro files that need to be processed with the xacro preprocessor, we would assume that the number of participants that have obtained URDFs from ROS packages would be similar to the number of participants that have used xacro before. The fact that only 203/261 have obtained URDFs from ROS packages and also used xacro before shows a discrepancy with our expectations. This may indicate that either many ROS packages contain URDF files that do not need to be preprocessed with xacro, or a misunderstanding by the participants of what a ROS package is. Another interesting fact about the results is the extensive use of xacro, where 313/361 respondents stated they have used xacro before. This indicates that it is a useful tool, but similarly to URDF, it has a number of issues [15].
**Difficulty rating of creating/modifying URDFs (S15, 439 responses):** The participants that have created/modified URDFs rated the difficulty of doing so: the majority (57%) rated it as medium, 32% experienced it as difficult, 7% experienced it as easy, 1% answered "Do not know", and 3% stated they have no experience with this.
Fig. 5: Robot types modeled in URDF (S9, 472 responses).
Furthermore, one of the participants stated "_We have a coffee mug in the lab that says, 'DO NOT CHANGE THE URDF!'_", anecdotally illustrating the perceived complexity of modifying URDFs.
**Challenges with creating/modifying URDFs (S16, 278 responses):** Of the respondents, that have created/modified URDFs, the most painful parts of the process were setting up the poses and meshes of the robot (21.2%), the lack of tooling making it difficult to debug the URDF (18.6%), and adjusting dynamic and kinematic parameters to be accurate (9.0%). The full overview of the categories is shown in Table VI in the appendix.
### _Challenges, Improvements, and Future Directions_
**Challenges and limitations (S17, 440 responses):** The challenges experienced by the participants are shown in Fig. 6. In addition, 10% of the participants experienced other challenges presented in Table VII in the appendix.
The most common challenges found in the survey from responses to various questions are summarized in Table III.
**Improvements (S18, 461 responses):** The main improvements chosen by the participants are better tools for combining and manipulating URDF files (70%), easier methods or tools to create URDF files (69%), better tools for validating URDF files (65%), better documentation (56%), a versioned standard (i.e., URDF v1.0, URDF v1.1, etc.) (35%), and standardized metadata like originator, date, version, etc. A number of the participants suggested improvements which are categorized in Table VIII in the appendix, with the key points being a clear demand for better tooling for creating, combining and manipulating, and validating URDFs.
**Future directions (S19, 477 responses):** Of the respondents, 53% believed that URDF will be more commonly used in the future with the main arguments being that URDF is the de-facto standard in ROS, and as a lot of tools in ROS rely on it, it would be too difficult to replace URDF with a new format. 34% were not sure of URDF's future use, and 12% did not believe it would be used in the future. Their main arguments were that URDF is too limited and instead other formats that tackle these limitations will take over. More details on these elaborations are presented in Table IX, Table X, and Table XI in the appendix.
The main benefits of URDF perceived by the respondents are summarized in Table IV.
### _Analysis_
In this section we analyze responses from multiple questions and check if there are correlations in the results.
To validate the participants' self-rating of their URDF competences, we compare the years of experience with the participants' rating, see Fig. 7. It is clear that most participants that rate themselves as intermediate have between 1-5 years of experience with URDF, whilst very few with less than a year of experience rate themselves as experts, suggesting the competence self-ratings are reasonable.
Figure 8 shows the participants' future predictions based on self-rated competences with URDF. The results imply there is no significant correlation between the competences of the participants and their future predictions for URDF.
Figure 9 shows the participants' perceived difficulty of creating/modifying URDFs based on their self-rated com
| **Challenges** | **Details** |
| --- | --- |
| _Format_ | XML syntax and boilerplate code. |
| | Multiple xacro files and very long URDF files make it difficult to find components and keep an overview. |
| | Does not support parallel linkages or closed-chain systems. |
| | No real standard and no versioning, making it difficult to know which features of URDF a simulator supports. |
| | Only supports solid bodies, meaning it is not possible to model soft or deformable objects. |
| | There are not enough dynamic parameters, e.g. it is not possible to define the elasticity of objects. |
| | Does not support nonlinear mimic joints. |
| | Not possible to define limits to higher-order variables of joints, e.g. acceleration, jerk, etc. |
| | Cannot change parameters online in simulation. |
| _Creating URDFs_ | Meshes: difficult to display colors, file paths for meshes, matching visualization and collision meshes, setting the origins. |
| | Setting up frames and ensuring they are accurate. |
| | Tedious workflow: debugging is complicated as you need to reload the URDF into a visualization tool every time you make a change, and you need to use multiple tools (CAD, XML, and visualization) to develop one URDF. |
| | Adjusting parameters to be accurate, e.g. the dynamics or kinematic parameters of inertia, joint limits, and positions need to be accurately defined to represent the real world. |
| _Surroundings_ | Lack of documentation makes the learning curve steep. |
| | Lack of tooling: to create robots out-of-the-box, for debugging, for validating URDFs, intellisense or a linter, one framework or tool to be able to create/modify/validate URDFs, or a light preview tool for live editing, and tools for converting between URDF and other formats. |
| | More manufacturers should provide accurate kinematic and dynamic parameters. |
| | Difficult to build applications with URDF. |

TABLE III: Summarized most common challenges of URDF derived from responses to S16, S17, S18, and S19.
Fig. 6: Challenges encountered by the participants (S17, 440 responses).
The results imply that the majority of the participants experienced creating/modifying URDFs as of medium difficulty, regardless of their self-rated competence with URDF. There is a clear pattern within the beginner and expert groups: there are fewest beginners in the easy column, more in medium, and the most in difficult, whilst for the experts the pattern is the opposite, i.e., there are fewest experts in the difficult column. This may indicate, as some of the participants have mentioned themselves, that the learning curve of working with URDF is steep, and that working with URDF becomes easier with more experience.
Of the participants that found it difficult to create/modify URDFs, 74% found it difficult to create a URDF (as expected), and 60% had issues importing URDFs into simulation tools, see Fig. 10. Issues with URDF not supporting parallel linkages were experienced by 50% of the participants that rated the process of creating/modifying URDF as easy, whilst only 3% of them had issues with creating URDFs. These results can be associated with the results in Fig. 9, where the majority of the URDF experts rated the process of creating/modifying the file as easy, indicating the learning curve of working with URDF is steep. This could potentially be a symptom of a lack of documentation.
Of the participants that do not believe URDF will be used in the future, 45% of them have had difficulties importing the URDF into simulation tools, and 40% have had difficulties creating a URDF, as shown in Fig. 11. Although those numbers are high, there is also a high percentage of participants that believe URDF will be used in the future and have encountered the same challenges, indicating there is no specific challenge that has affected the opinion of the participants.
## V Summary and Discussion
In this section, we answer the questions asked in Section III based on the survey results. Furthermore, discussions on the future focus of URDF are presented.
### _Summary_
#### V-A1 Q1: Who is using URDF?
URDF has mainly been used in academic (82%) and industrial (53%) organizations. The majority of the users (62%) have 1-5 years of experience, and 67% had used URDF within the month before taking the survey (conducted December 2022). More than 66% of the participants have used URDF in combination with ROS, while others have used it without ROS, showing the format is used both within and outside of the ROS ecosystem.
#### V-A2 Q2: What is URDF being used for?
Manufacturing (46%) and Transportation (24%) are the dominating application domains for URDF's use. The most common robot types modeled with URDF are single robotic arms (81%) and mobile robots (64%), which are also the main types of
| **Benefits** | **Details** |
| --- | --- |
| Open-source | Open-source robotics is becoming more common in the industry, and URDF is supported by some of the main ROS tools, e.g. RViz and MoveIt, which are used in open-source robotics. |
| Interoperable | As URDF is not dependent on one specific simulation tool, but is a portable, manufacturer-independent model, it allows users to simulate in any tool that supports URDF and exchange models within the community. |
| Accessible | It is simple enough (compared to some other formats) that many non-experts can use it and follow the structure of the robot, also considering it is a human-readable format. |
| Custom models | It allows developers to create custom models of their robots. |

TABLE IV: Summarized benefits and advantages of URDF derived from responses to S19.
Fig. 8: The future predictions of the participants based on their self-rated competences with URDF.
Fig. 7: The participants’ self-rated URDF competences based on their years of experience with URDF.
Fig. 9: The difficulty of creating/modifying URDFs based on the participants’ self-rated competences.
robots used in manufacturing. Most of the participants use URDF with 3D visualizations (93%).
#### V-A3 Q3: What are the limitations of URDF?
Most of the respondents had created URDFs by hand, and this is where the main limitations were found. When creating a URDF multiple tools are needed, such as CAD software for modifying the meshes, a program to write the XML structure, and additional software to visualize and test the URDF. Participants described that the debugging process was tedious; every time a parameter in the URDF file was tweaked, the file was loaded into the visualization software (again) and tested. Additionally, difficulties when importing URDFs into simulation tools, and the inability to model parallel linkages or closed-chain systems were reported.
The main improvements for URDF, desired by the participants, are better tools for creating, manipulating, and validating URDFs. Improved documentation to decrease the steep learning curve of URDF was also high on the list of desired improvements for URDF.
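As an illustration of what such lightweight validation tooling could look like, the following is a hypothetical minimal sketch (ours, not an existing tool mentioned by the respondents) that checks a URDF file for joints referencing undefined links, using only the Python standard library; the file name and the specific checks are illustrative.

```python
# Hypothetical, minimal example of lightweight URDF validation tooling:
# checks that every joint's parent/child link exists in the file.
# Uses only the Python standard library; "robot.urdf" is a placeholder name.
import sys
import xml.etree.ElementTree as ET

def check_urdf(path):
    """Return a list of human-readable problems found in a URDF file."""
    problems = []
    root = ET.parse(path).getroot()
    links = {link.get("name") for link in root.findall("link")}
    for joint in root.findall("joint"):
        name = joint.get("name", "<unnamed>")
        for tag in ("parent", "child"):
            elem = joint.find(tag)
            if elem is None:
                problems.append(f"joint '{name}' has no <{tag}> element")
            elif elem.get("link") not in links:
                problems.append(f"joint '{name}' references unknown {tag} link "
                                f"'{elem.get('link')}'")
    return problems

if __name__ == "__main__":
    for issue in check_urdf(sys.argv[1] if len(sys.argv) > 1 else "robot.urdf"):
        print("WARNING:", issue)
```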
#### V-A4 Q4: What is the future of URDF?
The majority (53%) of the participants believed that URDF will be used in the future. Their main argument being that URDF is the de-facto standard in ROS where many tools rely on it to function, making it too difficult to replace URDF with another format. The remaining participants were either not sure of or did not believe in URDF's future use. Their reasoning was primarily based on the current limitations of URDF. They suspect that another format, which addresses these limitations, will take over in the future.
### _Discussion_
#### V-B1 Other formats
Many of the participants mentioned other formats such as SDF [9], USD [10], and MJCF [11]. A few also mentioned they are working on new formats that they believe will replace URDF. With the existing and emerging formats that can be used for modeling robots, it can be overwhelming for newcomers in robot modeling to determine which format to use. It may be beneficial to create an overview of the capabilities and limitations of the different formats, to allow users to easily determine which format is most suitable for their needs.
#### V-B2 Xml
Contrasting opinions were revealed on using XML for describing URDF files. Some saw an advantage in its human-readability and simplicity, whilst others believed that XML is outdated and that the current structure of the format contains a lot of boilerplate code.
#### V-B3 Tackling challenges
One of the main challenges is difficulty importing URDFs into simulation tools. Although we have not analyzed the causes of this, we suggest creating guidelines for tools that support URDF. These guidelines would potentially make it easier for simulation tools to implement support for URDF, but would also provide a consistent user interface across simulation tools, especially if the same function names are used.
Another interesting challenge is that participants could not find URDFs of the specific versions of the robots they were using. This could potentially be tackled by creating a URDF database in which it would be possible for users to share URDFs, rate each other's URDFs, describe issues/limitations, and update their URDF with improved versions.
## VI Conclusion
In this article, we conducted a survey of 510 robotics developers to determine the use of URDF, its current challenges, desired improvements, and future outlooks of URDF by the robotics community. We found that most of the participants that have simulated a robot before have also used URDF. Additionally, the majority had used URDF within the last month when taking the survey (conducted December 2022). The majority of the participants believed that URDF will be more commonly used in the future, especially if it is improved. We identified challenges and limitations of the format, that the participants would like to be tackled or improved. Furthermore, we analyzed and discussed results and provided suggestions on how some challenges can be tackled. The results of this survey can be used as guidelines for future research.
|
2308.10110
|
Robust Mixture-of-Expert Training for Convolutional Neural Networks
|
Sparsely-gated Mixture of Expert (MoE), an emerging deep model architecture,
has demonstrated a great promise to enable high-accuracy and ultra-efficient
model inference. Despite the growing popularity of MoE, little work
investigated its potential to advance convolutional neural networks (CNNs),
especially in the plane of adversarial robustness. Since the lack of robustness
has become one of the main hurdles for CNNs, in this paper we ask: How to
adversarially robustify a CNN-based MoE model? Can we robustly train it like an
ordinary CNN model? Our pilot study shows that the conventional adversarial
training (AT) mechanism (developed for vanilla CNNs) no longer remains
effective to robustify an MoE-CNN. To better understand this phenomenon, we
dissect the robustness of an MoE-CNN into two dimensions: Robustness of routers
(i.e., gating functions to select data-specific experts) and robustness of
experts (i.e., the router-guided pathways defined by the subnetworks of the
backbone CNN). Our analyses show that routers and experts are hard to adapt to
each other in the vanilla AT. Thus, we propose a new router-expert alternating
Adversarial training framework for MoE, termed AdvMoE. The effectiveness of our
proposal is justified across 4 commonly-used CNN model architectures over 4
benchmark datasets. We find that AdvMoE achieves 1% ~ 4% adversarial robustness
improvement over the original dense CNN, and enjoys the efficiency merit of
sparsity-gated MoE, leading to more than 50% inference cost reduction. Codes
are available at https://github.com/OPTML-Group/Robust-MoE-CNN.
|
Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu
|
2023-08-19T20:58:21Z
|
http://arxiv.org/abs/2308.10110v1
|
# Robust Mixture-of-Expert Training for Convolutional Neural Networks
###### Abstract
Sparsely-gated Mixture of Expert (MoE), an emerging deep model architecture, has demonstrated a great promise to enable high-accuracy and ultra-efficient model inference. Despite the growing popularity of MoE, little work investigated its potential to advance convolutional neural networks (CNNs), especially in the plane of adversarial robustness. Since the lack of robustness has become one of the main hurdles for CNNs, in this paper we ask: How to adversarially robustify a CNN-based MoE model? Can we robustly train it like an ordinary CNN model? Our pilot study shows that the conventional adversarial training (AT) mechanism (developed for vanilla CNNs) no longer remains effective to robustify an MoE-CNN. To better understand this phenomenon, we dissect the robustness of an MoE-CNN into two dimensions: Robustness of routers (i.e., gating functions to select data-specific experts) and robustness of experts (i.e., the router-guided pathways defined by the subnetworks of the backbone CNN). Our analyses show that routers and experts are hard to adapt to each other in the vanilla AT. Thus, we propose a new router-expert alternating Adversarial training framework for MoE, termed AdvMoE. The effectiveness of our proposal is justified across 4 commonly-used CNN model architectures over 4 benchmark datasets. We find that AdvMoE achieves \(1\%\sim 4\%\) adversarial robustness improvement over the original dense CNN, and enjoys the efficiency merit of sparsity-gated MoE, leading to more than \(50\%\) inference cost reduction. Codes are available at [https://github.com/OPTML-Group/Robust-MoE-CNN](https://github.com/OPTML-Group/Robust-MoE-CNN).
+
Footnote †: Correspondence to: Yihua Zhang\(<\)[email protected]\(>\)
## 1 Introduction
Despite the state-of-the-art performance achieved by the outrageously large networks [1, 2, 3, 4, 5] in various deep learning (DL) tasks, it still remains challenging to train and deploy such models cheaply. A major bottleneck is the lack of parameter efficiency [6]: A single data prediction only requires activating a small portion of the parameters of the full model. Towards efficient DL, sparse Mixture of Experts (MoE) [7, 8, 9, 10, 11, 12, 13, 14, 15] aims to divide and conquer the model parameters based on their optimal responses to specific inputs so that inference costs can be reduced. A typical MoE structure is comprised of a set of 'experts' (_i.e._, sub-models extracted from the original backbone network) and 'routers' (_i.e._, additional small-scale gating networks to determine expert selection schemes across layers). During inference, sparse MoE only activates the most relevant experts and forms the expert-guided pathway for a given input data. By doing so, sparse MoE can boost the inference efficiency (see 'GFLOPS' measurement in **Fig. 1**). Architecture-wise, sparse MoE has been used for both CNNs [8, 16] and vision transformers (VITs) [7, 9, 10, 11, 12, 13, 14, 15, 17]. Yet, we will focus on the former since sparse MoE for CNNs is under-explored compared to non-sparse MoE for CNNs [18, 19, 20], and adversarial robustness (another key performance metric of our work) was extensively studied in the context of CNNs.
It is known that a main weakness of DL is the lack of adversarial robustness [21, 22, 23]. For example, CNNs can be easily fooled by adversarial attacks [21, 22, 23], i.e., tiny input perturbations crafted to induce erroneous predictions. Thus, adversarial training (**AT**) of CNNs has become a main research thrust [24, 25, 26, 27, 28, 29]. However, when CNN meets sparse MoE, it remains elusive if the improved inference efficiency brought by the sparse MoE comes at the cost of more complex adversarial training recipes. Thus, we ask:
**(Q)** _What will be the new insights into adversarial robustness of sparse MoE-integrated CNNs? And what will be the suited AT mechanism?_
To our best knowledge, problem (**Q**) remains open in the literature. The most relevant work to ours is [30], which investigated the adversarial robustness of MoE and leveraged the ordinary AT recipe [24] to defend against adversarial attacks. However, it only focused on the ViT architecture, leaving a gap in research on robustifying the sparse MoE-based CNN (termed **MoE-CNN** in this work). Most importantly, we find that the vanilla AT [24, 25] (widely used to robustify CNNs) is _no longer_ effective for MoE-CNN. Thus, new solutions are in demand.
To address **(Q)**, we need to (1) make careful sanity checks for AT in MoE-CNN, (2) make an in-depth analysis of its failure cases, and (3) advance new AT principles that can effectively improve robustness without losing the generalization and efficiency from sparse MoE. Specifically, our **contributions** are unfolded below.
* We dissect the MoE robustness into two new dimensions (different from CNNs): routers' robustness and experts' robustness. Such a robustness dissection brings novel insights into the (in)effectiveness of AT.
* Taking inspiration from the above robustness dissection, we propose a new Adversarial training framework for MoE, termed AdvMoE, which enforces routers and experts to make a concerted effort to improve the overall robustness of MoE-CNN.
* We conduct extensive experiments to demonstrate the effectiveness of AdvMoE across \(4\) CNN architectures and 4 datasets. For example, AdvMoE outperforms AT on the original dense CNN model (termed Dense) by a substantial margin: \(1\%\sim 4\%\) adversarial robustness improvement and over \(50\%\) reduction of inference overhead; see **Fig. 1** for illustrations on different CNN types and highlighted performance achieved.
## 2 Related Work
**Sparsely-activated Mixture of Experts (Sparse MoE).** As a special instance of compositional neural architectures [31, 32, 33], MoE [4, 7, 8, 9, 10, 11, 16, 18, 19, 20, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143] aims at solving ML tasks in a divide-and-conquer fashion, which creates a series of sub-models (known as the _experts_) and conducts input-dependent predictions by combing the output of sub-models. As an important branch of MoE, sparsely gated MoE [4, 7, 8, 9, 10, 11, 16, 18, 19, 20, 39, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111,
robustness and standard generalization ability. Other work [26, 28, 47, 48, 49, 50, 51, 52, 53, 54] aims at trimming down the computational costs of robust training while maintaining robustness. The work [30] studies the robustness of MoE-based architectures for the first time. Yet, its focus stays on MoE for ViTs and the relationship between model capacity and robustness.
## 3 Problem Statement
In this section, we start by presenting the setup of MoE-CNN in this work and then introduce the robust learning paradigm. The lack of adversarial robustness of deep models inspires us to investigate whether the adversarial training (AT) approach designed for vanilla CNNs keeps effective for MoE-CNN. Through a motivating example, we show that the conventional AT recipe is _incapable_ of equipping MoE-CNN with desired robustness. The resulting performance is even worse than that of AT-produced S-Dense, which has a much smaller model capacity than MoE-CNN. Thus, the question of how to robustify MoE-CNN arises.
Model setup. We consider a CNN-backboned MoE that consists of multiple MoE layers. Each MoE layer involves a router and a vanilla convolutional layer from the backbone CNN model. Within one MoE layer, we define \(N\) experts, each of which picks a subset of the channels from the convolutional layer. Specifically, suppose the \(l\)-th layer contains \(C_{l}\) channels; then one expert will contain \(r\times C_{l}\) channels, where we call the ratio \(r\in[0,1]\) the _model scale_ and keep it the same across different layers (see **Fig. 1a**). It is worth noting that as \(r\) increases, the per-expert model capacity increases (_i.e._, with more parameters) at the cost of reduced efficiency. In a forward pass, the router first makes an _input-specific_ expert selection. These selected layer-wise experts then form an end-to-end pathway to process this input. We use "_pathway_" to describe one expert-guided forward path (see **Fig. 1a**). We summarize the model setup in **Fig. A1**.
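To make the layer structure above concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation) of a single MoE layer with a top-1 router and experts defined as channel subsets of one convolutional layer. All class and variable names are illustrative; for clarity the sketch masks channels of a full convolution rather than slicing weights, and uses a hard argmax rather than the differentiable routing a real implementation would need.

```python
# Minimal sketch of one MoE layer: a small router picks one expert per input,
# and each expert is a subset of r * out_ch output channels of the same
# convolutional layer. Illustrative only; not an efficient implementation.
import torch
import torch.nn as nn

class MoEConvLayer(nn.Module):
    def __init__(self, in_ch, out_ch, n_experts=2, r=0.5, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        expert_ch = int(r * out_ch)                      # channels per expert (model scale r)
        self.expert_idx = [                              # each expert = a subset of output channels
            torch.arange(i * expert_ch, (i + 1) * expert_ch) % out_ch
            for i in range(n_experts)
        ]
        self.router = nn.Sequential(                     # input-specific gating over experts
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, n_experts))

    def forward(self, x):
        choice = self.router(x).argmax(dim=1)            # top-1 expert per input
        full = self.conv(x)                              # (B, out_ch, H, W)
        out = torch.zeros_like(full)
        for b in range(x.size(0)):                       # keep only the chosen expert's channels
            idx = self.expert_idx[choice[b]]
            out[b, idx] = full[b, idx]
        return out

layer = MoEConvLayer(16, 32, n_experts=2, r=0.5)
print(layer(torch.randn(4, 16, 8, 8)).shape)             # torch.Size([4, 32, 8, 8])
```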
Further, we introduce the different model types considered in this work, shown in **Fig. 1a**. First, we term the original dense CNN model '**Dense**', which serves as the _model basis_ from which the other model types derive. Second, we directly shrink the channel number of each layer in Dense (based on the model scale parameter \(r\)) to obtain the 'small dense' model (termed '**S-Dense**'). Notably, S-Dense has a size _equivalent to a single pathway_ in MoE-CNN. Third, we use the structured pruning method [50] to create a sparse subnetwork from Dense, with the same weight remaining ratio as the model scale parameter \(r\) in MoE-CNN, which we call '**Sparse-CNN**'. In summary, S-Dense has the smallest model capacity (comparable to a single pathway of MoE-CNN), and should provide the _performance lower-bound_ for MoE-CNN. By contrast, Sparse-CNN has a larger model capacity but is smaller than MoE-CNN, as it encodes a data-agnostic pathway of Dense, while MoE-CNN yields data-specific pathways at the same scale. Dense has the largest model capacity but the least inference efficiency.
Adversarial robustness: From CNN to MoE-CNN. It has been known that current machine learning models (_e.g._, CNNs) are vulnerable to adversarial attacks [21, 22, 23]. Towards the robust design, a variety of AT (adversarial training) methods have been developed. The predominant ones include the min-max optimization-based vanilla AT [24] and its TRADES variant [25] that strikes a balance between generalization and adversarial robustness. Throughout the paper, we adopt TRADES as the default conventional AT recipe, which solves the following problem:
\[\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x},y)\in\mathcal{D}}\left[\ell(\mathbf{ \theta};\mathbf{x},y)+\frac{1}{\lambda}\max_{\|\mathbf{\delta}\|_{\infty}\leq\epsilon }\ell_{\mathrm{KL}}(\mathbf{f_{\theta}}(\mathbf{x}),\mathbf{f_{\theta}}(\mathbf{x}+ \mathbf{\delta}))\right]\] (AT)
where \(\mathbf{\theta}\) denotes model parameters to be robustified, \((\mathbf{x},y)\in\mathcal{D}\) is a training sample, drawn from the training set \(\mathcal{D}\), with input feature \(\mathbf{x}\) and label \(y\), \(\ell(\mathbf{\theta},\mathbf{x};y)\) denotes the cross-entropy loss using model \(\mathbf{\theta}\) at data point \((\mathbf{x},y)\), \(\mathbf{\delta}\) signifies the input perturbation variable subject to the \(\ell_{\infty}\)-norm ball of radius \(\epsilon\), \(\mathbf{f_{\theta}}(\cdot)\) denotes the model's predictions, \(\ell_{\mathrm{KL}}\) is the KL divergence loss that characterizes the worst-case prediction stability at the presence of \(\mathbf{\delta}\), and \(\lambda>0\) is a regularization parameter to strike the tradeoff between empirical risk minimization and the robustness of model predictions.
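As a concrete illustration of the objective above, the sketch below approximates the inner maximization over \(\mathbf{\delta}\) with a few PGD steps on the KL term and adds the cross-entropy term on clean inputs, weighted as in the (AT) formulation. It is a simplified reading of the formulation, not the authors' code; hyperparameter values are placeholders and pixel-range clamping is omitted for brevity.

```python
# Illustrative TRADES-style objective: cross-entropy on clean inputs plus
# (1/lam) times the worst-case KL term, with the inner max approximated by
# k PGD steps. All hyperparameter defaults are placeholders.
import torch
import torch.nn.functional as F

def trades_style_loss(model, x, y, eps=8/255, step_size=2/255, k=2, lam=1.0):
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)             # reference prediction f(x)
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(k):                                   # inner max: PGD on the KL term
        kl = F.kl_div(F.log_softmax(model(x + delta), dim=1), p_clean,
                      reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + delta.detach()), dim=1), p_clean,
                  reduction="batchmean")                 # worst-case prediction instability
    return F.cross_entropy(model(x), y) + (1.0 / lam) * kl
```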
Although AT has been well studied for the adversarial robustness of CNNs, there exist few attempts to robustify MoE-CNN. This raises the problem of our interest:
**(Problem statement)** Can MoE-CNN be robustified as effectively as an ordinary CNN using AT? If not, how to robustly train MoE-CNN to achieve robustness not worse than AT-oriented S-Dense, Sparse-CNN, and Dense while preserving MoE's efficiency?
Warm-up study: AT for MoE-CNN is _not_ trivial. Our goal to robustify MoE-CNN includes (1) achieving high robustness, (2) maintaining high prediction accuracy, and (3) making full use of MoE routing to keep the model's high efficiency and expressiveness. Nonetheless, the routing system in MoE brings extra robustification challenges, which do not exist in ordinary CNNs. Specifically, the input-specific expert selection in MoE could make it easier for the attacker to succeed, since input perturbations can _either_ mislead routers to select incorrect experts _or_ fool the pathway-based predictor. Such a '_two-way attack mode_' makes AT for MoE-CNN highly non-trivial.
**Fig. 2** empirically justifies that the direct application of (AT) to MoE-CNN is problematic. In Fig. 2, we consider ResNet-18 as the model backbone (Dense) and CIFAR-10 for image classification. We apply (AT) to train MoE-CNN
and S-Dense, and report the robust accuracy (RA), _i.e._, testing accuracy over adversarial examples generated by 50-step PGD attacks [24], against different attack strengths \(\epsilon\).
As we can see, although MoE-CNN has a much larger model capacity than S-Dense, it leads to a significant RA drop when the conventional AT approach is applied. This implies that the design of AT for MoE-CNN is far from trivial. A new robust learning protocol is thus needed to improve the robustness of MoE-CNN without losing its merits in efficiency and generalization.
## 4 Methods
In this section, we start by peering into the failure case of (AT) in MoE-CNN by understanding the roles of the routers and pathways in (AT). We empirically show that these individual components are hard to adapt to each other and cannot make a concerted effort in AT. Based on that, we develop a new AT framework for MoE-CNN, AdvMoE, which also takes inspiration from bi-level optimization.
**Dissecting robustness of MoE-CNN: Routers' robustness vs. pathways' robustness.** The main puzzle in robustifying MoE-CNN comes from the coupling between the robustness of routers (which are responsible for expert selection across layers) and the robustness of the input-specific MoE pathways (which are in charge of the final prediction of an input). Given the failure case of AT for MoE-CNN in Fig. 2, we need to understand the roles of routers and pathways in AT, _i.e._, how the adversarial robustness of MoE-CNN is gained in the presence of the 'two-way attack mode'. To this end, we begin by assessing the influence of the routers' robustness on the overall robustness. This is also inspired by the recent pruning literature [50] showing that model robustness can be gained solely from network's sparse topology (regardless of model weights). We thus ask:
**(Q1)** Is improving routers' robustness sufficient to achieve a robust MoE-CNN?
To tackle **(Q1)**, we first split the parameters of MoE-CNN (_i.e._, \(\mathbf{\theta}\)) into two parts, the parameters of routers \(\mathbf{\phi}\) and the parameters of the backbone network \(\mathbf{\psi}\). This yields \(\mathbf{\theta}=[\mathbf{\phi}^{\top},\mathbf{\psi}^{\top}]^{\top}\), where \(\top\) is the transpose operation. We then call (AT) to robustly train the routers (\(\mathbf{\phi}\)) but _fix_ the backbone network (\(\mathbf{\psi}\)) at its standard pre-trained weights. We denote this partially-robustified model by \(\bar{\mathbf{\theta}}=[\bar{\mathbf{\phi}}^{\top},\mathbf{\psi}^{\top}]^{\top}\), where the bar indicates the robustly updated parameters. To answer **(Q1)**, we assess the robustness gain of \(\bar{\mathbf{\theta}}\) vs. 3 baselines (M1-M3): (**M1**) the standard MoE-CNN \(\mathbf{\theta}\), (**M2**) AT-robustified S-Dense, and (**M3**) Sparse-CNN achieved by the robust sparse mask learning method [50] over the original Dense model.
Based on Insight 1, we further peer into the resilience of expert selection decisions to adversarial examples. If the expert selections in _all_ MoE layers remain intact in the presence of an adversarial perturbation, we say that the routing system of MoE-CNN is robust against this adversarial example. We then divide adversarial examples into **four categories** according to whether they successfully attacked the routers and the router-oriented pathways: (1) _unsuccessful_ attack on _both_ routers and MoE pathways, (2) _successful_ attack on routers but _not MoE_ pathways, (3) _successful_ attack on MoE pathways but _not routers_, and (4) _successful_ attack on _both_ routers and MoE pathways. Here (1) + (3) characterizes the robustness of the routers, while (1) + (2) represents that of MoE. Thus, if (2) or (3) takes a large portion of the generated adversarial examples, it implies that the routers' robustness does _not_ directly impact the MoE pathway-based predictor's robustness. **Fig. 4** shows the above categories (1)-(4) when attacking the router-robustified MoE-CNN (_i.e._, \(\bar{\mathbf{\theta}}\)). As we can see, routers' robustness indeed improves prediction robustness (as shown by the \(31.74\%\) of attacks in category (1) that are unsuccessful against the MoE predictor). However, among the total number of unsuccessful attacks against the routers (_i.e._, (1)+(3)\(=76.27\%\)), more than half of them successfully fool
Figure 3: Robustness comparison of router-robustified MoE-CNN (_i.e._\(\mathbf{\hat{\theta}}\)) and baseline models (M1 – M3) for different model scales under CIFAR-10 given the backbone network ResNet-18.
Figure 2: Performance of MoE-CNN and S-Dense robustly trained using (AT) on CIFAR-10 with ResNet-18 as the backbone.
the MoE predictor (_i.e._, category (3)). The above results provide us with an additional insight:
**Insight 2:** Improving routers' robustness is _not_ sufficient for the MoE predictor to gain satisfactory robustness although the former makes a positive impact.
Both **Insight 1** and **Insight 2** point out that only improving routers' robustness is _not_ adequate to obtain the desired robustness for the overall MoE-CNN. Thus, we next ask:
**(Q2)** Given the router-robustified model \(\bar{\mathbf{\theta}}\), can we equip \(\bar{\mathbf{\theta}}\) with additional robustness by robustly training expert weights (\(\mathbf{\psi}\))? And how does it further impact routers?
To answer **(Q2)**, we call (AT) to further robustly train the backbone network \(\mathbf{\psi}\) on top of the router-robustified model \(\bar{\mathbf{\theta}}\). We denote the resulting model, in which both the routers and the experts' weights have been robustified, by \([\bar{\mathbf{\phi}}^{\top},\bar{\mathbf{\psi}}^{\top}]^{\top}\).
**Fig. 5** shows the dissection of the robustness of this fully-robustified model in the same setup as Fig. 4. Obviously, the overall prediction robustness ((1)+(2)) is further enhanced after updating the backbone on top of \(\bar{\mathbf{\theta}}\). Thus, the gains in the robustness of the experts' weights indeed further help improve the overall robustness. However, this leads to a surprising drop in the routers' robustness ((1)+(3)) when comparing the fully-robustified model with \(\bar{\mathbf{\theta}}\). This shows that routers' robustness is _not_ automatically preserved if the experts are updated. We obtain the following insight into **(Q2)**:
**Insight 3:** Robustifying the MoE expert weights further improves the overall robustness of MoE-CNN, but the routers' robustness gained beforehand is _not_ automatically preserved.
Figure 4: Adversarial attack success analysis on the dissected MoE-CNN model \(\bar{\mathbf{\theta}}=[\bar{\mathbf{\phi}}^{\top},\mathbf{\psi}^{\top}]^{\top}\) (model scale \(r=0.5\)), where only \(\bar{\mathbf{\phi}}\) is (AT)-robustified. The adversarial evaluation is based on a 50-step PGD attack [24] to fool \(\bar{\mathbf{\theta}}\), and other experiment setups align with Fig. 3. The evaluation is carried out on the test set with a total of \(10000\) samples.
Figure 5: Adversarial attack success analysis on the further expert-robustified MoE-CNN model, following the same setup as Fig. 4.
Taking inspiration from bi-level optimization (BLO), we thus robustify routers and experts in an alternating fashion: a lower-level problem robustly trains the routers \(\mathbf{\phi}\) given the current backbone \(\mathbf{\psi}\), and an upper-level problem robustly trains the backbone \(\mathbf{\psi}\) while the resulting routers \(\mathbf{\phi}^{\star}(\mathbf{\psi})\) are fixed. We term the resulting algorithmic framework as Adversarially robust learning for MoE-CNN (AdvMoE); see Algorithm 1 for a summary.
```
1:Initialize: backbone network \(\mathbf{\psi}\), routers \(\mathbf{\phi}\), batch size \(b\), attack generation step \(K\).
2: for Iteration \(t=0,1,\dots\) do
3: Pick different random data batches \(\mathcal{B}_{\psi}\) and \(\mathcal{B}_{\phi}\) for backbone and router training
4: Lower-level \(\mathbf{\phi}\)-update (with fixed \(\mathbf{\psi}\)): Given \(\mathbf{\psi}\), update \(\mathbf{\phi}\) by minimizing \(\ell_{\text{TRADES}}\) using \(K\)-step PGD attack [24] generator and SGD (with \(\mathcal{B}_{\mathbf{\phi}}\))
5: Upper-level \(\mathbf{\psi}\)-update (with fixed \(\mathbf{\phi}\)): Given \(\mathbf{\phi}\), update \(\mathbf{\psi}\) by minimizing \(\ell_{\text{TRADES}}\) using \(K\)-step PGD attack generator and SGD (with \(\mathcal{B}_{\mathbf{\psi}}\))
6:endfor
```
**Algorithm 1** The AdvMoE algorithm
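A compact sketch of how the alternation in Algorithm 1 might look in code is given below. It is an illustration of the control flow rather than the authors' implementation: it reuses the TRADES-style loss sketched after the (AT) objective, and `router_params`, `backbone_params`, and the two data loaders are placeholders supplied by the user.

```python
# Sketch of the alternating (bi-level style) update of Algorithm 1: routers
# (phi) and backbone (psi) are robustified in turn on different mini-batches.
# Assumes trades_style_loss from the earlier sketch; names are placeholders.
import itertools
import torch

def advmoe_train(model, router_params, backbone_params,
                 loader_phi, loader_psi, steps=1000, lr=0.1):
    opt_phi = torch.optim.SGD(router_params, lr=lr, momentum=0.9)
    opt_psi = torch.optim.SGD(backbone_params, lr=lr, momentum=0.9)
    data_phi = itertools.cycle(loader_phi)
    data_psi = itertools.cycle(loader_psi)
    for _ in range(steps):
        # Lower level: update the routers (phi) with the backbone (psi) fixed.
        x, y = next(data_phi)
        for p in backbone_params:
            p.requires_grad_(False)
        for p in router_params:
            p.requires_grad_(True)
        opt_phi.zero_grad()
        trades_style_loss(model, x, y).backward()
        opt_phi.step()
        # Upper level: update the backbone (psi) with the routers (phi) fixed.
        x, y = next(data_psi)
        for p in router_params:
            p.requires_grad_(False)
        for p in backbone_params:
            p.requires_grad_(True)
        opt_psi.zero_grad()
        trades_style_loss(model, x, y).backward()
        opt_psi.step()
```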
We highlight that AdvMoE will train robust routers and robust MoE pathways to 'accommodate' each other. In contrast to the conventional AT framework, AdvMoE delivers the coupled \(\mathbf{\phi}^{\star}(\mathbf{\psi})\) and \(\mathbf{\psi}\), where both parts make a concerted effort to improve the overall robustness. We also remark that AdvMoE does not introduce additional hyper-parameters, since in practice we found that routers and experts can share the same learning rate and schedule. More implementation details are provided in Appendix B. In the meantime, we remark that, since our proposal is a BLO with non-convex lower- and upper-level objectives (1), it is difficult to prove the convergence of AdvMoE. Existing theoretical analyses of BLO typically rely on strong-convexity assumptions on the lower-level problem [58, 59]. Although a rigorous theoretical analysis framework is lacking, our method converges well in practice (see Appendix C).
## 5 Experiments
In this section, we will demonstrate the effectiveness of our proposed AdvMoE approach on diverse datasets and models. We will also make an in-depth analysis of the router utility and the expert selection distribution for AdvMoE-trained MoE-CNN.
### Experiment Setup
**Model and dataset setups.** To implement MoE-CNN and other baselines, we conduct experiments on ResNet-18 [60], Wide-ResNet-28-10 [61], VGG-16 [62], and DenseNet [63]. Towards fair assessment, our performance comparison between different model types is restricted to using the same model scale parameter \(r\) (see Fig. 1 for an example). By doing so, an input example will leverage the same amount of model parameters for decision-making. For MoE-CNN, we consider \(N=2\) experts with \(r=0.5\) by default, see Appendix B for more details. Dataset-wise, we focus on the commonly used ones to evaluate the adversarial robustness of image classification [24, 25, 64], including CIFAR-10 [65], CIFAR-100 [65], TinyImageNet [66], and ImageNet [66].
**Baselines.** To make our performance comparison informative and comprehensive, we consider three kinds of baselines that are fairly comparable to AdvMoE: (1) AT (S-Dense): we apply AT to S-Dense; (2) AT (Sparse): we apply the robustness-aware (structured) sparse mask learning method [50] to obtain Sparse-CNN; (3) AT (MoE): we directly apply AT to MoE-CNN, which co-trains the routers and the backbone network. Note that this method is also adopted in the latest robust training algorithm [30] for ViT-based MoE architectures. It is worth noting that the above baselines use the same number of model parameters as a pathway of MoE-CNN during model prediction. In addition, we cover (4) AT (Dense) (applying AT to Dense) to acquire a robustness performance reference. Yet, we remark that it is _not_ quite fair to directly compare Dense with the aforementioned smaller counterparts, since the former uses a larger model scale (\(r=1.0\)) at test-time inference.
**Training and evaluation.** We use TRADES [25] as the default robust training objective for all baselines. We also follow the literature [24, 25, 27, 64] to set the attack strength by \(\epsilon=8/255\) for CIFAR-10 and CIFAR-100, and \(\epsilon=2/255\) for TinyImageNet and ImageNet. To implement AdvMoE (Algorithm 1), we mimic the TRADES training pipeline but conduct the proposed BLO routine to robustify routers and backbone parameters in an interactive mode. We adopt \(2\)-step PGD attack [24] at training time for _all_ the methods, supported by the recent work [67] showing its compelling performance in AT. We refer readers to Appendix B for more training details. During evaluation, we report standard accuracy (**SA**) on the clean test dataset and robust accuracy (**RA**) against test-time 50-step PGD attacks [24] with the attack strength same as the training values. We also report **GFLOPs** (FLOPS \(\times 10^{9}\)) as an indicator of the test-time inference efficiency.
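For reference, the following sketch shows how the evaluation metrics described above could be computed: RA as accuracy under a multi-step PGD attack that maximizes the cross-entropy loss, with the attack strength and step count following the values quoted in the text. The function names and data-loader interface are placeholders, not the authors' code.

```python
# Illustrative evaluation sketch: robust accuracy (RA) under a 50-step PGD
# attack that maximizes cross-entropy; eps/alpha defaults follow the CIFAR
# settings in the text and are placeholders otherwise.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=50):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project to eps-ball
    return x_adv.detach()

def robust_accuracy(model, loader, **attack_kwargs):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **attack_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```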
### Experiment Results
**Overall performance.** **Tab. 1** presents the overall performance of our proposed AdvMoE algorithm vs. baselines. We make several key observations below.
**First.** AdvMoE yields a significant robustness enhancement over all the baselines in every data-model setup. Specifically, AdvMoE consistently yields an improvement of around \(1\%\sim 5\%\) on the robustness measured by RA against PGD attacks. Notably, AdvMoE can also outperform \(\bullet\) AT (Dense) in most cases, around \(1\%\sim 4\%\) robustness improvement (see highlighted results in [green]). This is remarkable since Dense (\(r=1.0\)) is twice larger
than an MoE pathway (\(r=0.5\)). **Second**, we observe that AdvMoE has a preference for wider models. For instance, when WRN-28-10 (the widest model architecture in our experiments) is used, AdvMoE yields better robustness than the Dense counterpart across all the dataset setups. **Third**, we also observe that the direct application of AT to MoE-CNN, _i.e._, AT (MoE), is worse than AT (S-Dense) and AdvMoE in all setups. This is consistent with our findings in Sec. 4. We remark that although the usefulness of AT (MoE) was exploited in [30] for the MoE-type ViT, it is _not_ effective for training MoE-type CNNs. **Fourth**, AdvMoE can retain the high inference efficiency of MoE-CNN, as evidenced by the GFLOPS measurements in Tab. 1. Compared to S-Dense, MoE-CNN introduces minor computational overhead due to the routing system. However, it saves more than \(50\%\) of the inference cost vs. Dense. This implies that our proposal AdvMoE can preserve the efficiency merit of the MoE structure while effectively improving its adversarial robustness.
**Robust evaluation on AutoAttack [68].** In Tab. 2, we provide additional experiments evaluated by AutoAttack [68] (termed **RA-AA**), a popular robustness evaluation benchmark [69]. The experiment setting in Tab. 2 follows **Tab. 1**. We report RA-AA on CIFAR-10 and CIFAR-100 with ResNet-18 and WRN-28-10. As we can see, although AutoAttack leads to a lower RA-AA compared to RA evaluated using PGD attacks (termed
\begin{table}
\begin{tabular}{l|c|c c c|c|c c c} \hline \hline
**Method** & **Backbone** & **RA (\%)** & **SA (\%)** & **GFLOPS(\#)** & **Method** & **Backbone** & **RA (\%)** & **SA (\%)** & **GFLOPS (\#)** \\ \hline \multicolumn{10}{c}{**CIFAR-10**} \\ \hline \(\bullet\) AT (Dense) & & 50.13\(\pm\)0.13 & 82.99\(\pm\)0.11 & 0.54 & \(\bullet\) AT (Dense) & & 51.75\(\pm\)0.12 & 83.54\(\pm\)0.15 & 5.25 \\ \(\circ\) AT (S-Dense) & & 48.12\(\pm\)0.09 & 80.18\(\pm\)0.11 & 0.14 (74\(\%\)\(\downarrow\)) & \(\circ\) AT (S-Dense) & & 50.66\(\pm\)0.13 & 82.24\(\pm\)0.10 & 1.31 (75\(\%\)\(\downarrow\)) \\ \(\circ\) AT (Sparse) & ResNet-18 & 47.93\(\pm\)0.17 & **80.45\(\pm\)**0.13 & 0.14 (74\(\%\)\(\downarrow\)) & \(\circ\) AT (Sparse) & WRN-28-10 & 48.95\(\pm\)0.14 & 82.44\(\pm\)0.17 & 1.31 (75\(\%\)\(\downarrow\)) \\ \(\circ\) AT (MoE) & & 45.57\(\pm\)0.51 & 78.84\(\pm\)0.75 & 0.15 (72\(\%\)\(\downarrow\)) & \(\circ\) AT (MoE) & & 46.73\(\pm\)0.46 & 77.42\(\pm\)0.73 & 1.75 (67\(\%\)\(\downarrow\)) \\ \(\cline{2-10} \(\Delta\)**DivMoE** & & **51.83**\(\pm\)0.12 & 80.15\(\pm\)0.11 & 0.15 (72\(\%\)\(\downarrow\)) & \(\Delta\)**DivMoE** & & **55.73**\(\pm\)0.13 & **84.32**\(\pm\)0.18 & 1.75 (67\(\%\)\(\downarrow\)) \\ \hline \(\bullet\) AT (Dense) & & 46.19\(\pm\)0.21 & 82.18\(\pm\)0.23 & 0.31 & \(\bullet\) AT (Dense) & & 44.52\(\pm\)0.14 & 74.97\(\pm\)0.19 & 0.07 \\ \(\circ\) AT (S-Dense) & & 45.72\(\pm\)0.18 & **80.10**\(\pm\)0.16 & 0.07 (77\(\%\)\(\downarrow\)) & \(\circ\) AT (Sparse) & & 38.07\(\pm\)0.13 & 69.63\(\pm\)0.11 & 0.02 (71\(\%\)\(\downarrow\)) \\ \(\circ\) AT (Sparse) & VGG-16 & 46.13\(\pm\)0.15 & **79.32**\(\pm\)0.18 & 0.07 (77\(\%\)\(\downarrow\)) & \(\circ\) AT (Sparse) & DenseNet & 37.73\(\pm\)0.13 & 67.35\(\pm\)0.12 & 0.02 (71\(\%\)\(\downarrow\)) \\ \(\circ\) AT (MoE) & & 43.37\(\pm\)0.46 & 76.49\(\pm\)0.65 & 0.12 (61\(\%\)\(\downarrow\)) & \(\Delta\)**DivMoE** & & 35.21\(\pm\)0.74 & 64.41\(\pm\)0.81 & 0.03 (57\(\%\)\(\downarrow\)) \\ \(\cline{2-10} \(\Delta\)**DivMoE** & & **49.82**\(\pm\)0.11 & 80.03\(\pm\)0.10 & 0.12 (61\(\%\)\(\downarrow\)) & \(\Delta\)**DivMoE** & & **39.97**\(\pm\)0.11 & **70.13**\(\pm\)0.15 & 0.03 (57\(\%\)\(\downarrow\)) \\ \hline \multicolumn{10}{c}{**CIFAR-100**} \\ \hline \(\bullet\) AT (Dense) & & 27.23\(\pm\)0.08 & 58.21\(\pm\)0.12 & 0.54 & \(\bullet\) AT (Dense) & 27.90\(\pm\)0.13 & 57.60\(\pm\)0.09 & 5.25 \\ \(\circ\) AT (Sparse) & & 26.41\(\pm\)0.16 & 57.02\(\pm\)0.14 (74\(\%\)\(\downarrow\)) & \(\circ\) AT (S-Dense) & & 26.30\(\pm\)0.10 & 56.80\(\pm\)0.08 & 1.31 (75\(\%\)\(\downarrow\)) \\ \(\circ\) AT (Sparse) & ResNet-18 & 26.13\(\pm\)0.14 & 57.24\(\pm\)0.12 & 0.14 (74\(\%\)\(\downarrow\)) & \(\circ\) AT (Sparse) & WRN-28-10 & 25.83\(\pm\)0.16 & 57.39\(\pm\)0.14 & 1.31 (75\(\%\)\(\downarrow\)) \\ \(\cline{2-10} \(\Delta\)**DivMoE** & & 22.72\(\pm\)0.42 & 53.34\(\pm\)0.61 & 0.15 (72\(\%\)\(\downarrow\)) & \(\Delta\)**DivMoE** & & 22.92\(\pm\)0.55 & 53.39\(\pm\)0.49 & 1.75 (67\(\%\)\(\downarrow\)) \\ \(\cline{2-10} \(\Delta\)**DivMoE** & & **28.05**\(\pm\)0.13 & **57.73**\(\pm\)0.11 & 0.15 (72\(\%\)\(\downarrow\)) & \(\Delta\)**DivMoE** & & **28.82**\(\pm\)0.14 & **57.56**\(\pm\)0.17 & 1.75 (67\(\%\)\(\downarrow\)) \\ \hline \(\bullet\) AT (Dense) & & 22.37\(\pm\)0.15 & 52.36\(\pm\)0.17 & 0.31 & \(\bullet\) AT (Dense) & & 21.72\(\pm\)0.13 & 48.64\(\pm\)0.14 & 0.07 \\ \(\circ\) AT (S-Dense) & & 20.58\(\pm\)0.13 & **48.89**\(\pm\)0.14 & 0.07 (77\(\%\)\(\downarrow\)) & \(\circ\) AT (S-Dense) & & 16.86\(\pm\)0.21 & 39.97\(\pm\)0.11 & 0.02 (71\(\%\)\(\downarrow\)) \\ \(\circ\) AT (Sparse) & VGG-16 & 21.12\(\pm\)0.22 
& 48.03\(\pm\)0.17 & 0.07 (77\(\%\)\(\downarrow\)) & \(\circ\) AT (Sparse) & DenseNet & 17.72\(\pm\)0.14 & 41.03\(\pm\)0.16 & 0.02 (71\(\%\)\(\downarrow\)) \\ \(\circ\) AT (MoE) & & 19.34\(\pm\)0.43 & 45.51\(\pm\)0.75 & 0.12 (61\(\%\)\(\downarrow\)) & \(\circ\) AT (MoE) & & 14.45\(\pm\)0.45 & 36.72\(\pm\)0.71 & 0.03 (57\(\%\)\(\downarrow\)) \\ \(\cline{2-10} \(\Delta\)**DivMoE** & & **21.21**\(\pm\)0.21 & **48.33**\(\pm\)0.17 & 0.12 (61\(\%\)\(\
RA-PGD, AdvMoE still outperforms AT (S-Dense), AT (Sparse), and AT (MoE) consistently, evidenced by the **bold** numbers in the RA-AA columns.
**MoE-CNN trained by AdvMoE enjoys better router utility.** Based on the results above and the preliminary studies in Sec. 4, we next peer into the performance difference achieved by AT (Sparse), AT (MoE), and AdvMoE from the perspective of pathway diversities. We ask:
(1) What is the relationship between the dynamic pathways generated by the routers trained by AdvMoE and the static mask optimized by AT (Sparse)? (2) What is the difference between the routing decisions made by AdvMoE and AT (MoE), and how does it impact the performance?
Regarding (1), we investigate the similarity between the pathways generated by the training methods, either AT (MoE) or AdvMoE, and the static mask found by AT (Sparse). Since the latter can be regarded as a single pathway used for all the data, we term it the _'mask pathway'_ in contrast to the _'MoE pathway'_. We calculate the intersection over union (**IoU**) score between the MoE pathway and the mask pathway under each testing dataset (the clean or adversarial version). **Fig. 6** presents the IoU distributions based on the clean and adversarial test datasets (**Fig. 6a** for AdvMoE and **Fig. 6b** for AT (MoE)). We remark that a smaller IoU score indicates a larger discrepancy between the MoE pathway and the mask pathway. As we can see, the IoU distribution of AdvMoE vs. AT (Sparse) in Fig. 6a shifts closer to \(0\) compared with Fig. 6b. This observation applies to both standard and adversarial evaluation and suggests that AdvMoE (our proposal) has a better capability than AT (MoE) to build input-specific MoE pathways, which differ more significantly from the input-agnostic mask pathway identified by the pruning-based method, AT (Sparse).
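A small sketch of the IoU comparison above is given below, assuming both the input-specific MoE pathway and the static pruning mask are flattened into boolean vectors over the backbone's channels (the flattened representation and the toy values are illustrative assumptions).

```python
# Sketch of the IoU between a dynamic MoE pathway and a static pruning mask,
# both represented as boolean channel-selection vectors.
import numpy as np

def pathway_iou(moe_pathway, mask_pathway):
    """IoU between two boolean channel-selection vectors."""
    inter = np.logical_and(moe_pathway, mask_pathway).sum()
    union = np.logical_or(moe_pathway, mask_pathway).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Toy example with 8 channels: the MoE pathway picks the first half,
# the static mask picks every other channel.
moe = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
static = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
print(pathway_iou(moe, static))  # 2 / 6 = 0.333...
```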
Regarding (2), we observe from Fig. 6 that the routers learned by AT (MoE) are more fragile to adversarial attacks compared to AdvMoE, as evidenced by the smaller overlap between the adversarial-data and clean-data distributions. This is also aligned with **Insight 3** in Sec. 4. Moreover, the routing policy learned by AdvMoE is more diverse than that of AT (MoE), as indicated by the latter's density-concentrated IoU scores. In contrast, the distribution of AdvMoE is dispersed with a smaller peak value. Therefore, regarding expert utility, AdvMoE is able to assign the inputs to a larger group of pathways than AT (MoE), making better use of the experts.
A coupling effect of expert number \(N\) and per-expert model scale \(r\) on AdvMoE. Recall that there exist two key parameters involved in MoE-CNN (**Fig. A1**): _(a)_ the number of experts \(N\), and _(b)_ the model scale \(r\) that defines the per-expert (or per-pathway) model capacity. Given the backbone model (_e.g._, ResNet-18 in this experiment), a larger \(N\) paired with a small \(r\) implies that each expert may only have limited model capacity, _i.e._, a smaller number of channels. Regardless of \(N\), if \(r=1\), the full backbone network will be used to form the identical decision pathway.
**Fig. 7** shows the RA of MoE-CNN trained by AdvMoE vs. the model scale parameter \(r\) at different values of \(N\). Two insightful observations can be drawn. **First**, there exists an MoE regime (_e.g._, \(N<8\) and \(r\in[0.5,0.9]\)), in which AdvMoE can outperform AT (Dense) (_i.e._, \(r=1\)) by a substantial margin. This shows the benefit of MoE in adversarial robustness. However, if the number of experts becomes larger (_e.g._, \(N=10\)), the increasing diversity of MoE pathways can raise the difficulty of routers' robustification and thus hampers the performance of AdvMoE (see \(N=10\) and \(r=0.8\) in **Fig. 7**). **Second**, there exists an
\begin{table}
\begin{tabular}{l|c|c c c c|c|c c c} \hline \hline
**Method** & **Backbone** & **RA-PGD** (\%) & **RA-AA** (\%) & **SA** (\%) & **GFLOPS**(\%) & **Method** & **Backbone** & **RA-PGD** (\%) & **RA-AA** (\%) & **SA** (\%) & **GFLOPS** (\%) \\ \hline \multicolumn{10}{c|}{**AT (Dense)**} \\ \(\sim\)AT (S-Dense) & & 50.13\(\pm\)0.13 & 44.72\(\pm\)0.15 & 82.99\(\pm\)0.11 & 0.54 & \(\sim\)**4** (Dense)** & & 51.75\(\pm\)0.12 & 45.13\(\pm\)0.12 & 83.54\(\pm\)0.15 & 5.25 \\ \(\sim\)AT (S-Dense) & & 48.12\(\pm\)0.09 & 42.24\(\pm\)0.13 & 80.18\(\pm\)0.11 & 0.14 (74\%) & \(\sim\)AT (S-Dense) & & 50.66\(\pm\)0.13 & 44.14\(\pm\)0.10 & 82.24\(\pm\)0.10 & 1.31 (57\%) \\ \(\sim\)AT (Sparse) & ResNet-18 & 47.93\(\pm\)0.17 & 42.11\(\pm\)0.11 & **80.46\(\pm\)0.13** & 0.14 (74\%) & \(\sim\)AT (Sparse) & WRN-28-10 & 48.95\(\pm\)0.14 & 43.97\(\pm\)0.11 & 82.44\(\pm\)0.17 & 1.31 (57\%) \\ \(\sim\)AT (MoE) & & 45.75\(\pm\)0.14 & 40.20\(\pm\)0.18 & 78.84\(\pm\)0.15 & 0.15 (72\%) & \(\sim\)AT (MoE) & & 46.73\(\pm\)0.14 & 44.11\(\pm\)0.23 & 7.42\(\pm\)0.17 & 1.75 (67\%) \\ \multicolumn{10}{c|}{**AbvMoE**} \\ \hline \multicolumn{10}{c|}{**AT (Dense)**} \\ \(\sim\)AT (S-Dense) & & 27.23\(\pm\)0.08 & 23.11\(\pm\)0.06 & 58.21\(\pm\)0.12 & 0.54 & \(\sim\)AT (Dense) & & 27.90\(\pm\)0.13 & 23.45\(\pm\)0.11 & 57.60\(\pm\)0.09 & 5.25 \\ \(\sim\)AT (S-Dense) & & 26.41\(\pm\)0.16 & 22.11\(\pm\)0.13 & 57.02\(\pm\)0.14 & 0.14 (74\%) & \(\sim\)AT (S-Dense) & & 26.30\(\pm\)0.10 & 22.23\(\pm\)0.13 & 56.80\(\pm\)0.08 & 1.31 (75\%) \\ \(\sim\)AT (Sparse) & ResNet-18 & 26.13\(\pm\)0.14 & 21.89\(\pm\)0.11 & 57.24\(\pm\)0.12 & 0.14 (74\%) & \(\sim\)AT (Sparse) & WRN-28-10 & 25.83\(\pm\)0.16 & 21.97\(\pm\)0.09 & 57.39\(\pm\)0.14 & 1.31 (57\%) \\ \(\sim\)AT (MeE) & & 27.22\(\pm\)0.42 & 16.33\(\pm\)0.25 & 53.34\(\pm\)0.61 & 0.15 (72\%) & \(\sim\)AT (MoE) & & 22.94\(\pm\)0.55 & 17.87\(\pm\)0.24 & 53.39\(\pm\)0.49 & 1.75 (67\%) \\ \hline \multicolumn{10}{c|}{**AvMoE**} \\ \(\sim\)**10.13** & **23.33\(\pm\)**0.06 & **57.73\(\pm\)**0.11 & 0.15 (72\%) & \(\sim\)**AvMoE** & & **28.82\(\pm\)**0.14 & **23.75\(\pm\)**0.12 & **57.56\(\pm\)**0.17 & 1.75 (67\%) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Robustness overview evaluated with AutoAttack [68] (**RA-AA**) on various datasets and model backbone architectures. Other settings strictly follow Tab. 1. The values of RA-PGD, SA, and GFLOPS are repeated from Tab. 1 for better comparison.
Figure 6: The distribution of the intersection over union (IoU) scores of the input-specific pathways generated by AdvMoE (a) and AT (MoE) (b) vs. the static mask found by AT (Sparse). The distribution over the clean test set and the adversarial test set is plotted for AT (MoE) and AdvMoE in the (ResNet-18, CIFAR-100) setting. Other settings are aligned with Tab. 1.
_ineffective_ MoE regime (_e.g._, \(N\geq 8\) and \(r<0.5\)), in which the performance of AdvMoE largely deviates from that of AT (Dense). In this regime, each expert consists of only a small number of channels, which restricts its robust training ability. Accordingly, both the increasing diversity of MoE pathways (large \(N\)) and the limited capacity per pathway (small \(r\)) can increase the difficulty of AT for MoE-CNN. In our experiments, we choose \(r=0.5\) and \(N=2\), which preserves the diversity of MoE pathways (_i.e._, inference efficiency) and retains the effectiveness of robust training.
Performance with different model scales. To make sure the observations and conclusions from Tab. 1 are consistent across different values of the model scale parameter \(r\), we repeated the experiments on (CIFAR-10, ResNet-18) and (CIFAR-10, WRN-28-10) using \(r\in\{0.2,0.5,0.8\}\) to cover the {sparse, medium, dense} regimes with respect to Dense (\(r=1.0\)). **Fig. 8** summarizes the obtained experiment results. As we can see, AdvMoE yields consistent robustness improvements over all the baselines, including Dense, and the improvement rises as the model scale \(r\) increases. This is not surprising, as more parameters are used when processing one input. Yet, a clear drawback brought by a larger model scale \(r\) is the increase in inference cost, evidenced by the GFLOPS numbers. When \(r\) becomes large (_e.g._, \(r=0.8\)), the efficiency benefit brought by the pathway sparsification of MoE gradually vanishes. Thus, a medium sparsity (\(r=0.5\)) is a better choice to balance the trade-off between performance and efficiency, and is adopted as our default setting.
Extended study: AdvMoE for ViT. To explore the capability of our proposal AdvMoE on ViT-based MoE models (MoE-ViT), **Tab. 3** presents additional results following the recently published SOTA baseline [30] for MoE-ViT. As we can see, AdvMoE is also applicable to MoE-ViT and can boost robustness over the SOTA baseline by over \(1\%\) RA improvement, while achieving a similar level of SA. Thus, although our work focuses on robust training for MoE-CNN, it shows promise of algorithmic generality to other MoE-based architectures. We defer a more comprehensive study to future work.
Additional experiments. We conduct ablation studies on (1) robustness evaluation using AutoAttack [68] (findings consistent with those from PGD attacks), (2) the number of attack steps used in AT, and (3) additional explorations of the coupling effect between the number of experts and the model scale. We refer readers to Appendix C for detailed results.
## 6 Conclusion
In this work, we design an effective robust training scheme for MoE-CNN. We first present several key insights into the defense mechanism of MoE-CNN by dissecting adversarial robustness through the lens of routers and pathways. We next propose AdvMoE, the first robust training framework for MoE-CNN via bi-level optimization, robustifying routers and pathways in a cooperative and adaptive mode. Finally, extensive experiments demonstrate the effectiveness of AdvMoE in a variety of data-model setups. Meanwhile, we admit that AdvMoE requires roughly twice the computational cost of the vanilla AT baseline, since the alternating optimization calls two back-propagations per step. Addressing this efficiency concern presents a meaningful avenue for future work.
## Acknowledgement
The work of Y. Zhang, S. Chang and S. Liu was partially supported by National Science Foundation (NSF) Grant IIS-2207052 and Cisco Research Award. The work of Z. Wang is in part supported by the US Army Research Office Young Investigator Award (W911NF2010240).
Figure 8: Robustness comparison of models trained with different methods under various model scale settings. Results higher than that of AT (Dense) are marked with \(\star\). Other setups are aligned with Tab. 1. Please refer to Appendix C for exact numbers and GFLOPS comparisons.
| Method | RA (%) | SA (%) | GFLOPS (#) |
| --- | --- | --- | --- |
| SOTA [30] | 44.63 | 61.72 | 0.27 |
| AdvMoE | **45.93** | 61.67 | 0.27 |
Table 3: Performance on robust training for MoE-ViT with the setup (ImageNet, DeiT-Tiny). Other settings follow Tab. 1.
Figure 7: Performance of AdvMoE under CIFAR-10 using ResNet-18 as the backbone network for different values of expert number \(N\) and model scale \(r\). The black dash line denotes the performance of Dense (_i.e._\(r=1\)).
|
2303.12475
|
Scale dependence of the q and T parameters of the Tsallis distribution
in the process of jet fragmentation
|
The dependence of the $q$ and $T$ parameters of the
Tsallis-distribution-shaped fragmentation function (FF) on the fragmentation
scale (found to be equal to the jet mass) is calculated via the resummation of
the branching process of jet fragmentation in the leading-log approximation
(LLA) in the $\phi^3$ theory. Jet and hadron spectra in electron-positron
($e^+e^-$) annihilations with 2- and 3-jet final states are calculated using
virtual leading partons. It is found that jets, produced earlier in the
branching process, are more energetic, and the energy, angle and multiplicity
distributions of hadrons stemming from them are broader. It is also found that
replacing the LL resummation in the branching process by a single splitting
provides a good approximation for the jet energy distribution in 2-jet events.
Furthermore, a micro-canonical statistical event generator is presented for the
event-by-event calculation of hadron momenta in $e^+e^-$ annihilations.
|
Karoly Urmossy, Antal Jakovac
|
2023-03-22T11:39:13Z
|
http://arxiv.org/abs/2303.12475v1
|
Scale dependence of the \(q\) and \(T\) parameters of the Tsallis distribution in the process of jet fragmentation
###### Abstract
The dependence of the \(q\) and \(T\) parameters of the Tsallis-distribution-shaped fragmentation function (FF) on the fragmentation scale (found to be equal to the jet mass) is calculated via the resummation of the branching process of jet fragmentation in the leading-log approximation (LLA) in the \(\phi^{3}\) theory. Jet and hadron spectra in electron-positron (\(e^{+}e^{-}\)) annihilations with 2- and 3-jet final states are calculated using virtual leading partons. It is found that jets, produced earlier in the branching process, are more energetic, and the energy, angle and multiplicity distributions of hadrons stemming from them are broader. It is also found that replacing the LL resummation in the branching process by a single splitting provides a good approximation for the jet energy distribution in 2-jet events. Furthermore, a micro-canonical statistical event generator is presented for the event-by-event calculation of hadron momenta in \(e^{+}e^{-}\) annihilations.
pacs: 13.87.FhFragmentation into hadrons and 12.40.EeStatistical models of strong interactions 05.40.-aFluctuation phenomena-statistical physics
## 1 Introduction
The spectra of hadrons produced in high-energy collisions are more-or-less well described by various versions of phenomenological models based on the _cut-power law_ or _Tsallis-distribution_ (TS), \(f_{TS}(E)\propto[1+(q-1)E/T]^{-1/(q-1)}\). \(E\) is the energy of the hadron, and \(q\) measures the deviation from the Boltzmann-Gibbs (BG) distribution (\(q=1\)). The dependence of the \(q\) and \(T\) parameters on the features of the collisions (\(\sqrt{s}\), centrality, type and number of the produced final state particles) has extensively been studied recently in electron-positron (\(e^{+}e^{-}\)) [1; 2; 3; 4], positron-proton (\(e^{+}p\)) [10]-[11], proton-proton (\(pp\)) [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29], proton-nucleus (\(pA\)) and deuteron-nucleus (dA) [20; 21; 22; 23; 24; 25; 26; 27; 28; 29], nucleus-nucleus (AA) [20; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44] collisions, and Drell-Yan processes [52]. The found tendencies as well as the quality of the fits to measured data depend on the version of the model used. Some models conjecture that a quark-gluon plasma (QGP), an expanding and cooling thermal source, is created in the examined collision, and use the Cooper-Frye formula to calculate the distribution of hadrons produced at break-up [16; 17; 18; 19; 26; 27; 28; 29; 33; 34; 35; 36; 37; 38; 39; 40; 43; 44; 45; 46; 47; 48]. These models provide a better agreement with measured data in \(AA\) collisions, especially in the interval where the hadron's transverse momentum is smaller than its mass, \(p_{T}\lessapprox m_{h}\). However, in order to describe both the spectra and the azimuthal anisotropy \(v_{2}\) in AA collisions at \(\sqrt{s}\geq 200\) GeV, and \(p_{T}\) up to 20 GeV/c, two-component models [41; 42; 43; 44; 45; 46; 47; 48] are needed. Such models build up hadron yields from those hadrons which stem from the QGP (_'soft'_ component) and those which stem from jets (_'hard'_ component). It has been found that the more central the collisions (the greater the hadron multiplicity), the larger the ratio of the _soft_ hadrons and the closer their TS to the BG distribution. This suggests that in more central collisions, the QGP is closer to equilibrium [44; 45; 46; 47; 48]. The distribution of the _hard_ yields, which dominates the spectrum and \(v_{2}\) for \(p_{T}\gtrap
of jets [50; 51] and direct photons [49] for \(p_{T}\gtrapprox 0.1\sqrt{s}\) in \(pp\), \(pA\) and \(AA\) collsions.
From the theoretical point of view, the TS distribution has been obtained in the canonical and micro-canonical ensembles in the case when the temperature [2; 3], volume [8; 9] or multiplicity [47] fluctuates according to specific patterns due to some external reason. The TS distribution is also the stationary solution of the Langevin and Fokker-Planck equations with damping and noise terms depending linearly on the particle's energy [59]. Besides, thermodynamic properties of the _Linear Sigma Model_ [56], the _Nambu-Jona-Lasinio Model_ [57] and a hadron gas in the presence of a magnetic field [54], along with general thermodynamic relations [55] and a small \(q-1\) expansion, have been obtained in the framework of TS statistics. It is, however, important to point out that a _solid derivation of the TS distribution from first principles of the strong interaction is still missing_. Nevertheless, it has been shown in [60] that the \(p_{T}\) spectrum of a leading parton (jet), obtained from the leading-order QCD cross section of two-parton scatterings in \(pp\) collisions, approximately takes the form \(d\sigma^{\,Jet}/dx\approx(1-x)^{a}/x^{4.5}\), with \(x=2p_{T}/\sqrt{s}\). The \((1-x)^{a}\) factor comes from the form of the most commonly used parton distribution functions (PDF), and the \(x^{-(4+1/2)}\) factor stems from the hard scattering of partons in the incoming protons and the integration over the unmeasured other jet momenta. This result reproduces the power of jet spectra in accordance with Tevatron and LHC results, and it also accounts for the rapid decrease of the distribution at large \(p_{T}\), where the jet energy becomes comparable with \(\sqrt{s}\). In that paper, it is also argued that the low-\(p_{T}\) behaviour of jet spectra, where multiple scattering and non-perturbative processes become important, cannot be explained based on a single hard scattering of partons. These arguments suggest that the \(q\) parameter of the TS distribution may be calculated in perturbation theory (PT), but the \(T\) parameter is of non-perturbative origin.
To avoid having to deal with the parton structure of the colliding particles, in this paper we examine \(e^{+}e^{-}\) annihilations and derive the scale dependence of the \(q\) and \(T\) parameters of the TS-shaped hadron spectrum in fragmentation processes. Inspired by [62; 63; 67], we use the \(\phi^{3}\) model, which is the simplest asymptotically free quantum field theory (QFT) mimicking the 3-gluon vertex of QCD. As perturbative methods for the calculation of the fragmentation of an off-shell leading parton into hadrons need a non-perturbative input, namely the form of the fragmentation function (FF) at a low starting scale, which cannot yet be derived from first principles, there is room for model building at this point. For this purpose, we use the model [6; 11] presented in Sec. 2, because it takes into account the finiteness of the total energy of the produced hadrons using micro-canonical statistics, and it obtains the TS distribution making use of the experimentally observed negative binomial (NBD) hadron multiplicity fluctuations. Similarly to [63; 64], in Sec. 3 we start from the Dyson-Schwinger equation (DSE) for the fragmentation of a single parton into hadrons, put the momenta of daughter partons onto the mass shell, and sum up the leading-log (LL) terms via the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation [61]. We obtain the dependence of the \(q\) and \(T\) parameters on the virtuality of the leading parton (the scale) via solving the DGLAP equation for the moments of the FF. In Sec. 4, we calculate jet and hadron distributions in \(e^{+}e^{-}\) annihilations with 2- and 3-jet final states using a Monte-Carlo (MC) event generator method based on the previously obtained FF. We summarize the results in Sec. 5.
The approach presented in this paper mainly differs from most pQCD methods in that it allows for the virtualities of the leading partons to be arbitrarily large (within the boundaries set by energy-momentum conservation), whereas usual parton model calculations use on-shell leading partons, as they rely on factorisation theorems (FT). Consequently, we obtain broad distributions for jet masses and for the angles between jet and hadron momenta, unlike in works like [68] which use a kinematic approximation in which, momenta of leading and daughter partons are nearly collinear. FTs [65; 66] prove that if jets are highly boosted bunches of particles of low total momentum squared (\(M_{J}=\sqrt{P_{J}^{2}}\ll E_{J}\)), and the angles between jets are large, then, their contribution to cross sections can be expressed by multiplicative factors (convoluted with the hard part of the process). Interference terms coming from gluon exchange between jets or jets and soft processes can either be incorporated into the jet factors via Wilson-lines, or can be neglected, as they are suppressed by factors of \(M_{J}/E_{J}\ll 1\). However, in a portion of events, the basic conditions of FTs do not hold, as jet masses are comparable to jet energies according to measurements [70; 71; 69].
Our approach resembles the recursive method presented in [67], which generates virtual daughter partons in each round, however, in two steps. In the first step, it generates on-shell daughter partons using fixed-order cross sections; then, in the second step, it generates daughter parton 'masses' using the distribution obtained from the fixed-order cross section of on-shell grand-daughter production. This procedure starts again in the next round, until parton virtualities decrease to the order of hadron masses. This way, every parton in a given generation of the cascade may be virtual; however, the vector parts of their momenta are generated according to cross sections involving mother and daughter partons, while virtualities are obtained from cross sections involving daughter and grand-daughter ones. In our method, on the other hand, only the leading parton of a jet is virtual, as it produces on-shell daughters in the fragmentation process, but both the vector part of its momentum and its 'mass' are generated from the same distribution obtained from the LL resummation of the fragmentation process (not from fixed-order graphs, as in [67]).
## 2 A statistical model for the FF at starting scale
We use the minimalistic conjecture that at the starting scale \(M_{0}\) the sole constraint required throughout the hadronisation process is the conservation of energy-momentum. This way, hadrons stemming from the leading parton of momentum \(P=\left(\sqrt{M_{0}^{2}+{\bf P}^{2}},{\bf P}\right)\) form a micro-canonical ensemble. In the micro-canonical ensemble, the main quantity determining the distribution of particles is the phasespace, which, for \(n\) massless particles in \(D\) dimensions, is
\[\Omega_{n}(P)\;=\;\prod_{i=1}^{n}\int\frac{d^{D-1}{\bf p}_{i}}{p_{i}^{0}}\, \delta^{D}\left(\sum_{j}p_{j}^{\mu}-P^{\mu}\right)\sim M_{0}^{n(D-2)-D}\;. \tag{1}\]
This result follows from dimensional analysis and Lorentz-invariance, but the detailed calculation can be found in Appendix A. Consequently, the one-particle distribution in the micro-canonical ensemble is
\[d_{n}(x,M_{0})\;=\;\frac{\Omega_{n-1}(P-p)}{\Omega_{n}(P)}\;=\;\frac{\left(\frac{2}{M_{0}}\right)^{\omega^{*}}\Gamma\left(\frac{\omega^{*}n}{2}\right)}{\kappa_{D-1}\,\Gamma\left(\omega^{*}\right)\Gamma\left(\frac{\omega^{*}(n-2)}{2}\right)}\,\left(1-x\right)^{\frac{\omega^{*}(n-2)}{2}-1}\;, \tag{2}\]

with \(x=2pP/P^{2}\) and \(\omega^{*}=D-2\).
Note that, when calculating Eq. (6), the \(n=2\) term \(d_{2}(x,M_{0})\) was left out of the sum in Eq. (5). Besides, the TS distribution does not go to zero at \(x=1\), due to the uniform distribution \(d_{3}(x,M_{0})\) which was included in the sum in Eq. (5). If we leave out the \(n=3\) term from the sum, we arrive at a modified version of the TS distribution
\[d^{*}(x,M_{0}^{2})\ =\ \left(\frac{\bar{p}}{1-\bar{p}}\right)^{3}\frac{r(r+1) (r+2)}{\pi M_{0}^{2}}\left[\left(1+\frac{\bar{p}}{1-\bar{p}}x\right)^{-(r+3)}- \left(1+\frac{\bar{p}}{1-\bar{p}}\right)^{-(r+3)}\right]\,, \tag{8}\]
which obeys \(d^{*}(1,M_{0}^{2})=0\); thus, it decreases more rapidly than the TS function for \(x\geq 0.1\), providing a better description of hadron spectra in \(e^{+}e^{-}\) annihilations and of jet spectra in \(pp\) collisions.
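For orientation, the modified distribution in Eq. (8) is straightforward to evaluate once the NBD parameters \(\bar{p}\) and \(r\) are expressed through the mean multiplicity \(\bar{n}\) and the variance \(\sigma^{2}\) (cf. Eq. (29)). The short Python sketch below, written for \(D=4\), does this for illustrative (not fitted) parameter values; the function name and the choice of inputs are ours.

```python
import numpy as np

def d_star(x, M0, nbar, sigma2):
    """Modified TS-like hadron distribution of Eq. (8) in D = 4.

    nbar and sigma2 (> nbar) are the mean and variance of the hadron
    multiplicity; p and r are the NBD parameters of Eq. (29).
    """
    p = 1.0 - nbar / sigma2              # NBD parameter p-bar
    r = nbar**2 / (sigma2 - nbar)        # NBD parameter r
    a = p / (1.0 - p)
    norm = a**3 * r * (r + 1.0) * (r + 2.0) / (np.pi * M0**2)
    return norm * ((1.0 + a * x)**(-(r + 3.0)) - (1.0 + a)**(-(r + 3.0)))

x = np.linspace(0.0, 1.0, 11)
print(d_star(x, M0=3.0, nbar=6.8, sigma2=10.0))   # vanishes at x = 1 by construction
```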
Figure 1: Graphs contributing to the fragmentation of a virtual parton of momentum \(P\) into a measured hadron of momentum \(p\) plus anything at order \(g^{4}\) in the \(\phi^{3}\) theory.
## 3 Scale evolution of the FF
To obtain \(D(p,P)\), the probability of a parton of momentum \(P\) producing a hadron of momentum \(p\) (the amputated cut graph blob in Fig. 1.a) at higher scales \(P^{2}\geq M_{0}^{2}\), we use the DSE at \({\cal O}(g^{4})\):
\[D(p,P) = g^{2}\int\frac{d^{D}k}{(2\pi)^{D}}\,\frac{D(p,k)\bar{D}(P-k)}{k^{ 4}(P-k)^{4}}\left[1+g^{2}\int\frac{d^{D}q}{(2\pi)^{D}}\,\bigg{\{}\frac{i\,n_{c }}{q^{2}(q-k)^{2}(P-q)^{2}}\,+\right. \tag{9}\] \[+\left.\frac{i\,n_{d}}{k^{2}q^{2}(q-k)^{2}}+\frac{i\,n_{e}}{(P-k) ^{2}q^{2}(P-q-k)^{2}}\right\}+\left.\bar{n}_{c}\left(\delta Z_{g}+\frac{3}{2} \delta Z_{3}\right)-(\bar{n}_{d}+\bar{n}_{e})\delta Z_{3}\right]\] \[+ g^{4}\int\frac{d^{D}k}{(2\pi)^{D}}\int\frac{d^{D}q}{(2\pi)^{D}} \,\frac{D(p,k)\bar{D}(q-k)\bar{D}(P-q)}{k^{4}(q-k)^{4}(P-q)^{4}}\times\] \[\times\left\{\frac{n_{f}}{q^{4}}+\frac{n_{g}}{(P-k)^{4}}+\frac{n _{h}}{q^{2}(P-k)^{2}}+\frac{n_{i}}{q^{2}(P-q+k)^{2}}\right\}\.\]
Factors like \(\bar{D}(k)=\int\frac{d^{D-1}{\bf p}^{\prime}}{p_{0}^{\prime}}D(p^{\prime},k)\) arise when we integrate out the momenta of un-measured hadrons stemming from other jets. The combinatorial factors \(n_{c},\ldots,n_{i}\) multiply terms depicted in Fig. 1.c-i.
If we were only interested in the partonic part of the branching process, \(D(p,P)\) would denote the distribution of a _parton_ of momentum \(p\) created by the leading parton of momentum \(P\). In that case, the \(p-\)integrated cut blobs would be replaced by the cut propagator: \(\bar{D}(k)/k^{4}\to\delta(k^{2})\) at leading order, and we would arrive at the problem presented in [63].
Let us parametrize the fragmentation function \(D\) for later model building purposes as a product
\[D(p,P) = P^{10-D}\rho\left(P^{2}\right)d\left(\frac{2pP}{P^{2}},P^{2}\right) \tag{10}\]
of a _"virtuality (or jet mass) distribution"_\(\rho\left(P^{2}\right)\) normalized as \(\int dP^{2}\rho\left(P^{2}\right)=1\), and a _"hadron momentum distribution"_\(d\left(x,P^{2}\right)\) normalized as \(\int\frac{d^{D-1}p}{p_{0}}d\left(\frac{2pP}{P^{2}},P^{2}\right)=\bar{n}(P^{2})\), being the average hadron multiplicity in the jet. The \(P^{10-D}\) factor renders the mass-dimension \(\left[D(p,P)\right]=10-2D\) of the FF, and also removes the poles coming from the propagators on both sides of the cut blobs, as we work in \(D=6\) dimensions.
Now, let us simplify the DSE via making the approximation of pushing the virtualities of the daughter partons (cut blobs) down to a negligible value \(m_{0}^{2}\ll P^{2}\). It is interesting that the phasespace of a massless hadron of momentum \(p\) stemming from a leading parton of momentum \(k=(k_{0},{\bf k})=(k_{0},k,{\bf k}_{T})\) is an ellipsoid of center \({\bf k}/2\), length \(k_{0}\) and width \(m=\sqrt{k^{2}}\). This can be seen from the condition defining the boundary of the phasespace, which is the requirement that the phasespace of the rest of the hadrons stemming from the same jet \(\Omega_{n-1}(k-p)\) needs to be positive. According to Eq. (1), it requires that \((k-p)^{2}\geq 0\), or equivalently, \(2pk/k^{2}\leq 1\). For instance, using the parametrisation \(P=(M,0,{\bf 0})\), \(k=(k_{0},k,{\bf 0})\) and \(p=\left(\sqrt{p_{\parallel}^{2}+p_{\perp}^{2}},p_{\parallel},{\bf p}_{\perp}\right)\), we arrive at the equation of an ellipsoid: \(\left(\frac{p_{\perp}}{m/2}\right)^{2}+\left(\frac{p_{\parallel}-k/2}{k_{0}/2}\right)^{2}\leq 1\). If we push the virtuality of the hadronizing parton down to a negligible value \(m\to m_{0}\approx 0\), the phasespace for the daughter hadrons shrinks to a one-dimensional interval as \(p_{T}\to 0\), \(p\to(p_{\parallel},p_{\parallel},{\bf 0})\) with \(p_{\parallel}\in[0,k_{0}]\). Besides, the first argument of \(d(p,k)\) simplifies as \(\frac{2pk}{k^{2}}\to\frac{2p_{\parallel}}{k_{0}+k}\to\frac{p_{\parallel}}{k}=\frac{x}{z}\), with \(x=\frac{2pP}{P^{2}}=\frac{2p_{\parallel}}{M}\) and \(z=\frac{2kP}{P^{2}}=\frac{2k}{M}\). We may carry out the above procedure in the DSE via setting
\[\rho(\ldots)\to(2\pi)\delta(\ldots),\quad\mbox{and}\quad d\left(\frac{2pk}{k^{ 2}},k^{2}\right)\to(2\pi)^{D-2}\delta^{D-2}({\bf k}_{T})\,d_{0}\left(\frac{x}{ z},m_{0}^{2}\right)\,, \tag{11}\]
where \({\bf k}_{T}{\bf p}=0\). This way, the DSE becomes
\[P^{2}D\left(x,P^{2}\right) = \frac{g^{2}}{2}\,d_{0}(x,m_{0}^{2})\ +\ \frac{g^{4}}{2}\int\frac{dz}{z}\,d_{0}\left(\frac{x}{z},m_{0}^{2}\right)A(z,P^{ 2})\;,\;\;\mbox{with}\] \[A(z,P^{2}) = \delta(1-z)\left[\int\frac{d^{D}q}{(2\pi)^{D}}\left\{\frac{i\,n_ {c}}{q^{2}(q-k)^{2}(P-q)^{2}}+\frac{i\,n_{d}}{m_{0}^{2}q^{2}(q-k)^{2}}\right.\right.\] \[\left.\hskip 42.679134pt+\ \left.\frac{i\,n_{e}}{m_{0}^{2}q^{2}(P-q-k)^{2}} \right\}+\frac{\bar{n}_{c}}{g^{2}}\left(\delta Z_{g}+\frac{3}{2}\delta Z_{3} \right)-\frac{\bar{n}_{d}+\bar{n}_{e}}{g^{2}}\delta Z_{3}\right]\] \[+ \frac{P^{2}}{2\pi}\int\frac{d^{D}q}{(2\pi)^{D}}(2\pi)\delta \left[(q-k)^{2}\right](2\pi)\delta\left[(P-q)^{2}\right]\ \times\] \[\times\ \left\{\frac{n_{f}}{q^{4}}+\frac{n_{g}}{(P-k)^{4}}+\frac{n_{ h}}{q^{2}(P-k)^{2}}+\frac{n_{i}}{q^{2}(P-q+k)^{2}}\right\}\;.\]
Terms in \(A(z,P^{2})\) are calculated in Appendix B.
In order to keep only the LL terms, and to eliminate the collinear divergence from \(A(z,P^{2})\), we differentiate Eq. (12) with respect to \(t=\ln P^{2}\) to obtain
\[\partial_{t}{\cal D}\left(x,P^{2}\right)\ =\ \frac{g^{4}}{2}\int\frac{dz}{z}\,d_{0} \left(\frac{x}{z},m_{0}^{2}\right)\partial_{t}A(z,P^{2}) \tag{13}\]
for the dimensionless function \({\cal D}(x,P^{2})=P^{2}D(x,P^{2})\). (Differentiating the coupling \(g\) would result in terms of \({\cal O}(g^{3})\), which are thus neglected.) As \({\cal D}=\frac{g^{2}}{2}d_{0}+{\cal O}(g^{4})\), we arrive at the DGLAP equation:
\[\partial_{t}{\cal D}\left(x,P^{2}\right)\ =\ g^{2}\int\frac{dz}{z}\,{\cal D} \left(\frac{x}{z},P^{2}\right)\Pi(z,P^{2})\;, \tag{14}\]
with splitting function (SF)
\[\Pi(z)\ =\ \frac{\partial}{\partial\ln P^{2}}A(z,P^{2})\ =\ \frac{n_{f}}{(4\pi)^{3}}\frac{1-z}{z^{2}}-\frac{n_{c}}{2(4\pi)^{3}}\delta(1-z )\;. \tag{15}\]
Note that the SF is proportional to the distribution of daughter partons in the LL approximation at \({\cal O}(g^{4})\): \(p_{0}\frac{dN^{DP}}{d^{5}p}\sim\Pi(z,P^{2})\), thus \(\frac{dN^{DP}}{dz}\sim z^{3}\Pi(z)=az(1-z)+b\,\delta(1-z)\), which is of the form of the SF used in [62].
### Solving the DGLAP equation
Introducing Mellin transforms \(\tilde{f}(\omega)=\int\limits_{0}^{1}dxx^{\omega-1}f(x)\), the DGLAP equation Eq. (14) simplifies to
\[\partial_{t}\tilde{\cal D}\left(\omega,P^{2}\right)\ =\ g^{2}\,\tilde{\cal D} \left(\omega,P^{2}\right)\tilde{\Pi}(\omega)\;, \tag{16}\]
where the Mellin-transform of the SF is
\[\tilde{\Pi}(\omega)\ =\ \frac{1}{(4\pi)^{3}}\left[\frac{n_{f}}{(\omega-1)( \omega-2)}-\frac{n_{c}}{2}\right]\;. \tag{17}\]
The solution of the DGLAP equation is
\[\tilde{\cal D}\left(\omega,P^{2}\right)\ =\ e^{b(P^{2})\tilde{\Pi}(\omega)}\tilde{\cal D}\left(\omega,P_{0}^{2}\right),\quad\mbox{with}\quad b(P^{2})=\int\limits_{\ln P_{0}^{2}}^{\ln P^{2}}dt\,g^{2}=\frac{2}{\beta_{0}}\ln\left[\frac{\ln(P^{2}/\Lambda^{2})}{\ln(P_{0}^{2}/\Lambda^{2})}\right]\] \[\mbox{and coupling}\quad g^{2}=\frac{2}{\beta_{0}\ln(P^{2}/\Lambda^{2})}\;. \tag{18}\]
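As a numerical illustration of Eqs. (16)-(18), the following Python sketch evaluates the Mellin transform of the splitting function, the resummed "evolution time" \(b(P^{2})\) with the running coupling, and the corresponding evolution factor of a moment of the FF. The combinatorial factors \(n_{f}\) and \(n_{c}\) and the initial moment are left as inputs; all names are our own.

```python
import numpy as np

def Pi_tilde(omega, n_f, n_c):
    """Mellin transform of the splitting function, Eq. (17)."""
    return (n_f / ((omega - 1.0) * (omega - 2.0)) - 0.5 * n_c) / (4.0 * np.pi)**3

def b_evol(P2, P02, Lambda2, beta0):
    """'Evolution time' b(P^2) of Eq. (18), using g^2 = 2 / (beta0 ln(P^2/Lambda^2))."""
    return (2.0 / beta0) * np.log(np.log(P2 / Lambda2) / np.log(P02 / Lambda2))

def evolve_moment(D0, omega, P2, P02, Lambda2, beta0, n_f, n_c):
    """D~(omega, P^2) = exp[ b(P^2) Pi~(omega) ] * D~(omega, P0^2), Eq. (18)."""
    return np.exp(b_evol(P2, P02, Lambda2, beta0) * Pi_tilde(omega, n_f, n_c)) * D0
```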
If we substitute our form of the FF in Eq. (10) (and take into account that \({\cal D}(x,P^{2})=P^{2}D(x,P^{2})\)), we obtain that
\[M^{12-D}\rho(M^{2})\tilde{d}(\omega,M^{2})=e^{b(M^{2})\tilde{\Pi}(\omega)}M_{0}^{12-D}\rho(M_{0}^{2})\tilde{d}(\omega,M_{0}^{2})\;, \tag{19}\]
where \(d(x,M_{0}^{2})\) is the hadron distribution at a low initial scale in Eq. (5), for which, we have constructed the statistical model in Sec. 2. To solve for both the jet mass distribution \(\rho(M^{2})\) and the hadron distribution \(d(x,M^{2})\), we exploit the normalisation condition
\[\bar{n}(M^{2})=\int\frac{d^{D-1}p}{p}d\left(\frac{2p}{M},M^{2}\right)=\left( \frac{M}{2}\right)^{\omega^{*}}\kappa_{D-1}\tilde{d}(\omega^{*},M^{2})\;, \tag{20}\]
with \(\omega^{*}=D-2\) and solid angle \(\kappa_{D}=2\pi^{D/2}/\Gamma(D/2)\). Consequently, when taking Eq. (19) at \(\omega=\omega^{*}\), \(\tilde{d}\) drops out, and we get the solution for \(\rho\):
\[\rho(M^{2})=\left(\frac{M_{0}}{M}\right)^{14-2D}\frac{\bar{n}_{0}}{\bar{n}}\rho(M_{0}^{2})\,e^{b(M^{2})\tilde{\Pi}(\omega^{*})}\;. \tag{21}\]
Writing this back into Eq. (19), we obtain the hadron distribution in the jet:
\[\tilde{d}(\omega,M^{2})=e^{b(M^{2})[\tilde{\Pi}(\omega)-\tilde{\Pi}(\omega^{*})]}\left(\frac{M_{0}}{M}\right)^{D-2}\frac{\bar{n}}{\bar{n}_{0}}\,\tilde{d}(\omega,M_{0}^{2})\;. \tag{22}\]
Now, we can readily obtain the FF by substituting Eq. (21) and Eq. (22) into Eq. (10):
\[\tilde{D}(\omega,M^{2})=M_{0}^{10-D}\left(\frac{M_{0}}{M}\right)^{2}\rho(M_{0}^{2})\,\tilde{d}(\omega,M_{0}^{2})\,e^{b(M^{2})\tilde{\Pi}(\omega)}\;. \tag{23}\]
As we have a model for \(d(x,M_{0}^{2})\), we could use the inverse Mellin-transform to calculate the FF, however, we will use a less complicated approximation in the next section.
Finally, the \(p\)-integrated cut blobs, which are the factors that jets contribute to cross sections if we do not measure the hadrons in them, take the form
\[\bar{D}(M^{2}) = \int\frac{d^{D-1}p}{p}D\left(\frac{2p}{M},M^{2}\right)=\kappa_{D-1}\left(\frac{M}{2}\right)^{D-2}\tilde{D}(\omega^{*},M^{2}) \tag{24}\] \[= M^{D-4}M_{0}^{14-2D}\rho(M_{0}^{2})\,e^{b(M^{2})\tilde{\Pi}(\omega^{*})}\sim M^{D-4}\ln^{2\tilde{\Pi}(\omega^{*})/\beta_{0}}(M^{2}/\Lambda^{2})\;.\]
### Scale evolution of the \(q\) and \(T\) parameters of the TS distribution
Although the TS function Eq. (6) is not an exact solution of the DGLAP equation, along with the hadron multiplicity distribution Eq. (4), it describes measured data of hadrons stemming from jets in \(e^{+}e^{-}\)[4] and \(pp\)[5; 6] collisions at various energy scales. Based on this, we conjecture that Eq. (5) is a reasonably good approximation of \(d(x,M^{2})\), the hadron distribution in a jet at any scale \(M\). To determine the scale dependence of the parameters \(\bar{n}\) and \(\sigma\) of the model, let us insert \(\tilde{d}(\omega,M)=\sum\limits_{n}{\cal P}_{n}(M)\,n\,\tilde{d}_{n}(\omega,M)\) with
\[\tilde{d}_{n}(\omega,M) = \frac{\left(\frac{2}{M}\right)^{\omega^{*}}\Gamma(\omega)\Gamma \left(\frac{\omega^{*}n}{2}\right)}{\kappa_{D-1}\Gamma(\omega^{*})\Gamma \left(\omega+\frac{\omega^{*}(n-2)}{2}\right)} \tag{25}\]
into Eq. (22), to obtain
\[\sum\limits_{n}{\cal P}_{n}(M)\,n\,\tilde{d}_{n}(\omega,M)=e^{b(M^{2})[\tilde{\Pi}(\omega)-\tilde{\Pi}(\omega^{*})]}\left(\frac{M_{0}}{M}\right)^{\omega^{*}}\frac{\bar{n}}{\bar{n}_{0}}\,\sum\limits_{n}{\cal P}_{n}(M_{0})\,n\,\tilde{d}_{n}(\omega,M_{0})\;. \tag{26}\]
As the TS distribution is only an approximation of the exact solution of the DGLAP equation, we cannot eliminate \(\omega\) from this equation. Nevertheless, we may require Eq. (26) to hold for a suitable set of fixed values of \(\omega\) to force the scale evolution of certain moments of \(x\) to be exact. In this paper, we prescribe that Eq. (26) hold for \(\omega=\{\omega^{*},\omega^{*}\pm 1\}\) (which corresponds to the moments \(\left<\frac{1}{x}\right>,\left<1\right>\), and \(\left<x\right>\)) and obtain
\[\bar{n}(M) = \bar{n}_{0}\,e^{-b\,a_{+}}\;,\] \[\sigma^{2}(M) = \bar{n}\left[\frac{2}{\omega^{*}}+e^{b\,a_{-}}\left(\frac{\sigma_ {0}^{2}}{\bar{n}_{0}}+\bar{n}_{0}-\frac{2}{\omega^{*}}\right)\right]-\bar{n} ^{2}\;, \tag{27}\]
with \(b=b(M^{2})\), \(a_{\pm}=\tilde{\Pi}(\omega^{*}\pm 1)-\tilde{\Pi}(\omega^{*})\) and initial values \(\sigma(M_{0})=\sigma_{0},\bar{n}(M_{0})=\bar{n}_{0}\). Furthermore, we choose the initial scale \(M_{0}\) to be the scale, at which, the multiplicity distribution becomes Poissonian (\(\sigma_{0}^{2}=\bar{n}_{0}\)), to get even simpler results:
\[\bar{n}(M) = \bar{n}_{0}\,e^{-b\,a_{+}}\,\] \[\sigma^{2}(M) = \bar{n}\left[\frac{2}{\omega^{*}}+e^{b\,a_{-}}\left(1+\bar{n}_{0 }-\frac{2}{\omega^{*}}\right)\right]-\bar{n}^{2}\;. \tag{28}\]
The obtained logarithmic growth of the mean hadron multiplicity with the energy scale \(\bar{n}\propto\ln^{a}(M)\) is in accordance with observations. Consequently, the scale dependence of the standard parameters of the hadron multiplicity distribution is
\[p = 1-\frac{\bar{n}}{\sigma^{2}}=1-\frac{1}{\frac{2}{\omega^{*}}+e^{ b\,a_{-}}\left(1+\bar{n}_{0}-\frac{2}{\omega^{*}}\right)-\bar{n}_{0}\,e^{-b\,a_{+} }}\;,\] \[r = \frac{\bar{n}^{2}}{\sigma^{2}-\bar{n}}=\frac{\bar{n}_{0}\,e^{-b \,a_{+}}}{\frac{2}{\omega^{*}}+e^{b\,a_{-}}\left(1+\bar{n}_{0}-\frac{2}{ \omega^{*}}\right)-\bar{n}_{0}\,e^{-b\,a_{+}}-1}\;. \tag{29}\]
In \(D=4\) dimensions, the FF takes the form of the TS distribution Eq. (6), and the scale evolution of its parameters (using Eq. (7)) is
\[q-1 = \frac{\sigma^{2}-\bar{n}}{\bar{n}^{2}+3(\sigma^{2}-\bar{n})}= \frac{1-e^{-b\,(a_{+}+a_{-})}}{3-2e^{-b\,(a_{+}+a_{-})}}\;,\] \[\tau = \frac{\bar{n}}{\bar{n}^{2}+3(\sigma^{2}-\bar{n})}=\frac{\tau_{0} }{3\,e^{b\,a_{-}}-2\,e^{-b\,a_{+}}}\;, \tag{30}\]
having introduced \(\tau_{0}=1/\bar{n}_{0}\) in accordance with the equipartition principle. This type of logarithmically rising tendency of \(q\), and falling tendency of \(\tau\), has been observed in the various high-energy collisions listed in Sec. 1. Looking at the asymptotic behaviour, it is interesting to point out that \(q\) is bounded from above: \(q\;\rightarrow\;\frac{4}{3}\).
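To make the scale dependence concrete, the sketch below evaluates Eqs. (28)-(30) numerically: given the initial multiplicity \(\bar{n}_{0}\) at \(M_{0}\) (where the multiplicity distribution is Poissonian) and the splitting-function differences \(a_{\pm}\), it returns \(\bar{n}\), \(\sigma^{2}\), \(q\) and \(\tau\) at the scale \(M\). The parameter values in the example call are placeholders, not fit results.

```python
import numpy as np

def ts_parameters(M, M0, nbar0, Lambda, beta0, a_plus, a_minus, D=4):
    """Scale evolution of nbar, sigma^2, q and tau, Eqs. (28)-(30).

    a_plus  = Pi~(omega*+1) - Pi~(omega*),  a_minus = Pi~(omega*-1) - Pi~(omega*),
    with omega* = D - 2; M0 is chosen such that sigma0^2 = nbar0 (Poissonian).
    """
    omega_star = D - 2
    b = (2.0 / beta0) * np.log(np.log(M**2 / Lambda**2) / np.log(M0**2 / Lambda**2))
    nbar = nbar0 * np.exp(-b * a_plus)                                   # Eq. (28)
    sigma2 = nbar * (2.0 / omega_star
                     + np.exp(b * a_minus) * (1.0 + nbar0 - 2.0 / omega_star)) - nbar**2
    q = 1.0 + (sigma2 - nbar) / (nbar**2 + 3.0 * (sigma2 - nbar))        # Eq. (30)
    tau = nbar / (nbar**2 + 3.0 * (sigma2 - nbar))
    return nbar, sigma2, q, tau

# illustrative call with placeholder values of a_plus and a_minus
print(ts_parameters(M=30.0, M0=3.0, nbar0=6.8, Lambda=0.1, beta0=0.08,
                    a_plus=-0.01, a_minus=0.02))
```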
Since now we have the approximate form of \(\tilde{d}(\omega,M^{2})\), we may use it in the expression of the FF \(\tilde{D}(\omega,M^{2})\) via only substituting Eq. (21) into Eq. (10):
\[\tilde{D}(\omega,M^{2})=M^{10-D}\left(\frac{M_{0}}{M}\right)^{14-2D}\frac{ \bar{n}_{0}}{\bar{n}}\rho(M_{0}^{2})\,\tilde{d}(\omega,M^{2})\,e^{b(M^{2}) \tilde{\Pi}(\omega^{*})}\sim\tilde{d}(\omega,M^{2})\;. \tag{31}\]
From the point of view of the \(\omega\) dependence, this formula differs from Eq. (23) in the change of \(\tilde{d}(\omega,M_{0}^{2})\,e^{b(M^{2})\tilde{\Pi}(\omega)}\rightarrow\tilde {d}(\omega,M^{2})\).
Figure 2: The partonic part of 2–, and 3–jet final states in the \(e^{+}e^{-}\to hX\) process. We will refer to the process shown in **panel b** as the _“split”_, while the one in **panel c** the _“crossed”_ 3-jet event.
## 4 Hadron and jet distributions in \(e^{+}e^{-}\to 2-3\) jet events
Similarly to [62; 67], we mimic the \(e^{+}e^{-}\to\gamma^{*}\to q\bar{q}\) process via introducing the term \(eA\phi^{2}\) into the Lagrangian, thus coupling a 'scalar photon' to the 'scalar parton' field of the \(\phi^{3}\) model. When calculating the distributions of jets stemming from \(e^{+}e^{-}\) annihilations with 2- and 3-jet final states depicted in Fig. 2, we use the un-cut propagators of the leading partons attached to the \(\bar{D}(k_{i})\) cut blobs given in Eq. (24). These cut blobs are the resummed LL terms of the process (shown in Fig. 3) of a virtual leading parton emitting on-shell daughter partons before fragmenting into hadrons when its virtuality reaches some low scale \(M_{0}\). As the cut blobs along with the parton legs of momenta \(k_{i}\) on both their sides contribute a factor of \(\bar{D}(k_{i}^{2})/k_{i}^{4}\), we introduce the function
\[F_{n}(k_{1},\ldots,k_{n})\;=\;\prod_{i=1}^{n}\frac{\bar{D}(k_{i}^{2})}{k_{i}^{ 4}}\,\delta^{D}\left(\sum_{j=1}^{n}k_{j}-P\right)\propto\prod_{i}\frac{\ln^{ \alpha}\left(\frac{k_{i}^{2}}{\Lambda^{2}}\right)}{k_{i}^{8-D}}\,\delta^{D} \left(\sum_{j}k_{j}-P\right)\;, \tag{32}\]
with \(\alpha=2\tilde{\Pi}(\omega^{*})/\beta_{0}\), and \(P=(\sqrt{s},\mathbf{0})\) being the total momentum of the incoming \(e^{+}e^{-}\) pair. This way, the momentum distribution of jets in a 2-jet event reads
\[\frac{1}{\sigma_{2jet}}\frac{d\sigma_{2jet}}{d^{D}k_{1}d^{D}k_{2}}\;=\;F_{2}(k _{1},k_{2})\;, \tag{33}\]
and we can write the 3-jet cases in panels \(b\) and \(c\) of Fig. 2 as
\[\frac{1}{\sigma_{3jet}^{b}}\frac{d\sigma_{3jet}^{b}}{d^{D}k_{1}\ldots d^{D}k_ {3}}=g^{2}\frac{F_{3}(k_{1},k_{2},k_{3})}{(k_{1}+k_{2})^{4}}\;,\qquad\frac{1} {\sigma_{3jet}^{c}}\frac{d\sigma_{3jet}^{c}}{d^{D}k_{1}\ldots d^{D}k_{3}}=g^{2 }\frac{F_{3}(k_{1},k_{2},k_{3})}{(k_{1}+k_{2})^{2}(k_{2}+k_{3})^{2}}\;. \tag{34}\]
As we do not focus on the renormalisation of the electric charge \(e\), we have omitted the term with the radiative correction to the photon-parton vertex in the 2-jet case in Fig. 2.a.
In order to regularize the divergences originating from the poles of the propagators of leading partons, we prescribe the extra condition \(k_{i}^{2}\geq m_{0}^{2}\), so that jet masses are larger than some low energy scale \(m_{0}>\Lambda\) of the order of the proton mass. Such a requirement is natural, as the jet mass cannot be smaller than the sum of the masses of the hadrons it contains. This condition also eliminates the collinear divergence from the 3-jet events, which would arise if leading partons were taken to be on-shell. In models relying on factorisation, such divergences may be removed, for example, by the subtraction of the configurations in which some jet momenta are parallel [67].
When calculating distributions of quantities involving jet momenta numerically, we first generated jet momenta randomly according to the uniform distribution in the phasespace, and accepted them, if they satisfied the condition of \(k_{i}^{2}\geq m_{0}^{2}\). Then, we filled the calculated quantities of interest into histograms weighted by the corresponding jet cross-sections in Eq. (33) and Eq. (34). To generate random jet momenta, we exploited that a thermal ensemble
Figure 3: The cut blob in Fig. 2 denoting the fragmentation process of a virtual leading parton of momentum \(P\) fragmenting into hadrons, among which, one has momentum \(p\) in the LL approximation.
readily provides particles with uniform momentum distribution in the phasespace. In fact, given \(n\) particles, having an arbitrary set of initial momenta \(k_{i}\) (\(\sum k_{i}=P\)), after at least \(10\,n\) pairwise _"collisions"_, they reach a thermal state. In these imaginary collisions, the incoming particles exchange a random momentum, thus, the new momentum of one of the pair has uniform distribution in the center-of-mass (CM) frame:
\[d{\cal P}(k)\ =\ dk^{0}\,d|{\bf k}|d\Omega\,|{\bf k}|^{D-2}\,\Theta(k_{0}-|{\bf k }|)\,\Theta\left(E^{CM}-|{\bf k}|-k_{0}\right)\,\Theta\left(\frac{E^{CM}}{2}-|{ \bf k}|\right). \tag{35}\]
This method also works for a mixture of particles of various masses, in which case analytic formulas (like Eq. (2), which we have obtained for massless, on-shell particles) for the direct generation of particle momenta one-by-one are not available.
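The weighting step described above can be illustrated as follows: for a sampled configuration of jet four-momenta (already summing to \(P\)), one rejects configurations with any \(k_{i}^{2}<m_{0}^{2}\) and otherwise assigns the cross-section weight of Eqs. (32)-(34), with the jet factor \(\bar{D}(M^{2})\sim M^{D-4}\ln^{\alpha}(M^{2}/\Lambda^{2})\) of Eq. (24). The Python sketch below is a minimal version of this bookkeeping; the function names, the unit overall normalisation, and passing \(\alpha=2\tilde{\Pi}(\omega^{*})/\beta_{0}\) as an input are our own choices.

```python
import numpy as np

def virtuality(k):
    """k^2 = k0^2 - |k|^2 for a four-momentum array k = (k0, kx, ky, ...)."""
    return k[0]**2 - np.dot(k[1:], k[1:])

def Dbar(ksq, Lambda2, alpha, D=6):
    """p-integrated cut blob of Eq. (24), up to an overall constant factor."""
    return ksq**((D - 4) / 2.0) * np.log(ksq / Lambda2)**alpha

def event_weight(jets, m0sq, Lambda2, alpha, g2=1.0, topology="2jet", D=6):
    """Cross-section weight of Eqs. (32)-(34) for one jet configuration.

    'jets' is a list of numpy four-momenta that already sum to P (guaranteed by
    the phasespace sampler); configurations with any k_i^2 < m0^2 get weight 0.
    """
    ksq = [virtuality(k) for k in jets]
    if min(ksq) < m0sq:
        return 0.0                                    # acceptance cut k_i^2 >= m0^2
    w = np.prod([Dbar(s, Lambda2, alpha, D) / s**2 for s in ksq])          # Eq. (32)
    if topology == "3jet_split":                      # Fig. 2.b
        w *= g2 / virtuality(jets[0] + jets[1])**2                          # Eq. (34)
    elif topology == "3jet_crossed":                  # Fig. 2.c
        w *= g2 / (virtuality(jets[0] + jets[1]) * virtuality(jets[1] + jets[2]))
    return w

k1 = np.array([5.0, 0.0, 0.0,  4.0])   # an off-shell jet with k^2 = 9
k2 = np.array([5.0, 0.0, 0.0, -4.0])
print(event_weight([k1, k2], m0sq=1.0, Lambda2=0.01, alpha=1.5))
```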
Having obtained the jet momenta \(k_{i}\), the probability of a hadron stemming from the \(i^{th}\) jet to have momentum \(p\) is given by \(d\left(\frac{2pk_{i}}{k_{i}^{2}},k_{i}^{2}\right)\) according to Eq. (31). The functional form of \(d\) is given in Eq. (5) (or Eq. (6) in \(D=4\) dimensions), and the values of its parameters depend on the jet mass according to Eq. (28)-(30). This way, the spectrum of hadrons stemming from the \(i^{th}\) jet in an \(n-\)jet event is calculated as
\[\frac{1}{\sigma_{n-jet}^{h}}p_{0}\frac{d\sigma_{n-jet}^{h}}{d^{D-1}{\bf p}}\ =\ \int\prod_{j=1}^{n}d^{D}k_{j}\,\frac{1}{\sigma_{n-jet}}\frac{d\sigma_{n-jet}} {d^{D}k_{1}\ldots d^{D}k_{n}}\,d\left(\frac{2pk_{i}}{k_{i}^{2}},k_{i}^{2} \right). \tag{36}\]
Setting the \(M_{0}\) starting scale of the DGLAP evolution of the FF to be equal to the lower cut-off for the allowed jet masses, \(M_{0}=m_{0}\), our model has 4 parameters: the starting scale \(M_{0}\), the mean hadron multiplicity \(\bar{n}_{0}\) at that scale, and the \(\beta_{0}\) and \(\Lambda\) parameters of the coupling. We calibrated these parameters via fitting the calculated hadron distribution in 2-jet events in \(D=4\) dimensions to the spectrum of charged hadrons measured at \(\sqrt{s}=200\) GeV by the OPAL Collaboration [69], and obtained the following values: \(\Lambda\) = 98.3 MeV, \(\beta_{0}\) = 0.08107, \(\bar{n}_{0}\) = 6.814, \(M_{0}\) = 3.038 GeV/\(c^{2}\). Fit results are shown in the **top-left panel of Fig. 4**. As the \(\phi^{3}\) model is renormalizable in \(D=6\) dimensions, we used the formulas for the coupling and the SF \(\tilde{\Pi}(\omega^{*})\) obtained in 6 dimensions, even when calculating the 4-dimensional results. This inaccuracy is, however, of little relevance, as the aim of this paper is to derive the scale dependence of the parameters of the TS distribution in a high-energy process, and to show the effect of using virtual leading partons even in the "hard" part of cross sections.
In the **top-right panel of Fig. 4**, we compare our results on the distribution of the mass of jets in 2-jet events in 4 and 6 dimensions with the available experimental result, which is the distribution of the heavy jet mass (the larger of the two total momenta squared of the particles in the two hemispheres of the event). Although OPAL data were taken at a slightly lower collision energy (\(\sqrt{s}=189\) GeV), our results are in accordance with them in the high-jet-mass range. For lower values, the heavy jet mass distribution decreases rapidly, and we do not expect our result to describe it correctly. Out of curiosity, we have also plotted a dataset measured at a lower energy, \(\sqrt{s}=91\) GeV.
In the **bottom panels of Fig. 4**, we compare the calculated distributions of the energy and the vector part of jet momenta in 2-jet events in 4 and 6 dimensions. As expected, these distributions have a sharp peak at \(\sqrt{s}/2\), where the distributions of on-shell particles would have a \(\delta\)-peak. Besides, the 6-dimensional distributions are broader, due to the lower power of jet virtualities in the denominator of Eq. (32).
**Fig. 5** shows distributions of jet energies, momenta, masses and rapidities (\(\eta^{J}=\ln\left(\frac{E^{J}+P_{z}^{J}}{E^{J}-P_{z}^{J}}\right)\), where \(P_{z}^{J}\) is the component of \(P^{J}\) parallel to the beam axis) calculated in 6 dimensions. Color encoding shows which histogram refers to which jet in Fig. 2, where the Feynman graphs of the corresponding 2- and 3-jet processes are depicted. It is trivial that the distributions of the first two jets of momenta \(k_{1}\) and \(k_{2}\), colored in red in Fig. 2.b (which we will refer to as the **"split"** process), coincide. So do the distributions of the first and third jets of momenta \(k_{1}\) and \(k_{3}\), colored in blue in Fig. 2.c (which we will call the **"crossed"** process). However, it is interesting that the second jet in the _crossed_ process has the same distributions as the first two jets in the _split_ process. Besides, a clear hierarchy among the mean jet energies, momenta, and masses is visible. This seems to be connected to the _"generation"_ in which the leading parton of a jet is produced. In Fig. 2, we see that green jets belong to the \(1^{st}\) generation, as their leading partons were created in the first splitting. Consequently, their average energy \(\bar{E}^{J}\) is the largest. The magenta jet also belongs to the \(1^{st}\) generation, but it stems from a 3-jet event, where the total energy is distributed among more jets than in the 2-jet case; thus, the \(\bar{E}^{J}\) of the magenta jet is somewhat smaller. The \(\bar{E}^{J}\) of the red jets is the smallest, because they belong to the \(2^{nd}\) generation, whereas the \(\bar{E}^{J}\) of the blue jets is in between those of the red and magenta jets, as the blue jets come from the interference of a \(1^{st}\)- and a \(2^{nd}\)-generation jet.
Rapidity as well as angular distributions are the same for all the jets due to rotational symmetry.
From the **left panel of Fig. 6**, we can see that in a _split_ process, the angle between the red jets \(\theta_{12}\) and the one between the red and the magenta jet \(\theta_{13}\) are most likely to be ordered as \(\theta_{12}<\frac{\pi}{2}<\theta_{13}\), whereas jets in a _crossed_ process favour the three-pronged star configuration.
As jet masses fluctuate event-by-event in \(e^{+}e^{-}\) annihilations, so do the \(q\) and \(\tau\) parameters of hadron spectra in jets, due to Eq. (30). According to the **middle and right panels of Fig. 6**, \(q\) and \(\tau\) have sharp distributions. Consequently, the TS distribution provides a good description of the hadron spectrum, which is the sum of spectra in single jets averaged over jet-momentum fluctuations. It is important to point out that the scale evolution of FFs can be examined even if the measured spectrum is available at only a single value of \(\sqrt{s}\), if the jet mass is used as the fragmentation scale.
As factorisation theorems are expected to work best when jet masses are small compared to jet energies, we have plotted the double-differential distribution \(d\sigma^{J}/dE^{J}dM^{J}\), along with the dependence of the \(\langle M^{J}\rangle/E^{J}\) ratio on \(E^{J}\), in **Fig. 7** for the three types of jets produced in our model in 3-jet events. As can be seen, although the most probable events are those with small jet masses, the average jet mass is never negligible compared to the jet energy. When one of the jets acquires energy \(E^{J}\geq\sqrt{s}/2\), the situation gets even worse, since in that case the energies as well as the momenta of the other jets are small. Consequently, all jet momenta have to be small; thus, the mass of the jet with large \(E^{J}\) has to be large as well.
At this point, it is not surprising that jet energy distributions of our model (using off-shell leading partons and LL resummation for the fragmentation process) differ significantly from those, obtained using fixed-order cross sections with on-shell leading partons, as can be seen in **Fig. 8**. The distribution of the on-shell leading partons in 3-jet events
Figure 4: Measured distributions of hadron energy (**top-left**) and heavy jet mass (**top-right**) in \(e^{+}e^{-}\) annihilations compared with our calculations for 2-jet events based on the \(\phi^{3}\) theory with virtual leading partons. **Bottom,** our results for jet energy and momentum distributions in 4 and 6 dimensions.
are
\[\frac{d\sigma^{\,on\mbox{-}shell}}{dx} \sim x(1-x)[1-A\ln(1-x)]\quad\mbox{for jet 1-2 and} \tag{37}\] \[\sim \frac{x^{3}}{1-x}\quad\mbox{for jet 3 in a {\em split} event;}\] \[\sim x^{2}\quad\mbox{for jet 1-3 and}\] \[\sim x(1-x)\quad\mbox{for jet 2 in a {\em crossed} event.}\]
Figure 5: Energy, momentum, mass and rapidity distributions of jets in 2–, and 3–jet final states stemming from \(e^{+}e^{-}\) annihilations calculated at \(\sqrt{s}=\) 200 GeV in \(D=6\) dimensions. Colors show which graph refers to which jet in Fig. 2.
Figure 6: Distributions of the angles between jet momenta (**left**). Distribution of the \(q\) (**middle**) and \(\tau\) (**right**) parameters of the spectrum of hadrons in jets. Colors show which graph refers to which jet in Fig. 2.
The Feynman graphs of the processes creating these partons are depicted in Fig. 1.f-i (if we replace the jet blobs by cut free propagators). Consequently, the corresponding distributions are the terms listed in Eq. (51), multiplied by the phasespace factor \(x^{3}\) (which we have already calculated when determining the SF). In the off-shell case, the distribution of jets 1-2 in a _split_ event coincides with that of jet 2 in a _crossed_ event. In the on-shell case, these distributions in Eq. (37) only differ in the \(\ln(1-x)\) term coming from the dimensional regularisation of the collinear divergence resulting from the configuration in which jets 1 and 2 are parallel. A different regularisation method might remove this term. According to Figs. 7-8, the shapes of the jet energy distributions obtained using on- and off-shell partons are close to each other only within limited intervals around \(E^{J}\approx\sqrt{s}/4\pm\sqrt{s}/8\), where \(\langle M^{J}\rangle/E^{J}\approx 0.3\). Outside of this region, the \(\langle M^{J}\rangle/E^{J}\) ratio is much higher, and the on- and off-shell distributions differ significantly.
In the **middle panel of Fig. 8**, we can see that, although, the shape of the on-shell distribution of jet 3 is not close to that of the off-shell one, it approximates the off-shell distribution in 2-jet events. This observation suggests that, having a pair of jets created in a splitting (as depicted in **Fig. 9**), in order to get a good approximation for the energy distribution of the jet of momentum \(k\), it is enough to keep only the virtuality of the leading parton of
Figure 8: On- vs. off-shell jet energy distributions. Colors show which graph refers to which jet in Fig. 2.
Figure 7: **Top,** double differential distributions of the jet mass and energy. **Bottom,** average jet mass vs. jet energy. Legend shows which graph refers to which jet in the _split_ and _crossed_ events in Fig. 2.
the other jet. Furthermore, it is also enough to keep only the first splitting in the other jet, instead of resumming the whole 'parton ladder' in the LL approximation. This result supports the argument in [67] that, when we create a parton shower, if we neglect the virtualities of partons of a given generation due to using on-shell cross sections, our inaccuracy is compensated for when we create the next generation.
The spectrum, rapidity and angular distributions of hadrons stemming from each jet in 2-, and 3-jet events, shown in **Fig. 10**, can be obtained using Eq. (36). Besides, the distribution of the \(N_{h}\) number of hadrons in a given jet is
\[\frac{1}{\sigma_{n-jet}^{h}}\frac{d\sigma_{n-jet}^{h}}{dN_{h}}\;=\;\int\prod_ {j=1}^{n}d^{D}k_{j}\,\frac{1}{\sigma_{n-jet}}\,\frac{d\sigma_{n-jet}}{d^{D}k_{1 }\ldots d^{D}k_{n}}\,{\cal P}_{N_{h}}(k^{2})\;. \tag{38}\]
The differences between these distributions are due to the hierarchy of the average mass, energy and momentum of the different jets (see Fig. 5) containing the hadrons. The more energetic the jet, the more high-energy hadrons stem from it, as can be seen in the **top-left panel**. The larger the jet mass, the wider the phasespace ellipsoid of the hadrons in the jet; thus, the average angle between hadron momenta and the momentum of the mother jet is larger as well, according to the **bottom-left panel**. The mean number of hadrons in a jet grows monotonically with the jet mass, in accordance with the **bottom-right panel** and Eq. (28). Finally, the shape of the hadronic pseudo-rapidity (\(\eta=\ln\frac{p_{0}+p_{z}}{p_{0}-p_{z}}\)) distributions shown in the **top-right panel** can be derived from rotational invariance and dimensional arguments.
### A micro-canonical event generator
We obtained the hadron distributions in the previous section via generating all the hadrons in the jets using the micro-canonical model presented in Sec. 2. That is, after having generated the jet momenta, we used the NBD multiplicity distribution in Eq. (4) (the parameters of which depend on the jet mass according to Eq. (28)) to obtain the numbers of hadrons in the jets. After that, we generated the momenta of hadrons in each jet according to the micro-canonical ensemble using the random collisional method, described in the paragraph before Eq. (35). As in our approximation, hadrons are on-shell and massless, we replaced Eq. (35) with
\[d{\cal P}(p)\;=\;d|{\bf p}|d\Omega\,|{\bf p}|^{D-2}\,\Theta\left(\frac{E^{CM} }{2}-|{\bf p}|\right) \tag{39}\]
for the generation of the momentum of one of the outgoing particles in the CM frame in each imaginary collision. This way, the one-particle distribution in each jet of \(n\) hadrons becomes of the form of Eq. (3), and the \(n\)-averaged distribution becomes Eq. (5).
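The event-by-event generation described in this section can be sketched in a few lines of Python. The sketch below draws the hadron multiplicity of a jet from the NBD (with mean and variance given by Eq. (28)) and then generates massless hadron momenta in the jet rest frame by repeated pairwise "re-scatterings"; in our simplified variant each chosen pair is re-emitted back-to-back and isotropically in its own CM frame, which conserves energy-momentum exactly. This is our own minimal rendering of the collisional sampler, written for \(D=4\), not a verbatim transcription of Eq. (39); all names and the starting configuration are our choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_multiplicity(nbar, sigma2):
    """Draw a hadron multiplicity from the NBD with mean nbar, variance sigma2 (> nbar).
    numpy's convention: negative_binomial(n, p) has mean n*(1-p)/p."""
    r = nbar**2 / (sigma2 - nbar)
    pbar = 1.0 - nbar / sigma2
    return int(rng.negative_binomial(r, 1.0 - pbar))

def boost(p, beta):
    """Boost four-momentum p = (E, px, py, pz) by velocity vector beta."""
    b2 = np.dot(beta, beta)
    if b2 < 1e-14:
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = np.dot(beta, p[1:])
    E = gamma * (p[0] + bp)
    coef = (gamma - 1.0) * bp / b2 + gamma * p[0]
    return np.concatenate(([E], p[1:] + coef * beta))

def sample_massless_jet(n_h, M, n_sweeps=10):
    """n_h (>= 2) massless hadron momenta summing to (M, 0, 0, 0)."""
    phi = 2.0 * np.pi * np.arange(n_h) / n_h             # valid starting configuration
    p = np.zeros((n_h, 4))
    p[:, 0] = M / n_h
    p[:, 1] = (M / n_h) * np.cos(phi)
    p[:, 2] = (M / n_h) * np.sin(phi)
    for _ in range(n_sweeps * n_h):
        i, j = rng.choice(n_h, size=2, replace=False)
        pair = p[i] + p[j]
        m_pair = np.sqrt(max(pair[0]**2 - np.dot(pair[1:], pair[1:]), 0.0))
        if m_pair <= 0.0:
            continue
        beta = pair[1:] / pair[0]                         # velocity of the pair CM frame
        c, ph = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
        s = np.sqrt(1.0 - c**2)
        k = 0.5 * m_pair * np.array([s * np.cos(ph), s * np.sin(ph), c])
        p[i] = boost(np.concatenate(([0.5 * m_pair], k)), beta)
        p[j] = boost(np.concatenate(([0.5 * m_pair], -k)), beta)
    return p

n_h = max(sample_multiplicity(nbar=6.8, sigma2=10.0), 2)
print(sample_massless_jet(n_h, M=10.0).sum(axis=0))       # ~ (10, 0, 0, 0)
```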
Since the results presented in the previous section only involve \(d(x,k^{2})\), the single-hadron distribution in a jet averaged over multiplicity fluctuations (the scale evolution of which we have derived from first principles), we actually used the statistical model only to obtain the form of the FF at an initial scale \(M_{0}\approx 3\) GeV (a value obtained from fitting the model to measured data). However, the micro-canonical fragmentation model could, in principle, be used to obtain multi-particle observables as well, since it is very simple to generate the momenta of all particles in all jets event-by-event within its scope. This way, however, we would discard the microscopic dynamics of the branching process of parton production within jets, as all observables would be derived solely from the phasespace of particles in a statistical ensemble.
Figure 9: **Left,** 2-jet event with off-shell leading partons, where the blobs denote the fragmentation process resummed in the LL approximation. **Right,** 3-jet event with on-shell leading partons.
## 5 Summary
* We have summarized the status of the application of the Tsallis (TS) distribution in theoretical and experimental high-energy physics.
* We have resummed the fragmentation processes of a virtual leading parton (initiating the jet) emitting on-shell daughter partons in the leading-log approximation (LLA) in the \(\phi^{3}\) theory. We have found that the fragmentation scale is the virtuality of the leading parton, which is equal to the jet mass \(M_{J}\), and calculated the \(M_{J}\)-dependence of the \(q\) and \(T\) parameters (Eqs. (28)-(30)) of a TS-shaped fragmentation function (FF).
* Unlike in approaches based on factorisation theorems, in this paper, we have calculated the energy \(E_{J}\), momentum \(|{\bf P}_{J}|\) and mass \(M_{J}\) distributions of jets produced in \(e^{+}e^{-}\) annihilations with 2- and 3-jet final states using _virtual leading partons_ attached to the previously derived TS-shaped FFs. The results show (Fig. 5) that there is a hierarchy among the \(\langle E_{J}\rangle\), \(\langle|{\bf P}_{J}|\rangle\) and \(\langle M_{J}\rangle\) of the jets, depending on the generation in which the leading parton of the jet was produced. Besides, we have found that the energy distribution of a jet, obtained using virtual leading partons and LL resummation in a 2-jet event, was well approximated by the distribution of an on-shell leading parton, whose jet pair contained only a single splitting (Figs. 8-9).
* Furthermore, when calculating hadronic distributions, we have found that the larger the jet mass, the more numerous and the more energetic the hadrons stemming from it; the angle between the momenta of the hadrons and that of the jet is larger too (Fig. 10).
* We have developed an event generator to obtain the momenta of all hadrons produced in jets event-by-event, using the FF based on micro-canonical statistics and superimposed NBD hadron multiplicity fluctuations. We have derived that the mean multiplicity of hadrons in a jet depends on the jet mass as \(\bar{n}\sim\ln^{a}(M_{J})\) (see Eq. (28)).
Figure 10: Hadron energy (**top-left**), rapidity (**top-right**), angle (**bottom-left**) and multiplicity (**bottom-right**) distributions. Colors show which graph refers to which jet in Fig. 2.
## 6 Appendix
### A Phasespace of \(n\) massless particles
\[\Omega_{n}(P)\;=\;\prod_{i=1}^{n}\int\frac{d^{D-1}{\bf p}_{i}}{p_{i}^{0}}\,\delta ^{D}\left(\sum_{j}p_{j}^{\mu}-P^{\mu}\right)\sim\int d^{D}s\,e^{-is_{\mu}P^{\mu} }\varphi^{n}(s)\;, \tag{40}\]
where the Fourier-transform of the single-particle phasespace \(\varphi(s)\) can be evaluated in the frame, in which, \(s=(\sigma,{\bf 0})\) (with \(\sigma^{2}=s_{0}^{2}-{\bf s}^{2}\)), and \(p=(p,{\bf p})\):
\[\varphi(\sigma)\;=\;\int\frac{d^{D-1}{\bf p}}{p^{0}}\,e^{is_{\mu}p^{\mu}}\sim \int dp\,p^{D-3}e^{i\sigma p}\sim\frac{1}{(i\sigma)^{D-2}}\;. \tag{41}\]
We may evaluate the inverse Fourier-transform in Eq. (1) in the frame, where \(P=(M_{0},{\bf 0})\), thus,
\[\Omega_{n}(P)\;\sim\;\int d^{D-1}{\bf s}\int ds_{0}\,\frac{e^{-is_{0}M_{0}}}{[ (s_{0}+|{\bf s}|)(s_{0}-|{\bf s}|)]^{n(D-2)/2}}\;. \tag{42}\]
As we have poles at \(s_{0}=\pm|{\bf s}|\), we use Cauchy's formula \(\oint\frac{dzf(z)}{(z-z_{0})^{n}}\sim f^{(n-1)}(z_{0})\), and arrive at terms of the form of
\[\Omega_{n}(P) \sim \sum_{j}A_{j}^{\pm}\int d^{D-1}{\bf s}\,\left(\frac{\partial}{ \partial s_{0}}\right)^{j}e^{-is_{0}M_{0}}\left(\frac{\partial}{\partial s_{0 }}\right)^{n(D-2)/2-1-j}\left.\frac{1}{(s_{0}\pm|{\bf s}|)^{n(D-2)/2}}\right|_ {s_{0}=\pm|{\bf s}|} \tag{43}\] \[\sim\sum_{j}A_{j}^{\pm}\,M_{0}^{j}\int ds\,s^{D-1-n(D-2)+j}\,e^{- isM_{0}}\sim M_{0}^{n(D-2)-D}\;.\]
The actual values of the constant factors \(A_{j}^{\pm}\) multiplying the terms coming from the poles at \(s_{0}=\pm|{\bf s}|\), are of no importance from the point of view of the particle distributions.
### B Calculation of the splitting function
Via introducing the renormalized field and coupling \(g=Z_{g}g_{r}\) and \(\phi=Z_{3}^{1/2}\phi_{r}\), along with \(Z_{3}=1+\delta Z_{3}\) and \(Z_{g}=1+\delta Z_{g}\), we arrive at the renormalised Lagrangian
\[{\cal L} = \frac{1}{2}(\partial_{\mu}\phi_{r})^{2}+(Z_{3}-1)\frac{1}{2}( \partial_{\mu}\phi_{r})^{2}+\frac{g_{r}}{3!}\phi_{r}^{3}+(Z_{g}Z_{3}^{3/2}-1) \frac{g_{r}}{3!}\phi_{r}^{3} \tag{44}\] \[= \frac{1}{2}(\partial_{\mu}\phi_{r})^{2}+\delta Z_{3}\frac{1}{2}( \partial_{\mu}\phi_{r})^{2}+\frac{g_{r}}{3!}\phi_{r}^{3}+(\delta Z_{g}+\frac{ 3}{2}\delta Z_{3})\frac{g_{r}}{3!}\phi_{r}^{3}\;.\]
As \(\delta Z_{3}\) and \(\delta Z_{g}\) are of \({\cal O}(g^{2})\), terms proportional to them come as perturbative corrections. This way, a propagator of momentum \(p\) is \(i/(p^{2}+i\epsilon)\), a vertex is \(-ig_{r}\), the counter terms are \(ip^{2}\delta Z_{3}\) and \(-ig_{r}(\delta Z_{g}+\frac{3}{2}\delta Z_{3})\), and a cut propagator is \(2\pi\delta(p^{2})\). Propagators and vertices on the right of the cuts are complex conjugates.
We may write \(A(z,P^{2})=\delta(1-z)A_{1}(z,P^{2})+A_{2}(z,P^{2})\) in Eq. (12). When calculating \(A_{1}(z,P^{2})\), we use the identity \(\prod\limits_{i}\frac{1}{A_{i}^{\alpha_{i}}}=\frac{\Gamma(\alpha)}{\prod\limits_{i}\Gamma(\alpha_{i})}\prod\limits_{i}\int\limits_{0}^{1}d\xi_{i}\,\xi_{i}^{\alpha_{i}-1}\,\frac{\delta\left(1-\sum_{i}\xi_{i}\right)}{\left(\sum_{i}\xi_{i}A_{i}\right)^{\alpha}}\), with \(\alpha=\sum\alpha_{i}\), thus,
\[A_{1}(z,P^{2}) = \int\limits_{0}^{1}d\xi_{1}\int\frac{d^{D}q}{(2\pi)^{D}}\left\{ \int\limits_{0}^{1-\xi_{1}}d\xi_{2}\frac{2i\,n_{c}}{\left[(1-\xi_{1}-\xi_{2})q^ {2}+\xi_{1}(q-\hat{k})^{2}+\xi_{2}(P-q)^{2}\right]^{3}}\right.\;+ \tag{45}\] \[\left.\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\; \left.\frac{i\,n_{d}}{m_{0}^{2}\left[(1-\xi_{1})q^{2}+\xi_{1}(q-\hat{k})^{2} \right]^{2}}+\frac{i\,n_{e}}{m_{0}^{2}\left[(1-\xi_{1})q^{2}+\xi_{1}(P-q-\hat{ k})^{2}\right]^{2}}\right\}\] \[+ \frac{\bar{n}_{c}}{g^{2}}\left(\delta Z_{g}+\frac{3}{2}\delta Z_{3 }\right)-\frac{\bar{n}_{d}+\bar{n}_{e}}{g^{2}}\delta Z_{3}\;.\]
Substituting \(\tilde{q}=q-\xi_{1}k-\xi_{2}P\) and \(L_{1}=-P^{2}\xi_{2}(1-\xi_{2})\) in the first term, \(\tilde{q}=q-\xi_{1}k\) in the second term and \(\tilde{q}=q-\xi_{1}(P-k)\) in the third term, we get
\[g^{2}A_{1}(z,P^{2}) = i\,g^{2}\int\limits_{0}^{1}d\xi_{1}\int\frac{d^{D}\tilde{q}}{(2 \pi)^{D}}\left\{\,\int\limits_{0}^{1-\xi_{1}}d\xi_{2}\frac{2n_{c}}{(\tilde{q}^ {2}-L_{1})^{3}}\;+\;\frac{n_{d}+n_{e}}{m_{0}^{2}\,\tilde{q}^{4}}\,\right\}+ \tag{46}\] \[+\;\bar{n}_{c}\left(\delta Z_{g}+\frac{3}{2}\delta Z_{3}\right)-( \bar{n}_{d}+\bar{n}_{e})\delta Z_{3}\;.\]
Substituting \(\tilde{q}=(iq_{E},{\bf q}_{E})\) (Wick-rotation) gives
\[g^{2}A_{1}(z,P^{2}) = g^{2}\frac{\kappa_{D}}{(2\pi)^{D}}\int\limits_{0}^{1}d\xi_{1} \int\limits_{0}^{\infty}dq_{E}\,q_{E}^{D-1}\left\{\,\int\limits_{0}^{1-\xi_{1 }}d\xi_{2}\frac{2n_{c}}{(q_{E}^{2}+L_{1})^{3}}\;-\;\frac{n_{d}+n_{e}}{m_{0}^{2 }\,\tilde{q}_{E}^{4}}e^{-\epsilon q_{E}/m_{0}}\right\}+ \tag{47}\] \[+\;\bar{n}_{c}\left(\delta Z_{g}+\frac{3}{2}\delta Z_{3}\right)-( \bar{n}_{d}+\bar{n}_{e})\delta Z_{3}\;.\]
Using \(\int\limits_{0}^{\infty}\frac{dx\,x^{a-1}}{(x+1)^{a+b}}=\Gamma(a)\Gamma(b)/ \Gamma(a+b)\), \(\Gamma(\epsilon)=1/\epsilon-\gamma_{E}\) and setting \(D=6-2\epsilon\) along with \(g\to g\mu^{\epsilon}\), we obtain
\[g^{2}A_{1}(z,P^{2}) = \frac{g^{2}n_{c}}{2(4\pi)^{3}}\left(\frac{1}{\epsilon}-\ln\frac{- P^{2}}{\mu^{2}}-\gamma_{E}+\ln 4\pi-2\int\limits_{0}^{1}d\xi_{1}\;\int\limits_{0}^{ 1-\xi_{1}}d\xi_{2}\ln\xi_{2}(1-\xi_{2})\right)\;- \tag{48}\] \[-\frac{g^{2}}{2(4\pi)^{3}}(n_{d}+n_{e})\frac{1}{\epsilon^{2}}+\; \bar{n}_{c}\left(\delta Z_{g}+\frac{3}{2}\delta Z_{3}\right)-(\bar{n}_{d}+\bar {n}_{e})\delta Z_{3}\;.\]
If we remove the divergences along with the \(P^{2}\) and \(\mu\) independent constants by the counter terms, we are left with
\[A_{1}(z,P^{2}) = -\frac{n_{c}}{2(4\pi)^{3}}\ln\frac{P^{2}}{\mu^{2}} \tag{49}\]
For the calculation of the second line of \(A(z,P^{2})\) in Eq. (12), we parametrize momenta as \(P=(M,0,{\bf 0})\), \(k=(Mz/2,Mz/2,{\bf 0})\) and \(q=\alpha P+\beta k+q_{T}=(\alpha M+\beta Mz/2,\beta Mz/2,{\bf q}_{T})\). This way, the integration measure becomes \(d^{D}q=(M^{2}z/2)d\alpha\,d\beta\,d^{D-2}{\bf q}_{T}\). Besides, \(2Pq=M^{2}(2\alpha+\beta z)\), \(2kq=M^{2}\alpha z\) and \(q^{2}=M^{2}\alpha(\alpha+\beta z)-{\bf q}_{T}^{2}\). Furthermore, due to the \(\delta\left[(q-k)^{2}\right]\) term, \(q^{2}=2kq=M^{2}\alpha z\). This way, the second line of Eq. (12) becomes
\[g^{2}A_{2}(z,P^{2}) = g^{2}\frac{z\kappa_{D-2}}{4(2\pi)^{D+1}}\int d\alpha\int d\beta \int dq_{T}^{2}\,(q_{T}^{2})^{D/2-2}\;\times\] \[\times\;(2\pi)\delta\left[M^{2}\alpha(\alpha+\beta z-z)-q_{T}^{2 }\right](2\pi)\delta\left[M^{2}(1+\alpha z-2\alpha-\beta z)\right]\;\times\] \[\times\;\left\{\frac{n_{f}}{\alpha^{2}z^{2}}+\frac{n_{g}}{(1-z)^ {2}}+\frac{n_{h}}{\alpha z(1-z)}+\frac{n_{i}}{\alpha z(1-2\alpha+z-\beta z)} \right\}\;.\]
Using the integral for \(\beta\) to eliminate the second \(\delta\) function gives
\[g^{2}A_{2}(z,P^{2}) = \frac{g^{2}\kappa_{D-2}}{4(2\pi)^{D-1}M^{2}}\int d\alpha\int dq_{T} ^{2}\,(q_{T}^{2})^{D/2-2}\delta\left[M^{2}\alpha(1-\alpha)(1-z)-q_{T}^{2} \right]\;\times\] \[\times\;\left\{\frac{n_{f}}{\alpha^{2}z^{2}}+\frac{n_{g}}{(1-z)^ {2}}+\frac{n_{h}}{\alpha z(1-z)}+\frac{n_{i}}{\alpha(1-\alpha)z^{2}}\right\}\] \[= \frac{g^{2}\kappa_{D-2}M^{D-6}(1-z)^{D/2-2}}{4(2\pi)^{D-1}}\int d \alpha\,[\alpha(1-\alpha)]^{D/2-2}\;\times\] \[\times\;\left\{\frac{n_{f}}{\alpha^{2}z^{2}}+\frac{n_{g}}{(1-z)^ {2}}+\frac{n_{h}}{\alpha z(1-z)}+\frac{n_{i}}{\alpha(1-\alpha)z^{2}}\right\}\]
Using the identity \(\int\limits_{0}^{1}d\alpha\alpha^{a-1}(1-\alpha)^{b-1}=\Gamma(a)\Gamma(b)/\Gamma(a+b)\) and \(D=6-2\epsilon\) dimensions, where the coupling acquires dimension \(g\to g\mu^{\epsilon}\), the solid angle is \(\kappa_{D}=2\pi^{D/2}/\Gamma(D/2)\) and \(\Gamma(-\epsilon)=-1/\epsilon-\gamma_{E}\), we obtain
\[g^{2}A_{2}(z,P^{2}) = \frac{g^{2}(1-z)}{(4\pi)^{3}}\left(\frac{4\pi\mu}{M^{2}(1-z)} \right)^{\epsilon}\left\{\frac{n_{f}}{z^{2}}\left(-\frac{1}{\epsilon}-\gamma_{ E}\right)+\frac{n_{g}}{6(1-z)^{2}}+\frac{n_{h}}{2z(1-z)}+\frac{n_{i}}{z^{2}}\right\} \tag{52}\] \[= \frac{g^{2}}{(4\pi)^{3}}\left\{-\left[\frac{1}{\epsilon}+\gamma_ {E}+\ln\left(\frac{4\pi\mu}{M^{2}(1-z)}\right)\right]\frac{n_{f}(1-z)}{z^{2}}+\right.\] \[\left.\hskip 28.452756pt+\;\frac{n_{g}}{6(1-z)}+\frac{n_{h}}{2z}+ \frac{n_{i}(1-z)}{z^{2}}\right\}\;.\]
We made use of \(\Gamma(-1+\epsilon)=-1/\epsilon+\gamma_{E}-1\) and \(\Gamma(-\epsilon)=-1/\epsilon-\gamma_{E}\). Note that the \(1/\epsilon\) term, being the collinear divergence, cannot be eliminated via renormalisation; however, it drops out of the splitting function, which is
\[\Pi(z) = \frac{\partial}{\partial\ln P^{2}}A(z,P^{2})\;=\;\frac{n_{f}}{(4 \pi)^{3}}\frac{1-z}{z^{2}}-\frac{n_{c}}{2(4\pi)^{3}}\delta(1-z)\;. \tag{53}\]
|
2304.07961
|
Adapting the DEVS kernel 'RT-CADMIUM' to the ESP32 embedded platform
|
Discrete Event Modelling of Embedded Systems (DEMES) is a development
methodology based on the Discrete Event Systems (DEVS) specification that
improves the time-to-market by simplifying the development and testing of
embedded systems. CADMIUM is a C++ header-only library developed at Carleton
University that helps simulate models built using the DEVS specification.
RT-CADMIUM is a fork of CADMIUM that provides a development framework that
helps users develop systems using the DEMES technology. RT-CADMIUM, however,
has a limited scope of deployment due to the use of Mbed OS as its Hardware
Abstraction Layer (HAL). This paper provides the methodology for porting the
RT-CADMIUM library to a different platform (ESP32 specifically). This paper
also portrays the performance improvements gained due to this adoption.
|
Sasisekhar Mangalam Govind, John Sahaya Rani Alex, Gabriel A. Wainer
|
2023-04-17T03:06:48Z
|
http://arxiv.org/abs/2304.07961v1
|
# Adapting the DEVS Kernel 'RT-CADMIUM' to the ESP32 Embedded Platform
###### Abstract
Discrete Event Modelling of Embedded Systems (DEMES) is a development methodology based on the Discrete Event Systems (DEVS) specification that improves the time-to-market by simplifying the development and testing of embedded systems. CADMIUM is a C++ header-only library developed at Carleton University that helps simulate models built using the DEVS specification. RT-CADMIUM is a fork of CADMIUM that provides a development framework that helps users develop systems using the DEMES technology. RT-CADMIUM, however, has a limited scope of deployment due to the use of Mbed OS as its Hardware Abstraction Layer (HAL). This paper provides the methodology for porting the RT-CADMIUM library to a different platform (ESP32 specifically). This paper also portrays the performance improvements gained due to this adoption.
## Introduction
In the current era of rapid technological advancement, efficient software design is of utmost importance, as is the optimization of firmware and associated hardware development. The proliferation of 'Smart' appliances has increased the need for IoT-enabled embedded systems[1]. Real-time systems are a vital category of IoT systems that require the timely processing and communication of data to achieve the desired system behaviour and meet the application requirements[2]. They facilitate real-time data acquisition, processing, and the implementation of complex synchronous communication protocols, among other things. A real-time system comprises interconnected subsystems or components that interact with the environment in response to real-time stimuli, resulting in immediate system responses. As a result, stringent timing requirements must be met to achieve such instantaneous response times[3]. However, developing a real-time controller is a challenging task, both technically and financially[4].
Discrete Event Modelling of Embedded Systems (DEMES) is a development methodology based on the Discrete Event System (DEVS) specification[5]. Systems modelling and simulation are widely used in the early stages of a project but tend to be abandoned as the project moves from paper to the real world. Maintaining a model of the system throughout the development cycle, however, would allow simulation to test the reliability and robustness of the system under conditions that may be impractical to replicate physically. Since DEMES is based on the formal modelling and simulation paradigm DEVS, it allows the developer to maintain the system models throughout the development cycle[5]. By developing a kernel capable of executing models created using the DEVS formalism, it becomes possible to simulate theoretical systems designed by researchers on a microcontroller, thereby realizing them in the physical world[6]. The DEVS specification revolves around timed events; hence, a kernel that executes such models can follow very hard timing constraints. The modularity of a model/system designed using the DEVS specification allows any module (called an atomic model) to be replaced or upgraded without affecting the operation of the complete system (called a coupled model). This also implies that once a generic atomic model is created, it can be reused in other systems that require its functionality, substantially reducing the time to market.
There exist various DEVS simulators, such as XDEVS from the University of Barcelona and PowerDEVS from the University of Buenos Aires. However, this paper will focus on CADMIUM, which was developed at Carleton University. CADMIUM is a C++ header-only library that enables users to model and simulate DEVS models[7]. RT-CADMIUM, a fork of CADMIUM, permits users to develop DEVS models, simulate them, and execute them in real-time on ARM microcontrollers that support the Mbed OS platform[8]. The principal aim of this research paper is to enhance the RT-CADMIUM kernel. The proposed enhancements involve two main aspects: Firstly, upgrading the foundation of RT-CADMIUM to the latest version of CADMIUM[7] developed by Roman Cardenas. Secondly, this paper proposes a methodology to adapt RT-CADMIUM to other platforms while eliminating its dependence on Mbed OS. The study also intends to enhance the performance of RT-CADMIUM to render it more feasible for commercial deployment. Additionally, the research aims to refine the software-hardware co-design aspects of the development framework.
## 2 Related Work
The DEVS (Discrete Event System) specification is a widely used formalism for modelling complex dynamic systems using discrete-event abstraction, created by University of Arizona Prof. Bernard P. Zeigler. One of the reasons DEVS was chosen as the basis of the kernel is that the formalism defines both the system structure and the system behaviour[6]. DEVS consists of two types of models: atomic models and coupled models. Atomic models represent the behaviour of individual components, including their current state, the time they will remain in that state, and their input and output ports. On the other hand, coupled models are used to connect groups of models, whether atomic or coupled, by passing outputs from one DEVS model to the inputs of another. These links are created using the coupled model, which can contain both types of DEVS models. Coupled models are useful for creating modular hierarchical designs, which can be easily adapted and modified as necessary[1]. The input/output events along with the states define the behaviour of a system based on DEVS. A (Parallel) DEVS atomic model is defined by the tuple:
\[M=<X,Y,S,s_{0},ta,\delta_{int},\delta_{ext},\delta_{con},\lambda>\]
* X is the set of inputs.
* Y is the set of outputs.
* S is the set of states.
* s\({}_{0}\in S\) is the initial state.
* \(ta\colon S\rightarrow\mathbb{R}_{>0}\cup\infty\) is the time advance function.
* \(\delta_{int}\colon S\to S\) is the internal transition function. The model transitions from \(s_{0}\in S\) to \(s_{1}\in S\) after spending \(ta(s_{0})\) time in state \(s_{0}\) without receiving an input.
* \(\delta_{ext}\colon S\times\mathbb{R}\times X\to S\) is the external transition function. This function is triggered when a set of inputs \(X_{b}\subseteq X\) arrives after time \(e\) has elapsed since the model entered state \(s\in S\).
* \(\delta_{con}\colon S\times\mathbb{R}\times X\to S\) is the confluent transition function and is responsible for resolving collisions. This function is triggered when an internal and an external transition occur at the same time; generally, the internal transition is executed first. This function is the main difference between Parallel DEVS (used for RT-CADMIUM) and Classic DEVS.
* \(\lambda\colon S\to Y\) is the output function. Triggered right before the internal transition from \(s_{0}\in S\) to \(s_{1}\in S\), and generates outputs \(\lambda(s_{0})=Y_{b}\subseteq Y\) for state \(s_{0}\).
The formal structure of a coupled DEVS model is given by the tuple:
\[N=<X,Y,C,EIC,EOC,IC>\]
* X is the set of input events
* Y is the set of output events
* C is the set of submodels. Any element \(c\in\mathcal{C}\) is either an atomic or a coupled model defined inside the coupled model.
* EIC is the External Input Coupling. Defines the connections from models outside N to the components \(c\in\mathcal{C}\).
* EOC is the External Output Coupling. Defines the connections from the components within N to models outside N.
* IC defines the connections between any component \(c_{i}\in\mathcal{C}\) and \(c_{j}\in\mathcal{C}\).
The coupled model carries forward the properties of any DEVS model. This closure under coupling [9] allows for the construction of hierarchical models whose behaviour remains consistent and predictable[6]. Further, the coupled model defines the structure of the complete system. DEVS decouples models, experiments, and execution engines, allowing for portability and interoperability[5].
CADMIUM is a tool developed by Carleton University that helps simulate DEVS-based models. RT-CADMIUM is a real-time kernel based on CADMIUM that allows these DEVS models to be implemented in hardware[1]. The interface between the root co-ordinator and the clock of the platform is what enables CADMIUM to execute the models in real time. The algorithm that enables this is shown in Figure 1.
The two main jobs of the co-ordinator are to 1) collect the outputs and 2) advance the simulation. After these tasks are completed, the simulation moves to the next state. Hence, the simplest timer would collect the outputs, advance the simulation, and then wait for the next event. This ideal time scheduler does not consider the time taken by the platform to evaluate the model, collect the outputs, and advance the simulation. This non-zero time would disrupt the simple timer algorithm. Hence, the algorithm shown in Figure 1 was developed over multiple iterations and handles scheduler slip. A user-configurable flag can be modified to allow a certain degree of time slip to be permissible. This algorithm employs two timers, the execution timer and the wait timer (referred to as the timeout timer in the actual code). The execution timer starts at the beginning of an event, runs while the engine collects the output, and stops after the simulation has advanced. Once the simulation has advanced, the algorithm checks if the execution was completed before the start of the next event. If the execution took longer than the assigned deadline, the user is notified about the missed deadline, and a variable keeps track of this slip; the total slip is accumulated over time. If the accumulated value exceeds the scheduler slip allowance, the algorithm halts program execution; else, the algorithm starts the execution of the next event. If the deadline was not missed, the time left until the next event is subtracted from the accumulated slip value. Moreover, if the delta between the current time and the start time of the next event is non-zero, indicating the previous event completed execution prematurely, a wait timer is employed to stall the execution until the start of the next event[1].
Figure 1: Real-time clock with Scheduler slip adjustment [1]
This algorithm is implemented in the "rt_clock.hpp" file within the library.
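To make the interaction between the two timers and the slip budget concrete, the following is a minimal, platform-agnostic sketch of the loop described above. The timer usage, the callback signatures, and the MISSED_DEADLINE_TOLERANCE_US constant are illustrative placeholders and do not reproduce the actual rt_clock.hpp interface.

```cpp
// Minimal sketch of the scheduler-slip algorithm of Figure 1 (illustrative only;
// names and types are placeholders, not the actual rt_clock.hpp implementation).
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>

using Clock = std::chrono::steady_clock;

constexpr std::int64_t MISSED_DEADLINE_TOLERANCE_US = 500;  // user-configurable slip allowance
std::int64_t accumulated_slip_us = 0;                       // total scheduler slip so far

// Runs one event; returns false when the accumulated slip exceeds the allowance.
bool run_event(Clock::time_point next_event_time,
               void (*collect_outputs)(), void (*advance_simulation)()) {
    collect_outputs();                                      // execution timer running:
    advance_simulation();                                   // collect outputs, advance simulation
    const auto exec_end = Clock::now();                     // execution timer stops here

    if (exec_end > next_event_time) {                       // deadline missed
        const auto overshoot = std::chrono::duration_cast<std::chrono::microseconds>(
                                   exec_end - next_event_time).count();
        std::cerr << "Missed deadline by " << overshoot << " us\n";
        accumulated_slip_us += overshoot;
        if (accumulated_slip_us > MISSED_DEADLINE_TOLERANCE_US) return false;  // halt execution
    } else {                                                // finished early
        const auto spare = std::chrono::duration_cast<std::chrono::microseconds>(
                               next_event_time - exec_end).count();
        // spare time is subtracted from the accumulated slip (clamped at zero here, an assumption)
        accumulated_slip_us = std::max<std::int64_t>(0, accumulated_slip_us - spare);
        while (Clock::now() < next_event_time) {            // wait (timeout) timer: stall
            /* sleep or yield until the start of the next event */
        }
    }
    return true;                                            // proceed with the next event
}
```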
### RT-CADMIUM: IMPLEMENTATION
The system under discussion employs a generic C++ approach for kernel operations and scheduling, which allows for flexibility and portability across different hardware platforms. However, the implementation of the "rt_clock.hpp" file uses API calls to the Mbed OS hardware abstraction layer (HAL)[1].
This reliance on Mbed OS API calls limits the portability of the library to only those microcontrollers and embedded boards that support Mbed OS. The HAL provides a standardized interface for working with hardware, but it also introduces dependencies that restrict the system's compatibility with other platforms[10].
"rt_clock.hpp", as mentioned earlier, contains 2 timers: the execution timer and the timeout timer. The execution timer tracks the simulation time, which is the elapsed time from the start of the simulation. This timer provides valuable information for identifying bottlenecks and issues in the system's performance. By monitoring the execution time, the system can optimize its performance and ensure efficient use of resources[1].
The timeout timer, on the other hand, tracks the time between internal transitions, called sigma. In this context, sigma refers to the time between two events within the system (returned by the time advance function mentioned earlier), such as the arrival of new data or the completion of a task. The timeout timer ensures that the system remains responsive and prevents resource overuse by setting appropriate timeout values[1].
Despite this limitation, the use of Mbed OS can still provide benefits in terms of simplicity and ease of use, particularly for developers who are familiar with the Mbed OS API. However, if portability across a wider range of platforms is a priority, alternative approaches that avoid platform-specific dependencies should be considered.
### RT-CADMIUM: METHODOLOGY OF ADAPTATION
The software/firmware that mediates between the user code and the hardware communication network of the underlying SoC can be regarded as the Hardware Abstraction Layer (HAL). The HAL provides APIs and function calls for the user to make use of, to expedite the development process. The standardization of HALs would allow for total software reusability and would help unify various development workflows[11]. It is worth noting that many microcontroller manufacturers provide HAL libraries as part of their development tools and support resources; examples include STMicroelectronics, Texas Instruments, NXP, and Microchip[12, 13, 14, 15]. These proprietary HALs reduce the time of firmware development on microcontroller units (MCUs) manufactured by the respective companies, but HALs like Mbed OS aim to provide a wider HAL that supports a multitude of microcontrollers from various manufacturers. Mbed OS is a monolithic kernel written in C and C++. It was, and continues to be, developed by ARM for low-constrained devices[16]. Although Mbed OS promises a lower memory footprint, the lack of support for a major IoT platform like the ESP32 steers us away from this HAL. Replacement or removal of a HAL like Mbed OS requires an alternative HAL to be implemented in its stead. The target platform of interest is the ESP32. The ESP32, more specifically the ESP32-WROOM32D, is a powerful microcontroller that combines Wi-Fi (802.11 b/g/n), dual-mode Bluetooth 4.2, and a multitude of peripherals including I2S, I2C, SPI, UART, etc. into a single piece of silicon[17, 18]. The SoC within is based on two Xtensa LX6 microcontrollers that run at 240 MHz with 520 kB of SRAM and up to 64 MB of flash storage[17]. Due to the low price, high performance, and flexible footprint, the ESP32 is a highly favourable option for developers deploying an IoT system[19]. The ESP-IDF integrated development framework developed by Espressif is a set of libraries designed in C that acts as a HAL and provides developers with complete access to the hardware[18].
Drivers:
The driver block comprehensively represents the files and pieces of code needed to integrate sensors and actuators from the physical layer into the CADMIUM layer. Specifically, this entails code that invokes APIs from the HAL or other libraries and encapsulates the interactions with the physical environment into atomic DEVS models. The 2 main driver blocks are:
Digital Input: The Digital Input atomic polls a given hardware pin with a predefined polling rate \(\sigma\) and brings the Boolean data to the CADMIUM layer. For this atomic, the set of inputs \(X=\varphi\) and set of output \(Y\) contains a singular Boolean port.
Digital Output: The Digital Output atomic takes a Boolean input and reciprocates the same on a hardware pin. It moves the data from CADMIUM layer to the hardware layer. The set of inputs \(X\) contains a singular Boolean port, while the set of outputs \(Y=\varphi\).
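To illustrate how such a driver atomic can wrap the ESP-IDF HAL, the sketch below pushes a Boolean received from the CADMIUM layer onto a hardware pin through the ESP-IDF gpio driver. The atomic interface shown here is a simplified stand-in; the real Cadmium atomic base class, ports, and message bags are not reproduced.

```cpp
// Illustrative Digital Output driver: CADMIUM layer -> hardware layer.
// The "atomic" structure below is simplified; only the ESP-IDF calls
// (gpio_set_direction, gpio_set_level, gpio_get_level) are real HAL APIs.
#include "driver/gpio.h"

struct DigitalOutputState {
    bool level = false;                      // last Boolean received on the input port
};

class DigitalOutput {
public:
    explicit DigitalOutput(gpio_num_t pin) : pin_(pin) {
        gpio_set_direction(pin_, GPIO_MODE_OUTPUT);      // configure the pin once
    }
    // External transition: a Boolean arrives from the coupled model.
    void externalTransition(DigitalOutputState& s, bool input) const {
        s.level = input;
        gpio_set_level(pin_, s.level ? 1 : 0);           // reflect the value on the pin
    }
    // The output set Y is empty, so the model simply passivates between inputs.
    // The Digital Input counterpart would configure GPIO_MODE_INPUT instead,
    // poll gpio_get_level(pin_) every sigma time units in its internal transition,
    // and emit the Boolean on its single output port.
private:
    gpio_num_t pin_;
};
```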
Logging:
This block represents the set of programs that allow the DEVS logging engine within CADMIUM to interact with the Serial Monitor (or any other forms of text display). Here, virtual functions for 'print to screen' are overridden using the appropriate APIs to interact with the output stream buffers.
Execution timer & Timeout timer:
As has already been discussed in the 'RELATED WORK' section, there are 2 timers that help in the scheduling process. The real-time scheduling algorithm requires multiple API calls to the HAL to define the various timers, to interrupt when a timer limit is reached, and so on. The implementation was done using the gptimers provided by the ESP-IDF.
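A minimal sketch of such a timer wrapper, assuming the ESP-IDF v5 gptimer driver, is shown below. The ExecutionTimer class and its method names are illustrative; only the gptimer_* calls are ESP-IDF APIs.

```cpp
// Illustrative execution-timer wrapper over the ESP-IDF gptimer driver.
#include <cstdint>
#include "driver/gptimer.h"
#include "esp_err.h"

class ExecutionTimer {
public:
    ExecutionTimer() {
        gptimer_config_t cfg = {};
        cfg.clk_src = GPTIMER_CLK_SRC_DEFAULT;
        cfg.direction = GPTIMER_COUNT_UP;
        cfg.resolution_hz = 1000000;                        // 1 MHz, one tick per microsecond
        ESP_ERROR_CHECK(gptimer_new_timer(&cfg, &handle_));
        ESP_ERROR_CHECK(gptimer_enable(handle_));
    }
    void start() {
        ESP_ERROR_CHECK(gptimer_set_raw_count(handle_, 0)); // reset at the start of an event
        ESP_ERROR_CHECK(gptimer_start(handle_));
    }
    uint64_t stop_us() {                                    // elapsed time for this event, in us
        uint64_t ticks = 0;
        ESP_ERROR_CHECK(gptimer_get_raw_count(handle_, &ticks));
        ESP_ERROR_CHECK(gptimer_stop(handle_));
        return ticks;
    }
private:
    gptimer_handle_t handle_ = nullptr;
};
// The timeout (wait) timer can be built analogously, using a gptimer alarm that
// fires after sigma microseconds so the scheduler wakes exactly at the next event.
```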
## 3 Case Study: Blinky
A simple demonstration of the working and implementation of a system using CADMIUM is a Blinky program. A blinky is a simple system that allows us to observe the performance of the hardware running in conjunction with the CADMIUM kernel. The blinky system has an external input and an output, enabling us to see the various response times of the system in different scenarios.
Figure 2: Architecture of CADMIUM
The blinky system has a simple task: blink an LED with a period that switches between 2 values. The period of oscillation at any instant is controlled by a Boolean input into the system. The input toggles a flag. This flag is used as a switch to alternate between the 2 predefined values for the period. Finally, a Boolean output, which is connected to an LED, toggles based on the period. A model can be created that follows the DEVS specification; its block diagram is provided in Figure 3.
The blinkySystem model serves as the top-level model that contains all the atomics. The 'Digital Input' atomic is responsible for accepting user input and is compiled only when building for an embedded board and not during simulation. The Generator atomic model generates Boolean outputs randomly and is compiled only during simulation to provide inputs. The blinky atomic, which is compiled for both building for embedded boards and building for simulation, is responsible for receiving inputs, altering the oscillation period, and producing a Boolean output. Lastly, the Digital output block takes the input from the Blinky atomic and toggles the LED accordingly. This atomic is not compiled during simulation. The switches in Figure 3 aid in visualizing the cases in which the corresponding atomics are compiled or connected to the other atomics. When building for simulation, only the Generator and Blinky are compiled, and when building for deployment, the Digital Input, Blinky, and Digital Outputs are compiled. The implementation of the Digital Input block and Digital Output block have been displayed in previous sections. The implementation of the blinky model is represented by Figure 4.
The blinky atomic has a set of 4 states S = {S1, S2, S3, S4}. The internal transition function transitions Blinky from state S1 to S2 and vice versa after a period of \(\sigma\)1 time units. Similarly, Blinky transitions from S3 to S4 and vice versa after a period of \(\sigma\)2 time units. Formally, \(S1=\delta_{int}(S2),S2=\delta_{int}(S1)\); \(S3=\delta_{int}(S4),S4=\delta_{int}(S3)\). The external transition function (which occurs when an external input is detected) transitions Blinky from S1 to S3 and vice versa, or from S2 to S4 and vice versa, depending on the present state. Formally, \(S2=\delta_{ext}(S4),S4=\delta_{ext}(S2)\); \(S1=\delta_{ext}(S3),S3=\delta_{ext}(S1)\). As shown in Figure 4, the Blinky system toggles between S1 and S2 with a period of \(\sigma\)1 until time t1. At time t1, an external input is received, which triggers \(\delta_{ext}(S1)\), transitioning the system from S1 to S3. The system then continues oscillating between S3 and S4 with a period of \(\sigma\)2 until time t2. At time t2, another input is received, which again triggers the external transition function \(\delta_{ext}(S4)\), transitioning the system from S4 to S2. Every internal transition calls the output function \(\lambda(S1),\lambda(S2),\lambda(S3),\lambda(S4)\) respectively, depending on the present state (the state prior to the transition).
Figure 3: Block diagram of blinkySystem [20]
Figure 4: Blinky state diagram
The complete system comes together as the BlinkySystem coupled model which is the top model that integrates all the atomic models together.
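The behaviour above can be condensed into a small sketch in which the four states are encoded by two Booleans (which half-period is active, and whether the LED is currently on). The member names mirror the DEVS functions (ta, \(\delta_{int}\), \(\delta_{ext}\), \(\lambda\)); this is an illustrative reduction, not the exact case-study code.

```cpp
// Illustrative sketch of the Blinky atomic of Figure 4.
struct BlinkyState {
    bool fastPeriod = true;   // true: S1/S2 pair (period sigma1), false: S3/S4 pair (period sigma2)
    bool ledOn      = false;  // current output level
};

class Blinky {
public:
    Blinky(double sigma1, double sigma2) : sigma1_(sigma1), sigma2_(sigma2) {}

    // ta: time until the next internal event (the active half-period).
    double timeAdvance(const BlinkyState& s) const {
        return s.fastPeriod ? sigma1_ : sigma2_;
    }
    // delta_int: S1<->S2 or S3<->S4, i.e. toggle the LED level.
    void internalTransition(BlinkyState& s) const { s.ledOn = !s.ledOn; }

    // delta_ext: an input on the Boolean port switches the oscillation period
    // (S1<->S3, S2<->S4) without changing the LED level.
    void externalTransition(BlinkyState& s, bool /*input*/) const {
        s.fastPeriod = !s.fastPeriod;
    }
    // lambda: emitted just before each internal transition; the coupled model
    // routes this Boolean to the Digital Output atomic.
    bool output(const BlinkyState& s) const { return s.ledOn; }

private:
    double sigma1_, sigma2_;
};
```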
## Results and Discussion
The system was implemented on the ESP32 using the adapted RT-CADMIUM libraries. The simulation output can be observed in Table 1, and the logging output received from the ESP32 is shown in Figure 5 and Table 2.
Table 1 shows the logger output when simulating the BlinkySystem model. The logger shows the values present at the output ports of every model, along with a timestamp and the unique model ID. The characteristics of the BlinkySystem can be observed from the simulation output and can be cross-verified against the state diagram in Figure 4. As the diagram suggests, the system can be seen oscillating between state output 1 and state output 0 with a period of 0.5s. The first three rows of the table portray this trend. At around the 29.6th second, the generator with model ID 2 is seen to produce an output, which feeds into the Blinky input. According to the state diagram (Figure 4), an external input triggers a change in the state oscillation frequency. The same can be observed in Table 1: when the generator produces an output at the 28.5947th second, the period (\(\sigma\)) changes from 0.5s to 1s.
The implementation of the same on the ESP32 also produces a logging output portrayed in Figure 5.
Figure 5 shows the output of the ESP32 executing the Blinky program. This output serial stream can be parsed into a CSV file, as shown in Table 2.
A closer look at Table 2 shows the similarity of the actual output to the simulation. Here, unlike the simulation, 2 models with model IDs 1 and 2 are seen to be oscillating initially with \(\sigma=0.5\)s; these 2 oscillating models are Blinky and digitalOutput. During simulation, the digitalOutput model remains disconnected and hence is absent from the simulation output. Apart from the presence of digitalOutput, the logger output of the deployed system is comparable to the simulation output. At the 173.6\({}^{\text{th}}\) second the button press is registered, and the \(\sigma\) of oscillation changes from 0.5s to 1s.
## Performance Evaluation
Metrics like memory footprint and response time can be used as external parameters to measure the performance of the system. Further, the amount by which the scheduler has slipped (within the range defined by the MISSED_DEADLINE_TOLERANCE value) will also enable us to measure the performance of the system.
Figure 6 shows the image size of the binary after compilation of the Blinky source code. The parameter of focus for performance evaluation is the flash size, which describes the size of the binary image after compilation of the source code. As per Figure 6, the binary size is shown to be 454003 bytes or 443.3 kB. This is, however, not the final size of the image loaded into the hardware.
\begin{table}
\begin{tabular}{|r|r|l|l|l|} \hline time & model\_id & model\_name & port\_name & data \\ \hline
172 & 1 & blinky & out & 1 \\ \hline
172 & 2 & digitalOuput & & Pin: 1 \\ \hline
172.5 & 1 & blinky & out & 0 \\ \hline
172.5 & 2 & digitalOuput & & Pin: 0 \\ \hline
173 & 1 & blinky & out & 1 \\ \hline
173 & 2 & digitalOuput & & Pin: 1 \\ \hline
173.6 & 3 & digitalInput & out & 1 \\ \hline
174.6 & 1 & blinky & out & 0 \\ \hline
174.6 & 2 & digitalOuput & & Pin: 0 \\ \hline
175.6 & 1 & blinky & out & 1 \\ \hline \end{tabular}
\end{table}
Table 2: Formatted output of the ESP32
Figure 6: Output memory map of idf.py build
Figure 7 shows the output of the flash tool (idf.py flash). This output gives us a deeper insight into the size of the binary image. Here, we can observe that the image is written in three separate address spaces, namely: 0x00001000, 0x00010000 and 0x00008000. Referring to the ESP-IDF documentation[21] regarding the partition tables, we can observe that this partitioning is the default partitioning scheme for the build configuration of 'single factory app (large), no OTA'. Each section is offset by a block multiple of 4 kB (0x1000). From Figure 7, we can observe that the largest data block is stored at address 0x00010000 (corresponding to an offset of 64 kB), which is where the application image resides. We can see that the size of the data loaded into this partition is 518304 bytes; this is the final binary size including the padding. We can also observe that this is then compressed to 242067 bytes or 236.4 kB. So, in practice, the binary size can be said to be **242067 bytes**.
Figure 8 shows the compile output when the same Blinky system is compiled using the Mbed OS version of CADMIUM[20]. Here, we can observe that the total binary size comes to 284468 bytes. This is 41.4 kB more than the ESP-IDF implementation of the same. Considering the image sizes are 200\(+\) kB, a difference of 42 kB may not seem significant, but, considering the available storage size, every byte counts. Fig. 10 shows the output when Blinky is compiled for the STM32 Nucleo-F401RE, which has a total of 512 kB of flash storage. Hence, to take the flash storage into consideration, the percentage consumption of both binaries gives a better overview of the memory footprint.
Figure 8: Output of mbed compile (image by Ezequiel Pecker-Marcosig[20])
Figure 7: Output of the flash tool
For a performance benchmark, the blinky was also compiled without CADMIUM. Figure 9 shows the build and flash outputs.
From Figure 9(a) we can observe that the boilerplate blinky code alone takes up 185077 bytes of storage. After compression, this is reduced to 96517 bytes (as seen in Figure 9(b)). Hence, CADMIUM implemented directly on ESP-IDF takes up approximately 142 kB of memory.
The ESP32 having a total of 4MB of flash, the percentage consumption comes to:
* 4.4% before compression (185077 bytes, without CADMIUM)
* 2.3% after compression (97517 bytes, without CADMIUM)
* 12.3% before compression (518304 bytes, with CADMIUM)
* 5.7% after compression (242067 bytes, with CADMIUM)
The Mbed OS version on the Nucleo board with 512kB of flash storage comes to:
* 54.2% (284468 bytes; no compression algorithm is run prior to flashing)
Figure 11 shows the simulation output of BlinkySystem on the Nucleo F401RE. It is broadly similar to the output seen in Figure 5, but the 'deadline slip' values stand out. On the ESP32 implementation, the deadline slips by 85 microseconds, that is, the scheduler overshoots the deadline time by 85 microseconds. The STM output, on the other hand, shows that the deadline is missed by 85,629 microseconds. This is an improvement of more than 10000%.
Figure 10: Chart showing percentage consumption of flash vs embedded boards.
Figure 9: (a)(b): Output memory map of idf.py build (Blinky sans CADMIUM) and the flash output (from left to right)
## Conclusion
DEMES methodology brings the rigour of formal specification to practical systems implemented on embedded platforms. The RT-CADMIUM platform allowed users to bring their models into the physical world by following the DEVS formalism, so long as they used platforms compliant with Mbed OS. This shortcoming of the ecosystem was identified.
This issue was tackled on two fronts. The foundation of RT-CADMIUM was changed to the newer version of CADMIUM[7], and Mbed OS was removed in favour of a bare-metal implementation. A case study was conducted to evaluate the performance of the system: the Blinky system was deployed on both the adapted version of RT-CADMIUM and the previous Mbed OS version of RT-CADMIUM. The performance was measured based on two metrics: the memory footprint of the source image on the device flash, and the time taken to execute the transition functions. The absolute memory footprint did not change drastically between the versions of RT-CADMIUM; however, in terms of percentage consumption of the available flash, the newer version was found to be drastically more space efficient. Similarly, the time taken to execute transitions was observed to be much lower than with the previous version of RT-CADMIUM. These improvements in execution time and memory footprint show that this development methodology is growing closer to commercialization.
|
2301.13112
|
Benchmarking optimality of time series classification methods in
distinguishing diffusions
|
Statistical optimality benchmarking is crucial for analyzing and designing
time series classification (TSC) algorithms. This study proposes to benchmark
the optimality of TSC algorithms in distinguishing diffusion processes by the
likelihood ratio test (LRT). The LRT is an optimal classifier by the
Neyman-Pearson lemma. The LRT benchmarks are computationally efficient because
the LRT does not need training, and the diffusion processes can be efficiently
simulated and are flexible to reflect the specific features of real-world
applications. We demonstrate the benchmarking with three widely-used TSC
algorithms: random forest, ResNet, and ROCKET. These algorithms can achieve the
LRT optimality for univariate time series and multivariate Gaussian processes.
However, these model-agnostic algorithms are suboptimal in classifying
high-dimensional nonlinear multivariate time series. Additionally, the LRT
benchmark provides tools to analyze the dependence of classification accuracy
on the time length, dimension, temporal sampling frequency, and randomness of
the time series.
|
Zehong Zhang, Fei Lu, Esther Xu Fei, Terry Lyons, Yannis Kevrekidis, Tom Woolf
|
2023-01-30T17:49:12Z
|
http://arxiv.org/abs/2301.13112v3
|
# Benchmarking optimality of time series classification methods in distinguishing diffusions
###### Abstract
Performance benchmarking is a crucial component of time series classification (TSC) algorithm design, and a fast-growing number of datasets have been established for empirical benchmarking. However, the empirical benchmarks are costly and do not guarantee statistical optimality. This study proposes to benchmark the optimality of TSC algorithms in distinguishing diffusion processes by the likelihood ratio test (LRT). The LRT is optimal in the sense of the Neyman-Pearson lemma: it has the smallest false positive rate among classifiers with a controlled level of false negative rate. The LRT requires the likelihood ratio of the time series to be computable. The diffusion processes from stochastic differential equations provide such time series and are flexible in design for generating linear or nonlinear time series. We demonstrate the benchmarking with three scalable state-of-the-art TSC algorithms: random forest, ResNet, and ROCKET. Test results show that they can achieve LRT optimality for univariate time series and multivariate Gaussian processes. However, these model-agnostic algorithms are suboptimal in classifying nonlinear multivariate time series from high-dimensional stochastic interacting particle systems. Additionally, the LRT benchmark provides tools to analyze the dependence of classification accuracy on the time length, dimension, temporal sampling frequency, and randomness of the time series. Thus, the LRT with diffusion processes can systematically and efficiently benchmark the optimality of TSC algorithms and may guide their future improvements.
**Key words** Time series classification, Likelihood ratio test, Optimal benchmark, Stochastic differential equations, ResNet, ROCKET, Random forest 1
Footnote 1: The code for benchmark tests is available at [https://github.com/feilumath/benchmark_TSC](https://github.com/feilumath/benchmark_TSC).
## 1 Introduction
Time series classification (TSC) is one of the central tasks in time series analysis and streaming data processing. Recent years have seen an explosion in the collection of time series data and a surge of TSC algorithms (see e.g., [1, 2, 4, 8, 13, 14, 19, 21, 22, 29, 30]). In particular, the recent reviews [2, 13, 29] have thoroughly compared dozens of TSC algorithms on hundreds of public bakeoff datasets, providing valuable understanding of the algorithms and the TSC tasks.
However, an optimality benchmark remains missing. The need for an optimality benchmark grows along with the fast-growing numbers of datasets and algorithms. Due to a lack of understanding of the complexity of the bakeoff datasets, current empirical benchmarks, which compare all methods using
bakeoff datasets, have skyrocketing computational and data storage costs. Yet, even a top performer cannot be certified as optimal.
An ideal optimality benchmark would have three characteristics: (1) It has a theory-guaranteed optimal reference to provide a direct diagnosis for any TSC method. Notably, a method reaching the benchmark for a type of time series is guaranteed optimal for classifying the underlying stochastic process, and efforts can focus on improving the efficiency and scalability of the method. (2) It is flexible in design to reflect the complexity of time series data in applications, ranging from univariate to multivariate time series, from Gaussian processes to highly nonlinear non-Gaussian processes, and from small to large randomness. (3) It is computationally efficient and scalable.
We propose to benchmark the optimality of binary TSC algorithms in distinguishing diffusion processes by the likelihood ratio test (LRT). The LRT is an optimal classifier because it is uniformly most powerful by the Neyman-Pearson lemma [24]; that is, it has the lowest false positive rate among classifiers with a controlled level of false negative rate. The LRT can be computed for time series sampled from Markov processes with known distributions. Meanwhile, diffusion processes from stochastic differential equations (SDEs) provide a large variety of such Markov processes, and these processes are flexible to reflect the specific features of real-world applications [25, 26]. Additionally, the benchmarking test is computationally efficient and scalable. The LRT does not require training and has a negligible computation cost. Furthermore, the simulation of SDEs can systematically generate large datasets with different lengths, nonlinearities, and levels of randomness. Therefore, LRTs for diffusion processes provide a reference of optimality for the performance (such as the ROC curves and accuracy) of all TSC algorithms.
We demonstrate the LRT benchmarking using three state-of-the-art TSC algorithms: random forest [4], ROCKET [8], and ResNet [30], in five representative classes of diffusion processes. The five processes are Brownian motions with constant drifts, 1-dimensional nonlinear diffusions with different potentials, 1-dimensional linear and nonlinear diffusions, multivariate Ornstein-Uhlenbeck processes, and high-dimensional interacting particle systems. Test results (see, e.g., Figure 5-6) show that the three algorithms achieve optimality in the case of Brownian motions with constant drifts, and they are near optimal for the nonlinear univariate time series and multivariate Gaussian processes. However, these three model-agnostic algorithms are significantly less accurate than the model-aware LRT in the case of high-dimensional nonlinear non-Gaussian processes. Thus, it may be helpful to incorporate model information in developing next-generation TSC algorithms.
Additionally, the LRT benchmarks show that the optimal accuracy of TSC depends on the time series's length, dimension, and temporal sampling frequency. Analysis and numerical tests show that the accuracy increases with either time length or dimension, which enlarges the effective sample size. However, the classification rates are not sensitive to the frequency of the observations. Thus, in data collection, it is more helpful to collect data for a longer time rather than at a higher temporal resolution.
The rest of the paper is organized as follows. In Section 2, we cast the TSC as the learning of a function that maps a time series to a binary output so that a TSC algorithm can be viewed as a hypothesis testing method. In particular, we point out that the likelihood ratio test (LRT) is a uniformly most powerful test by the Neyman-Pearson Lemma. Additionally, we show the computation of the likelihood ratio for diffusion processes. Section 3 analytically computes the LRT for two Gaussian processes. The analysis shows the dependence of the classification accuracy on the time series's dimension, length, and frequency. Section 4 describes three examples of nonlinear diffusion processes and specifies the data generation for benchmarking tests. These examples showcase the design of benchmarking tests. We present in Section 5 the test results of benchmarking three scalable TSC algorithms: the random forest, ResNet, and ROCKET. Finally, the Appendix briefly reviews the Girsanov theorem and hypothesis testing.
Footnote: FN is also called the type I error and FP the type II error. The true positive rate (TPR) is \(1-\alpha_{k}^{0}\) and the false positive rate (FPR) is \(1-\alpha_{k}^{1}\).
## 2 Time series classification and distinguishing diffusions
We recast binary time series classification as a hypothesis testing problem, so that the likelihood ratio test (LRT) provides an optimal classifier by the Neyman-Pearson Lemma. On the other hand, diffusion processes provide a large variety of time series whose LRT can be computed in a scalable fashion. Thus, we propose to benchmark the optimality of TSC classifiers by LRT in distinguishing diffusions.
### TSC as a function learning problem
In the lens of statistical learning, a binary TSC algorithm learns the probabilities that the time series belongs to two classes from training data [6, 15].
Let the data be the time series (either univariate or multivariate) and their labels,
\[\textbf{Data:}\quad\{\mathbf{x}^{(m)},y^{(m)}\}_{m=1}^{M},\quad\mathbf{x}^{(m) }\in\mathbb{R}^{d\times(L+1)},y^{(m)}\in\{0,1\},\]
where for each \(m\), \(\mathbf{x}^{(m)}=x_{t_{0}:t_{L}}^{(m)}=(x_{t_{0}},\ldots,x_{t_{L}})^{(m)}\) is a sample path of a stochastic process \(\mathbf{X}=X_{t_{0}:t_{L}}\) with \(t_{0}<t_{1}<\ldots<t_{L}\) denoting time indices. Here \(y^{(m)}\) has a label 1 if the time series \(\mathbf{x}^{(m)}\) is in class \(\theta_{1}\); otherwise, its label is 0 if the time series is in class \(\theta_{0}\). We denote the two classes by \(\{\theta_{0},\theta_{1}\}\), which will be used as parameters for the time series models.
A TSC algorithm learns a function with a parameter \(\beta\) from data,
\[f_{\beta}(\mathbf{x})=z,\quad\mathbf{x}\in\mathbb{R}^{d(L+1)},\,z\in[0,1], \tag{2.1}\]
such that the value \(f_{\beta}(\mathbf{x})\) approximates the probability of \(\mathbf{x}\) being in class \(\theta_{1}\), i.e., \(\mathbb{P}(\theta=\theta_{1}\mid\mathbf{X}=\mathbf{x})\). This function leads to a classifier for any threshold \(k\in(0,1)\):
\[F(\mathbf{x},k)=\mathbf{1}_{R_{k}}(\mathbf{x}),\,\,\text{where}\,\,R_{k}=\{ \mathbf{x}:f_{\beta}(\mathbf{x})>k\}, \tag{2.2}\]
where \(R_{k}\) is called the _acceptance region_ to classify the time series \(\mathbf{x}\) as in class \(\theta_{1}\) (equivalently, the _rejection region_ for the class \(\theta_{0}\)).
The confusion matrix of the binary classifier (2.2) with \(\theta_{0}\) as positive is shown in Table 1. For a given threshold \(k\), we have a false negative (FN) prediction if \(F(\mathbf{x},k)=1\) while \(\mathbf{x}\) is in class \(\theta_{0}\), and we have a false positive (FP) prediction if \(F(\mathbf{x},k)=0\) while \(\mathbf{x}\) is in class \(\theta_{1}\). The definitions of true positive (TP) and true negative (TN) are similar. The false negative rates (FNR) and the true negative rates (TNR) rates are the probabilities
\[\begin{split}\text{FNR}(k)&=\alpha_{k}^{0}=\mathbb{E}[F(\mathbf{x},k)\mid\theta_{0}]=\mathbb{P}(R_{k}\mid\theta_{0})\approx\frac{FN}{TP+FN},\\ \text{TNR}(k)&=\alpha_{k}^{1}=\mathbb{E}[F(\mathbf{x},k)\mid\theta_{1}]=\mathbb{P}(R_{k}\mid\theta_{1})\approx\frac{TN}{TN+FP},\end{split} \tag{2.3}\]
where the empirical approximations are based on the number of counts.
Two popular metrics evaluating the performance of the classifier are the _Receiver operating characteristic_ (ROC) curve and _accuracy_. The ROC curve is \((1-\alpha_{k}^{0},1-\alpha_{k}^{1})_{k\in(0,1)}\), the curve of True Positive
\begin{table}
\begin{tabular}{c|c c|c c|} \hline & \multicolumn{2}{c|}{Decision} & \multicolumn{2}{c|}{Rates/ Probability of errors} \\ \hline Truth \textbackslash{Decision} & Accept \(\theta_{0}\) & Reject \(\theta_{0}\) & \\ \hline \(\theta_{0}\) (Positive) & TP & FN & TPR = \(1-\alpha_{k}^{0}\) & FNR =\(\alpha_{k}^{0}=\mathbb{E}[F(\mathbf{x},k)\mid\theta_{0})]\) \\ \(\theta_{1}\) (Negative) & FP & TN & FPR = \(1-\alpha_{k}^{1}\) & TNR =\(\alpha_{k}^{1}=\mathbb{E}[F(\mathbf{x},k)\mid\theta_{1})]\) \\ \hline \end{tabular}
\end{table}
Table 1: Confusion matrix of the classifier with \(\{\theta_{0}\}\) being positive.
Rate (TPR, y-axis) versus False Positive Rate (FPR, x-axis), both parametrized by the threshold \(k\) (see e.g., [10] for an introduction). The ROC curve allows the user to define the threshold and to measure the quality of a classifier by the _area under the curve_ (AUC). A rule of thumb is that the larger the AUC, the better the classifier. The accuracy is defined by:
\[\text{Accuracy(k)}\ =\frac{1-\alpha_{k}^{0}+\alpha_{k}^{1}}{2}\approx\frac{TP+TN }{TP+TN+FP+FN},\]
where the approximate equality becomes an equality when the sample sizes in the two classes are the same. The maximal accuracy is independent of the threshold:
\[ACC_{*}=\max_{0\leqslant k\leqslant 1}\ \text{Accuracy(k)}\, \tag{2.4}\]
We will use the AUC and the maximal accuracy to assess the classifiers, because they are independent of a specific threshold. There are many other metrics to fit the goal of a specific field, e.g., choosing a threshold \(k\) to increase the _true positive rate_ (TPR) (aka sensitivity, power, or recall) \(1-\alpha_{k}^{0}\) or to control the false positive rate (FPR) \(1-\alpha_{k}^{1}\) (the complement of the specificity), or a balance between these needs [15].
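For reference, the empirical ROC curve, AUC, and maximal accuracy in (2.4) can be obtained from classifier scores and labels by a simple threshold sweep. The sketch below is a generic illustration using the common convention that label 1 is the positive class (the paper's convention, with \(\theta_{0}\) positive, differs only by a relabeling); it assumes both classes are present in the test set.

```cpp
// Illustrative computation of the empirical ROC/AUC and maximal accuracy
// from scores f_beta(x) in [0,1] and binary labels.
#include <algorithm>
#include <cstddef>
#include <vector>

struct RocResult { double auc; double max_accuracy; };

RocResult evaluate(const std::vector<double>& scores, const std::vector<int>& labels) {
    // Sort sample indices by decreasing score; sweeping the list sweeps the threshold k.
    std::vector<std::size_t> idx(scores.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) { return scores[a] > scores[b]; });

    double P = 0, N = 0;                                   // class sizes (assumed nonzero)
    for (int y : labels) (y == 1 ? P : N) += 1.0;

    double tp = 0, fp = 0, auc = 0;
    double best_acc = N / (P + N);                         // accuracy of the all-negative classifier
    double prev_tpr = 0, prev_fpr = 0;
    for (std::size_t i : idx) {
        (labels[i] == 1 ? tp : fp) += 1.0;                 // lower the threshold past this sample
        const double tpr = tp / P, fpr = fp / N;
        auc += 0.5 * (fpr - prev_fpr) * (tpr + prev_tpr);  // trapezoidal rule on the ROC curve
        best_acc = std::max(best_acc, (tp + (N - fp)) / (P + N));
        prev_tpr = tpr; prev_fpr = fpr;
    }
    return {auc, best_acc};
}
```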
Sampling errors in training and testing. Sampling errors are present in both the training and the testing data, and thus they affect the accuracy of the classifier. The accuracy of the function learned in a classifier can be analyzed through mathematical and statistical learning theory (see e.g., [6, 9, 15]), and non-asymptotic error bounds are available to quantify the dependence on the data size based on concentration inequalities [7, 11]. The sampling error in the testing stage, on the other hand, can be easily analyzed: the empirical approximations of the rates in (2.3) have a sampling error of order \(O(\frac{1}{\sqrt{m}})\) with \(m\) being the number of test samples, as the next lemma shows (its proof is in Appendix A.2).
**Lemma 2.1** (Sampling error in FNR/TNR): _For each classifier in (2.2), the sampling errors in the empirical approximations of the FNR and TNR rates in (2.3) are of order \(\frac{1}{\sqrt{m}}\sigma_{k,i}\) with \(\sigma_{k,i}=\sqrt{\alpha_{k}^{i}(1-\alpha_{k}^{i})}\) for \(i=0,1\), where \(m\) is the number of samples in the test stage. Specifically, let \(\{\mathbf{x}_{j}\}_{j=1}^{m}\) be the test samples, and let \(\widehat{\alpha}_{k,m}^{i}=\frac{1}{m}\sum_{j=1}^{m}F(\mathbf{x}_{j},k)\) conditional on \(\theta_{i}\). Then, \(\sqrt{m}(\widehat{\alpha}_{k,m}^{i}-\alpha_{k}^{i})\) converges in distribution to \(\mathcal{N}(0,\sigma_{k,i}^{2})\) as \(m\to\infty\), and \(\mathbb{P}(|\widehat{\alpha}_{k,m}^{i}-\alpha_{k}^{i}|>\epsilon)\leqslant 2e^{-\frac{m\epsilon^{2}}{2}}\) for any \(\epsilon>0\) and \(m>0\)._
However, the learning theory does not provide empirical optimality criteria for the performance of the classifier. The likelihood ratio test in the next section fills the gap.
### Hypothesis testing and the likelihood ratio test
The hypothesis testing methods construct the classifier function in a statistical inference approach. In particular, the _Neyman-Pearson lemma_ provides a powerful tool for analyzing the optimality of a binary classifier: it shows that the likelihood ratio test is a _uniformly most powerful test_ in the class of tests with the same level (see [5, Chapter 8] and Section A.3 for a brief review).
In hypothesis testing, we set the null hypothesis to be \(H_{0}:\theta=\theta_{0}\) and the alternative hypothesis to be \(H_{1}:\theta=\theta_{1}\), and we select a _rejection region_ \(R_{k}\) with a threshold \(k\) to reject \(\theta_{0}\). Then, the classifier rejects the null hypothesis \(H_{0}\) if the time series is in the rejection region \(R_{k}\). In other words, we get a false negative (FN) if we mistakenly reject \(H_{0}\) while the truth is \(\theta_{0}\), and we get a true negative (TN) if we correctly reject \(H_{0}\) when the truth is \(\theta_{1}\). The false negative rate (FNR) and true negative rate (TNR) are the probabilities in (2.3). The major task in a hypothesis test is to select the rejection region \(R_{k}\), particularly, to select \(R_{k}\) with a tunable threshold \(k\).
The _likelihood ratio test_ (LRT) is a general hypothesis testing method that is as widely applicable as maximum likelihood estimation. It determines the rejection region by statistics derived from the
likelihood ratio. The commonly used statistic is the log-likelihood ratio
\[l(\mathbf{x}\mid\theta_{1},\theta_{0})=\log\frac{p(\mathbf{x}\mid\theta_{1})}{p( \mathbf{x}\mid\theta_{0})}\]
of the time series data \(\mathbf{x}\). From this statistic, we can define a function approximating the probability of \(\mathbf{x}\) being in class \(\theta_{1}\), which is a counterpart of \(f_{\beta}(\mathbf{x})\) in (2.1): \(f(\mathbf{x})=\frac{1}{e^{-l(\mathbf{x}\mid\theta_{1},\theta_{0})}+1}.\) Then, the classifier function for LRT is \(F(\mathbf{x},k)=\mathbf{1}_{R_{k}}(\mathbf{x})\) with the rejection region defined by
\[R_{k}^{\mbox{\tiny{LRT}}}=\{\mathbf{x}:l(\mathbf{x}\mid\theta_{1},\theta_{0})> c_{k}\},\,c_{k}=\log\frac{k}{1-k}, \tag{2.5}\]
for each threshold \(k\in(0,1)\).
The Neyman-Pearson lemma shows that the LRT is optimal in the sense that it has the smallest false positive rate among classifiers with a controlled level of false negative rate:
**Theorem 2.2** (Neyman-Pearson Lemma): _The LRT is a uniformly most powerful classifier. Specifically, let \(\mathbf{x}\) be a sample from one of two distributions with a likelihood ratio \(l(\mathbf{x}\mid\theta_{1},\theta_{0})\) and assume that \(\mathbb{P}(\{\mathbf{x}:l(\mathbf{x}\mid\theta_{1},\theta_{0})=k\})=0\). Then, the test with rejection region \(R_{k}^{\mbox{\tiny{LRT}}}\) defined in (2.5) is uniformly most powerful. That is, it has a false positive rate no larger than any other test with a measurable rejection region \(R\) such that \(\mathbb{P}(R\mid\theta_{0})\leq\mathbb{P}(R_{k}^{\mbox{\tiny{LRT}}}\mid\theta_ {0})\), i.e.,_
\[1-\mathbb{P}(R\mid\theta_{1})\geq 1-\mathbb{P}(R_{k}^{\mbox{\tiny{LRT}}} \mid\theta_{1}),\quad\forall R\ s.t.\ \mathbb{P}(R\mid\theta_{0})\leq\mathbb{P}(R_{k}^{\mbox{\tiny{LRT}}}\mid\theta_ {0}).\]
As a result, the LRT provides an ideal tool to analyze the optimality of TSC algorithms. The ROC curve of the LRT classifier provides an upper bound for the ROC curve of any TSC classifier. Similarly, the LRT classifier's accuracy provides an upper bound for other classifiers.
The LRT classifier can be readily applied to time series with a computable likelihood ratio, and there is no training stage. When the time series is sampled from a Markov process, the transition densities determine the likelihood ratio. Suppose that for each \(\theta_{i}\), the transition probability of the Markov process has a density function \(p(x_{t_{l+1}}\mid x_{t_{l}},\theta_{i})\) for each \(l\). Then, the probability density function of a data path \(x_{t_{0}:t_{L}}\) conditional on \(\theta_{i}\) is
\[p(x_{t_{0}:t_{L}}\mid\theta_{i})=\prod_{l=0}^{L-1}p(x_{t_{l+1}}\mid x_{t_{l}}, \theta_{i}),\]
and the log-likelihood ratio of the path is
\[l(x_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})=\log\frac{p(x_{t_{0}:t_{L}}\mid \theta_{1})}{p(x_{t_{0}:t_{L}}\mid\theta_{0})}=\sum_{l=0}^{L-1}\log\frac{p(x_{ t_{l+1}}\mid x_{t_{l}},\theta_{1})}{p(x_{t_{l+1}}\mid x_{t_{l}},\theta_{0})}. \tag{2.6}\]
However, the transition probabilities and the likelihood ratio are unavailable for most time series, except for a few simple examples such as Gaussian processes and linear models (see Section 3). In particular, to benchmark the performance of TSC algorithms, it is desirable to have nonlinear time series datasets with varying length, temporal sampling frequency, and dimension. The diffusions defined by stochastic differential equations provide a large class of such Markov processes.
### Distinguishing diffusions
Diffusion processes provide a large class of time series whose likelihood ratio can be accurately computed. An Ito diffusion is defined by a stochastic differential equation
\[dX_{t}=b_{\theta}(X_{t})dt+\sigma(X_{t})dB_{t}, \tag{2.7}\]
where \(B_{t}\) is a standard \(\mathbb{R}^{d}\)-valued Brownian motion. Here for simplicity, we assume that both the diffusion coefficient \(\sigma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times d}\) and the drift \(b_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) with parameter \(\theta\) are Lipschitz, and the
diffusion satisfies the uniform elliptic condition \(\sum_{1\leq i,j\leq d}c_{i}c_{j}\sum_{k=1}^{d}\sigma_{ki}(x)\sigma_{kj}(x)\geq\gamma\sum_{i}c_{i}^{2}\) with \(\gamma>0\) for all \(x\) and \(c_{i}\in\mathbb{R}\). Beyond such diffusions, we can also consider Ito processes with \(b\) and \(\sigma\) being general stochastic processes satisfying suitable integrability conditions [25].
The likelihood ratio of a sample path \(x_{t_{0}:t_{L}}\) of the diffusion \(X_{t_{0}:t_{L}}\) satisfying (2.7) can be computed by numerical approximation of the transition probabilities. In particular, when the temporal sampling frequency is high, i.e., \(\max_{l}\{\Delta t_{l}=t_{l+1}-t_{l}\}\) is small, the Euler-Maruyama scheme
\[\Delta X_{t_{l}}=X_{t_{l+1}}-X_{t_{l}}\approx b_{\theta_{i}}(X_{t_{l}})\Delta t _{l}+\sigma(X_{t_{l}})\Delta W_{l}\]
yields an accurate approximation of the transition probability
\[\widehat{p}(X_{t_{l+1}}\mid X_{t_{l}},\theta_{i})\propto e^{-\frac{1}{2\Delta t_{l}}\|\Delta X_{t_{l}}-b_{\theta_{i}}(X_{t_{l}})\Delta t_{l}\|_{\Sigma}^{2}},\]
where \(\Sigma(x)=\sigma\sigma^{\top}(x)\in\mathbb{R}^{d\times d}\) and \(\|z\|_{\Sigma}^{2}=z^{\top}\Sigma^{-1}z\) for any \(z\in\mathbb{R}^{d}\). Using it in (2.6), we obtain an approximate likelihood ratio:
\[\begin{split}&\widehat{l}(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})\\ &=\sum_{l=0}^{L-1}\left([b_{\theta_{1}}-b_{\theta_{0}}](X_{t_{l}})^{\top}\Sigma(X_{t_{l}})^{-1}\Delta X_{t_{l}}-\frac{1}{2}[\|b_{\theta_{1}}\|_{\Sigma}^{2}-\|b_{\theta_{0}}\|_{\Sigma}^{2}](X_{t_{l}})\Delta t_{l}\right).\end{split} \tag{2.8}\]
As the temporal sampling frequency increases, i.e., \(\max_{l}\{t_{l+1}-t_{l}\}\to 0\), the above likelihood ratio converges to the likelihood ratio of the continuous path \(X_{[0,T]}\). The limit ratio is the Radon-Nikodym derivative between the two distributions of the path, as characterized by the Girsanov theorem (see Section A.1):
\[\begin{split}& l(X_{[0,T]}\mid\theta_{1},\theta_{0})\\ =&\int_{0}^{T}[b_{\theta_{1}}-b_{\theta_{0}}](X_{t})^{\top}\Sigma(X_{t})^{-1}dX_{t}-\frac{1}{2}\int_{0}^{T}[\|b_{\theta_{1}}\|_{\Sigma}^{2}-\|b_{\theta_{0}}\|_{\Sigma}^{2}](X_{t})dt.\end{split} \tag{2.9}\]
There are three advantages to benchmarking TSC algorithms by diffusions. First, the LRT of the diffusion processes provides the theoretical optimal rates, which can be used to detect overfitting when training TSC classifiers. Second, the diffusions provide a large variety of testing time series data, whose length, sampling frequency, dimension, and nonlinearity can vary as needed. Third, the likelihood ratio between diffusion processes can be efficiently computed by numerical approximation as in (2.8).
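As a concrete illustration of this pipeline, the sketch below simulates a one-dimensional diffusion with the Euler-Maruyama scheme and evaluates the approximate log-likelihood ratio (2.8) for two candidate drifts; a positive value classifies the path into class \(\theta_{1}\) (threshold \(c_{k}=0\)). The particular drifts, noise level, and step size are illustrative choices, not one of the benchmark examples of Section 4.

```cpp
// Illustrative LRT benchmark step for a scalar diffusion dX = b_theta(X) dt + sigma dB.
#include <cmath>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

using Drift = std::function<double(double)>;

// Euler-Maruyama simulation of one sample path with L steps of size dt.
std::vector<double> simulate(const Drift& b, double sigma, double x0,
                             double dt, int L, std::mt19937& rng) {
    std::normal_distribution<double> dW(0.0, std::sqrt(dt));   // Brownian increments
    std::vector<double> x(L + 1);
    x[0] = x0;
    for (int l = 0; l < L; ++l)
        x[l + 1] = x[l] + b(x[l]) * dt + sigma * dW(rng);
    return x;
}

// Approximate log-likelihood ratio (2.8) with a constant diffusion coefficient sigma.
double logLR(const std::vector<double>& x, const Drift& b1, const Drift& b0,
             double sigma, double dt) {
    double s = 0.0;
    const double inv = 1.0 / (sigma * sigma);                  // Sigma^{-1} for scalar noise
    for (std::size_t l = 0; l + 1 < x.size(); ++l) {
        const double dx = x[l + 1] - x[l];
        s += (b1(x[l]) - b0(x[l])) * inv * dx
           - 0.5 * (b1(x[l]) * b1(x[l]) - b0(x[l]) * b0(x[l])) * inv * dt;
    }
    return s;
}

int main() {
    std::mt19937 rng(0);
    const Drift b0 = [](double x) { return -x; };                  // linear drift
    const Drift b1 = [](double x) { return -x * (x * x - 1.0); };  // double-well drift
    const double sigma = 0.5, dt = 0.01;
    const auto path = simulate(b1, sigma, 0.1, dt, 1000, rng);     // a path from class theta_1
    const bool predict_theta1 = logLR(path, b1, b0, sigma, dt) > 0.0;
    return predict_theta1 ? 0 : 1;                                 // exit code used only for illustration
}
```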
## 3 Examples with analytical likelihood ratios
The likelihood ratio can be computed analytically for Brownian motions with constant drifts and the Ornstein-Uhlenbeck (OU) processes. In particular, these two examples offer insights into how the classification accuracy depends on the temporal sampling frequency, length of paths, the randomness, and the dimension of the time series data.
### Brownian motions with constant drifts
Let \((X_{t},t\geq 0)\) be an \(\mathbb{R}^{d}\)-valued Brownian motion with a constant drift:
\[dX_{t}=\theta dt+\sigma dB_{t},\quad\Leftrightarrow\quad X_{t}=X_{0}+\theta t +\sigma B_{t}, \tag{3.1}\]
where \(\theta\in\{\theta_{0},\theta_{1}\}\subset\mathbb{R}^{d}\) and the process \((B_{t},t\geq 0)\) is the standard Brownian motion starting at \(0\). Then, the exact log-likelihood ratio in (2.6) for a given sample path \(X_{t_{0}:t_{L}}\) is
\[l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})=\sigma^{-2}\left[(\theta_{1}-\theta _{0})^{\top}(X_{t_{L}}-X_{t_{0}})-\frac{1}{2}(|\theta_{1}|^{2}-|\theta_{0}|^{2 })(t_{L}-t_{0})\right].\]
Note that \(X_{t_{L}}-X_{t_{0}}=\theta(t_{L}-t_{0})+\sigma(B_{t_{L}}-B_{t_{0}})\) for each \(\theta\). Thus, conditional on the hypotheses \(\theta=\theta_{0}\) and \(\theta=\theta_{1}\), the likelihood ratios have distributions
\[\text{Hypothesis }\theta=\theta_{0}: l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0}) \sim-m_{l}+v_{l}Z,\] \[\text{Hypothesis }\theta=\theta_{1}: l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0}) \sim m_{l}+v_{l}Z,\]
where \(Z\) is a standard Gaussian random variable and
\[m_{l}=\frac{1}{2\sigma^{2}}|\theta_{1}-\theta_{0}|^{2}(t_{L}-t_{0}),\quad v_{l}=\frac{1}{\sigma}|\theta_{1}-\theta_{0}|\sqrt{t_{L}-t_{0}}.\]
Let the rejection region be \(R_{k}=\{X_{t_{0}:t_{L}}:l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})>c_{k}\}\) with \(c_{k}=\log\frac{k}{1-k}\) as defined in (2.5). Then, the false negative rate (FNR) and the true negative rate (TNR) of the LRT are
\[\text{FNR}(k) =\alpha_{k}^{0}=\mathbb{P}(x_{t_{0}:t_{L}}\in R_{k}\mid\theta_{0} )=\mathbb{P}(Z>c_{k}v_{l}^{-1}+m_{l}v_{l}^{-1})\] \[\text{TNR}(k) =\alpha_{k}^{1}=\mathbb{P}(x_{t_{0}:t_{L}}\in R_{k}\mid\theta_{1} )=\mathbb{P}(Z>c_{k}v_{l}^{-1}-m_{l}v_{l}^{-1}).\]
Then, the accuracy \(\frac{1}{2}(1-\alpha_{k}^{0}+\alpha_{k}^{1})\) is \(ACC_{k}=\frac{1}{2}+\frac{1}{2}\mathbb{P}\left(-m_{l}v_{l}^{-1}<Z-c_{k}v_{l}^{-1}<m_{l}v_{l}^{-1}\right)\). Since \(Z\) is a centered Gaussian, the accuracy is maximized when \(c_{k}=0\), i.e., at the threshold \(k_{*}=\underset{k\in(0,1)}{\arg\max}\ (ACC_{k})=\frac{1}{2}\). As a result, the maximal accuracy is
\[ACC_{*} =\frac{1}{2}+\frac{1}{2}\mathbb{P}\left(-m_{l}v_{l}^{-1}<Z<m_{l} v_{l}^{-1}\right)\] \[=\frac{1}{2}+\frac{1}{2}\mathbb{P}\left(-\frac{1}{2\sigma}|\theta _{1}-\theta_{0}|\sqrt{(t_{L}-t_{0})}<Z<\frac{1}{2\sigma}|\theta_{1}-\theta_{0 }|\sqrt{(t_{L}-t_{0})}\right).\]
The above FNR and TNR rates and the maximal accuracy depend on three factors: the path length \(t_{L}-t_{0}\), the scale of the noise \(\sigma\) (which affects the variance of the time series), and the distance \(|\theta_{1}-\theta_{0}|\) which depends on the dimension \(d\). As either \(\sqrt{t_{L}-t_{0}}\), \(|\theta_{1}-\theta_{0}|\), or \(\sigma^{-1}\) increases, the maximal accuracy increases. For example, when \(\theta_{0}=a_{0}[1,...,1]^{\top}\) and \(\theta_{1}=a_{1}[1,...,1]^{\top}\), \(|\theta_{1}-\theta_{0}|=|a_{1}-a_{0}|d^{1/2}\), and the maximal accuracy is
\[ACC_{k_{*}}=1-\frac{1}{2}\,\mathbb{P}\Big(|Z|\geq\frac{1}{2\sigma}|a_{1}-a_{0}|\sqrt{d(t_{L}-t_{0})}\Big).\]
These rates and the maximal accuracy do not depend on the temporal sampling frequency of the time series because the likelihood ratio is exact. However, the temporal sampling frequency will affect the accuracy when the likelihood ratio is approximated numerically as in (2.8), particularly for nonlinear time series; see the numerical examples in Section 5.
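To make the dependence on the path length concrete, note that the expression above can be rewritten as \(ACC_{*}=\Phi\big(\frac{1}{2\sigma}|\theta_{1}-\theta_{0}|\sqrt{t_{L}-t_{0}}\big)\), where \(\Phi\) is the standard normal CDF. As an illustrative instance (numbers chosen here only for illustration), take \(\sigma=1\), \(d=1\), and \(|a_{1}-a_{0}|=0.5\): an observation window \(t_{L}-t_{0}=4\) gives \(ACC_{*}=\Phi(0.5)\approx 0.69\), while lengthening it to \(t_{L}-t_{0}=16\) gives \(ACC_{*}=\Phi(1)\approx 0.84\); quadrupling the path length doubles the argument of \(\Phi\) and markedly raises the optimal accuracy.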
### Ornstein-Uhlenbeck processes
Consider two \(\mathbb{R}^{d}\)-valued OU processes with parameters \(\theta\in\{\theta_{0},\theta_{1}\}\subset\mathbb{R}\):
\[dX_{t}=\theta X_{t}dt+\sigma dB_{t}\,\Leftrightarrow\,X_{t+\Delta t}=e^{ \theta\Delta t}X_{t}+\sigma\int_{t}^{t+\Delta t}e^{\theta(t+\Delta t-r)}dB_{r} \tag{3.2}\]
for each \(t>0\), where \((B_{t},t\geq 0)\) is an \(\mathbb{R}^{d}\)-valued standard Brownian motion and \(\sigma>0\) is a constant. Then, conditional on \(X_{t}\) and \(\theta_{i}\), the random variable \(X_{t+\Delta t}\) has a distribution \(\mathcal{N}\left(X_{t}e^{\theta_{i}\Delta t},\frac{\sigma^{2}}{2\theta_{i}}\left(e^{2\theta_{i}\Delta t}-1\right)I_{d}\right)\), and the transition probability density of this Markov process is
\[p(x_{t+\Delta t}\mid x_{t},\theta_{i})=(2\pi\sigma_{i,\Delta t}^{2})^{-d/2}\exp\left(-\frac{1}{2\sigma_{i,\Delta t}^{2}}\|x_{t+\Delta t}-e^{\theta_{i}\Delta t}x_{t}\|^{2}\right)\]
with \(\sigma^{2}_{i,\Delta t}=\frac{\sigma^{2}}{2\theta_{i}}\left(e^{2\theta_{i}\Delta t}-1\right)\). Let \(X_{t_{0}:t_{L}}\) be a discrete path with \(t_{l}=l\Delta t\) for \(0\leq l\leq L\). By the Markov property, the logarithm of the probability density of \(X_{t_{0}:t_{L}}\) conditional on \(\theta_{i}\) is
\[\log p(X_{t_{0}:t_{L}}\mid\theta_{i})=C-\frac{dL}{2}\log(\sigma^{2}_{i,\Delta t })-\frac{1}{2\sigma^{2}_{i,\Delta t}}\sum_{l=0}^{L-1}\|X_{t_{l+1}}-e^{\theta_ {i}\Delta t}X_{t_{l}}\|^{2},\]
where \(C\) is a constant. Thus, the log-likelihood ratio in (2.6) is
\[l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})= \frac{dL}{2}\log\left(\frac{\sigma^{2}_{0,\Delta t}}{\sigma^{2}_ {1,\Delta t}}\right)\] \[+\frac{1}{2}\sum_{l=0}^{L-1}\left(\frac{\|X_{t_{l+1}}-e^{\theta_ {0}\Delta t}X_{t_{l}}\|^{2}}{\sigma^{2}_{0,\Delta t}}-\frac{\|X_{t_{l+1}}-e^{ \theta_{1}\Delta t}X_{t_{l}}\|^{2}}{\sigma^{2}_{1,\Delta t}}\right).\]
Let the rejection region be \(R_{k}=\{X_{t_{0}:t_{L}}:l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})>c_{k}\}\). Note that conditional on \(\theta_{0}\), \(N_{l}:=\frac{1}{\sigma_{0,\Delta t}}\left(X_{t_{l+1}}-e^{\theta_{0}\Delta t}X_{t_{l}}\right)\) has a distribution \(\mathcal{N}(0,I_{d})\) for each \(l\), and \(X_{t_{l+1}}=e^{\theta_{0}\Delta t}X_{t_{l}}+\sigma_{0,\Delta t}N_{l}\). Then, with \(Y_{l}=(e^{\theta_{1}\Delta t}-e^{\theta_{0}\Delta t})X_{t_{l}}+\sigma_{0,\Delta t}N_{l}\), the false negative rate (FNR) is
\[\alpha^{0}_{k}= \mathbb{P}\left(l(X_{t_{0}:t_{L}}\mid\theta_{1},\theta_{0})>c_{k} \mid\theta_{0}\right)\] \[= \mathbb{P}\left(\sum_{l=0}^{L-1}\left[\|N_{l}\|^{2}-\sigma^{-2}_{ 1,\Delta t}\|Y_{l}\|^{2}\right]>2c_{k}-dL\log\left(\frac{\sigma^{2}_{0,\Delta t }}{\sigma^{2}_{1,\Delta t}}\right)\right),\]
with \(N_{l}\sim\mathcal{N}(0,I_{d})\). Similarly, denoting \(Y^{\prime}_{l}=(e^{\theta_{0}\Delta t}-e^{\theta_{1}\Delta t})X_{t_{l}}+ \sigma_{1,\Delta t}N_{l}\), we can compute the true negative rate (TNR)
\[\alpha^{1}_{k}=\mathbb{P}\left(\sum_{l=0}^{L-1}\left[\sigma^{-2}_{0,\Delta t} \|Y^{\prime}_{l}\|^{2}-\|N_{l}\|^{2}\right]>2c_{k}-dL\log\left(\frac{\sigma^{2 }_{0,\Delta t}}{\sigma^{2}_{1,\Delta t}}\right)\right).\]
The optimal threshold \(k=\underset{k}{\arg\max}\ \frac{1}{2}(1-\alpha^{0}_{k}+\alpha^{1}_{k})\) depends on the various factors of the time series, and so does the maximal accuracy. The numerical examples in Section 5 show that the maximal accuracy increases as either \(d\) or \(L\) increases.
## 4 Benchmark design: example diffusions
We demonstrate the construction of diffusions for TSC benchmarking with three representative examples. In each example, the procedure is straightforward: first, we construct pairs of diffusions by varying the drifts. Then, we generate data from these diffusions and compute the LRT statistics, which will be used as a reference for the performance of the state-of-the-art machine learning TSC algorithms in the next section.
### Diffusions with different drifts
Nonlinear diffusions can be constructed by varying the drifts \(\{b_{\theta_{i}}\}_{i=0,1}\):
\[dX_{t}=b_{\theta_{i}}(X_{t})\,dt+\sigma(X_{t})dB_{t},\quad b_{\theta_{i}}(X_{ t})=\sum_{j=1}^{J}\theta_{i,j}\phi_{j}(X_{t}), \tag{4.1}\]
where \(X_{t}\in\mathbb{R}^{d}\), \(\theta_{i}=(\theta_{i,1},\ldots,\theta_{i,J})\in\mathbb{R}^{J}\) are the parameters, \(\{\phi_{j}\}\) are _pre-specified_ basis functions, and \((B_{t},t\geq 0)\) is the standard Brownian motion in \(\mathbb{R}^{d}\). Here the diffusion coefficient \(\sigma(X_{t})\) is the same for the two diffusions, representing either a multiplicative noise (when it depends on the state) or an additive noise (when it is a constant). To test the optimality of the TSC algorithms, we consider three pairs of nonlinear diffusions: gradient systems with different potentials, SDEs with linear and nonlinear drifts, and high-dimensional interacting particle systems with different interaction kernels.
**Example 4.1** (Different potentials): _Consider two gradient systems with different potentials: a double-well potential \(V_{\theta_{0}}(x)=\frac{1}{4}(|x|^{2}-1)^{2}\) and a flat single-well potential \(V_{\theta_{1}}(x)=\frac{1}{4}|x|^{4}\):_
\[dX_{t}=-\nabla V_{\theta_{i}}(X_{t})dt+dB_{t}.\]
_Writing them in the parametric form \(V_{\theta_{i}}(X)=\sum_{j=0}^{4}\theta_{i,j}|x|^{j}\) with \(\theta_{i}=(\theta_{i,0},\ldots,\theta_{i,4})\), we have \(\theta_{0}=\frac{1}{4}(1,0,-2,0,1)\) and \(\theta_{1}=(0,0,0,0,\frac{1}{4})\)._
The double-well potential is a widely used prototype model for systems with metastable states [26]. These two potentials are visually different; see Figure 1 (left). Each potential is confining and leads to an ergodic process with a stationary distribution. Thus, long sample paths that explore the full landscape of the potentials can distinguish the two diffusions through their empirical densities. However, short sample paths look similar and are difficult to distinguish, as shown in Figure 1 (right).
**Example 4.2** (Linear vs. nonlinear drifts): _Consider two 1D Itô processes_
\[dX_{t}=b_{\theta}(t,X_{t})dt+X_{t}dB_{t}\]
_with linear and nonlinear drifts \(b_{\theta_{0}}(t,x)=-\pi x+\sin(\pi t)\) and \(b_{\theta_{1}}(t,x)=-0.1x+\cos(\pi x)\), which can be written as \(b_{\theta_{i}}(t,x)=\theta_{i,1}x+\theta_{i,2}\cos(\pi x)+\theta_{i,3}\sin(\pi t)\) with \(\theta_{0}=(-\pi,0,1)\) and \(\theta_{1}=(-0.1,1,0)\)._
The two drift functions are clearly different, since \(b_{\theta_{0}}(t,x)\) is linear in \(x\) while the other is nonlinear in \(x\). Their sample paths are also visually different: the sample paths of \(b_{\theta_{0}}\) are smoother than those of \(b_{\theta_{1}}\) (they decay faster); see Figure 2. Thus, we expect all TSC algorithms to achieve a high accuracy.
Figure 1: Different potentials in Example 4.1 and a few sample paths.
Figure 2: Linear v.s. nonlinear drifts in Example 4.2 and a few typical sample paths.
**Example 4.3** (Interacting particles): _Consider a system with \(N\) interacting agents with \(X_{t}^{j}\in\mathbb{R}^{d_{1}}\) denoting the position or opinion of the \(j\)-th agent at time \(t\). Suppose that the agents interact with each other according to the following stochastic differential equation:_
\[dX_{t}^{j}=\frac{1}{N}\sum_{i=1}^{N}\phi_{\theta}(\|X_{t}^{j}-X_{t}^{i}\|)(X_{t}^{j}-X_{t}^{i})\,dt+\sigma dB_{t}^{j},\]
_where \(\phi_{\theta}:\mathbb{R}^{+}\rightarrow\mathbb{R}\) is the interaction kernel, \(\{B_{t}^{j},j=1,\ldots,N\}\) are independent standard Brownian motions, and \(\sigma>0\) is a scalar for the strength of the stochastic force. We will consider two types of interaction kernels (see Figure 3 (left))_
\[\phi_{\theta_{0}}(r)=\begin{cases}0.2,&r\in[0,\sqrt{2}),\\ 2,&r\in[\sqrt{2},2),\\ 0,&r\in[2,\infty).\end{cases}\qquad\phi_{\theta_{1}}(r)=\begin{cases}2,&r\in[0,\sqrt{2}),\\ 0.2,&r\in[\sqrt{2},2),\\ 0,&r\in[2,\infty).\end{cases}\]
_This system leads to high-dimensional data, with \(X_{t}=(X_{t}^{1},\ldots,X_{t}^{N})\in\mathbb{R}^{d}\) with \(d=d_{1}N\). We will consider \(d_{1}=2\) and \(\sigma=1\) with \(N\) varying to change the dimension of the system._
Such interacting particle systems have been increasingly studied because of their wide range of applications in biology, engineering, and social science (see, e.g., [3, 17, 20, 23]). The difference between the two kernels is the strength of interaction between "far" and "close" neighbors: the kernel \(\phi_{\theta_{1}}\) makes close neighbors interact more strongly than those far away, whereas the kernel \(\phi_{\theta_{0}}\) makes far neighbors interact more strongly than those nearby. Then, the dynamics of the two systems are different, and it is shown in [23] that the more heterophilious kernel \(\phi_{\theta_{0}}\) enhances consensus when there is no stochastic force (i.e., the system is deterministic). As a result, it is relatively easy to distinguish the two diffusions when the stochastic force is small. On the other hand, when the stochastic force is relatively large (e.g., \(\sigma=1\)), the sample paths of the agents in the two systems are similar (Figure 3 (right)), making the classification a difficult task.
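A rough simulation sketch for this particle system is shown below; the number of agents, step size, and horizon are placeholder values, and the vectorized drift follows the sign convention of the equation above.

```python
# Sketch of simulating the interacting particle system in Example 4.3 with the
# piecewise-constant kernels; all numerical choices are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)

def phi0(r):
    return np.where(r < np.sqrt(2), 0.2, np.where(r < 2, 2.0, 0.0))

def phi1(r):
    return np.where(r < np.sqrt(2), 2.0, np.where(r < 2, 0.2, 0.0))

def simulate_particles(phi, N=3, d1=2, sigma=1.0, dt=0.01, n_steps=200):
    X = rng.standard_normal((N, d1))
    traj = [X.copy()]
    for _ in range(n_steps):
        diff = X[:, None, :] - X[None, :, :]             # pairwise X^j - X^i
        r = np.linalg.norm(diff, axis=-1)                # pairwise distances
        drift = (phi(r)[..., None] * diff).mean(axis=1)  # (1/N) sum_i phi(r)(X^j - X^i)
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(X.shape)
        traj.append(X.copy())
    return np.stack(traj)                                 # shape (n_steps+1, N, d1)

paths0 = simulate_particles(phi0)
paths1 = simulate_particles(phi1)
```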
### Data generation and the LRT benchmarks
The simulated diffusion processes allow us to test the dependence of classification performance on three parameters: path length in time \(t_{L}\), the dimension \(d\) of the state, and temporal sampling frequency (by varying the time gap \(\Delta t\)). We test each of the three parameters with four values using two diffusion models, thus in total we generate 24 datasets in 6 cases with these parameters specified in Table 2.
In each dataset, the training data consists of \(M=2000\) sample trajectories \(\{X_{t_{0}:t_{L}}^{(m)}\}_{m=1}^{M}\) of the pair of \(\mathbb{R}^{d}\)-valued diffusions with \(\theta\in\{\theta_{0},\theta_{1}\}\), with 1000 paths for each diffusion in the pair. Here the time instances are \(t_{l}=l\Delta t\), and these data paths are downsampled from the solutions of the SDEs simulated by the Euler-Maruyama scheme with a fine time step \(\delta=0.01\). For example, the path with \(\Delta t=0.1\) makes
Figure 3: Interaction kernels in Example 4.3 and sample paths of the 1st dimension of an agent.
an observation every 10 time steps from the fine simulated solution. The initial conditions \(\{X_{t_{0}}^{(m)}\}_{m=1}^{M}\) are sampled from the standard normal distribution in \(\mathbb{R}^{d}\). Each sample path is augmented with its time grid \(t_{0}:t_{L}\) with \(t_{0}=0\).
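The sketch below illustrates this construction: a fine Euler-Maruyama solution with step \(\delta=0.01\) is downsampled to the observation grid \(t_{l}=l\Delta t\). The drift and diffusion arguments are placeholders for the models of Table 2, and the additive-noise form is an assumption made only for brevity.

```python
# Sketch of the data-generation step: simulate with a fine Euler-Maruyama step
# delta = 0.01, then keep every (Delta t / delta)-th state as the observed series.
import numpy as np

rng = np.random.default_rng(4)
delta, dt, t_L, d = 0.01, 0.1, 2.0, 1
stride, L = int(round(dt / delta)), int(round(t_L / dt))

def fine_path(b, sigma, n_steps):
    X = np.empty((n_steps + 1, d))
    X[0] = rng.standard_normal(d)                    # X_0 ~ N(0, I_d)
    for n in range(n_steps):
        X[n + 1] = X[n] + b(X[n]) * delta + sigma * np.sqrt(delta) * rng.standard_normal(d)
    return X

def make_dataset(b0, b1, sigma, n_per_class=1000):
    paths, labels = [], []
    for label, b in enumerate((b0, b1)):
        for _ in range(n_per_class):
            paths.append(fine_path(b, sigma, L * stride)[::stride])  # downsample to Delta t
            labels.append(label)
    return np.stack(paths), np.array(labels), np.arange(L + 1) * dt  # paths, labels, time grid
```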
For each dataset, we obtain two types of LRT benchmarks by computing the LRT in two ways: one using the fine paths and the other using the time series dataset, both computing the likelihood ratio with the Euler-Maruyama approximation in (2.8). Since no training is needed, each LRT classifier makes predictions directly on the whole dataset of \(M\) paths and returns a single ROC curve, AUC, and ACC\({}_{*}\), which are used as references.
The LRT classifier using the fine solution is called "_LRT hidden truth_", and it provides the optimal classification rates by the Neyman-Pearson lemma (see Theorem 2.2). The other LRT classifier using the training data is called "_LRT numerical_". It does not use the hidden fine path, but it uses the diffusion model information that is not used by the TSC algorithms. It has a relatively large numerical error when the SDE is nonlinear, particularly when the observation time interval \(\Delta t\) is much larger than the simulation time step \(\delta\). Thus, it provides a lower baseline for the TSC algorithms. The two LRT benchmarks are the same when the time series are samples of a Gaussian process from a linear SDE, e.g., the cases of Brownian motions with constant drifts and OU processes.
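For a path observed on the coarse grid, the "LRT numerical" statistic can be computed as in the following sketch, which discretizes the Girsanov log-likelihood ratio (cf. (2.8)) with the Euler-Maruyama approximation. It assumes a constant scalar diffusion variance and placeholder drift functions; none of the names are from the actual benchmark code.

```python
# Sketch of the "LRT numerical" statistic: the Girsanov log-likelihood ratio
# evaluated with the Euler-Maruyama approximation on the observed (coarse) grid.
import numpy as np

def em_loglik_ratio(X, dt, b0, b1, sigma2):
    """log dP_{theta1}/dP_{theta0} along a path X of shape (L+1, d), assuming
    additive noise with variance sigma2 and drift functions b0, b1."""
    x, dx = X[:-1], np.diff(X, axis=0)
    db = b1(x) - b0(x)
    stoch = np.sum(db * dx) / sigma2
    drift = 0.5 * dt * np.sum(b1(x) ** 2 - b0(x) ** 2) / sigma2
    return stoch - drift

# A path is classified as theta_1 if the statistic exceeds a threshold c (e.g., c = 0).
```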
### Discussions on benchmark design
The LRT benchmark design has two main components: selection of the diffusion processes and generation of simulated data. In addition to the examples in Table 2, there is a large variety of diffusion processes from stochastic differential equations in the form of (4.1), such as gradient systems and stochastic Hamiltonian systems [25, 26]. The two diffusions should have the same diffusion coefficient, so that the likelihood ratio can be computed based on the Girsanov theorem.
To generate simulated data, we recommend using the Euler-Maruyama scheme so that the likelihood ratio of the fine trajectory is exact. The time series data are downsampled from the fine trajectories. It is helpful to compute two LRT benchmarks, one using the fine trajectories and the other using the downsampled data, to provide an optimality benchmark and a lower baseline benchmark. In particular, the optimality benchmark can detect the overfitting of a TSC algorithm in the training stage.
Four parameters can be tuned to adjust the theoretical classification accuracy: the time length of paths, the dimension, the temporal sampling frequency, and the strength of the driving noise (as suggested by the analysis in Section 3). The time length of paths and the dimension affect the effective sample size and hence the classification rates. The temporal sampling frequency affects the LRT baseline, but it may have a limited effect on the model-agnostic TSC algorithms. Lastly, a large driving noise dims the signal from the drifts, thus lowering the classification accuracy.
\begin{table}
\begin{tabular}{l|l l l} Model & \(d\) & \(L\) & \(t_{L}\), \(\Delta t\) \\ \hline
**a)** Constant drifts & 1 & \(\{10,20,40,80\}\) & \(\{1,2,4,8\}\), \(0.1\) \\
**b)** Different potentials & 1 & \(\{20,40,80,160\}\) & \(\{2,4,8,16\}\), \(0.1\) \\
**c)** OU processes & \(\{1,2,4,8\}\) & 20 & 2, \(0.1\) \\
**d)** Interacting particles & \(\{6,12,24,48\}\) & 20 & 2, \(0.1\) \\
**e)** Linear vs. nonlinear & 1 & \(\{5,10,20,40\}\) & 1, \(0.1\times\{2,1,0.5,0.25\}\) \\
**f)** Interacting particles & 24 & \(\{10,20,40,80\}\) & 4, \(0.1\times\{4,2,1,0.5\}\) \\ \end{tabular}
* The models “Constant drifts” and “OU processes” are defined in Equations (3.1) and (3.2), and the models “Different potentials”, “Interacting particles” and “Linear vs. nonlinear” are defined in Examples 4.1–4.3.
\end{table}
Table 2: Settings of the time series data in numerical tests.
## 5 Benchmarking random forest, ROCKET and ResNet
### Random forest, ROCKET, and ResNet
We benchmark three scalable TSC methods: random forest [4], ROCKET [8], and ResNet [30]. They have been shown to be state-of-the-art in recent review papers [29, 13, 2]. In particular, the most recent review [29] compares 11 multivariate time series classifiers that are top performers in [2, 13], comprising both non-deep-learning methods (such as ROCKET and HIVE-COTE (Hierarchical Vote Collective Of Transformation-based Ensembles) [19]) and deep learning methods (such as ResNet and InceptionTime [14]), using 26 UEA archive datasets [1]. The recommended method is ROCKET due to its high overall accuracy and remarkably fast training time.
Random Forest.The random forest (RF) is an ensemble learning technique that combines a large number of decision trees, and it is applicable to both classification and regression. The original RF described in [4] is a classifier consisting of a collection of tree-structured classifiers \(\{f(\mathbf{x},\beta_{i})\}_{i=1}^{n_{T}}\) with independent identically distributed parameters \(\beta_{i}\), where each tree casts a unit vote for the input \(\mathbf{x}\) to be in a class. These votes lead to a function \(f_{\beta}(\mathbf{x})=\frac{1}{n_{T}}\sum_{i=1}^{n_{T}}f(\mathbf{x},\beta_{i})\) approximating the probability of \(\mathbf{x}\) being in the class (i.e., the probability \(\mathbb{P}(\theta=\theta_{1}\mid\mathbf{x})\) of \(\mathbf{x}\) being in the class \(\theta_{1}\) in our notation in Section 2.1). The classifier function with a threshold \(k\) is \(F(\mathbf{x},k)\) as in (2.2). It is user-friendly, with only a few parameters that are easy to tune for robust performance, and its performance is comparable to that of other classifiers such as discriminant analysis, support vector machines, and neural networks [28, 18].
We use the default HalvingRandomSearchCV strategy in scikit-learn [27] to search for parameter values in the ranges listed below.
\begin{tabular}{c|c c c c c} & \# of trees & max depth & max features & min SS & bootstrap \\ \hline RF & \(\{10:100\}\) & \(\{3,\text{None}\}\) & \(\{1:11\}\) & \(\{2:11\}\) & \(\{\text{True},\text{False}\}\) \\ \end{tabular} where "min SS" represents the minimal samples split, and the quality of a split is measured by the Gini index. Note that the number of trees is kept moderate so as to have a computational cost comparable with the other methods.
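A minimal sketch of this setup is shown below; the feature matrix (each path flattened into a vector), the labels, and the exact search ranges are placeholders mirroring the table above, not the benchmark code itself.

```python
# Sketch of the random-forest classifier with the HalvingRandomSearchCV strategy
# described above; X_train/y_train are placeholders (paths flattened into features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_halving_search_cv  # noqa: F401 (enables the estimator)
from sklearn.model_selection import HalvingRandomSearchCV

param_distributions = {
    "n_estimators": np.arange(10, 101),
    "max_depth": [3, None],
    "max_features": np.arange(1, 11),
    "min_samples_split": np.arange(2, 11),
    "bootstrap": [True, False],
}
search = HalvingRandomSearchCV(RandomForestClassifier(criterion="gini"),
                               param_distributions, random_state=0)
# search.fit(X_train, y_train)
# scores = search.predict_proba(X_test)[:, 1]   # thresholded to produce the ROC curve
```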
ResNet.The deep residual network (ResNet) for time series classification [30] is a network with three consecutive blocks, each comprised of three convolutional layers, followed by a global average pooling layer and a final dense layer with softmax activation function. The major characteristic is that the three consecutive blocks are connected by residual "shortcut" connections, enabling the flow of the gradient directly through them, thus reducing the vanishing gradient effect [12]. It outperforms other deep learning time series classifiers in [13], especially for univariate datasets [30].
We maintain all hyper-parameter settings from [13].
\begin{tabular}{c|c c c c c} Structure & layers & activate & normalize & residue & dropout \\ \hline ResNet & 9+2 & ReLU & batch & between blocks & none \\ \end{tabular} There are nine convolution layers in the three blocks, each with a ReLU activation function preceded by a batch normalization operation. The number of filters in each convolution layer is 64 in the first block, and 128 in the second and third blocks. In each residual block, the kernel size (the length of the filter) is set to 8, 5, and 3 for the first, second, and third convolutions, respectively. The optimization settings are also similar to [13]:
\begin{tabular}{c|c c c c c c} Training & optimizer & rlr & epochs & batch & learning rate & weight decay \\ \hline ResNet & Adam & yes & 150 & 16 & 0.001 & 0.0 \\ \end{tabular} where "rlr" means that the learning rate is reduced by half if the model's training loss has not improved for 5 consecutive epochs, with a minimum learning rate of 0.0001. Here we set the number of epochs to 150 to have a computational cost comparable with the other methods while maintaining accuracy.
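The sketch below assembles one version of such a network with tf.keras; it follows the filter counts and kernel sizes quoted above but is not the reference implementation of [30, 13], and the input shape, class count, and placement of the final activation in each block are simplifying assumptions.

```python
# Sketch of a time-series ResNet in the spirit of the description above (tf.keras).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, n_filters):
    y = x
    for kernel_size in (8, 5, 3):
        y = layers.Conv1D(n_filters, kernel_size, padding="same")(y)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
    # Shortcut connection: match the channel count with a 1x1 convolution, then add.
    shortcut = layers.BatchNormalization()(layers.Conv1D(n_filters, 1, padding="same")(x))
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

inputs = tf.keras.Input(shape=(None, 1))          # (series length, channels)
x = residual_block(inputs, 64)
x = residual_block(x, 128)
x = residual_block(x, 128)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```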
ROCKET.The ROCKET (Random Convolutional Kernel Transform) [8] is the current state-of-the-art multivariate time series classifier [29]. It uses random transformations followed by a linear classifier (ridge regression or logistic regression). In the transformation part, a large number of random convolution kernels are applied to each time series, with each kernel producing a feature map. From each of these feature maps, two features are extracted: the maximal value and the proportion of positive values (ppv). Thus, each random kernel extracts two features from each time series. The linear classifier then classifies the time series based on these features.
We keep the default setting for ROCKET in the _sktime_ repository 2, and we use the ridge regression (the regularization strength parameter \(\alpha\) is searched by the built-in function RidgeCV). The randomness comes from the kernel's parameters: length, weights, bias, dilation, and padding:
Footnote 2: [https://github.com/alan-turing-institute/sktime/blob/master/sktime/transformers/series_as_features/rocket.py](https://github.com/alan-turing-institute/sktime/blob/master/sktime/transformers/series_as_features/rocket.py).
\begin{tabular}{l|l l l l l} Kernel & length & weight & dilation & padding or not & stride \\ \hline ROCKET & \(\{7,9,11\}\) & \(\mathcal{N}(0,1)\) & \([\![2^{x}]\!]\) & equal probability & \(1\). \\ \end{tabular} Here \(x\sim\mathcal{N}(0,A)\) with \(A=\log_{2}\frac{l_{\mathrm{input}}-1}{l_{\mathrm{kernel}}-1}\), where \(l_{\mathrm{input}}\) and \(l_{\mathrm{kernel}}\) are the lengths of the time series and the kernel. The number of kernels is set to \(10000\), resulting in \(20000\) features for each time series.
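The following stripped-down sketch mimics the ROCKET transform (random kernels, max and ppv features, a ridge classifier); it simplifies the dilation sampling and omits padding, and it is not the sktime implementation referenced above. The commented-out usage assumes placeholder arrays X_train and y_train.

```python
# Stripped-down sketch of ROCKET-style features: random kernels, max and ppv per kernel.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(3)

def random_kernels(n_kernels, input_length):
    kernels = []
    for _ in range(n_kernels):
        length = int(rng.choice([7, 9, 11]))
        weights = rng.standard_normal(length)
        weights -= weights.mean()                      # mean-centered weights
        bias = rng.uniform(-1, 1)
        max_exp = np.log2((input_length - 1) / (length - 1))
        dilation = int(2 ** rng.uniform(0, max_exp))   # simplified dilation sampling
        kernels.append((weights, bias, dilation))
    return kernels

def transform(X, kernels):
    features = []
    for x in X:                                        # x: one univariate series
        feats = []
        for weights, bias, dilation in kernels:
            idx = np.arange(len(weights)) * dilation
            conv = np.array([x[i + idx] @ weights + bias
                             for i in range(len(x) - idx[-1])])
            feats += [conv.max(), (conv > 0).mean()]   # max and ppv
        features.append(feats)
    return np.asarray(features)

# kernels = random_kernels(10_000, input_length=L + 1)
# clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(transform(X_train, kernels), y_train)
```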
### ROC curves in a typical test
We compare the performance of these TSC algorithms with the LRT benchmarks in three statistics: the ROC curve in a typical test, and the box-and-whisker plots of the AUC (area under the ROC curve) and the optimal accuracy (denoted by ACC\({}_{*}\)) in \(40\) different runs. In each run, we train the algorithms using randomly sampled \(3/4\) of the data paths and use the remaining \(1/4\) of the data for testing. Thus, each algorithm is trained using \(M_{\mathrm{training}}=\frac{3}{4}M=1500\) sample paths and the prediction rates are computed using \(\frac{1}{4}M=500\) sample paths. By Lemma 2.1, each prediction rate has a standard deviation at the scale of \(\frac{0.5}{\sqrt{500}}\approx 0.02\). Thus, two algorithms perform similarly if the difference between their rates is within the sampling error of \(0.04\) (two standard deviations).
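The evaluation itself can be summarized by the short sketch below, where the classifier scores and labels are placeholders; \(\mathrm{ACC}_{*}\) is computed here as the threshold-maximized balanced accuracy, matching the definition used for the LRT.

```python
# Sketch of computing the ROC curve, AUC, and ACC_* from held-out scores and labels.
import numpy as np
from sklearn.metrics import roc_curve, auc

def evaluate(scores, y_true):
    """scores: P(theta = theta_1 | x) or any monotone statistic; y_true: 0/1 labels."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    area = auc(fpr, tpr)
    acc_star = np.max(0.5 * (tpr + (1 - fpr)))   # balanced accuracy, maximized over thresholds
    return fpr, tpr, area, acc_star
```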
Figure 4 shows the ROC curves in a typical test in each of the \(6\) cases in Table 2. Each case uses the first of its four settings, e.g., the constant drifts dataset has \((t_{L},d,\Delta t)=(1,1,0.1)\), the dataset for different potentials has \((t_{L},d,\Delta t)=(2,1,0.1)\), and the OU processes dataset in Case (c) has \((t_{L},d,\Delta t)=(2,1,0.1)\). The datasets for the interacting particles in Cases (d) and (f) have \((t_{L},d,\Delta t)=(2,6,0.1)\) and \((t_{L},d,\Delta t)=(4,24,0.4)\), respectively.
For univariate time series in the Cases (a,b,e), the three algorithms either reach or are close to the optimality benchmark by the LRT. They achieve the optimal benchmark of "LRT hidden truth" for the Brownian motion with constant drifts. They are nearly optimal with curves in-between the two LRT benchmarks in distinguishing the diffusions with different potentials and the diffusions with linear or nonlinear drifts.
For the univariate time series in Case (c) and the multivariate time series in Case (f), the three algorithms are suboptimal as their ROC curves are below the "LRT numerical" with \(\Delta t=0.1\) and \(\Delta t=0.4\), respectively.
However, the three algorithms have unsuccessful classifications in Case (d), which is the multivariate interacting particles with \((t_{L},d,\Delta t)=(2,6,0.1)\). Their ROC curves are around the diagonal line. In contrast, the benchmark of "LRT numerical" with \(\Delta t=0.1\) has a reasonable ROC curve and the ROC curve of "LRT hidden truth" is much higher. Thus, the data has rich information for the classification, and there is room for improvements in these algorithms. We note that the LRT makes use of the model information while the three algorithms are model agnostic. Hence, the success of the "LRT numerical" shows the importance of model information in the classification of nonlinear multivariate time series.
In particular, the contrast between the failure in Case (d) and the success in Case (f) invites further examination of the factors that affect the performance of the algorithms. Note that both Case (d) and
Case (f) are for the interacting particle systems, and they are different only at \((t_{L},d,\Delta t)=(2,6,0.1)\) and \((t_{L},d,\Delta t)=(4,24,0.4)\). Thus, in the next section, we examine the algorithms with varying \((t_{L},d,\Delta t)\). We will also examine the dependence of the classification accuracy on randomness and training sample size (Figure 8). Additionally, a single test is insufficient to draw a conclusive comparison because of the randomness in the data; hence, we run multiple tests in each setting and report the statistics of AUC and ACC in the next section to benchmark the optimality.
Also, one may notice that the random forest lags behind the other two in Case (c) and the ROCKET lags behind in Case (f), both with rate differences larger than two theoretical standard deviations (0.04). Such differences are due to the randomness in the data in this single test; the statistics from multiple tests in the next section show that no method is superior in all settings.
### Optimality benchmarking in AUC and maximal accuracy
We benchmark the optimality of a classifier by examining the statistics of the AUC and the optimal accuracy (ACC\({}_{*}\)) in 40 independent simulations for each of the 4 settings of the 6 cases in Table 2. We present the box-and-whisker plots (the minimum, the maximum, the sample median, the first and third quartiles, and the outliers) of the AUC and ACC\({}_{*}\), which reflect the randomness in the classifications.
Recall that the "LRT hidden truth" provides an upper bound of optimality and the "LRT numerical" provides a low baseline for them. Thus, a classifier achieves the optimality for the Gaussian processes if its AUC and ACC\({}_{*}\) concentrate around the "LRT hidden truth". A classifier is _suboptimal_ if its AUC or ACC\({}_{*}\) is below the baseline of "LRT numerical", particularly when the temporal sampling frequency of observation is relatively low. We say it is _near optimal_ when its statistics lie in between the benchmark lines, particularly when the two lines are close.
Figure 5 shows the statistics of the AUCs in the six cases with varying path time lengths \(t_{L}\), dimension \(d\) and temporal sampling frequency (through \(\Delta t\)). In the case of univariate time series data, the three algorithms achieve the optimality represented by the LRT hidden truth for the Gaussian process in Case (a), and they are near optimal for nonlinear time series in Cases (b,e). They are unsuccessful in all
Figure 4: ROC curves in a typical test in each of the 6 cases (each using the first of the settings in Table 2). The three algorithms achieve the optimal LRT in Case (a), and they are in-between the two LRT benchmarks in Cases (b,e). They are suboptimal in comparison with the “LRT numerical” in Cases (c,f) and they have unsuccessful classification in Case (d).
settings in Case (d), the high-dimensional interacting particle system with short sample paths, and they are suboptimal in Cases (c,f). These results agree with those from the ROC curves.
Additionally, we notice two patterns. (i) The AUC increases as the path length in time \(t_{L}\) or the dimension \(d\) increases, which can be clearly seen in Cases (a,b,c,d). (ii) The AUC of the three methods is not sensitive to the temporal sampling frequency of observation, because Cases (e,f) show that the AUC changes insignificantly as \(\Delta t\) is refined. Note that the slopes of the LRT benchmarks in Case (c) are much steeper than those in Case (d). This is because the entries of the OU processes are independent, whereas the entries of the interacting particles are correlated through the interactions. Thus, the increase in AUC is due to the increased effective sample size through either \(d\) or \(t_{L}\). Such patterns of the AUC's
Figure 5: AUC for the 6 Cases with varying \((t_{L},d,\Delta t)\) in Table 2. All three algorithms perform similarly: they reach the optimal LRT for Gaussian processes in Case (a), and they are near optimal in Cases (b,e), suboptimal in the Cases (c,f), and are unsuccessful in Case (d).
Figure 6: Maximal accuracy \((ACC_{\star})\) with varying \((t_{L},d,\Delta t)\) in Table 2. All three algorithms are suboptimal in comparison with the LRT benchmarks.
dependence on path length and sample size will be further examined in Figure 8 for the interacting particle systems.
Figure 6 shows the statistics of the maximal accuracy (\(ACC_{*}\)) in the six cases. It turns out that all three algorithms have smaller maximal accuracy than the benchmark of "LRT numerical" (not to mention the "LRT hidden truth"). Thus, there is room for improvement. On the other hand, the two patterns in the dependence on \((t_{L},d,\Delta t)\) are similar to those observed for the AUC in Figure 5.
Figure 7 shows the statistics of the training computation time in these tests. The computation is carried out on a node with a 3.0 GHz Intel Cascade Lake 6248R processor, 48 cores, 192 GB RAM, and a 1 TB NVMe local SSD. The figure shows that the random forest (RF) has a controlled computation time for all cases. The computation time of either the ResNet or the ROCKET increases with the path length (\(L=\frac{t_{L}}{\Delta t}\)), as shown in Cases (a,b,e,f), and is not sensitive to the dimension \(d\), as Cases (c,d) suggest. The ResNet has the largest computation time in most cases. The LRT benchmarks are not shown here because their computation time is negligible (since they only involve evaluations of the likelihood ratio).
Figure 8 further examines the dependence of the classification performance on the path length \(t_{L}\), the randomness (in terms of \(\sigma\)), and the training sample size in Cases **a)**-**c)**, respectively, for the interacting particle systems. These cases show that the AUC of each method increases when either the path time length increases, or the randomness decreases, or the training sample size increases. In particular, Case **c)** shows that a growing training sample size can significantly improve the AUC of each algorithm; yet, with a training sample size of 4000, their AUCs are far below the LRT benchmarks (which need no training since they use the model information). Additionally, we note that the variation of each algorithm decreases as the sample size increases, indicating that the learning error decays with the sample size. The ResNet has the largest variation among the three algorithms, but its performance improves the most when the sample size increases.
In summary, the LRT benchmarks show that all three algorithms can achieve the LRT optimal AUC for univariate time series and multivariate Gaussian processes. However, these model-agnostic algorithms are suboptimal in classifying nonlinear multivariate time series from high-dimensional stochastic interacting particle systems. Also, the maximal accuracy of each algorithm is below the LRT benchmark
Figure 7: Computation time (in seconds) for the tests with varying \((t_{L},d,\Delta t)\) in Table 2. The random forest (RF) has a controlled computation time. The ResNet and the ROCKET have computation times that increase with the path length (\(L=\frac{t_{L}}{\Delta t}\)) in Cases (a,b,e,f) and are not sensitive to the dimension \(d\) in Cases (c,d). The ROCKET has the smallest computation time when the length \(L\) is not large.
in all cases, suggesting room for improvement.
### Discussion
The performance of a classifier depends on multiple factors, including the design of the classifier, the training data size, and the properties of the time series (such as its dimension, randomness, time length, and temporal sampling frequency). The LRT benchmarks help separate these factors so that we can better examine the classifier.
* The optimal classification accuracy is determined by the distribution of the underlying discrete-time stochastic process from which the time series is sampled. This distribution varies with the properties of the time series, such as its dimension, randomness, time length, and temporal sampling frequency. The optimal classification accuracy increases when the dimension or the time length increases or the randomness decreases, but it is not sensitive to the temporal sampling frequency. Thus, in data collection in practice, it is more helpful to collect data over a longer time than at a higher sampling frequency.
* The performance of a classifier is bounded above by the optimal classification accuracy, and it is limited by its structure and the training data size. In particular, the training data size can significantly affect the classifier's accuracy. The size needed to achieve a prescribed level of accuracy increases with the uncertainty in the distribution of the time series, as well as with the complexity of the classifier's structure. A classifier with greater complexity requires more data to train. The ResNet, which uses neural networks, improves the most as the sample size grows, compared to the random forest and ROCKET, which use simpler designs. We would expect a bias-variance trade-off in which the degree of complexity of the algorithm can be selected adaptively to the data size, and we leave this as future work.
* The model-agnostic TSC algorithms do not use the model information and rely on data to learn the classifier function; thus, they require a large amount of training data. In contrast, the LRT relies on the model information and does not need to be trained. Therefore, we would expect a TSC algorithm using the model information can significantly increase the performance while reducing the training data size.
## 6 Conclusion
We have shown that the likelihood ratio test (LRT) distinguishing diffusion processes provides ideal optimality benchmarks for time series classification (TSC) algorithms. The benchmarking is computationally scalable and is flexible in design for generating linear or nonlinear time series to reflect the specific characteristics of real-world applications.
Numerical tests show that the three state-of-the-art TSC algorithms, random forest, ResNet, and ROCKET, can achieve the optimal benchmark for univariate time series and multivariate Gaussian
processes. However, these model-agnostic methods are suboptimal compared to the model-aware LRT in classifying high-dimensional nonlinear non-Gaussian processes.
The LRT benchmarks also show that the classification accuracy increases with either the time length or the time series dimension. However, the classification accuracy is less sensitive to the frequency of the observations. Thus, in data collection, it is more helpful to collect data for a longer time rather than a higher sampling frequency.
In future work, we propose to quantitatively analyze the dependence on these factors in terms of the effective sample size, the bias-variance trade-off in the training of the algorithms, and the incorporation of model information into the algorithms.
## Appendix A Appendix
### Itô diffusions and the Girsanov theorem
**Theorem A.1** (Girsanov Theorem): _Let \(P_{\theta_{i}}\) be the probability measure induced by the solution of the SDEs (4.1) for \(t\in[t_{0},T]\), and let \(P_{0}\) be the law of the respective drift-less process. Suppose that the drifts \(\{b_{\theta_{i}}\}\) and the diffusion \(\Sigma=\sigma\sigma^{\prime}\) fulfill the Novikov condition_
\[\mathbb{E}_{P_{\theta_{i}}}\bigg{[}\exp\bigg{(}\frac{1}{2}\int_{t_{0}}^{T}b_{ \theta_{i}}(X_{t},t)^{\top}\Sigma^{-1}b_{\theta_{i}}(X_{t},t)dt\bigg{)}\bigg{]} <\infty.\]
_Then, \(P_{\theta_{i}}\) and \(P_{0}\) are equivalent measures with Radon-Nikodym derivative given by_
\[\frac{dP_{\theta_{i}}}{dP_{0}}\big(X_{[t_{0},s]}\big)=\exp\bigg(\int_{t_{0}}^{s}b_{\theta_{i}}^{\top}\Sigma^{-1}dX_{t}-\frac{1}{2}\int_{t_{0}}^{s}\left[b_{\theta_{i}}^{\top}\Sigma^{-1}b_{\theta_{i}}\right](X_{t})dt\bigg)\]
_for all \(s\in[t_{0},t]\) and \(X_{[t_{0},s]}=(X_{t})_{t\in[t_{0},s]}\). In particular, the likelihood ratio between \(P_{\theta_{1}}\) and \(P_{\theta_{0}}\) is_
\[\frac{dP_{\theta_{1}}}{dP_{\theta_{0}}}\big(X_{[t_{0},s]}\big)=\exp\bigg(\int_{t_{0}}^{s}[b_{\theta_{1}}-b_{\theta_{0}}]^{\top}\Sigma^{-1}dX_{t}-\frac{1}{2}\int_{t_{0}}^{s}\left[b_{\theta_{1}}^{\top}\Sigma^{-1}b_{\theta_{1}}-b_{\theta_{0}}^{\top}\Sigma^{-1}b_{\theta_{0}}\right](X_{t})dt\bigg).\]
The proof of Theorem A.1 can be found in [16, Chapter 3.5] or [25, Section 8.6].
### Sampling error in the classification rates
**Proof of Lemma 2.1.** Fix a threshold \(k\); the classifier defines a random variable \(\xi=\xi(\mathbf{x})=F(\mathbf{x},k)\). Then, conditional on \(\theta_{i}\) with \(i\in\{0,1\}\), the random variable \(\xi\) has a Bernoulli distribution that takes the value \(1\) with probability \(\alpha_{k}^{i}\). In particular, the test samples \(\{\mathbf{x}_{j}\}_{j=1}^{m}\) lead to samples \(\{\xi_{j}\}_{j=1}^{m}\) of \(\xi\), and the empirical approximations of the FNR and TNR by these samples are
\[\widehat{\alpha}_{k,m}^{i}=\frac{1}{m}\sum_{j=1}^{m}\xi_{j},\text{ conditional on }\theta_{i},\,i=0,1.\]
Therefore, by the Central Limit Theorem, the empirical estimators converge in distribution
\[\sqrt{m}[\widehat{\alpha}_{k,m}^{i}-\alpha_{k}^{i}]\to\mathcal{N}(0,\sigma_{ \xi,i}^{2}),\text{ where }\sigma_{\xi,i}^{2}=\alpha_{k}^{i}(1-\alpha_{k}^{i})\]
as \(m\to\infty\) for each \(i=0,1\). Also, Hoeffding's inequality (see, e.g., [6, 7, 11]) implies that for any \(\epsilon>0\),
\[\mathbb{P}(|\widehat{\alpha}_{k,m}^{i}-\alpha_{k}^{i}|>\epsilon)\leq 2e^{- \frac{m\epsilon^{2}}{2}},\]
which provides a non-asymptotic bound for each \(m>0\).
### Hypothesis testing and the Neyman-Pearson lemma
Here we briefly review the hypothesis testing inference method in statistics [5, Chapter 8]. Recall that a hypothesis test is a rule that specifies for which sample values the decision is made to accept a hypothesis \(H_{0}\) as true and reject the complementary hypothesis \(H_{1}\). We assume that the family of distributions of the samples is parametrized by \(\theta\in\Theta\), where \(\Theta\) is the entire parameter space. We denote the null and alternative hypotheses by \(H_{0}:\theta\in\Theta_{0}\) and \(H_{1}:\theta\in\Theta_{0}^{c}\), respectively, where \(\Theta_{0}\) is a subset of \(\Theta\). Binary classification is therefore a hypothesis test with \(\Theta=\{\theta_{0},\theta_{1}\}\) and \(\Theta_{0}=\{\theta_{0}\}\).
The likelihood ratio test is as widely applicable as maximum likelihood estimation. When there are two parameters, it is defined as follows.
**Definition A.2** (Likelihood Ratio Test.): _Let the probability density function (or probability mass function) corresponding to \(\theta_{i}\) be \(f(x\mid\theta_{i})\) for \(i=0,1\). The likelihood ratio statistic for testing \(H_{0}:\theta=\theta_{0}\) versus \(H_{1}:\theta=\theta_{1}\) is:_
\[\lambda(x)=\frac{f(x\mid\theta_{1})}{f(x\mid\theta_{0})}.\]
_A likelihood ratio test (LRT) is any test that determines the rejection region for \(H_{0}\) by \(\lambda(x)\)._
The LRT in (2.5) determines the rejection region using the log-likelihood \(l(x)=\log\lambda(x)\). The rejection region with threshold \(k\in(0,1)\) is equivalent to
\[R_{k}^{\text{LRT}}=\{\mathbf{x}:\frac{\lambda(x)}{\lambda(x)+1}>k\}=\{\mathbf{x}:\lambda(x)>\frac{k}{1-k}\}.\]
The rejection region is selected to control the probability of falsely rejecting \(H_{0}\), i.e., the false negative rate (FNR). Meanwhile, it is also desirable to control the false positive rate (FPR), e.g., to reduce the possibility of false alarms.
The hypothesis tests are evaluated by the probabilities of making mistakes. A strategy to compare hypothesis tests is to control the FNR in a class and compare the FPR. The power function provides a tool to define the class.
**Definition A.3** (Power function, size \(\alpha\) test.): _The power function of the hypothesis test with a rejection region \(R\) and sample \(x\) is the probability \(\beta(\theta)=\mathbb{P}(x\in R\mid\theta)\) as a function of \(\theta\in\Theta\). A test with power function \(\beta\) is a size \(\alpha\) test if \(\sup_{\Theta_{0}}\beta(\theta)=\alpha\); a test with power function \(\beta\) is a level \(\alpha\) test if \(\sup_{\Theta_{0}}\beta(\theta)\leq\alpha\)._
An ideal hypothesis test would have a power function \(\beta(\theta)=0\) for all \(\theta\in\Theta_{0}\) and \(\beta(\theta)=1\) for all \(\theta\in\Theta_{0}^{c}\). Thus, a good test would have \(\beta(\theta)\) close to \(0\) for all \(\theta\in\Theta_{0}\) and \(\beta(\theta)\) near \(1\) for all \(\theta\in\Theta_{0}^{c}\).
Next, we define the uniformly most powerful test as the test with the smallest FPR uniformly for all \(\theta\in\Theta_{0}^{c}\) in the class of tests with a controlled FNR.
**Definition A.4** ( Uniformly Most Powerful (UMP) Test): _Let \(\mathcal{C}\) be a class of tests for testing \(H_{0}:\theta\in\Theta_{0}\) versus \(H_{1}:\theta\in\Theta_{0}^{c}\). A test in class \(\mathcal{C}\), with power function \(\beta(\theta)\), is a uniformly most powerful (UMP) class \(\mathcal{C}\) test if \(\beta(\theta)\geq\beta^{\prime}(\theta)\) for every \(\theta\in\Theta_{0}^{c}\) and every function \(\beta^{\prime}(\theta)\) that is a power function of a test in class \(\mathcal{C}\)._
The Neyman-Pearson lemma shows that a LRT with a rejection region \(R=\{x:\frac{f(x|\theta_{1})}{f(x|\theta_{0})}>c\}\) is a UMP test when \(\Theta_{0}=\{\theta_{0}\}\) and \(\Theta_{0}^{c}=\{\theta_{1}\}\) for any \(c\in(0,\infty)\) such that \(\mathbb{P}(\{x:\frac{f(x|\theta_{1})}{f(x|\theta_{0})}=c\})=0\).
**Theorem A.5** (Neyman-Pearson Lemma): _Consider testing \(H_{0}:\theta=\theta_{0}\) versus \(H_{1}:\theta=\theta_{1}\), where the probability density function (or probability mass function) corresponding to \(\theta_{i}\) is \(f(x\mid\theta_{i})\) for \(i=0,1\), using a test with rejection region \(R\) that satisfies_
\[\left\{\begin{aligned} & x\in R,\text{ if }f(x\mid\theta_{1})>cf(x\mid\theta_{0})\\ & x\in R^{c},\text{ if }f(x\mid\theta_{1})<cf(x\mid\theta_{0}) \end{aligned}\right.\] (A.1)
_for some \(c>0\), and_
\[\alpha=P_{\theta_{0}}(X\in R)\] (A.2)
_Then:_
1. _(Sufficiency) Any test that satisfies_ (A.1) _and_ (A.2) _is a UMP level_ \(\alpha\) _test._
2. _(Necessity) If there exists a test satisfying_ (A.1) _and_ (A.2) _with_ \(c>0\)_, then every UMP level_ \(\alpha\) _test is a size_ \(\alpha\) _test_ (satisfies (A.2)) _and every UMP level_ \(\alpha\) _test satisfies_ (A.1) _except perhaps on a set_ _A satisfying_ \(P_{\theta_{0}}(X\in A)\) _=_ \(P_{\theta_{1}}(X\in A)=0\)_._
AcknowledgmentsTerry Lyons was funded in part by the EPSRC [grant number EP/S026347/1], in part by The Alan Turing Institute under the EPSRC grant EP/N510129/1, the Data Centric Engineering Programme (under the Lloyd's Register Foundation grant G0095), the Defence and Security Programme (funded by the UK Government) and the Office for National Statistics & The Alan Turing Institute (strategic partnership) and in part by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA). F.L. and Y.K. are partially supported by the grant DE-SC0021361. The work of F.L. is partially funded by the Johns Hopkins University Catalyst Award and FA9550-20-1-0288. The computation is carried out on the clusters of the Maryland Advanced Research Computing Center. FL would like to thank Professors Geoff Webb and Xingjie Li for helpful comments on the paper.
|
2304.09794
|
The SunPy Project: An Interoperable Ecosystem for Solar Data Analysis
|
The SunPy Project is a community of scientists and software developers
creating an ecosystem of Python packages for solar physics. The project
includes the sunpy core package as well as a set of affiliated packages. The
sunpy core package provides general purpose tools to access data from different
providers, read image and time series data, and transform between commonly used
coordinate systems. Affiliated packages perform more specialized tasks that do
not fall within the more general scope of the sunpy core package. In this
article, we give a high-level overview of the SunPy Project, how it is broader
than the sunpy core package, and how the project curates and fosters the
affiliated package system. We demonstrate how components of the SunPy
ecosystem, including sunpy and several affiliated packages, work together to
enable multi-instrument data analysis workflows. We also describe members of
the SunPy Project and how the project interacts with the wider solar physics
and scientific Python communities. Finally, we discuss the future direction and
priorities of the SunPy Project.
|
The SunPy Community, Will Barnes, Steven Christe, Nabil Freij, Laura Hayes, David Stansby, Jack Ireland, Stuart Mumford, Daniel Ryan, Albert Shih
|
2023-04-19T16:24:40Z
|
http://arxiv.org/abs/2304.09794v1
|
# The SunPy Project: An Interoperable Ecosystem for Solar Data Analysis
###### Abstract
The SunPy Project is a community of scientists and software developers creating an ecosystem of Python packages for solar physics. The project includes the sunpy core package as well as a set of affiliated packages. The sunpy core package provides general purpose tools to access data from different providers, read image and time series data, and transform between commonly used coordinate systems. Affiliated packages perform more specialized tasks that do not fall within the more general scope of the sunpy core package. In this article, we give a high-level overview of the SunPy Project, how it is broader than the sunpy core package, and how the project curates and fosters the affiliated package system. We demonstrate how components of the SunPy ecosystem, including sunpy and several affiliated packages, work together to enable multi-instrument data analysis workflows. We also describe members of the SunPy Project and how the project interacts with the wider solar physics and scientific Python communities. Finally, we discuss the future direction and priorities of the SunPy Project.
solar physics, sunpy, data analysis, Python, heliophysics
## 1 Introduction
The SunPy Project is an organization whose mission is to develop and facilitate a high-quality, easy-to-use, community-led, free and open-source solar data analysis ecosystem based on the scientific Python environment. The vision of the project is to build a diverse and inclusive solar physics and heliophysics community that supports scientific discovery and enables reproducibility through the development of accessible, open-source software (Bobra et al., 2020). To achieve this mission and to make this vision a reality, the SunPy Project maintains and guides the development of a number of Python packages including
the sunpy core package, and organizes educational activities around the use of Python for solar-physics research.
As the scientific Python environment matured in the early 2010s (Hunter, 2007; Harris et al., 2020; Virtanen et al., 2020), the development of a Python package devoted to solar physics became viable. This led to the founding of the SunPy Project in 2011 by scientists at NASA Goddard Space Flight Center. The goal of the SunPy Project at that time was to develop a package that provided the core functionality needed for solar data analysis in Python. To distinguish the software package from the wider project, this original package is now known as the sunpy core package (The SunPy Community et al., 2020). As the SunPy Project and sunpy grew, an ecosystem of affiliated packages (see Section 2.2) was developed to keep the sunpy core package from becoming too large and difficult to manage.
The SunPy Project is committed to the principles of open development. All code is hosted and openly-developed on GitHub1 in order to enable anyone to contribute code or provide feedback. All packages within the SunPy Project must be under an Open Source Initiative (OSI)2 approved license. Discussion is hosted on several open communication channels which include weekly community calls, mailing lists, a Discourse forum, and instant messaging via Matrix3. Additionally, the SunPy Project has a code of conduct4 to ensure that communication within the project is open, considerate, and respectful.
Footnote 1: [https://github.com/sunpy/sunpy](https://github.com/sunpy/sunpy)
Footnote 2: [https://opensource.org](https://opensource.org)
Footnote 3: [https://matrix.org/](https://matrix.org/)
Footnote 4: [https://sunpy.org/coc](https://sunpy.org/coc)
The aim of this paper is to give a high level description of the SunPy Project, including its various components, and to describe the direction of the project in the coming years. Section 2 describes the various Python packages that form the project, including both the sunpy core package (Section 2.1) and the various affiliated packages (Section 2.2). Section 3 gives an overview of the roles within the project and describes how to become involved with SunPy Project. Section 4 describes the various activities of the project within the broader solar physics community. Finally, Section 5 lays out a vision for the future of the SunPy Project.
## 2 Code
### The sunpy core package
The sunpy package is the central pillar of the SunPy Project (The SunPy Community et al., 2020) and provides the fundamental tools for accessing, loading, and interacting with solar physics data in Python. As we will discuss in Section 2.2, sunpy functions as one part of a larger ecosystem of packages for doing solar physics research in Python. While other packages in the ecosystem may focus on particular analysis techniques or analyzing data from specific instruments, the sunpy "core" package is focused on providing general tools for working with solar physics data. As an example, coordinate transformations between common solar coordinate systems are provided by the sunpy core package because they are needed for the analysis of nearly all solar imaging data and are critical for performing multi-instrument studies. However, correcting an AIA image to account for instrument degradation would not belong in sunpy because it is specific to data from one instrument. This allows the sunpy core package to be relatively small in size, thereby assuring its maintainability over time.
The primary components of the sunpy package are described briefly in the following paragraphs. For a more in-depth description of each of these components, see The SunPy Community et al. (2020, Section 4). The full documentation of the sunpy Application Programming Interface (API) is provided in the hosted online documentation5.
Footnote 5: The sunpy API is fully documented here: [https://docs.sunpy.org](https://docs.sunpy.org)
#### 2.1.1 Components of the Core Package
To search for and download data, sunpy provides the Fido interface for searching across a variety of data providers (e.g., the Virtual Solar Observatory (VSO)6, or the Joint Science Operations Center (JSOC)7) maintained within the solar community. Internally, Fido is both an interface that defines the search API for creating data queries as well as a collection of client classes that provide a translation between this user-facing API and the search parameters accepted by individual data providers. A complete list of all supported data sources is provided in the documentation for using Fido8. Section 4.1.1 of The SunPy Community et al. (2020) also provides a comprehensive discussion of the data sources that Fido searches by default. Additionally, Fido can also be extended to search additional data sources that may not be included in sunpy (e.g., the Solar Orbiter Archive, see Section 2.2.2). Attributes such as time, wavelength, and instrument name, among others, can be used to filter these search results. By providing a single interface to many disparate data sources, sunpy, via Fido, easily enables multi-instrument research workflows.
Footnote 6: [https://sdac.virtualsolar.org/cgi/search](https://sdac.virtualsolar.org/cgi/search)
Footnote 7: [http://jsoc.stanford.edu](http://jsoc.stanford.edu)
Footnote 8: [https://docs.sunpy.org/en/stable/guide/acquiring_data/fido.html](https://docs.sunpy.org/en/stable/guide/acquiring_data/fido.html)
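As a brief illustration, a typical Fido query looks like the following sketch; the event time and wavelength are arbitrary placeholders.

```python
# Sketch of a Fido search for AIA 171 Angstrom data over a short time window.
import astropy.units as u
from sunpy.net import Fido, attrs as a

result = Fido.search(a.Time("2022-03-29 21:00", "2022-03-29 21:05"),
                     a.Instrument("AIA"),
                     a.Wavelength(171 * u.angstrom))
files = Fido.fetch(result)  # downloads the matched files and returns their local paths
```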
Once a user has downloaded data, the TimeSeries and Map objects can be used to load and visualize time series and two-dimensional image data, respectively. These objects hold the data alongside the associated metadata in order to perform metadata-aware operations such as concatenation for time series or cropping for image data. In the case of Map, a World Coordinate System (WCS, e.g., Greisen and Calabretta, 2002) is also constructed from the associated metadata to enable easy mapping between pixel and world coordinates via astropy. In nearly all cases, solar image data is stored in the FITS format (Wells et al., 1981) which has an accompanying well-defined metadata standard (Pence et al., 2010). The accompanying metadata for each Map object adheres to this standard. Solar time series data, however, do not have a standard metadata or file format and are stored in a variety of file formats, including FITS, netCDF, JSON, or plain text. As such, the metadata associated with each TimeSeries object is much more sparse compared to Map, but at minimum will include the time of each observation as well as some information about the associated instrument that made the observation.
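A minimal sketch of loading and cropping an image with Map is shown below; the file name and cutout coordinates are illustrative placeholders, and the submap call assumes a recent sunpy release.

```python
# Sketch of loading a FITS image into a metadata-aware Map and cropping a cutout.
import astropy.units as u
from astropy.coordinates import SkyCoord
import sunpy.map

aia_map = sunpy.map.Map("aia_171_image.fits")   # placeholder file name
corner = SkyCoord(-350 * u.arcsec, -100 * u.arcsec, frame=aia_map.coordinate_frame)
cutout = aia_map.submap(corner, width=700 * u.arcsec, height=700 * u.arcsec)
cutout.peek()  # quick-look plot with WCS-aware axes
```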
Additionally, by extending the astropy coordinates framework (see Section 3.3 of The Astropy Collaboration et al., 2018, for more details), sunpy provides definitions of, and transformations between, common solar coordinate systems. Coordinates expressed using these frames can be used to represent the positional information of solar features and events. sunpy implements both observer-dependent (e.g., helioprojective Cartesian) and observer-independent (e.g., Stonyhurst heliographic) coordinate frames (Thompson, 2006). Each Map object instance also carries with it the corresponding coordinate frame of that image and the coordinate of the observer as defined by the position of the observatory given in the associated metadata.
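For example, a point expressed in the observer-dependent helioprojective frame can be transformed to observer-independent Stonyhurst heliographic coordinates as in the following sketch (the coordinate values and time are placeholders):

```python
# Sketch of a coordinate transformation between two sunpy coordinate frames.
import astropy.units as u
from astropy.coordinates import SkyCoord
from sunpy.coordinates import frames

point = SkyCoord(123 * u.arcsec, 456 * u.arcsec,
                 frame=frames.Helioprojective(observer="earth",
                                              obstime="2022-03-29 21:04"))
print(point.transform_to(frames.HeliographicStonyhurst(obstime="2022-03-29 21:04")))
```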
#### 2.1.2 Testing Infrastructure
sunpy includes thousands of unit, regression and integration tests that are run using the pytest testing framework. This test suite is run on every pull request opened on the sunpy GitHub repository using GitHub Actions to ensure that contributions to the codebase do not lead to unexpected regressions. A full description of our testing practices can be found in our developer documentation.9
Footnote 9: A complete guide to running the tests and the associated infrastructure can be found here:[https://docs.sunpy.org/en/latest/dev_guide/contents/tests.html](https://docs.sunpy.org/en/latest/dev_guide/contents/tests.html)
#### 2.1.3 Release Schedule
There is a new release of the core package with feature enhancements approximately every six months. Every other release is designated a long term support (LTS) release and receives bug fixes for a year rather than for six months. Additionally, there are bug fix releases every month. For each release a digital object identifier (DOI) is automatically generated and a record is created on Zenodo.10 By providing regularly scheduled, versioned releases of sunpy, the SunPy Project enables reproducibility. For example, if a researcher is attempting to reproduce a result from a paper that used sunpy v2.0.2, she can create a new virtual environment and install that exact version of sunpy, even if the current version is many versions ahead of v2.0.2.
Footnote 10: The most current release on Zenodo can be found here: [https://doi.org/10.5281/zenodo.7314636](https://doi.org/10.5281/zenodo.7314636)
This release process is completely automated through GitHub Actions.11 When a release is tagged, an action is triggered that tests the package on all supported versions of Python and all supported operating systems. If the packages build successfully, they are automatically uploaded to the Python Package Index (PyPI), and subsequently the release is updated on conda-forge.
Footnote 11: The GitHub Actions templates used are available here: [https://github.com/OpenAstronomy/github-actions-workflows](https://github.com/OpenAstronomy/github-actions-workflows)
### Affiliated Packages
As the sunpy package grew and the amount of domain- and instrument-specific code being developed in Python increased, it became increasingly challenging to store and maintain the functionality needed for all solar physics research in one package. As such, the affiliated package system was introduced (Mumford and Christe, 2014) so that the sunpy core package could be generic enough for other packages to build on. The goal of this system is to support and promote software packages outside the scope of the sunpy core package, and to provide guidance to developers in implementing and maintaining the specific functionality provided by an affiliated package. This fosters code-ownership while ensuring the set of affiliated packages are interoperable and follow a set of common standards (see Section 2.2.1). The SunPy Project provides development support through our community development efforts and by providing a package template as a foundation. In addition, affiliated packages are advertised at conferences and workshops where a SunPy poster, talk, or tutorial is given.
As a result of the creation of the affiliated package ecosystem, components of the sunpy core package that were tied directly to specific instruments or data analysis methods have recently been moved out into other affiliated packages. One example of this is aiapy (Barnes et al., 2020), a package for processing data from the Atmospheric Imaging Assembly (AIA, Lemen et al., 2012) on the _Solar Dynamics Observatory_(SDO, Pesnell et al., 2012). Prior to version 2.1, sunpy included functionality for calibrating level 1 AIA data. In 2019, in collaboration with the SunPy Project, the AIA instrument team began developing aiapy to provide a number of AIA-specific analysis routines in Python, including the aforementioned calibration software. aiapy became an affiliated package in 2020 and the AIA-specific functionality that previously
lived in sunpy was deprecated and subsequently removed. This relocation of the code allows the AIA instrument team to have full autonomy over their calibration routines and release updates to their software on a more frequent timescale than that of the sunpy core package. At the same time, aiapy users and developers are able to take full advantage of the SunPy Project ecosystem.
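A hedged sketch of this AIA-specific processing with aiapy is shown below; the input file name is a placeholder, and both calibration steps fetch auxiliary tables over the network.

```python
# Sketch of AIA-specific calibration steps that now live in aiapy rather than sunpy.
import sunpy.map
from aiapy.calibrate import update_pointing, correct_degradation

aia_map = sunpy.map.Map("aia_171_level1.fits")   # placeholder file name
aia_map = update_pointing(aia_map)               # refresh pointing keywords
aia_map = correct_degradation(aia_map)           # compensate for instrument degradation
```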
Outside of the current list of affiliated packages, current and future NASA and ESA missions 12, as well as ground-based telescopes, such as the Daniel K. Inouye Solar Telescope (DKIST), have begun developing user tools for data analysis and/or pipelines for data calibration built on top of the SunPy ecosystem. While these packages are not yet affiliated, the SunPy Project has assisted in coordinating development efforts between these teams in order to foster a more interoperable ecosystem.
Footnote 12: This includes, but is not limited to, the Interface Region Imaging Spectrometer (IRIS), several instruments on _Solar Orbiter_, as well as the X-Ray Telescope (XRT) and the Extreme ultraviolet Imaging Spectrometer (EIS) onboard _Hinode_
#### 2.2.1 Application Process
The affiliated package application process is completed in the open on GitHub and is open to all, both individuals and larger collaborations (e.g., instrument teams). To begin the process, an applicant opens an issue on the SunPy Project website GitHub repository13 and provides details about the package, including the package name, the maintainers, a link to the code repository, and a link to the documentation. The Affiliated Package Liaison (see Section 3.2) then selects a SunPy Project member to review the candidate affiliated package against the following criteria:
Footnote 13: [https://github.com/sunpy/sunpy.org](https://github.com/sunpy/sunpy.org)
* _functionality_ -- is the package relevant to the solar physics community?
* _integration_ -- does the package make use of the existing ecosystem?
* _documentation_ -- is there hosted documentation, including examples and an API reference?
* _testing_ -- are there automatically run tests and is the coverage extensive?
* _duplication_ -- does the package duplicate existing functionality in the ecosystem?
* _community_ -- is there a code of conduct and do the developers engage the wider community?
* _development status_ -- is the project actively maintained, including versioned releases?
The assigned project member then scores the package in each category using a "stoplight" system (i.e., a package is scored green, orange, or red in each category). A detailed description of each criterion and the scoring for each is available on the affiliated package page of the SunPy Project website14. The submitting author of the affiliated package may also request an alternate reviewer, in which case the Affiliated Package Liaison will assign a new SunPy Project member to review the package. At the end of the review, the candidate package is either accepted, marked as provisional, or not accepted. If the package is accepted, the affiliated package is added to the list of affiliated packages on the SunPy Project website. If the package is marked as provisional or is not accepted, the reviewer and the Affiliated Package Liaison will work with the package authors to help them achieve provisional or accepted status. Accepted affiliated packages are reviewed once a year to ensure the interoperability of the ecosystem does not regress and that affiliated packages are actively maintained.
Footnote 14: [https://sunpy.org/project/affiliated](https://sunpy.org/project/affiliated)
In all cases, the goal of the affiliated package review process is to broaden the ecosystem of tools for solar data analysis in Python. These criteria are not meant to be exclusionary, but rather to ensure interoperability and consistency across the ecosystem for the benefit of both users and developers. Interoperability in this
context means that affiliated packages should make use of the existing sunpy core data structures (e.g., Map and TimeSeries) in lieu of their own custom data structures. In the context of searching for and downloading data, affiliated packages should use the Fido interface and extend Fido for additional data sources as needed.
#### 2.2.2 Current Ecosystem
At the time of writing, the SunPy Project has a rich and growing ecosystem of affiliated packages. In addition to the sunpy core package, the affiliated package ecosystem includes:
* aiapy for functionality specific to the AIA instrument (Barnes et al., 2020)
* ndcube for generic handling of \(N\)-dimensional data sets with a world coordinate system (WCS) (Ryan et al., 2021).
* pfsspy for magnetic-field extrapolation (Stansby et al., 2020)
* sunkit-instruments for instrument-specific code that does not have a dedicated package (Ryan et al., 2022).
* sunkit-image for solar-specific image analysis or reduction techniques (Freij et al., 2022).
* sunpy-soar15 for querying the Solar Orbiter Archive (SOAR)16
Footnote 15: [https://github.com/sunpy/sunpy-soar](https://github.com/sunpy/sunpy-soar)
Footnote 16: [https://soar.esac.esa.int/soar/](https://soar.esac.esa.int/soar/)
To demonstrate how several of the affiliated packages can be used together with sunpy in a scientific workflow, we show an example in Figure 1 of how coronal loop structures can be analyzed using potential magnetic field extrapolations and multi-point extreme ultraviolet (EUV) observations. We have included a Jupyter notebook that illustrates each step of this workflow in the GitHub repository that accompanies this paper.17
Footnote 17: The GitHub repository for this paper, including the complete text and all code to generate Figure 1, can be found at [https://github.com/sunpy/sunpy-frontiers-paper](https://github.com/sunpy/sunpy-frontiers-paper)
First, we use the Fido interface provided by sunpy to search for and download a synoptic magnetogram from the Helioseismic Magnetic Imager (HMI, Scherrer et al., 2012) on SDO for Carrington rotation 2255 which began on 2022-03-08. This is shown in the left panel in the top row of Figure 1. Next, we identify active region NOAA 12976 which appeared near disk center, as seen from SDO, at 2022-03-29 21:04. The red box overlaid on the synoptic magnetogram is centered on the active region when it appeared at disk center at a Carrington longitude of \(65^{\circ}\).
Since we are interested in the EUV observations of active region 12976, we also use Fido to query the VSO for data from AIA on SDO and the Extreme Ultraviolet Imager (EUVI) on the _Solar Terrestrial Relations Observatory_(STEREO, Howard et al., 2008). Additionally, we use the sunpy-soar package to allow Fido to search for and download data from the SOAR. Here, we query the SOAR for data from the Extreme Ultraviolet Imager (EUI, Rochus et al., 2020) on _Solar Orbiter_(Muller et al., 2020).
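A compressed sketch of this kind of multi-vantage-point search is shown below; the exact time window, wavelengths, and SOAR product name are illustrative rather than the precise query used to build Figure 1.

```python
import astropy.units as u
from sunpy.net import Fido, attrs as a
import sunpy_soar  # noqa: F401 -- importing registers the SOAR client and a.soar attrs with Fido

# A short window around the time active region 12976 crossed disk center as seen from SDO
time = a.Time("2022-03-29 21:00", "2022-03-29 21:10")

aia = Fido.search(time, a.Instrument("AIA"), a.Wavelength(171 * u.angstrom))
euvi = Fido.search(time, a.Instrument("EUVI"), a.Source("STEREO_A"),
                   a.Wavelength(171 * u.angstrom))
# EUI/FSI images are served by the SOAR; the product descriptor below is illustrative
eui = Fido.search(time, a.Instrument("EUI"), a.Level(2),
                  a.soar.Product("EUI-FSI174-IMAGE"))

files = Fido.fetch(aia, euvi, eui)
```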
The middle row of Figure 1 shows full disk EUV images from AIA (left), the full-sun imager (FSI) on EUI (middle), and EUVI on the STEREO-A spacecraft (right). We use the aiapy package to correct the AIA image (middle panel) for instrument degradation and update the pointing information. The red box in each panel is centered on active region 12976, as seen from the respective spacecraft, and has a width and height of 700 arcseconds. The top right panel of Figure 1 shows the Stonyhurst heliographic longitude and radius (in AU) of SDO, STEREO-A, and _Solar Orbiter_ as derived from the observer location metadata of each image.
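The AIA-specific corrections mentioned above amount to a few calls in aiapy; the file name below is a placeholder for the downloaded level-1 image.

```python
import sunpy.map
from aiapy.calibrate import correct_degradation, register, update_pointing

aia_map = sunpy.map.Map("aia_171_level1.fits")   # placeholder for the file fetched with Fido
aia_map = update_pointing(aia_map)               # refresh pointing keywords from the latest values
aia_map = register(aia_map)                      # rotate, scale, and shift to a common plate scale
aia_map = correct_degradation(aia_map)           # compensate for long-term instrument degradation
```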
Figure 1: Illustration of multiple affiliated packages, including sunpy, sunpy_soar, aiapy, and pfsspy, working together. **Top row:** The left panel shows the HMI synoptic magnetogram for Carrington rotation 2255. The red box is centered on the active region. The right panel shows the Stonyhurst heliographic longitude and radius (in AU) for SDO, STEREO A, and _Solar Orbiter_ on 2022-03-29. **Middle row:** Full disk images from SDO AIA at 171 Å (left), SolO FSI at 174 Å (middle), and STEREO-A EUVI at 171 Å (right). All three images were downloaded using sunpy along with sunpy_soar to query the SOAR for the _Solar Orbiter_ image. The AIA image was calibrated using aiapy. The red box in each panel is centered on the AR shown in the top panel. **Bottom row:** Cutouts of the regions denoted in each image in the middle row. pfsspy is used to compute a potential magnetic field solution from the magnetogram (top row) and trace field lines through the resulting volume. These field lines, shown in green, are transformed to the appropriate coordinate system of each instrument using sunpy.
Viewing the active region from the vantage points of these three spacecraft (separated by \(>90^{\circ}\)), we gain a better understanding of its three-dimensional structure. Additionally, we use the pfsspy package to compute a potential field extrapolation from the corresponding synoptic magnetogram as shown in the top row of Figure 1. We trace field lines from areas of negative magnetic flux inside the red box corresponding to active region 12976. The resulting field lines are overlaid in green on top of the cutouts from each EUV image in the bottom row of Figure 1. Each field line traced using pfsspy is an astropy coordinate object expressed in terms of a Carrington heliographic coordinate frame defined in sunpy. As such, it is straightforward to transform each field line to the observer-dependent coordinate frame of each image as defined by corresponding observatory using the plotting functionality provided in astropy. The interoperability between astropy, sunpy, sunpy-soar, aiapy, and pfsspy allows us to easily examine the three-dimensional magnetic structure of the active region and see to what extent the derived potential field corresponds to the EUV emission as observed by these three spacecraft.
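A minimal sketch of the extrapolation and reprojection step is given below; the grid parameters, seed points, and file names are illustrative, and in practice the synoptic magnetogram is usually resampled before being passed to pfsspy.

```python
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
import pfsspy
import sunpy.map
from astropy.coordinates import SkyCoord
from pfsspy import tracing

syn_map = sunpy.map.Map("hmi_synoptic_cr2255.fits")   # synoptic magnetogram (placeholder name)
aia_map = sunpy.map.Map("aia_171_cutout.fits")        # one of the EUV cutouts (placeholder name)

# Potential field source-surface model: 35 radial grid points, source surface at 2.5 solar radii
pfss_out = pfsspy.pfss(pfsspy.Input(syn_map, 35, 2.5))

# Seed the tracer near the active region in the Carrington frame of the magnetogram
seeds = SkyCoord(lon=65 * u.deg, lat=np.linspace(10, 25, 16) * u.deg,
                 radius=1.01 * u.R_sun, frame=syn_map.coordinate_frame)
field_lines = tracing.FortranTracer().trace(seeds, pfss_out)

# Each traced field line is an astropy coordinate object, so it transforms directly
# into the observer-dependent frame of any of the EUV maps for overplotting
ax = plt.subplot(projection=aia_map)
aia_map.plot(axes=ax)
for fline in field_lines:
    ax.plot_coord(fline.coords.transform_to(aia_map.coordinate_frame), color="green")
```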
## 3 People
### Board and Lead Developers
The current structure of the SunPy Project is governed by the SunPy Project board (Christe, 2018). The board is a self-electing oversight board which delegates the majority of the day-to-day operations of the project to a lead developer, who in turn delegates it to members of the community. The lead developer has overall responsibility for the large scale organization of the sunpy core package, and ensures that pull requests comply with stated standards and align with the goals of the SunPy Project. The deputy lead developer supports the lead developer and fills in when the lead is absent. The board's role is to steer the overall direction of the SunPy Project and consists of scientists and researchers who are not necessarily involved directly with the day-to-day development of the sunpy core package.
### Community Roles
There are several specific community (or executive) roles within the SunPy Project that perform important duties related to the overall development and maintenance of the project. These roles encompass a range of responsibilities from the development of the core package and affiliated packages to project communication and liaison. The community roles are held by members of the wider solar community who are actively involved in the SunPy Project. Anyone interested in a community role is encouraged to apply.
At present, there are nine community roles within the SunPy Project. From the development side, these include the Lead Developer and the Deputy Lead Developer who are responsible for the development of the sunpy core package, support the development of affiliated packages, and lead the development community. To assist the Lead/Deputy Developers, there are several development community roles which include:
* Continuous Integration Maintainer
* Release Manager
* Webmaster
* Communication and Education Lead who is responsible for the overall engagement with the wider community
* Lead Newcomer and Summer of Code mentor who assists new contributors and oversees the Google Summer of Code project
* Affiliated Package Liaison who is responsible for overseeing the affiliated package review process (see Section 2.2.1) and working with developers of current and candidate affiliated packages
### Maintainers and Contributors
The development of the sunpy core package depends principally on an established team of volunteers that support the Lead and Deputy Lead Developers. These volunteer _maintainers_ are given commit access to the sunpy repository and are predominantly, though not exclusively, scientists from the solar community who use sunpy in their work. In addition to this group of core maintainers, there is a steady influx of new _contributors_, averaging around 20-25 people per year. These contributors enable a wider range of features and code changes than would otherwise be normally possible due to the time constraints of the established team of volunteers. Within this subset of maintainers are members who maintain the specific sub-packages within sunpy like sunpy.map or sunpy.coordinates. These individuals are selected due to either their specific knowledge of the topic or their expertise with these sub-packages.
Contributing to the SunPy Project includes a wide range of activities, not all of them programming related. These include reporting bugs by raising issues on GitHub, requesting features, writing code and tests, providing feedback on pull requests, correcting or adding documentation, helping people who have problems or questions in communication channels and more. The SunPy Project is always looking for any new volunteers or people willing to contribute their time.
## 4 Community
### Engagement with the Solar Physics Community
In order for the SunPy Project to maintain and grow the sunpy core package and affiliated packages within the ecosystem, engagement with the wider solar physics community is critical. The mission of the SunPy Project is to be community-led, and the development is driven by the needs of the solar physics community. To facilitate this, the SunPy Project is building a community for which there is inclusive and open communication between those developing sunpy and those using sunpy in their scientific research. Active contributions from users in terms of bug reports, issues encountered with code or documentation, and feature requests are all vital to the sustainability and future of the SunPy Project. We emphasize that being part of the SunPy Project does not necessarily mean writing software. Contributions in the form of feedback and suggestions are equally important.
To foster communication, the SunPy Project supports several communication platforms (see Section 4.1.1) through which users and developers can regularly interact. The SunPy Project posts on solar physics noticeboards about recent releases and regularly advertises sunpy and affiliated packages at scientific conferences, providing tutorials and support. We also ask that if sunpy is used for scientific work that it is cited in the literature18, thereby increasing its visibility to the scientific community and ultimately contributing to the continued growth and development of the package. More recently, the SunPy Project has improved communications and established relationships with data providers such as VSO and the SOAR, and teams supporting both operating and developing instruments and missions. The SunPy Project is always looking for ways to improve the accessibility of the project and to grow the community.
#### 4.1.1 Communication Channels
Over the years, the usage of sunpy and affiliated packages within the solar physics community has increased, and with that methods to communicate within the SunPy community have also increased. At the time of writing, several distinct communication channels are available. These include:
* Multiple GitHub repositories for bug reports and feature requests. These are listed under the SunPy Project GitHub organization 19. Footnote 19: [https://github.com/sunpy](https://github.com/sunpy)
* Real time messaging 20. Footnote 20: Links to join the Matrix chat can be found at [https://sunpy.org/help.html](https://sunpy.org/help.html)
* Mailing lists 21. Footnote 21: [https://groups.google.com/q/sunpy](https://groups.google.com/q/sunpy)
* An online community forum 22. Footnote 22: [https://community.openastronomy.org/](https://community.openastronomy.org/)
* Weekly public calls that anyone can participate in 23.
Footnote 23: [https://sunpy.org/jitsi](https://sunpy.org/jitsi)
Each has their own distinct purpose, and was created as a need arose for their existence. For example, the GitHub repository is used for the development of sunpy and issues and bugs can be raised there. However some scientists may not be familiar with GitHub and would like to ask a general question on how to do something within sunpy. For this, the mailing list, community forum, or real time Matrix chat may be the most appropriate. We actively encourage users and those interested in contributing to use any or all of these communication channels.
In addition to the main communication platforms specific to the SunPy Project, we maintain a presence within other communication channels used by the wider heliophysics community, including Helionauts24 and communication channels used by the Python in Heliophysics Community.
Footnote 24: [https://helionauts.org/](https://helionauts.org/)
### Python in Heliophysics Community (PyHC)
The Python in Heliophysics Community (PyHC)25(Barnum et al., 2022) is a project with similar goals as the SunPy Project, but focuses on the wider Heliophysics community (Burrell et al., 2018). These include providing coding standards (Annex et al., 2018), curating a list of participating projects26, hosting bi-monthly community meetings, and organizing an inaugural summer school for early career researchers. The SunPy Project is actively involved in PyHC, with sunpy being one of the core PyHC packages. SunPy Project members regularly attend community meetings and present updates. The SunPy Project also took part in the PyHC 2022 summer school. Moving forward, PyHC and the SunPy Project will continue to collaborate and build upon efforts of using sunpy and affiliated packages within the larger heliophysics Python ecosystem.
Footnote 25: [https://heliopython.org/](https://heliopython.org/)
### Collaboration with the Wider Python Ecosystem
The sunpy package forms part of the wider Python scientific ecosystem, requiring active collaboration with other scientific Python packages. Whenever possible, we aim to contribute to relevant open-source projects rather than duplicating functionality. As an example, large parts of sunpy depend on core
functionality developed in the astropy package, including support for handling units, times, and coordinates.
The SunPy Project is sponsored by NumFOCUS, "a nonprofit supporting open code for better science"27. NumFOCUS provides financial and organizational support for several important packages (e.g., numpy, pandas and xarray) and facilitates collaboration between packages throughout the scientific Python ecosystem. One example of this is the annual NumFOCUS summit that brings together the leaders of these packages to discuss interoperability, funding sources and other high-level topics that improve the Python ecosystem as a whole.
Footnote 27: [https://numfocus.org/](https://numfocus.org/)
In addition, the SunPy Project is a member of the OpenAstronomy organization28. OpenAstronomy was created to collaborate on outreach, organize conferences such as Python in Astronomy, develop common tooling for infrastructure, and apply to internship programs such as Google Summer of Code (GSoC)29 and Outreachy30. GSoC has been an invaluable source of programming effort for the SunPy Project over the past decade. The contributions from participants in this program have been crucial to sunpy. Examples of successful projects include the conversion to using astropy.time and creating a new Python API wrapper for Helioviewer.org31. As the focus of the OpenAstronomy organization is the broader astrophysics and astronomy community, the SunPy Project's participation has enabled closer ties within the rapidly growing Python-in-astronomy landscape.
Footnote 28: [https://openastronomy.org/](https://openastronomy.org/)
Footnote 29: [https://summerofcode.withgoogle.com/](https://summerofcode.withgoogle.com/)
Footnote 30: [https://www.outreachy.org/](https://www.outreachy.org/)
Footnote 31: [https://hvpy.readthedocs.io/](https://hvpy.readthedocs.io/)
## 5 The Future of the SunPy Project
Development within the SunPy Project is driven by, and for, the solar physics community, responding to the needs of researchers for data analysis tools and techniques, and software for working with data from new missions. This means both the sunpy core package and other affiliated packages are continually changing and expanding. In September 2022 several members of the SunPy Project met at a coordination meeting to discuss the future of the project. Two key areas that emerged were updating the governance structure, and creating a roadmap for future development. The roadmap provides:
1. a set of priorities for developers to work on in the medium term.
2. a well scoped list of work items that funding can be sought for.
3. a mechanism to solicit input from the wider solar physics community on the medium term priorities from SunPy Project.
At the time of writing, items in the draft roadmap include:
* Improving support and functionality for data with spectral coordinates (e.g., rastering spectrometers)
* Enabling multi-dimensional data sets (i.e., beyond 2D images).
* Improving support for running sunpy on cloud infrastructure.
* Creating a consistently structured set of documentation across all the SunPy Project packages.
* Adding functionality to rapidly visualize large data sets.
* Restructuring the project governance to, among other things, transform the lead developer positions into a multi-person steering committee and create an ombudsperson role.
The next step is consultation with the wider solar physics community. We invite feedback on this proposed roadmap via any of the aforementioned communication channels (see Section 4.1.1) or by opening an issue on the repository used for tracking high-level, project-wide tasks and suggestions32.
Footnote 32: [https://github.com/sunpy/sunpy-project](https://github.com/sunpy/sunpy-project)
## 6 Conclusion
In this paper, we have summarized the SunPy Project and its various components, including the code developed and maintained by the project (Section 2), the people that comprise the project (Section 3) and the community that the project serves (Section 4). In particular, we have discussed how the sunpy package and the wider set of affiliated packages form a software ecosystem for solar physics research in Python and illustrated the types of analyses that such an ecosystem enables (see Figure 1). Finally, we have summarized a tentative roadmap to steer the direction of the SunPy Project in the coming years. Importantly, we hope that such a high level description will provide a more clear understanding of the SunPy Project and the wider ecosystem and will encourage contributions of all forms from the global solar physics community.
## Appendix
Here we provide a glossary of terms used throughout this paper:
* SunPy Project: The board and lead/deputy developers, the community roles, maintainers, and every package under its supervision.
* sunpy: The core package for using Python for scientific research in solar physics.
* SunPy ecosystem: The collection of packages that use or interface with sunpy and support scientific research in solar physics, including sunpy
* Affiliated package(s): Solar physics related functionality outside the scope of the sunpy core package and that satisfies the standards enumerated in Section 2.2.1.
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
W.T.B. contributed to writing the text and created Figure 1. S.C., N.F., L.A.H, and D.S. contributed to writing the text. All authors contributed to manuscript planning and revision and have read and approved the submitted version.
## Data Availability Statement
All of the data used to create Figure 1 are publicly available at the following repositories:
* AIA and EUVI data are available through the VSO: [https://sdac.virtualsolar.org/cgi/search](https://sdac.virtualsolar.org/cgi/search)
* EUI data are available through the SOAR: [https://soar.esac.esa.int/soar/](https://soar.esac.esa.int/soar/)
* HMI synoptic magnetogram data are available through the JSOC: [http://jsoc.stanford.edu/](http://jsoc.stanford.edu/)
Furthermore, the text and accompanying scripts to query, download, and process the data and make Figure 1 are publicly available in the GitHub repository33 that accompanies this paper.
Footnote 33: [https://github.com/sunpy/sunpy-frontiers-paper](https://github.com/sunpy/sunpy-frontiers-paper)
## Funding
W.T.B., A.Y.S., and S.J.M. are supported by an award from the NASA Research Opportunities in Space and Earth Sciences (ROSES) Open-Source Tools, Frameworks, and Libraries (OSTFL) program. N.F is supported by NASA under contract NNG09FA40C (_IRIS_) and NNG04EA00C (SDO/AIA). L.A.H is supported by an ESA Research Fellowship. We acknowledge financial contributions from Google as part of the Google Summer of Code program and from the European Space Agency as part of the Summer of Code in Space program. We acknowledge financial contributions from NumFOCUS for "Improving the Usability of sunpy's Data Downloader". Additionally, we acknowledge funding from the Solar Physics Division of the American Astronomical Society for SunPy workshops and tutorials at annual meetings.
## Acknowledgments
We thank everyone who has supported and contributed to the SunPy Project in any manner. sunpy makes use of astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2022).
|
2303.02666
|
Learned Lossless Compression for JPEG via Frequency-Domain Prediction
|
JPEG images can be further compressed to enhance the storage and transmission
of large-scale image datasets. Existing learned lossless compressors for RGB
images cannot be well transferred to JPEG images due to the distinguishing
distribution of DCT coefficients and raw pixels. In this paper, we propose a
novel framework for learned lossless compression of JPEG images that achieves
end-to-end optimized prediction of the distribution of decoded DCT
coefficients. To enable learning in the frequency domain, DCT coefficients are
partitioned into groups to utilize implicit local redundancy. An
autoencoder-like architecture is designed based on the weight-shared blocks to
realize entropy modeling of grouped DCT coefficients and independently compress
the priors. We attempt to realize learned lossless compression of JPEG images
in the frequency domain. Experimental results demonstrate that the proposed
framework achieves superior or comparable performance in comparison to most
recent lossless compressors with handcrafted context modeling for JPEG images.
|
Jixiang Luo, Shaohui Li, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
|
2023-03-05T13:15:28Z
|
http://arxiv.org/abs/2303.02666v1
|
# Learned Lossless Compression for JPEG via Frequency-Domain Prediction
###### Abstract
JPEG images can be further compressed to enhance the storage and transmission of large-scale image datasets. Existing learned lossless compressors for RGB images cannot be well transferred to JPEG images due to the distinguishing distribution of DCT coefficients and raw pixels. In this paper, we propose a novel framework for learned lossless compression of JPEG images that achieves end-to-end optimized prediction of the distribution of decoded DCT coefficients. To enable learning in the frequency domain, DCT coefficients are partitioned into groups to utilize implicit local redundancy. An autoencoder-like architecture is designed based on the weight-shared blocks to realize entropy modeling of grouped DCT coefficients and independently compress the priors. We attempt to realize learned lossless compression of JPEG images in the frequency domain. Experimental results demonstrate that the proposed framework achieves superior or comparable performance in comparison to most recent lossless compressors with handcrafted context modeling for JPEG images.
## 1 Introduction
Storage and transmission of large-scale image datasets, _e.g._, ImageNet [19] and Flicker [4], are necessary for training deep neural networks (DNNs). JPEG [47] is the most popular image compression standard. The JPEG codec leverages transform coding, chroma subsampling, quantization and entropy coding in a sequence to remove spatial redundancies. However, JPEG is inferior to JPEG2000 [40] and BPG [12] due to fixed \(8\times 8\) discrete cosine transform (DCT) and Huffman coding. Recently, task-specific (lossy) and task-free (lossless) methods have been developed to further compress JPEG images.
Task-specific methods compress JPEG images under the guidance of image processing tasks. Liu _et al._[31] developed DeepN-JPEG, a JPEG compression framework for image classification based on the high-frequency bias observed in experiments. DeepN-JPEG achieves about a 350% compression ratio on ImageNet without degrading the classification performance of deep neural networks. Li _et al._[30] optimized the JPEG quantization table with sorted random search and composite heuristic optimization, achieving a 20%-200% gain in compression ratio at the same accuracy. Besides storage reduction, task-specific methods can also improve the performance of image processing. For example, Choi _et al._[18] estimated image-specific quantization tables with deep neural networks to improve image classification, image captioning, and visual quality. However, task-specific compression methods are lossy, as they introduce extra distortion to the input JPEG images by adjusting the quantization tables.
Task-free methods are developed for universal compression of JPEG images. Lepton [26] is one of the representative works; it utilizes manufactured context models for precise distribution prediction on each discrete cosine transform (DCT) coefficient. Lepton provides about 23% bitrate saving over original JPEG images and achieves efficient decompression via parallelized arithmetic coding. Besides Lepton, mozjpeg [5] and Brunsli [2] can also reduce the size of JPEG images by around 10% and 22%, respectively, without introducing extra distortion. It is worth mentioning that universal lossless compression algorithms such as the Lempel-Ziv-Markov chain Algorithm (LZMA) [38] can also be employed on JPEG images, but they only achieve a trivial compression gain, _e.g._, about 1%. Although these lossless JPEG compressors are practical and efficient, they suffer from tedious design of context models. For example, Lepton uses many manually designed contexts to model the conditional distribution and requires enormous statistical experiments to determine the parameters for prediction.
Recent development in end-to-end compression reveals the potential of getting rid of tedious handcrafted context modeling. The neural-network-based entropy models are developed for differentiable distribution modeling of the latent representations. However, existing methods for learned lossless compression are developed for RGB images and
cannot be directly employed to effectively compress JPEG images (_i.e._, DCT coefficients). Recalling lossy image compression, the hyper-prior based entropy model [11] can dramatically improve the rate-distortion performance of end-to-end image compression methods. In the hyper-prior model, the latent representations are supposed to follow certain parameterized distributions (_e.g._, Gaussian and Laplace distributions). The parameters of these distributions are predicted with a neural network and utilized for arithmetic coding. To enhance the decoding process, the information about these distributions is compressed and transmitted to the decoder as a packed prior.
Inspired by the hyper-prior model, in this paper, we propose a novel framework for lossless compression of JPEG images. As depicted in Figure 1, the proposed framework achieves end-to-end optimized distribution prediction for arithmetic coding of DCT coefficients by incompletely decoding JPEG bitstream. The contributions of this paper are summarized as below.
* We propose a novel framework for learned lossless compression of JPEG images. The proposed framework achieves comparable performance to the carefully designed traditional methods such as Lepton.
* We achieve end-to-end optimized distribution prediction of DCT coefficients incompletely decoded from the JPEG images via frequency partitioning and learning. Grouped DCT coefficients are adopted to improve the compression performance.
* We design the weight-shared residual blocks to constitute an autoencoder-like architecture that improves compression performance and maintains a low memory consumption during training.
This work presents a learned lossless compressor specifically designed for JPEG images. Different from existing learned lossless methods, the DCT coefficients are partitioned into several frequency groups to enable end-to-end optimized distribution prediction. An autoencoder-like architecture is designed based on weight-shared blocks to realize entropy modeling of grouped DCT coefficients and independently compress the priors. Experimental results show that the proposed method outperforms LZMA, mozjpeg, and Brunsli, and is comparable to Lepton in terms of compression ratio.
## 2 Related Work
### Learned Lossless Compression
Lossless compression has been studied for both universal data and images for a long time, and the recent development of deep learning methods has stimulated new research in this field. For example, DeepZip [23] used recurrent neural networks (RNNs) and bits-back coding [28] to realize lossless compression of universal data, especially sequence data. Aiming at lossless image compression, L3C [32] estimated the distribution of each pixel in the RGB domain with a serialized hierarchical probabilistic model. Moreover, Mentzer _et al._[34] and Cheng _et al._[17] suggested that compressing the residual between the original image and a traditionally compressed version is also feasible for lossless image compression with end-to-end models.
However, these models are not practical for JPEG image recompression due to their low efficiency. JPEG images have already been lossily compressed and typically have a file size more than 20x smaller than the original image, whereas the most efficient lossless compression methods only achieve a 2x to 3x compression rate. Thus, lossless compression in the RGB domain cannot improve the compression rate of JPEG images, and compression in the DCT domain is required.
Figure 1: Illustration of our proposed recompression framework. To implement frequency-domain recompression, the JPEG bitstream is incompletely decoded to DCT coefficients. These DCT coefficients are further compressed with arithmetic coding, which benefits from a carefully designed distribution predictor. In addition, this distribution predictor compresses the priors about the distributions as side information contained in the recompressed bitstream.
### Frequency Learning
Conventional neural networks take RGB images as input, from which spatial information is captured. To save the decoding time of JPEG images, methods exploiting DCT coefficients have been explored. Gueguen _et al._[24] trained a convolutional neural network (CNN) directly on the DCT coefficients acquired from the JPEG bitstream, gaining acceleration compared with a standard residual network (ResNet). Ehrlich _et al._[21] redefined convolution, batch normalization, and ReLU by leveraging the linearity of the JPEG transform. Xu _et al._[48] used DCT coefficients as input and selected the most significant components to perform image inference, reducing the burden of data transmission; this method is verified on image detection and classification with superior performance over conventional methods. These works prove the potential of frequency learning and inspire us to use end-to-end models for lossless compression in the frequency domain.
## 3 Methodology
This section demonstrates the framework for lossless compression of JPEG images. We first present the pipeline of the proposed framework, and then describe implementation details of the end-to-end distribution predictor.
### Proposed Framework
The proposed framework is developed for the DCT coefficients obtained by incomplete decoding of JPEG images. For clarity, we denote the tensor of DCT coefficients as \(x\in\mathbb{R}^{H\times W\times 192}\), which consists of 64 frequency components of DCT coefficients over 3 color planes. The details of the arrangement are described in Section 3.2. As shown in Figure 2, \(x\) is partitioned into several frequency groups in the sense of "low frequency", "middle frequency", and "high frequency". The partitioned groups are denoted as \(x_{0},x_{1},\cdots,x_{n}\) with \(x_{i}\in\mathbb{R}^{H\times W\times C_{i}}\) and \(\sum_{i}C_{i}=192\). Each \(x_{i}\) is assumed to obey a multivariate Gaussian distribution whose mean \(\mu_{i}\) and scale \(\sigma_{i}\) are predicted with a distribution predictor. The distribution of each group is estimated separately with its own predictor; the predictors share the same structure, while their parameters are learned independently since the scales and correlations of different frequencies differ. Overall, each predictor is constructed like an auto-encoder whose output is the means and scales of the Gaussian distribution.
Figure 2: The left part is the data processing: \(B_{\alpha}\) represents the number of channels, and the _Frequency Band_ is split into three sets of _Grouped Channels_. The right part is the network structure: the input is the _Grouped Channels_, namely \(x_{i}\). _Conv_ and _DeConv_ are convolution and deconvolution layers with a \(3\times 3\) kernel, \(C\) channels, and \([s,s]\) stride. The stride of the first two _EncBlocks_ and the last two _DecBlocks_ is \([2,2]\); the others are \([1,1]\). We set \(C=48\) for the last _EncBlock_ and \(384\) for the others. _Leaky_ReLU_ is the activation function. \(Q\) is the quantization, and \(y_{i}\) is the feature to be encoded. _AE_ and _AD_ are the arithmetic encoder and decoder. \(\mu_{i}\) and \(\sigma_{i}\) are the parameters for the arithmetic coder. \(C_{i}\) is the number of output channels and equals the number of input channels of the grouped channels. Finally, \(x_{i}\), \(y_{i}\), and \(QT\) are compressed to form the final bitstream.
Here, the encoder and decoder are denoted as \(\mathcal{E}_{i}\) and \(\mathcal{D}_{i}\).
\[y_{i}=\lfloor\mathcal{E}_{i}(x_{i})\rceil,\quad[\mu_{i},\sigma_{i}]=\mathcal{D}_{i}(y_{i}), \tag{1}\]
where \(\lfloor\cdot\rceil\) represents the rounding operation and \(y_{i}\) is the quantized output of the encoder. \(y_{i}\) is the prior of \(\mu_{i}\) and \(\sigma_{i}\), and is quantized to integer symbols for entropy coding. \(y_{i}\) is encoded and embedded into the bitstream, and is transmitted to the decoder.
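The paper does not state how the rounding \(\lfloor\cdot\rceil\) is handled during back-propagation; two common surrogates in learned compression are additive uniform noise (as in [10]) and a straight-through estimator. The snippet below sketches the latter purely as an assumption, not as the authors' choice.

```python
import torch

def quantize_ste(y_hat: torch.Tensor) -> torch.Tensor:
    """Round to integer symbols in the forward pass while letting gradients
    pass through unchanged (straight-through estimator)."""
    return y_hat + (torch.round(y_hat) - y_hat).detach()
```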
Considering end-to-end training for the distribution predictors, the whole framework is optimized based on a joint loss to balance the performance over all frequency components. The loss function is designed as
\[L=\sum_{i}\left(R_{x_{i}}+\lambda R_{y_{i}}\right)=\sum_{i}\left(\mathbb{E}[-\log_{2}(p_{x_{i}})]+\lambda\mathbb{E}[-\log_{2}(p_{y_{i}})]\right), \tag{2}\]
where \(R_{x_{i}}\) and \(R_{y_{i}}\) are the average bit consumption of \(x_{i}\) and \(y_{i}\), respectively. The probability of \(x_{i}\) can be inferred from the parameterized Gaussian distribution. With the estimated \(\mu_{i}\) and \(\sigma_{i}\), the probability of an integer symbol \(a=x_{i}^{h,w}\) in \(x_{i}\) is
\[p(a,\sigma,\mu)=\int_{a-0.5}^{a+0.5}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}}\,dt, \tag{3}\]
where \(\mu=\mu_{i}^{h,w}\) and \(\sigma=\sigma_{i}^{h,w}\) are the mean and scale at the corresponding position of \(\mu_{i}\) and \(\sigma_{i}\). The probability of \(y_{i}\) is estimated with the method introduced in [10]. It is worth mentioning that \(\lambda\) in Equation (2) is expected to be 1 for the lossless compression task, but we find that progressively increasing \(\lambda\) improves the compression performance. The details of the tuning strategy are elaborated in Section 5.2.
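In code, Eq. (3) is simply the difference of two Gaussian CDF evaluations at the bin edges, and the group rate term of Eq. (2) is the mean of the resulting negative log-probabilities. The sketch below is illustrative; the tensor shapes and numerical clamps are our choices, not the authors'.

```python
import torch

def discretized_gaussian_log2p(x, mu, sigma, eps=1e-9):
    """log2 p(x) with p(x) = CDF((x + 0.5 - mu)/sigma) - CDF((x - 0.5 - mu)/sigma),
    i.e. a Gaussian integrated over the unit-width bin centered on the integer symbol x."""
    normal = torch.distributions.Normal(mu, sigma.clamp(min=1e-6))
    prob = normal.cdf(x + 0.5) - normal.cdf(x - 0.5)
    return torch.log2(prob.clamp(min=eps))

def group_rate(x_i, mu_i, sigma_i):
    """Average bits per symbol for one frequency group, i.e. E[-log2 p(x_i)] in Eq. (2)."""
    return -discretized_gaussian_log2p(x_i, mu_i, sigma_i).mean()
```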
### Implementation Details
**Arrangement of DCT Coefficients.** Suppose the original image has a size of \(N\times M\); then the coefficients are stored in \(\lceil\frac{N}{8}\rceil\times\lceil\frac{M}{8}\rceil\) blocks of \(8\times 8\), where \(\lceil\cdot\rceil\) denotes the ceiling operation. Considering the three color planes (_i.e._, Y, Cb, and Cr) adopted in JPEG for compressing color images, \(x\) has the shape \(\lceil\frac{N}{8}\rceil\times\lceil\frac{M}{8}\rceil\times 8\times 8\times 3\). To simplify the notation of this 5-dimensional tensor, we rearrange the coefficients according to their frequency and color planes while keeping the first two dimensions unchanged. As shown in Figure 4, the \(8\times 8\) DCT coefficients in each block are indexed in Zig-Zag order, ranging from 0 to 63. The coefficients on different color planes with the same index are placed in the following order: Y\(\rightarrow\)Cb\(\rightarrow\)Cr\(\rightarrow\)Y\(\rightarrow\)Cb\(\rightarrow\)Cr\(\cdots\). Moreover, we define \(H\equiv\lceil\frac{N}{8}\rceil,W\equiv\lceil\frac{M}{8}\rceil\). The 5-dimensional tensor \(x\) is then reshaped into the shape mentioned before, \(H\times W\times 192\).
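A sketch of this rearrangement in NumPy: the 8×8 block coefficients of the three planes are flattened in Zig-Zag order and interleaved Y→Cb→Cr along the channel axis, yielding an \(H\times W\times 192\) tensor. The zig-zag index generator and the assumption that Cb/Cr share the Y block grid (i.e., no chroma subsampling) are our simplifications.

```python
import numpy as np

def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in JPEG Zig-Zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def arrange_coefficients(y_blocks, cb_blocks, cr_blocks):
    """Each input has shape (H, W, 8, 8) of block DCT coefficients.
    Returns an (H, W, 192) tensor ordered (freq 0: Y, Cb, Cr), (freq 1: Y, Cb, Cr), ..."""
    H, W = y_blocks.shape[:2]
    out = np.empty((H, W, 192), dtype=y_blocks.dtype)
    for k, (r, c) in enumerate(zigzag_indices()):                     # frequency index 0..63
        for p, plane in enumerate((y_blocks, cb_blocks, cr_blocks)):  # Y -> Cb -> Cr
            out[..., 3 * k + p] = plane[..., r, c]
    return out
```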
**Network Architecture**. The main body is the autoencoder shown in Figure 2, consisting of an encoder and a decoder. The encoder consists of five _EncBlock_ modules, each composed of a convolutional layer with a \(3\times 3\) kernel and a leaky ReLU activation. The first two _EncBlock_ modules have stride \(2\), so the encoder downsamples by a factor of \(4\) to transform the input coefficients \(x_{i}\) into \(\hat{y}_{i}\). We then quantize \(\hat{y}_{i}\) into \(y_{i}\) for arithmetic encoding and decoding. The decoder consists of _DecBlock_ modules and a _Weight-Shared Block_, where each _DecBlock_ is composed of a deconvolutional layer with a \(3\times 3\) kernel and a leaky ReLU activation. The last two _DecBlock_ modules have stride \(2\), so the decoder upsamples the extracted features by a factor of \(4\). Finally, the last convolutional layers output the Gaussian parameters \(\mu_{i}\) and \(\sigma_{i}\) used to generate the probability of the input \(x_{i}\).
**Weight-Shared Block.** The weight-shared block in Figure 3 is introduced on the decoder side to facilitate training. When Figure 3(\(a\)) is unrolled for three iterations, it has the same structure as Figure 3(\(b\)), but the three blocks in Figure 3(\(a\)) share the same parameters. Different from the residual blocks in ResNet [25], these blocks share the same weights, hence the name weight-shared block. In addition, it has a forward flow (from input to output), shown on the left of Figure 3(\(a\)), and a backward flow (from output to input), shown on the right of Figure 3(\(a\)), at the same time. This reduces the complexity of the decoder and improves the compression gain.
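A rough PyTorch reading of the weight-shared block: one residual block with 384 channels applied three times with the same parameters. The use of a plain 3×3 convolution for the stride-1 "deConv" and the residual update per iteration are our interpretation of Figure 3, so this is a sketch rather than the authors' exact module.

```python
import torch
import torch.nn as nn

class WeightSharedBlock(nn.Module):
    """One residual block (3x3 conv, LeakyReLU) reused for `iterations` steps
    with shared weights, mirroring Figure 3(a) unrolled into Figure 3(b)."""

    def __init__(self, channels: int = 384, iterations: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(inplace=True),
        )
        self.iterations = iterations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.iterations):
            x = x + self.body(x)   # element-wise residual update, same weights each time
        return x
```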
## 4 Frequency Partitioning
In this section, we further validate the efficiency of frequency partitioning in the proposed framework.
### Correlation Across Frequencies
First, we compare the compression performance of three grouping strategies. The first 45 channels of the arranged DCT coefficients \(x\) are utilized in this experiment. As shown in Table 1, the 45 channels are split into 1/2/5 groups at different refinement levels. The best compression performance is achieved when the 45 channels are split into 2 groups. The reasons are twofold. First, the DCT coefficients are locally correlated. Second, the average intensities of different channels differ substantially.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline channels & bpsp & channels & bpsp & channels & bpsp \\ \hline \multirow{4}{*}{\([0,45)\)} & \multirow{4}{*}{\(0.8186\)} & \multirow{4}{*}{\([0,9)\)} & \multirow{4}{*}{**0.2559**} & [0, 3) & 0.1227 \\ \cline{3-4} & & & & [3, 9) & 0.1481 \\ \cline{3-4} & & & & [9, 18) & 0.1822 \\ \cline{3-4} & & & & [18, 30) & 0.1937 \\ \cline{3-4} & & & & [30, 45) & 0.1810 \\ \hline \hline & 0.8186 & & **0.8012** & & 0.8277 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The bits for grouped channels of kodim01. The number of _channels_\([m,n)\) is the index of \(192\) frequency bands from \(m\) to \(n\). And _bpsp_ is the bit per sub-pixel. The last line is the sum of each component.
For example, the average intensity of channel 0 (DC) can be hundreds of times higher than that of channels 30-45, which may hinder the network from finding an optimum.
Moreover, we visualize the DCT coefficients with binary maps of the Y, Cb, and Cr color planes to depict the correlation among low-frequency components in Figure 5. Figure 5 shows the most significant bit (MSB) maps of channels 0-3 of the first 45 channels, which suggest high correlation in channel 0 across different color planes and some implicit correlation among other low-frequency channels.
### Frequency Partitioning Strategy
Based on the experiments and channel correlations introduced in Section 4.1, we split the \(192\) channels into three groups. To illustrate the physical meaning of this partitioning strategy, we depict it in the original block DCT domain in Figure 4. For a gray-scale image with only the luminance (Y) plane, the 64 channels indexed in Zig-Zag order are split into 3 groups: \([0,3),[3,15),[15,64)\).
Figure 4: An example of the frequency partitioning proposed in this paper. The left of this figure shows four adjacent blocks of \(8\times 8\) DCT coefficients of the luminance (Y) plane, where the indices of the coefficients follow the Zig-Zag scanning of JPEG. In this example, the 64 coefficients are partitioned into 3 groups according to frequency. In each group, the spatial position of the adjacent blocks is maintained, while coefficients of different frequencies are arranged across channels. Thus, the three groups in this figure have the shapes \(2\times 2\times 3\), \(2\times 2\times 12\), and \(2\times 2\times 49\).
Figure 5: The most significant bit map of channel 0-3 of \(Y\), _Cb_ and _Cr_ planes.
Figure 3: The structure of the _Weight-Shared Block_. A _Res_Block_ consists of _deConv_ and _Leaky_ReLU_, where _deConv_ is a convolution layer with a \(3\times 3\) kernel, \(384\) channels, and \([1,1]\) stride, and _Leaky_ReLU_ is the activation function. \(+\) denotes element-wise addition. \(a\) is the weight-shared network and \(b\) is the serial connection of three _Res_Block_ modules. When \(a\) is unrolled for three iterations, it has the same structure as \(b\), but the three blocks in \(a\) share the same parameters.
Figure 6: The ablation experiment of the Weight-Shared Block on Kodak. The left y-axis is _bpsp_, and the right is the compression ratio. The red lines are the model with the weight-shared block, while the blue dotted lines are the model without it.
As for color images, the channels are grouped into \([0,9),[9,45),[45,192)\), where the color planes are ordered as Y\(\rightarrow\)Cb\(\rightarrow\)Cr\(\rightarrow\)Y\(\rightarrow\)Cb\(\rightarrow\)Cr\(\cdots\).
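In code, this partitioning is just channel slicing of the rearranged \(H\times W\times 192\) tensor, for example:

```python
# Channel ranges for the three groups used for color images (Zig-Zag order,
# Y/Cb/Cr interleaved as described in Section 3.2)
COLOR_GROUPS = [(0, 9), (9, 45), (45, 192)]

def split_groups(x, groups=COLOR_GROUPS):
    """x: array of shape (H, W, 192) -> list of per-group tensors x_0, x_1, x_2."""
    return [x[..., lo:hi] for lo, hi in groups]
```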
## 5 Experiments
### Dataset
To train and test our neural network, we process the training and testing data according to Figure 1. We must obtain DCT coefficients from JPEG images; here we follow the code1 to extract the quantization table and coefficients. The training dataset consists of the DCT coefficients of the Flickr data [4]. We first crop these JPEG images to \(256\times 256\), so the network input is \(32\times 32\) since the DCT blocks are \(8\times 8\); the training batch size is \(4\). The testing datasets are Kodak [3] and Set5 [13]. We convert the lossless PNG images to JPEG by controlling the quality factor with the library "jpeg-9c"2. The set of quality factors is \(50,60,70,80,90\), where \(50\) is the worst quality and \(90\) the best, covering the commonly used quality range for JPEG; the main storage cost on the cloud and local machines comes mainly from high-quality JPEG images. The testing data are then processed in the same way as the training data.
Footnote 1: [https://github.com/dwogon/jpegio](https://github.com/dwogon/jpegio)
Footnote 2: [https://www.ijg.org/](https://www.ijg.org/)
### Training Strategies
We utilize the Adam [27] optimizer to train the whole network on one GPU for two days. Following the loss function in Equation (2), we train the network by balancing the rate of the DCT coefficients and that of the extracted features. To obtain a higher compression ratio, we adjust the value of \(\lambda\) during training instead of keeping it constant. We first set it to a small value, such as \(0.001\); after 1M training steps, we increase it slightly, to around \(0.01\); finally, we set \(\lambda=1\) for the last 1M steps. Meanwhile, we decay the learning rate exponentially with a factor of \(0.9\) until 10M steps.
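The schedules described above could be written roughly as follows; the step boundaries follow the text, while the base learning rate and decay interval are assumptions not stated in the paper.

```python
import torch

def lambda_at(step: int, total_steps: int = 10_000_000) -> float:
    """Weight on the prior rate R_{y_i} in Eq. (2): small at first, 1.0 for the final 1M steps."""
    if step < 1_000_000:
        return 0.001
    if step < total_steps - 1_000_000:
        return 0.01
    return 1.0

def make_optimizer(model: torch.nn.Module):
    """Adam with exponential learning-rate decay (gamma = 0.9); the base lr is illustrative."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    return optimizer, scheduler
```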
### Visualization of Features
To verify frequency learning for lossless compression, we visualize the features that the neural network learns from the DCT coefficients. As shown in Figure 7, different frequency bands yield different features. The network learns smooth pixel regions from the low frequencies, as in _Group1_ in Figure 7, and learns rapidly changing pixels, such as edges and contours, as in _Group3_. Besides, since a large portion of the high-frequency DCT coefficients is zero, there exist some redundant feature channels in _Group2_ and _Group3_. The neural network can indeed learn from DCT coefficients as it does from the original RGB images, since the DCT is reversible; in other words, a CNN can implicitly learn the inverse DCT. We can therefore interpret the learning process as first transforming the DCT coefficients toward the RGB domain and then learning image patterns as a conventional network would. More importantly, a network could learn from a subset of frequency bands instead of all DCT coefficients; however, since we target lossless compression, we must process all frequency bands.
### Results
We achieve more than a 20% compression gain for JPEG images, as shown in Tables 2 and 3. Our method outperforms _LZMA_ by about \(20\%\) and _mozjpeg_ by about \(15\%\), and it is comparable to _Brunsli_ and _Lepton_. Moreover, our method achieves a higher compression gain on the _Kodak_80_ dataset, \(20.447\%\) versus \(20.286\%\), and a higher compression ratio on the Set5 dataset even compared with Brunsli and Lepton, with gains of \(2.378\%\) and \(1.437\%\), respectively.
### Ablation Studies
#### 5.5.1 DCT vs. RGB
To verify the efficiency of learning in the frequency domain, we conduct experiments with RGB input. Figure 8 shows that all RGB-domain learning yields a higher bpsp than the original images; that is, it provides no compression gain for JPEG images, even though L3C3 performs well on images stored in lossless formats (BPG, JPEG2000, and so on). Furthermore, when we train our architecture on the decoded JPEG images and test on the Kodak dataset, its performance is far worse than JPEG itself. To achieve lossless compression, the network models each pixel and allocates its probability, so it cannot tell whether the input image came from a lossless format. This is our key motivation for learning in the frequency domain.
Footnote 3: [https://github.com/fab-jul/L3C-PyTorch/](https://github.com/fab-jul/L3C-PyTorch/)
#### 5.5.2 Number of Groups
To explore the effect of the number of groups, we set up the following experiments. We split the whole set of \(192\) channels into \(1,2,3,4\) groups. With \(1\) group, the whole frequency band \([0,192)\) is processed jointly; the experiments with \(2\), \(3\), and \(4\) groups use the frequency bands \([0,9),[9,192)\), \([0,9),[9,45),[45,192)\), and \([0,9),[9,45),[45,108),[108,192)\), respectively. We split these groups according to the Zig-Zag order of the \(8\times 8\) DCT, corresponding to super-low, low, middle, and high frequencies. As shown in Figure 9, a single model cannot achieve the best performance for all qualities. More groups also do not yield a better compression ratio, as the model with \(4\) groups is worse than the model with \(2\) groups for all qualities. Notably, the model with \(2\) groups has the best performance at quality \(90\) compared with the other models. We finally use the model with three groups for testing because it has the best average performance.
#### 5.5.3 Weight-Shared Block
We conduct experiments to verify the effectiveness of the weight-shared block, and the results are shown in Figure 6. Specifically, we disconnect the backward flow in Figure 3(_a_) and train the whole network with the same configuration. The weight-shared block improves the performance by 2% for images across all qualities.
## 6 Conclusion and Discussion
As far as we know, we are the first to utilize deep learning methods in the frequency domain for lossless compression. The DCT coefficients are partitioned into several groups for efficient compression, based on the observation that DCT coefficients have implicit local correlation; thus, jointly processing adjacent channels can improve the lossless compression performance. Besides, different from an autoencoder with a symmetric structure, we introduce an extra module on the decoder side, the weight-shared block, because the patterns of DCT coefficients, especially the alternating (AC) coefficients, are hard to capture. Finally, we achieve performance comparable to traditional non-learned methods. Different from the rate-distortion loss \(R+\lambda D\) for lossy compression, lossless compression has no distortion; in this paper, we only optimize the rate of the input DCT coefficients and the extracted priors, but we still introduce the Lagrange factor \(\lambda\) to balance the rate of each part.
|
2308.00303
|
Diffusion Model for Camouflaged Object Detection
|
Camouflaged object detection is a challenging task that aims to identify
objects that are highly similar to their background. Due to the powerful
noise-to-image denoising capability of denoising diffusion models, in this
paper, we propose a diffusion-based framework for camouflaged object detection,
termed diffCOD, a new framework that considers the camouflaged object
segmentation task as a denoising diffusion process from noisy masks to object
masks. Specifically, the object mask diffuses from the ground-truth masks to a
random distribution, and the designed model learns to reverse this noising
process. To strengthen the denoising learning, the input image prior is encoded
and integrated into the denoising diffusion model to guide the diffusion
process. Furthermore, we design an injection attention module (IAM) to interact
conditional semantic features extracted from the image with the diffusion noise
embedding via the cross-attention mechanism to enhance denoising learning.
Extensive experiments on four widely used COD benchmark datasets demonstrate
that the proposed method achieves favorable performance compared to the
existing 11 state-of-the-art methods, especially in the detailed texture
segmentation of camouflaged objects. Our code will be made publicly available
at: https://github.com/ZNan-Chen/diffCOD.
|
Zhennan Chen, Rongrong Gao, Tian-Zhu Xiang, Fan Lin
|
2023-08-01T05:50:33Z
|
http://arxiv.org/abs/2308.00303v2
|
# Diffusion Model for Camouflaged Object Detection
###### Abstract
Camouflaged object detection is a challenging task that aims to identify objects that are highly similar to their background. Due to the powerful noise-to-image denoising capability of denoising diffusion models, in this paper, we propose a diffusion-based framework for camouflaged object detection, termed diffCOD, a new framework that considers the camouflaged object segmentation task as a denoising diffusion process from noisy masks to object masks. Specifically, the object mask diffuses from the ground-truth masks to a random distribution, and the designed model learns to reverse this noise process. To strengthen the denoising learning, the input image prior is encoded and integrated into the denoising diffusion model to guide the diffusion process. Furthermore, we design an injection attention module (IAM) to interact conditional semantic features extracted from the image with the diffusion noise embedding via the cross-attention mechanism to enhance denoising learning. Extensive experiments on four widely used COD benchmark datasets demonstrate that the proposed method achieves favorable performance compared to the existing 11 state-of-the-art methods, especially in the detailed texture segmentation of camouflaged objects. Our code will be made publicly available at: [https://github.com/ZNan-Chen/diffCOD](https://github.com/ZNan-Chen/diffCOD).
## 1 Introduction
Camouflage is to use any combination of coloration, illumination, or materials to hide organisms in their surroundings, or disguise them as something else, for deception and paralysis purposes. Camouflaged object detection (COD) [13], that is, segmenting camouflaged objects from the background, is a challenging vision topic that has emerged in recent years, due to the high similarity of camouflaged objects to the background. COD has also attracted growing research interest from the computer vision community, because of its wide range of real-world applications, such as agricultural pest detection [30], medical image segmentation [34], and industrial defect detection [51].
With the advent of large-scale camouflaged object detection datasets in recent years, such as CAMO [31] and COD10K [13] datasets, numerous deep learning-based methods have been proposed and achieved great progress. Some methods are inspired by human visual mechanisms and adopt convolutional neural networks to imitate predation behavior, thus designing a series of models for COD, such as search identification network [12], positioning and focus network [37], zoom in and out [41], and PreyNet [61]. Some methods adopt auxiliary cues to improve network discrimination, or branch tasks to jointly learn camouflage features. The former typically employ frequency domain [63], edge/texture [24, 65], or motion information [5] to improve feature representation, and the latter usually introduces boundary detection [50], classification [31], fixation [36], or saliency detection [32] for multi-task collaborative learning. More recently, to improve global contextual exploration, transformer-based approaches have also been proposed, such as HitNet [22] and FSPNet [23]. Although these methods have greatly improved the performance of camouflaged object detection, the existing methods still struggle to achieve accurate location and segmentation in most complex scenarios, due to the interference of highly similar backgrounds and the complexity of the appearance of camouflaged objects.
In recent years, diffusion models [20] have demonstrated impressive performance in the generative modeling of images and videos [10], opening up a new era of generative models. Diffusion models are a class of generative models that consist of Markov chains trained using variational inference, to denoise noisy images blurred by Gaussian noise via learning the reverse diffusion process. Because of its powerful noise-to-image denoising pipeline, the computer vision community is curious about its variants for discriminative tasks [8]. More recently, diffusion models have been found to be highly effective in other computer vision tasks, such as image editing [19], super-resolution [33], instance segmentation [17], semantic segmentation [3, 4] and medical image segmentation [43, 53]. However, despite their great potential, diffusion models for challenging camouflaged object detection have still not been well explored.
Figure 1: (a) The current mainstream COD paradigm inputs images into the network for prediction in a single direction, generating a deterministic segmentation mask. (b) Our proposed diffCOD provides a novel paradigm that decomposes COD into a series of forward-and-reverse diffusion processes.
In this paper, we propose to formulate camouflaged object detection as a generative task, through a denoising diffusion process from the noisy mask to the object mask in the image. Specifically, in the training stage, Gaussian noise is added to the ground-truth masks to obtain noisy masks, and then the model learns to reverse this noising process. In the inference stage, the model progressively refines a set of randomly generated noisy masks from the image through the learned denoising model, until they perfectly cover the targeted object without noise. We can see that the denoising diffusion model is the process of recovering the ground-truth mask from the random noisy distribution to the learned distribution over object masks. As shown in Figure 1, unlike previous deterministic network solutions that produce a single output for an input image, we decouple the detection of the object into a novel noise-to-mask paradigm with a series of forward-and-reverse diffusion steps, which can output masks from single or multi-step denoising, thereby generating multiple object segmentation masks from a single input image.
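In the standard DDPM formulation this corresponds to the closed-form forward process \(q(m_t\mid m_0)=\mathcal{N}(\sqrt{\bar{\alpha}_t}\,m_0,(1-\bar{\alpha}_t)\mathbf{I})\); the sketch below samples a noisy mask at a random timestep using a generic linear noise schedule (the schedule and mask scaling are illustrative, not necessarily those of diffCOD).

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # generic linear schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)    # \bar{alpha}_t

def q_sample(mask0: torch.Tensor, t: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Diffuse a clean mask m_0 to m_t = sqrt(a_bar_t) * m_0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * mask0 + (1.0 - a_bar).sqrt() * noise

# Schematic training step: the denoising network, conditioned on image features,
# is trained to predict the added noise (or the clean mask) from (m_t, t, image).
mask0 = torch.randint(0, 2, (4, 1, 352, 352)).float() * 2 - 1   # GT masks scaled to [-1, 1]
t = torch.randint(0, T, (4,))
mask_t = q_sample(mask0, t, torch.randn_like(mask0))
```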
To this end, we propose a denoising diffusion-based model, termed diffCOD, which approaches the camouflaged object detection task from the perspective of the noise-to-mask denoising diffusion process. The proposed model adopts a denoising network conditioned on the input image prior. The semantic features extracted from the image by a Transformer encoder are integrated into the denoising diffusion model to guide the diffusion process at each step. To effectively bridge the gap between the diffusion noise embedding and the conditional semantic features, an injection attention module (IAM) is designed to enhance the denoising diffusion learning by aggregating the conditional semantic features and the diffusion model encoder features through a cross-attention mechanism. Our contributions are summarized as follows:
* We extend the denoising diffusion models to the task of camouflaged object detection, and propose a diffusion-based object segmentation model, called diffCOD, a novel framework that views camouflaged object detection as a denoising diffusion process from noisy masks to object masks.
* We design an injection attention module (IAM) to model the interaction between noise embeddings and image features. The proposed module adopts the cross-attention mechanism to integrate the conditional semantic feature extracted from the image into the diffusion model encoder to guide and enhance denoising learning.
* Extensive quantitative and qualitative experiments demonstrate that the proposed diffCOD achieves superior performance over 11 recent state-of-the-art (SOTA) methods by a large margin, especially in object detail texture segmentation, indicating the effectiveness of the proposed method.
## 2 Related Work
### Camouflaged Object Detection
Existing COD methods [11, 12, 13] are based on a non-generative approach to segment the objects from the background. The approaches in COD can be broadly categorized into the following strategies: a) Introducing additional cues to facilitate the exploration of camouflage features. BGNet [50] uses edge semantic information to enable the model to extract features that highlight the structure of the object and thus pinpoint the object boundary. TINet [65] designs a texture label to find boundaries and texture differences through progressive interactive guidance. FDCOD [63] incorporates frequency domain features into CNN models to better detect objects from the background. DGNet [24] utilizes gradient edge information to facilitate the generation of contextual and texture features. b) Multi-task learning strategies are used to improve segmentation capabilities. ANet [31] proposes joint learning of classification and segmentation tasks to help the model improve recognition accuracy. UJSC [32] detects both salient and camouflaged objects to improve the model performance. Rank-Net [36] proposes to use the localization model to find the obvious discriminative region of the camouflaged object, and the segmentation model to segment the full range of the camouflaged object. c) A coarse-to-fine feature learning strategy is utilized to explore and integrate multi-scale features. SegMoR [27] uses multi-stage detection to focus on the region where the goal is located. ZoomNet [40] learns multi-scale semantic information through multi-scale integration and hierarchical hybrid strategies to promote models that produce predictions with higher confidence. PreyNet [61] imitates the predation process for stepwise aggregation and calibration of features. PFNet [37] mimics nature's predation process by first locating potential targets from a global perspective and then gradually refining the fuzzy regions. SINet [13] is designed to improve segmentation performance by locating the object first and then differentiating the details. \(C^{2}\)FNet [49] proposes to use global contextual information to fuse high-level features in a cascading manner to obtain better performance. HitNet [22] and FSPNet [23] propose to explore global context cues by transformers. In this paper, we introduce generative models, _i.e._, denoising diffusion models, into the COD task to gradually refine the object masks from the noisy image, which achieves excellent performance, especially for objects with fine textures.
### Diffusion Model
The diffusion model [20, 47] is a generative model that uses a forward Gaussian diffusion process to sample a noisy image, and then iteratively refines it using a backward generative process to obtain a denoised image. Diffusion models have shown strong potential in several fields, such as image synthesis [10, 20], image editing [19], and image super-resolution [9]. Moreover, the learning process of diffusion models is able to capture high-level semantic information that is valuable for segmentation tasks [3], which has led to a growing interest in diffusion models for image segmentation including medical image segmentation [53, 54], semantic segmentation [4, 26, 55, 57], and instance segmentation [1, 17]. MedSegDiff [53] proposes the first DPM-based medical segmentation model, and MedSegDiff-V2 [54] further improves the performance based on it using transformer. DDeP [4] finds that pre-training a semantic segmentation model as a denoising self-encoder is beneficial for performance improvement. DDP [26] designs a dense prediction framework with stepwise denoising refinement guided by image features. ODISE [57] combines a trained text image diffusion model with a discriminative model to achieve open-vocabulary panoptic segmentation. DiffMuMask [55] uses a model for the automatic generation of image and pixel-level semantic annotations, and it also shows superiority in open vocabulary segmentation. DiffusionInst [17] proposes the first instance segmentation model based on a diffusion process to achieve global instance mask reconstruction. Segdiff [1] uses a diffusion probabilistic approach to design an end-to-end segmentation model that does not rely on a pre-trained backbone. However, there are no studies that demonstrate the effectiveness of diffusion models in COD tasks. In this work, we present the first diffusion model for the COD segmentation task.
## 3 Methodology
In this section, we first review the diffusion model (Sec. 3.1). Then we introduce the architecture of diffCOD (Sec. 3.2). Finally, we describe the specific process of training and inference of diffCOD (Sec. 3.3 & Sec. 3.4).
### Diffusion Model
The diffusion probability model has attracted considerable attention due to its simple training process and excellent performance. It consists of a forward process and a reverse process. In the forward process, noise is gradually added to the target image until it approaches a Gaussian distribution. The reverse process learns to map the noise back to the real image.
The forward process refers to the gradual incorporation of Gaussian noise with variance \(\beta_{t}\in(0,1)\) into the original image \(x_{0}\sim p\left(x_{0}\right)\) at time \(t\) until it converges to isotropic Gaussian distribution. The forward process is described by the formulation:
\[q\left(x_{t}\mid x_{t-1}\right)=\mathcal{N}\left(x_{t};\sqrt{1-\beta_{t}}x_{t -1},\beta_{t}\mathbf{I}\right) \tag{1}\]
where \(t\in[1,T]\). We can obtain the latent variable \(x_{t}\) directly by using \(x_{0}\) by the following equation:
\[q\left(x_{t}\mid x_{0}\right)=\mathcal{N}\left(x_{t};\sqrt{\bar{\alpha}_{t}}x _{0},\left(1-\bar{\alpha}_{t}\right)\mathbf{I}\right) \tag{2}\]
where \(\alpha_{t}:=1-\beta_{t}\), \(\bar{\alpha}_{t}:=\prod_{s=0}^{t}\alpha_{s}\) and \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\).
The reverse process converts the latent variable distribution \(p(x_{T})\) to \(p(x_{0})\) through a Markov chain, and the reverse process can be denoted as follows:
\[p_{\theta}\left(x_{t-1}\mid x_{t}\right)=\mathcal{N}\left(x_{t-1};\mu_{\theta }\left(x_{t},t\right),\Sigma_{\theta}\left(x_{t},t\right)\right) \tag{3}\]
The combination of \(q\) and \(p\) is a variational auto-encoder, and the variational lower bound (VLB) is defined as follows:
\[L_{\text{vlb}}:=L_{0}+L_{1}+\ldots+L_{T-1}+L_{T} \tag{4}\]
\[L_{0}:=-\log p_{\theta}\left(x_{0}\mid x_{1}\right) \tag{5}\]
\[L_{t-1}:=D_{KL}\left(q\left(x_{t-1}\mid x_{t},x_{0}\right)\parallel p_{\theta} \left(x_{t-1}\mid x_{t}\right)\right) \tag{6}\]
\[L_{T}:=D_{KL}\left(q\left(x_{T}\mid x_{0}\right)\parallel p\left(x_{T}\right)\right) \tag{7}\]
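For concreteness, the forward process of Eq. (2) can be sketched as follows. The linear schedule mirrors the setting used later in our experiments (\(T=1000\)), while the noise range, tensor shapes, and variable names are illustrative assumptions rather than prescribed values.

```
import torch

# Assumed linear beta schedule; T = 1000 follows our experimental setting,
# the (1e-4, 0.02) range is an illustrative choice.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{alpha}_t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) as in Eq. (2): sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps."""
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over the batch
    return torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps

# Example: noise a (hypothetical) batch of ground-truth masks at random timesteps.
x0 = torch.rand(8, 1, 256, 256)
t = torch.randint(0, T, (8,))
xt = q_sample(x0, t)
```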
### Architecture
As shown in Figure 2, the proposed diffCOD aims to solve the COD task by the diffusion model. The denoising network of diffCOD is based on the UNet architecture [44]. To get effective conditional semantic features, we obtain multi-scale features by ViT-based backbone and feature fusion (FF) to yield features containing rich multi-scale details. In addition, to let the texture patterns and localization information in the conditional semantic features guide the denoising process, we propose an injection attention module (IAM) based on cross-attention. This allows the network to reduce the difference between diffusion features and image features and to combine the advantages of both.
**Feature Fusion (FF).** Given an initial input image \(x_{o}\in\mathbb{R}^{H\times W\times 3}\), we adopt the top-three high-level features of the visual backbone as our multi-scale backbone features, denoted as \(\mathcal{X}_{i}^{p}\), \(i\in\{1,2,3\}\), whose resolution is \(\frac{H}{k}\times\frac{W}{k}\), \(k\in\{8,16,32\}\). Here we use PVTv2 [52] as the backbone. Then FF is used to aggregate these multi-scale features. Specifically, FF contains three branches to process \(\mathcal{X}_{i}^{p}\), each branch uses two convolution operations with 3\(\times\)3 kernel for feature enhancement, and finally the three branches are coalesced by a single convolution to obtain \(\mathcal{F}\in\mathbb{R}^{\frac{H}{32}\times\frac{W}{32}\times C}\).
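A minimal sketch of FF is given below, assuming PyTorch. The channel widths and the use of bilinear resizing to align the three scales are illustrative assumptions; only the two 3\(\times\)3 convolutions per branch and the final fusing convolution are specified above.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Aggregate the three backbone features into one H/32 x W/32 map (sketch)."""
    def __init__(self, in_channels=(128, 320, 512), out_channels=256):
        super().__init__()
        # One branch per backbone stage: two 3x3 convolutions for feature enhancement.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, out_channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.ReLU(inplace=True),
            ) for c in in_channels
        )
        # Single convolution that coalesces the three branches.
        self.fuse = nn.Conv2d(3 * out_channels, out_channels, 1)

    def forward(self, feats):  # feats: [X1 (H/8), X2 (H/16), X3 (H/32)]
        target = feats[-1].shape[-2:]  # resize everything to H/32 x W/32
        outs = [F.interpolate(b(x), size=target, mode="bilinear", align_corners=False)
                for b, x in zip(self.branches, feats)]
        return self.fuse(torch.cat(outs, dim=1))
```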
Figure 2: Our proposed diffCOD framework for COD, which feeds a given image into a denoising diffusion model with UNet architecture as the core component for denoising. An injection attention module (IAM) is designed to implicitly guide the diffusion process with the conditional semantic features that have gone through the backbone and feature fusion module (FF), allowing the model to take full advantage of the correspondence between image features and diffusion information.

**Injection Attention Module (IAM).** To introduce texture and location information of the original features in the noise prediction process, we employ a cross-attention-based IAM, which is embedded in the middle of the UNet-based denoising network. The multi-scale fusion feature \(\mathcal{F}\) from FF and the deepest feature \(\mathcal{D}\in\mathbb{R}^{\frac{H}{32}\times\frac{W}{32}\times C}\) from the diffusion model serve as the common input to the IAM. Specifically, \(\mathcal{D}\) is transformed by linear projection to generate the query \(\mathbf{Q}^{\mathbf{D}}\), the key \(\mathbf{K}^{\mathbf{D}}\) and the value \(\mathbf{V}^{\mathbf{D}}\). \(\mathcal{F}\) generates \(\mathbf{P}^{\mathbf{F}}\) and \(\mathbf{V}^{\mathbf{F}}\) by linear projection; it is noteworthy that \(\mathcal{F}\) does not generate a query and a key for similarity comparison, but uses the generated \(\mathbf{P}^{\mathbf{F}}\) to act as an intermediary for similarity comparison with \(\mathcal{D}\). This process is defined as follows:
\[\mathbf{Q}^{\mathbf{D}}=\mathcal{D}\cdot\mathcal{W}_{\mathcal{Q}}^{\mathcal{D} },\quad\mathbf{K}^{\mathbf{D}}=\mathcal{D}\cdot\mathcal{W}_{\mathcal{K}}^{ \mathcal{D}},\quad\mathbf{V}^{\mathbf{D}}=\mathcal{D}\cdot\mathcal{W}_{ \mathcal{V}}^{\mathcal{D}} \tag{8}\]
where \(\mathcal{W}_{\mathcal{Q}}^{\mathcal{D}}\), \(\mathcal{W}_{\mathcal{K}}^{\mathcal{D}}\), \(\mathcal{W}_{\mathcal{V}}^{\mathcal{D}}\), \(\mathcal{W}_{\mathcal{P}}^{\mathcal{F}}\), \(\mathcal{W}_{\mathcal{V}}^{\mathcal{F}}\in\mathbb{R}^{d\times d}\) are the projection matrices and \(d\) is the dimensionality.
Thus the IAM operation is defined as follows:
\[\mathbf{M}_{1}^{att}=\mathrm{Softmax}\left(\frac{\mathbf{Q}^{\mathbf{D}}\cdot \left(\mathbf{P}^{\mathbf{F}}\right)^{T}}{\sqrt{d}}\right) \tag{9}\]
\[\mathbf{M}_{2}^{att}=\mathrm{Softmax}\left(\frac{\mathbf{K}^{\mathbf{D}}\cdot \left(\mathbf{P}^{\mathbf{F}}\right)^{T}}{\sqrt{d}}\right) \tag{10}\]
\[O^{I}=\mathbf{M}_{1}^{att}\cdot\mathbf{M}_{2}^{att}\cdot(\mathbf{V}^{D}+ \mathbf{V}^{F}) \tag{11}\]
where \(\mathbf{M}_{1}^{att}\) and \(\mathbf{M}_{2}^{att}\) represent the attention maps of \(\mathbf{Q}^{\mathbf{D}}\)-\(\mathbf{P}^{\mathbf{F}}\) and \(\mathbf{K}^{\mathbf{D}}\)-\(\mathbf{P}^{\mathbf{F}}\), respectively. \(O^{I}\in\mathbb{R}^{\frac{H}{32}\times\frac{\mathrm{W}}{32}\times C}\) denotes the final generated cross-attention fusion feature.
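The cross-attention of Eqs. (8)-(11) can be sketched as below, assuming the two feature maps are flattened into token sequences and a single attention head; the class name and the token-sequence interface are illustrative assumptions.

```
import torch
import torch.nn as nn

class InjectionAttention(nn.Module):
    """Sketch of IAM (Eqs. 8-11): P^F mediates the similarity between Q^D/K^D and F."""
    def __init__(self, dim):
        super().__init__()
        self.q_d = nn.Linear(dim, dim)   # W_Q^D
        self.k_d = nn.Linear(dim, dim)   # W_K^D
        self.v_d = nn.Linear(dim, dim)   # W_V^D
        self.p_f = nn.Linear(dim, dim)   # W_P^F
        self.v_f = nn.Linear(dim, dim)   # W_V^F
        self.scale = dim ** -0.5

    def forward(self, D, F):
        # D, F: [B, N, C] flattened H/32 x W/32 feature maps.
        Q, K, V_d = self.q_d(D), self.k_d(D), self.v_d(D)
        P, V_f = self.p_f(F), self.v_f(F)
        M1 = torch.softmax(Q @ P.transpose(-2, -1) * self.scale, dim=-1)  # Eq. (9)
        M2 = torch.softmax(K @ P.transpose(-2, -1) * self.scale, dim=-1)  # Eq. (10)
        return M1 @ M2 @ (V_d + V_f)                                      # Eq. (11)
```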
### Training
In the forward process, the Gaussian noise \(\epsilon_{t}\) is added to the ground truth \(y_{0}\) to obtain the noise mapping \(y_{t}\sim q\left(y_{t}\mid y_{0}\right)\) by \(T\)-steps. The intensity of the noise is controlled by \(\alpha_{t}\) and conforms to the standard normal distribution. This process can be defined as follows:
\[y_{t}=\sqrt{\alpha_{t}}y_{t-1}+\sqrt{1-\alpha_{t}}\epsilon_{t} \tag{12}\]
where \(t=\left[1,\cdots,T\right]\) and \(\epsilon_{t}\sim\mathcal{N}(0,\mathbf{I})\).
By iterative computation, we can directly obtain \(y_{t}\). This process can be further marginalized as:
\[y_{t}=\sqrt{\bar{\alpha}_{t}}y_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{t} \tag{13}\]
where \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\).
In the reverse process, we map from \(y_{t}\) to \(y_{t-1}\) until the segmented image is acquired step by step. The mathematics is defined as follows:
\[y_{t-1}=\mu_{\theta}\left(y_{t},t,x_{o}\right)+\sqrt{\Sigma_{\theta}\left(y_{t},t,x_{o}\right)}\,\epsilon_{t} \tag{14}\]
We train a denoising UNet model to predict \(\epsilon_{\theta}\left(y_{t},t,x_{o}\right)\):
\[\mu_{\theta}\left(y_{t},t,x_{o}\right)=\frac{\left(y_{t}-\left(\frac{1-\alpha _{t}}{\sqrt{1-\bar{\alpha}_{t}}}\right)\epsilon_{\theta}\left(y_{t},t,x_{o} \right)\right)}{\sqrt{\alpha_{t}}} \tag{15}\]
We follow the improved DDPM [39] to simplify Eq. (4)-(7) and define the hybrid objective \(L_{\mathrm{hybrid}}=L_{\mathrm{simple}}+L_{\mathrm{vlb}}\), where \(L_{\mathrm{vlb}}\) learns the term \(\Sigma_{\theta}\left(y_{t},t,x_{o}\right)\). Furthermore, inspired by [54], we use FF and a convolution layer to provide an initial static prediction \(y_{m}\) to reduce the diffusion variance, and its mean square loss is defined as \(L_{\mathrm{static}}\). The total loss function \(L_{total}\) is defined as follows:
\[\left\{\begin{array}{ll}L_{\mathrm{simple}}&=\mathbb{E}_{t\sim\left[1,T \right],y_{0}\sim q\left(y_{0}\right),\epsilon}\left\|\epsilon-\epsilon_{ \theta}\left(y_{t},t,x_{o}\right)\right\|^{2}\\ L_{\mathrm{static}}&=\mathbb{E}_{y_{0}\sim q\left(y_{0}\right),y_{m}}\left\|y_{0 }-y_{m}\right\|^{2}\\ L_{\mathrm{total}}&=L_{\mathrm{simple}}+L_{\mathrm{vlb}}+L_{\mathrm{static}} \end{array}\right. \tag{16}\]
Algorithm 1 provides the training procedure for diffCOD.
```
def training_loss(images, masks):
    """images: [b, h, w, 3], masks: [b, h, w, 1]"""
    # Encode images
    X_p = ViT(images)
    F = FF(X_p)

    # Corrupt ground truth
    t = uniform(0, 1)
    eps = normal(mean=0, std=1)
    mask_crpt = sqrt(gamma(t)) * masks + sqrt(1 - gamma(t)) * eps

    # Predict and backward
    D = UNet_1(images, mask_crpt, t)
    O = IAM(F, D)
    preds = UNet_2(O)

    # Compute loss
    loss = loss_function(preds, masks)
    return loss
```
**Algorithm 1** diffCOD Training
### Inference
In the inference stage, we start from pure Gaussian noise \(y_{T}\sim\mathcal{N}(0,I)\) and apply Eq. (14) step by step. In addition, we add conditional information related to the image features to guide the inference process. After performing \(T\) iterations, we obtain the segmentation image of the camouflaged object. Using the setting of [39] for sampling, the inference process of diffCOD is shown in Algorithm 2.
```
def inference(images, steps):
    """images: [b, h, w, 3], steps: sample steps"""
    # Encode images
    X_p = ViT(images)
    F = FF(X_p)
    m_t = normal(mean=0, std=1)

    # Time intervals
    for step in range(steps):
        m_t = p_sample(images, F, m_t, step)  # progressively denoise the mask
    return m_t
```
**Algorithm 2** diffCOD Inference
## 4 Experiments
### Experimental Setup
**Datasets.** We conduct experiments on four widely used benchmark datasets of the COD task, _i.e._, CAMO, CHAMELEON, COD10K and NC4K. The details of each dataset are as follows:
* CAMO contains 1,250 camouflaged images and 1,250 non-camouflaged images, covering eight categories.
* CHAMELEON has a total of 76 camouflaged images.
* COD10K consists of 5,066 camouflaged, 1,934 non-camouflaged, and 3,000 background images. It is currently the largest dataset which covers 10 superclasses and 78 subclasses.
* NC4K is a newly published dataset that has a total of 4,121 camouflaged images.
Following the standard practice of COD tasks, we use 3,040 images from COD10K and 1,000 images from CAMO as the training set and the remaining data as the test set.
**Evaluation metrics.** According to the standard evaluation protocol of COD, we employ five common metrics to evaluate our model, _i.e._, structure-measure (\(S_{\alpha}\)), weighted F-measure (\(F_{\beta}^{\omega}\)), mean F-measure (\(F_{m}\)), mean E-measure (\(E_{m}\)) and mean absolute error (\(MAE\)). The structure-measure (\(S_{\alpha}\)) evaluates the structural agreement between the result and the ground truth, including object-aware and region-aware perception. The weighted F-measure \(F_{\beta}^{\omega}\) is a weighted version of the mean F-measure (\(F_{m}\)) metric, and these two metrics jointly assess the precision and recall of the result. The mean E-measure (\(E_{m}\)) performs both pixel-level matching and image-level statistics, and is used to calculate the overall and local accuracy of the segmentation results. The mean absolute error (\(MAE\)) metric evaluates the average pixel-level absolute error between the result and the ground truth.
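As a minimal illustration, the two simplest metrics can be computed as below. Note that the official evaluation protocols use more elaborate settings (e.g., multiple or adaptive binarization thresholds for the F-measure), so the fixed threshold here is an assumption for illustration only.

```
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] prediction map and a binary ground truth."""
    return np.mean(np.abs(pred.astype(np.float64) - gt.astype(np.float64)))

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """F-measure with a fixed binarization threshold (illustrative simplification)."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```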
**Implementation details.** The proposed method is implemented with the PyTorch toolbox. We set the number of time steps to \(T\) = 1000 with a linear noise schedule for all the experiments. We use Adam as our model optimizer with a learning rate of 1e-4. The batch size is set to 64. During training, the input images are resized to 256\(\times\)256 via bilinear interpolation and augmented by random flipping, cropping, and color jittering.
**Baselines.** Our diffCOD is compared with 11 recent state-of-the-art methods, including CPD [56], EGNet [62], SINet [13], MINet [41], PraNet [15], PFNet [37], LSR [36], ERRNet [25], NCHIT [60], CubeNet [66], CRNet [18]. For a fair comparison, all results are either provided by the authors or reproduced by an open-source model re-trained on the same training set with the recommended setting.
### Quantitative Evaluation
The quantitative comparison of our proposed diffCOD with 11 state-of-the-art methods is shown in Table 1. Our method achieves superior performance over other competitors, indicating that our model can generate high-quality camouflaged segmentation masks compared to previous methods. For the largest COD10K dataset, our method shows a substantial performance jump, with an average increase of 4.8%, 12.8%, 9.5%, 6.4% and 19.1% for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\), respectively. For another recent large-scale dataset, NC4K, diffCOD also outperforms all methods, improving by 3.4%, 7.1%, 6.1%, 4.0% and 14.8% on average for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\), respectively. In addition, the most significant increases on the CAMO dataset are seen in \(F_{\beta}^{\omega}\) and \(MAE\), with improvements of 10.2% and 11.3%, respectively. CHAMELEON is the smallest COD dataset, and therefore most methods perform inconsistently on it; our method improves by 3.0%, 6.2%, 4.0%, 2.6% and 21.2% for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\), respectively.
### Qualitative Evaluation
Figure 4 shows a comprehensive visual comparison with current state-of-the-art methods. It can be found that our method achieves competitive visual performance in different types of challenging scenarios. Our diffCOD is able to guarantee the integrity and correctness
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c|c c c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**COD10K**} & \multicolumn{4}{c|}{**NCAK**} & \multicolumn{4}{c|}{**CAMO**} & \multicolumn{4}{c}{**CHAMELEON**} \\ \cline{2-13} & \(S_{\alpha}\) & \(F_{\beta}^{\omega}\) & \(F_{m}\) & \(F_{m}\) & \(MAE\) & \(\downarrow\) & \(S_{\alpha}\) & \(F_{\beta}^{\omega}\) & \(F_{\alpha}\uparrow\) & \(F_{m}\uparrow\) & \(MAE\) & \(\downarrow\) & \(S_{\alpha}\) & \(F_{\beta}^{\omega}\) & \(F_{\alpha}\uparrow\) & \(F_{m}\uparrow\) & \(MAE\) & \(\downarrow\) \\ \hline
2019 CFD [56] & 0.736 & 0.547 & 0.607 & 0.801 & 0.033 & 0.769 & 0.652 & 0.713 & 0.822 & 0.072 & 0.688 & 0.552 & 0.623 & 0.728 & 0.114 & 0.876 & 0.809 & 0.821 & 0.914 & 0.036 \\
2019 EGNet [62] & 0.746 & 0.560 & 0.591 & 0.789 & 0.053 & 0.804 & 0.727 & 0.731 & 0.834 & 0.066 & 0.730 & 0.579 & 0.693 & 0.762 & 0.104 & 0.851 & 0.705 & 0.747 & 0.869 & 0.049 \\
2020 SNNet [13] & 0.772 & 0.543 & 0.640 & 0.810 & 0.810 & 0.658 & 0.741 & 0.841 & 0.066 & 0.753 & 0.602 & 0.674 & 0.747 & 0.867 & 0.727 & 0.792 & 0.889 & 0.044 \\
2020 Mbitr [41] & 0.780 & 0.628 & 0.677 & 0.338 & 0.640 & 0.810 & 0.717 & 0.764 & 0.856 & 0.857 & 0.741 & 0.629 & 0.642 & 0.783 & 0.096 & 0.853 & 0.768 & 0.803 & 0.902 & 0.035 \\
2020 PshNet [15] & 0.800 & 0.656 & 0.699 & 0.869 & 0.041 & 0.826 & 0.739 & 0.780 & 0.878 & 0.056 & 0.769 & 0.664 & 0.716 & 0.812 & 0.091 & 0.870 & 0.790 & 0.816 & 0.915 & 0.039 \\
2021 PFNet [37] & 0.797 & 0.656 & 0.698 & 0.875 & 0.039 & 0.826 & 0.743 & 0.783 & 0.884 & 0.054 & 0.774 & 0.683 & 0.737 & 0.832 & 0.087 & 0.889 & 0.823 & **0.840** & **0.946** & **0.030** \\
2021 LSE [36] & 0.805 & 0.606 & 0.703 & 0.876 & 0.039 & 0.832 & 0.743 & 0.785 & 0.888 & 0.053 & 0.793 & 0.703 & 0.758 & 0.850 & 0.808 & 0.890 & 0.824 & 0.834 & 0.922 & 0.034 \\
2022 ERRNet [25] & 0.730 & 0.623 & 0.679 & 0.867 & 0.044 & — & — & — & — & — & — & 0.761 & 0.660 & 0.719 & 0.817 & 0.088 & 0.877 & 0.805 & 0.821 & 0.927 & 0.036 \\
2022 NCHIT [60] & 0.790 & 0.608 & 0.689 & 0.817 & 0.946 & — & — & — & — & — & 0.780 & 0.671 & 0.733 & 0.803 & 0.088 & 0.874 & 0.793 & 0.812 & 0.891 & 0.041 \\
2022 Cuhei [66] & 0.795 & 0.644 & 0.681 & 0.864 & 0.041 & — & — & — & — & — & — & 0.788 & 0.682 & 0.743 & 0.838 & 0.085 & 0.873 & 0.787 & 0.823 & 0.928 & 0.037 \\
2023 CRNet [18] & 0.733 & 0.576 & 0.627 & 0.832 & 0.049 & — & — & — & — & — & 0.735 & 0.641 & 0.702 & 0.815 & 0.092 & 0.818 & 0.744 & 0.756 & 0.897 & 0.046 \\ \hline diffCOD & **0.812** & **0.684** & **0.723** & **0.892** & **0.806** & **0.857** & **0.761** & **0.802** & **0.891** & **0.851** & **0.795** & **0.704** & **0.758** & **0.852** & **0.802** & **0.893** & **0.826** & 0.837 & 0.933 & **0.830** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparisons of our proposed method and other 11 state-of-the-art methods on four widely used benchmark datasets.
of recognition even under difficult conditions, such as single objects (_e.g._, rows 1-4), multiple objects (_e.g._, rows 5-8), and small objects (_e.g._, rows 9-11). Camouflaged organisms in nature often have unusual traits, such as tentacles and tiny spikes. Past models blur the edge regions even when the location of the target is correctly identified. In contrast, diffCOD shows clear advantages on detailed textures. As shown in Figure 3, our method is able to accurately identify every subtlety, and it can depict the textures of the object in extremely fine detail, alleviating the blurred segmentation masks produced by other methods.
### Ablation Studies
**Overview.** We perform ablation studies on key components to verify their effectiveness and analyze their impacts on performance, as shown in Table 2. Experimental results demonstrate that our designed Injection Attention Module (IAM), Feature Fusion (FF), and ViT can improve detection performance. When they are combined to build diffCOD, significant improvements in all evaluation metrics are observed. Note that the Baseline refers to the standard diffusion model without any of these components.
Figure 4: Qualitative comparison of our proposed method and other representative COD methods. Our method provides better performance than all competitors for camouflaged object segmentation in various complex scenes.
**Effectiveness of IAM.** As can be seen in Table 2, the presence or absence of IAM plays a key role in the performance improvement of the model. Compared to the experiments without this key component, the average improvement of #2 with IAM over #1 for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\) on the three datasets is 3.0%, 4.3%, 4.7%, 2.1% and 7.7%, respectively. Furthermore, the accuracy improvement of #Ours over #5 is significant, with an average gain of 6.0% in the \(MAE\) metric on the three datasets. This indicates that IAM effectively integrates diffusion features with the texture features from the backbone.
**Effectiveness of FF.** The main role of FF is to aggregate the multi-scale features. As shown in Table 2, compared to No. #2, No. #3 has an average improvement of 2.2%, 5.0%, 3.8%, 2.5% and 6.0% for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\) on the three datasets, respectively. The performance of #Ours on \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\) and \(E_{m}\) is 3.2%, 1.0%, 0.7% and 0.3% higher than that of No. #4.
**Effectiveness of ViT.** To obtain the location information and texture information of the objects in the original features, we use a ViT as a backbone to assist the diffusion process. From Table 2, we can learn that #Ours, containing rich original features, has an average improvement of 2.1%, 4.5%, 3.8%, 2.1% and 6.3% over #3 for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\) on the three datasets, respectively. #2, which contains no original features at all, is on average 4.0%, 7.5%, 6.6%, 3.9% and 10.6% lower than #4 for \(S_{\alpha}\), \(F_{\beta}^{\omega}\), \(F_{m}\), \(E_{m}\) and \(MAE\) on the three datasets, respectively. In addition, to further demonstrate the significance of conditional semantic features in guiding the diffusion process, we visualize the sampling process of diffCOD. From Figure 5, we can see that our model learns part of the location information and texture patterns of the camouflaged objects at the early stage of denoising, and the subsequent inference steps gradually refine the final mask on this basis. This shows that the key clues extracted by ViT are well integrated into the diffusion process with the help of FF and IAM.
## 5 Conclusion
In this paper, we propose a diffusion-based framework for camouflaged object detection, which changes the previous detection paradigm of the COD community by using a generative model for the segmentation of camouflaged objects to achieve significant performance gains. To the best of our knowledge, this is the first framework that employs a denoising diffusion model for COD tasks. Our approach decouples the task of segmenting camouflaged objects into a series of forward and reverse diffusion processes, and integrates key information from conditional semantic features to guide this process. Extensive experiments show the superiority over 11 other state-of-the-art methods on four datasets. As a new paradigm for camouflaged object detection, we hope that our proposed method will serve as a solid baseline and encourage future research.
\begin{table}
\begin{tabular}{c|cccc|ccccc|ccccc|ccccc} \hline \hline \multirow{2}{*}{No.} & \multicolumn{4}{c|}{\textbf{Component}} & \multicolumn{5}{c|}{\textbf{COD10K}} & \multicolumn{5}{c|}{\textbf{NC4K}} & \multicolumn{5}{c}{\textbf{CAMO}} \\ \cline{2-20} & Baseline & IAM & FF & ViT & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\omega}\uparrow\) & \(F_{m}\uparrow\) & \(E_{m}\uparrow\) & \(MAE\downarrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\omega}\uparrow\) & \(F_{m}\uparrow\) & \(E_{m}\uparrow\) & \(MAE\downarrow\) & \(S_{\alpha}\uparrow\) & \(F_{\beta}^{\omega}\uparrow\) & \(F_{m}\uparrow\) & \(E_{m}\uparrow\) & \(MAE\downarrow\) \\ \hline
\#1 & \(\checkmark\) & & & & 0.761 & 0.604 & 0.657 & 0.845 & 0.046 & 0.781 & 0.687 & 0.712 & 0.841 & 0.061 & 0.731 & 0.607 & 0.664 & 0.790 & 0.097 \\
\#2 & \(\checkmark\) & \(\checkmark\) & & & 0.788 & 0.638 & 0.687 & 0.861 & 0.041 & 0.805 & 0.711 & 0.747 & 0.863 & 0.056 & 0.749 & 0.631 & 0.694 & 0.805 & 0.093 \\
\#3 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & & 0.801 & 0.662 & 0.709 & 0.876 & 0.039 & 0.823 & 0.731 & 0.772 & 0.876 & 0.054 & 0.770 & 0.664 & 0.718 & 0.829 & 0.087 \\
\#4 & \(\checkmark\) & \(\checkmark\) & & \(\checkmark\) & 0.809 & 0.677 & 0.719 & 0.888 & 0.036 & 0.835 & 0.758 & 0.798 & 0.889 & 0.051 & 0.792 & 0.693 & 0.751 & 0.849 & 0.083 \\
\#5 & \(\checkmark\) & & \(\checkmark\) & \(\checkmark\) & 0.799 & 0.657 & 0.708 & 0.868 & 0.039 & 0.820 & 0.727 & 0.770 & 0.872 & 0.054 & 0.772 & 0.663 & 0.722 & 0.831 & 0.086 \\ \hline
\#Ours & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \textbf{0.812} & \textbf{0.684} & \textbf{0.723} & \textbf{0.892} & \textbf{0.036} & \textbf{0.837} & \textbf{0.761} & \textbf{0.802} & \textbf{0.891} & \textbf{0.051} & \textbf{0.795} & \textbf{0.704} & \textbf{0.758} & \textbf{0.852} & \textbf{0.082} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation studies of our diffCOD. The best results are marked in **bold**.
Figure 5: Visual results of the sampling process. (c)-(g) show the diffCOD sampling process at time steps 200, 400, 600, 800, and 1000, respectively.
|
2308.14376
|
Are Existing Out-Of-Distribution Techniques Suitable for Network
Intrusion Detection?
|
Machine learning (ML) has become increasingly popular in network intrusion
detection. However, ML-based solutions always respond regardless of whether the
input data reflects known patterns, a common issue across safety-critical
applications. While several proposals exist for detecting Out-Of-Distribution
(OOD) in other fields, it remains unclear whether these approaches can
effectively identify new forms of intrusions for network security. New attacks,
not necessarily affecting overall distributions, are not guaranteed to be
clearly OOD as instead, images depicting new classes are in computer vision. In
this work, we investigate whether existing OOD detectors from other fields
allow the identification of unknown malicious traffic. We also explore whether
more discriminative and semantically richer embedding spaces within models,
such as those created with contrastive learning and multi-class tasks, benefit
detection. Our investigation covers a set of six OOD techniques that employ
different detection strategies. These techniques are applied to models trained
in various ways and subsequently exposed to unknown malicious traffic from the
same and different datasets (network environments). Our findings suggest that
existing detectors can identify a consistent portion of new malicious traffic,
and that improved embedding spaces enhance detection. We also demonstrate that
simple combinations of certain detectors can identify almost 100% of malicious
traffic in our tested scenarios.
|
Andrea Corsini, Shanchieh Jay Yang
|
2023-08-28T07:49:01Z
|
http://arxiv.org/abs/2308.14376v1
|
# Are Existing Out-Of-Distribution Techniques Suitable for Network Intrusion Detection?
###### Abstract
Machine learning (ML) has become increasingly popular in network intrusion detection. However, ML-based solutions always respond regardless of whether the input data reflects known patterns, a common issue across safety-critical applications. While several proposals exist for detecting Out-Of-Distribution (OOD) in other fields, it remains unclear whether these approaches can effectively identify new forms of intrusions for network security. New attacks, not necessarily affecting overall distributions, are not guaranteed to be clearly OOD as instead, images depicting new classes are in computer vision. In this work, we investigate whether existing OOD detectors from other fields allow the identification of unknown malicious traffic. We also explore whether more discriminative and semantically richer embedding spaces within models, such as those created with contrastive learning and multi-class tasks, benefit detection. Our investigation covers a set of six OOD techniques that employ different detection strategies. These techniques are applied to models trained in various ways and subsequently exposed to unknown malicious traffic from the same and different datasets (network environments). Our findings suggest that existing detectors can identify a consistent portion of new malicious traffic, and that improved embedding spaces enhance detection. We also demonstrate that simple combinations of certain detectors can identify almost 100% of malicious traffic in our tested scenarios.
## I Introduction
Network Intrusion Detection Systems (NIDS) monitor the network traffic for signs of potential threats with various techniques, including signature detection, anomaly detection, and behavioral analysis [1, 2]. Network traffic can be analyzed either at a Packet Capture or Network Flow [3] (NetFlow) level, though packet inspection has become less common due to encryption and the massive size of modern traffic. We focus on NetFlow inspection, where packets relating to a single communication [3] are analyzed by measuring aggregated features such as idle times and the amount of exchanged data.
Recently, Machine Learning has gained popularity in NIDS [4] as it enables automatic extraction of complex detection patterns, quick adaptation to changing environments [5], and easy personalization without expensive human expertise. However, ML-based solutions have limitations such as lacking interpretability and requiring well-crafted training data. Although a large body of research is addressing these issues [4], we focus on another drawback: _ML-based NIDSs always provide a response regardless of whether they recognize (are trained with) the input data pattern_.
This issue is particularly relevant since network traffic tends to have _dynamic and non-stationary distributions_, either caused by normal behavioral shifts or adversaries, which can cause degradation in NIDS performance: a problem known as _concept drift_[6]. Another inherent problem of dynamic distributions is Out-Of-Distribution data, which is _unusual traffic markedly different from a reference distribution_ not necessarily affecting the overall data distribution. In general, concept drift and OOD data are both caused by shifts in feature distributions, label distributions, or both [6, 7].
Based on our analysis, an ML-based NIDS may be affected in various ways after deployment. In Fig. 1, we present 4 exemplar cases in a NIDS trained to detect Botnet and SQL injections. Normally, the NIDS is expected to work as in Case 1, where new traffic is well represented by training data (i.i.d. assumption). However, it is normal for traffic to shift over time and this may affect NIDSs depending on the shift extent and direction. For instance, the shift to SQL traffic in Case 2 does not compromise the NIDS, but the same is not true for benign traffic. Moreover, new traffic may also be adversarially crafted (Case 3) and be potentially mistaken as in Case 4.

Fig. 1: Different situations in the decision space of a deployed ML-based NIDS. Case 1 is the expected situation where the new traffic respects the i.i.d. assumption. Case 2 depicts traffic that is (gradually) shifting due to changes in malicious and benign behaviors towards relatively known (usual) regions. Case 3 shows new unknown traffic (of potentially different classes) falling into unusual regions of the decision space. Case 4 describes a challenging situation where a new attack is crafted so that it is misclassified by the NIDS.
We argue that shifts as those in Fig. 1 happen more in NetFlow features (_covariate shift_[7, 8]) rather than in labels (_actual or semantic shift_[6, 7]). As in Case 2, malicious traffic remains malicious if its features are adversarially crafted to evade detection, while shifts in normal user behaviors should not transform benign traffic into malicious. Even in cases of new attacks (Case 3), it might be possible to experience shifts in feature distributions [7]. Additionally, we argue that OOD techniques sensitive to small feature perturbations can also serve as drift detectors by monitoring the volume of alerts over time either with standard statistics or existing methods [6, 8, 9]. Therefore, in this work, we adapt and evaluate techniques to detect shifts primarily affecting features.
A perfect OOD detector should trigger an alert and request expert investigation in every situation but Case 1. However, Case 4 is extremely difficult to detect without any additional information besides the ML-based NIDS and training data, whereas well-designed OOD detectors should identify cases like 2 and 3. We thus investigate whether traffic generated by new attacks, either similar to those in training or completely different, can be detected as OOD by existing techniques from other ML fields. Note that it is not guaranteed that effective techniques in other fields are suitable for network intrusion. As an example, detectors that work well on data like images, with bounded and discrete domains, might prove ineffective on NetFlows, where features are generally a mix of continuous unbounded and discrete values.
Therefore, we select detection techniques of different natures from other ML fields and evaluate whether such techniques can identify NetFlows of unknown attacks. As a baseline model, we consider a standard FeedForward Neural Network, and we also assess the effect of different training regimes on the quality of detection techniques. Specifically, we train models in _binary_ (different attacks in the same class) and _multi-class_ (each attack forms a class) settings, with and without the aid of a simple Contrastive Learning approach: _Center-Loss_[10]. We expose various combinations of models and detection techniques to malicious traffic generated from attack types not seen in training, where such traffic may come from the same dataset (same network environment) or a different dataset (different network environment). Finally, we evaluate two ensembles of OOD techniques to enhance detection and further explore the complementarity of these techniques, providing guidelines for practical applications. All our code and the numerical results are freely available at [https://github.com/AndreaCorsini1/CyberOOD](https://github.com/AndreaCorsini1/CyberOOD)
The contributions of this paper include:
* We investigate the effectiveness of treating the identification of unknown intrusions as an OOD detection problem and explore the applicability of existing OOD techniques.
* We identify the most effective techniques for detecting new intrusions and explore their potential for combination to enhance detection. We also discuss limitations of some techniques, providing insights for further development.
* We emphasize the significance of improving the model embeddings to achieve better detection, highlighting that:
* _Contrastive Learning_, specifically the use of _Center-Loss_, enables the creation of embeddings that improve OOD techniques and their ensemble.
* _Multi-class_ training allows making semantically richer embeddings, which offer advantages over binary ones for OOD techniques and their ensembles.
The remainder is organized as follows: Sec. II presents existing OOD literature; Sec. III describes key concepts for our work; Sec. IV outlines our methodology; Sec. V describes the experimental setup; Sec. VI presents results; and Sec. VII closes with limitations and potential future directions.
## II Related Works
In this section, we present various Out-Of-Distribution detection techniques and we review recent proposals to identify and react to shifts in the NIDS literature.
### _Out-Of-Distribution in Machine Learning_
Machine learning models are trained under the closed-world assumption, where test data is drawn i.i.d. from the same distribution as the training data. However, this assumption is often violated and several ML fields try to address the issue of identifying unknown/anomalous/out-of-distribution data:
* _Anomaly detection_[11] aims to detect anomalous inputs that deviate from normality, whether in features or labels. Anomaly detection assumes there might be abnormal data in the training set [12] and treat data as a whole, thus it does not strictly require the correct classification of inputs.
* _Novelty detection_ is similar to anomaly detection, but assumes the presence of only normal data in the training set and focuses on inputs affected by semantic shift [7], hence not falling into any of the training classes. In addition, novel inputs are not treated as erroneous and are typically prepared for retraining and future constructive procedures.
* _Open Set Recognition_[13] goes beyond novelty detection and also requires the correct classification of in-distribution (ID) data. The goal is to detect inputs belonging to new classes and correctly classify those from known classes. Open Set Recognition is usually focused on semantic shifts.
* _Outlier detection_ identifies inputs in a dataset that markedly differ from others. Outlier detection is a pre-processing step and is not applied during inference or training.
As introduced in Sec. I, the NIDS setting requires the classification of known traffic and the detection of shifts caused either by modifications in known traffic or the appearance of unknown traffic. This setting resembles the Open Set Recognition one, but it additionally comprises shifts not implying the appearance of new classes. Therefore, we speak of Out-Of-Distribution detection in general terms.
**Confidence-based** detectors use estimates derived from a model to quantify the level of certainty or trust in its predictions as an indicator of ID-ness. In [14], the authors observed
that well-trained models assign lower confidence scores to OOD data. Subsequent studies [15, 16, 17] have proposed techniques to enhance confidence estimation, while others have introduced modifications to the model architecture and training objectives [18, 19]. Although confidence is not always a reliable OOD indicator [20, 21], due to their simplicity and clarity, confidence-based detectors are commonly used in practice and serve as a baseline for OOD detection.
**Density-based** detectors explicitly model the distribution of ID data, either raw or latent features, and flag samples falling into low-density regions as OOD. In multi-class tasks, class-conditional distribution estimators are often employed so that the OOD samples can be identified based on their likelihood [22, 23]. To model the class-conditional distribution of ID data, parametric and non-parametric models such as a simple Mixture of Gaussian, Kernel Density Estimation, and deep generative models [7] are frequently used. However, modeling the distribution of complex data and estimating the likelihood may be challenging [23], imply a-priori assumptions that need validation, and do not always scale well like in kernel estimators. Therefore, we prefer to avoid these detectors and leave their evaluation to future work.
**Distance-based** detectors are based on the idea that the OOD samples should be relatively far away from centroids or prototypes of ID classes. Once a prototype is extracted for each training class, a distance metric like Mahalanobis, Euclidean, or Cosine can be used to estimate the class similarity and flag samples that are not similar enough to any of the prototypes [7, 22]. Recently, even a class-conditioned K-Nearest Neighbor approach [24] has been adopted to detect OOD samples based on the distance from the k-nearest neighbor.
### _Out-Of-Distribution in Network Intrusion_
A significant portion of the network intrusion literature on ML applications focuses on anomaly detection [25, 26] and concept drift [27, 28, 6, 29]. Anomaly techniques, such as autoencoders [30], have gained interest due to their ability to detect unknown attacks using only normal traffic and without requiring labels. However, these methods often suffer from a high number of false alarms as they flag any anomalous sample as an attack [4]. In contrast, concept drift and OOD detectors are generally more effective but typically require labeled data [6, 7]. Therefore, recent works proposed ML-based solutions that ease the need for labels without increasing false alarms by leveraging anomaly detection techniques. For instance, [31] proposed an efficient and online ensemble of autoencoders that utilizes an ad-hoc feature extraction module to differentiate normal and abnormal patterns in packets. Similarly, [31] introduced an adaptive ensemble system that incorporates a packet-based feature extraction method and a sub-classifier generation module to create ensemble models from drifted data chunks and ground truth labels. [32] modified the extreme gradient boosting algorithm to detect and adapt to drifts in the presence of a large number of features. [29] employed active learning, label estimation, and an explainable ML framework to respectively update the model, reduce labeling overhead, and interpret model reactions to shifts.
In a context akin to ours, [5] utilized a contrastive loss signal alongside a distance function capturing instance and class-level fidelity to recursively update the encoder network. Similarly, [27] employed Contrastive Learning to create a compressed representation of training data which is used to detect drifting samples with class centroids. Both these works use a contrastive signal that pulls embeddings of the same class together and pushes those of different classes apart; we instead rely on Center-Loss [10]. Moreover, these works use autoencoders while we adopt a FeedForward Network.
## III Gradient Detection & Contrastive Learning
This section presents key components of our study, specifically gradient-based detectors and Contrastive Learning. We represent an ML-based NIDS with a parameterized model \(f\) that maps NetFlows \(x_{i}\in\mathbb{R}^{d}\) into a class \(\overline{y}=\arg\max_{j\in C}z_{j}\), where \(C\) is the set of training classes and \(z_{j}=f_{j}(x_{i})\) is the logit (pre-softmax) score produced by \(f\) for class \(j\in C\). Additionally, we assume the model also outputs the embedded representation \(e_{i}\in\mathbb{R}^{w}\) of \(x_{i}\) constructed by the last embedding layer, i.e., the one before the classifier layer. Refer to the left part of Fig. 2 for a graphical representation.
### _Gradient-based Detection: ODIN and Mahalanobis_
Most OOD detectors rely on information extracted from models to derive OOD scores, disregarding information on the gradient. In [15], the authors observed that adding a fixed perturbation to samples in the direction of the gradient amplifies the gap between ID and OOD softmax scores. Thus, the idea behind Out-of-DIstribution detectioN (ODIN) [15] is to jointly apply _temperature scaling_[33] and a _controlled perturbation_ to detect OOD data. ODIN consists of the following steps:
1. **Temperature Scaling**: divides the logits \(z_{j}\) by a temperature \(T\) that reduces the sharpness of the softmax distribution and makes the model less confident.
2. **Perturbation**: involves adding a perturbation \(\epsilon\) to \(x_{i}\) in the direction given by the sign of the gradient: \[\widehat{x_{i}}=x_{i}-\epsilon\text{sign}(\nabla p_{j}^{*})\] (1) where \(p_{j}^{*}=\max_{j\in C}p_{j}\) is the maximum softmax score for \(x_{i}\) after temperature scaling. This perturbation pushes samples toward their nearest class.
3. **Detection**: computes an ID score by feeding \(\widehat{x_{i}}\) into the model again; if this score is above a threshold, \(x_{i}\) is ID.
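A minimal sketch of these three steps is given below, assuming a PyTorch classifier `model` that returns logits. The perturbation is written as in the original ODIN formulation (pushing the input toward a higher maximum softmax score), and the temperature, perturbation magnitude, and detection threshold are hyperparameters to be tuned on validation data.

```
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.002):
    """ODIN in-distribution score for a batch x (higher = more in-distribution)."""
    x = x.detach().clone().requires_grad_(True)
    # 1) Temperature scaling of the logits.
    probs = F.softmax(model(x) / temperature, dim=1)
    # 2) Move the input against the gradient of the negative max log-softmax score.
    loss = -torch.log(probs.max(dim=1).values + 1e-12).sum()
    loss.backward()
    x_hat = x - epsilon * x.grad.sign()
    # 3) Re-score the perturbed input and keep the maximum softmax score.
    with torch.no_grad():
        return F.softmax(model(x_hat) / temperature, dim=1).max(dim=1).values

# A NetFlow is flagged as OOD when odin_score(...) falls below a validated threshold.
```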
Another similar gradient-based method is the Mahalanobis Detector (MD) [12], where the Mahalanobis distance is used to measure how "typical" a point is with respect to a learned latent distribution. The Mahalanobis distance requires an estimate of the mean \(\mu\) and covariance matrix \(\Sigma\) for each ID class, which are normally extracted from the training set. After having these parameters, MD applies a controlled perturbation as in ODIN, but without temperature scaling and where the
gradient is computed with respect to the distance between \(e_{i}\) and the nearest class distribution (\(dist_{MD}(\cdot)\)):
\[\widehat{x_{i}}=x_{i}-\epsilon\text{sign}(\nabla dist_{MD}(e_{i})) \tag{2}\]
The final OOD detection is similar to ODIN: a threshold is first extracted from the validation, and every perturbed \(\widehat{x_{i}}\) with a distance higher than this threshold is labeled as OOD.
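A condensed sketch of the Mahalanobis-based score is shown below, assuming per-class means and a shared (tied) covariance estimated from training embeddings; the input perturbation of Eq. (2) is omitted for brevity, and the regularization constant is an illustrative assumption.

```
import torch

def fit_gaussian_stats(embeddings, labels, num_classes):
    """Estimate per-class means and a shared covariance from training embeddings."""
    means = torch.stack([embeddings[labels == c].mean(dim=0) for c in range(num_classes)])
    centered = embeddings - means[labels]
    cov = centered.T @ centered / len(embeddings)
    cov_inv = torch.linalg.inv(cov + 1e-6 * torch.eye(cov.shape[0]))
    return means, cov_inv

def mahalanobis_score(e, means, cov_inv):
    """Squared distance to the closest class distribution; large values suggest OOD."""
    diff = e.unsqueeze(1) - means.unsqueeze(0)               # [B, C, w]
    d2 = torch.einsum("bcw,wv,bcv->bc", diff, cov_inv, diff)
    return d2.min(dim=1).values
```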
### _Contrastive Learning and Center Loss_
Contrastive Learning is a self-supervised technique designed to learn meaningful embedding representations. It achieves this by bringing similar input samples closer together in the learned embedding space while pushing dissimilar apart [34]. By doing so, Contrastive Learning encourages the model to capture discriminative features that can be useful for various downstream tasks. In a typical contrastive framework, each sample in a batch is augmented through ad-hoc transformations (such as random cropping and flipping for images) into new samples called the positives, while the original sample is referred to as the anchor. The objective is to maximize the similarity between the anchor and the positives while minimizing the similarity between the anchor and other batch samples.
One of the precursor techniques to Contrastive Learning is Center-Loss [10] (CL). CL encourages a model to learn discriminative embeddings \(e_{i}\) that cluster around their class centers. It accomplishes this by defining a center \(c_{j}\in\mathbb{R}^{w}\) for each class \(j\in C\) and introducing an additional term to the standard cross-entropy loss. This additional term minimizes the distance between the embeddings and their corresponding class centers, which are determined by the ground-truth labels. The class centers are learned alongside the model's parameters by minimizing the additional Center-Loss term:
\[L_{CL}=\frac{1}{2}\sum_{i\in B}||e_{i}-c_{j}^{*}||^{2} \tag{3}\]
where the sum is over the batch samples \(B\) and \(c_{j}^{*}\) is the ground-truth center of each input. The overall loss is thus a linear combination of Cross-Entropy (\(L_{CE}\)) and Center-Loss: \(L=L_{CE}+\lambda L_{CL}\), where \(\lambda\) is a hyperparameter that controls the weight of \(L_{CL}\).
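A minimal sketch of Eq. (3) and of the combined objective is given below, with the centers stored as learnable parameters optimized jointly with the model; the number of classes, embedding width, and value of \(\lambda\) are illustrative assumptions.

```
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center-Loss term of Eq. (3): pull embeddings toward their ground-truth class centers."""
    def __init__(self, num_classes=4, embed_dim=2):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings, labels):
        # 0.5 * sum over the batch of squared distances to the ground-truth centers.
        return 0.5 * ((embeddings - self.centers[labels]) ** 2).sum()

# Overall objective: L = L_CE + lambda * L_CL.
cross_entropy = nn.CrossEntropyLoss()
center_loss = CenterLoss()

def total_loss(logits, embeddings, labels, lam=0.01):
    return cross_entropy(logits, labels) + lam * center_loss(embeddings, labels)
```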
## IV Methodology & Design Choices
Herein, we present our model architecture, the adapted Center-Loss for our settings, and the selected OOD detectors.
### _The Model Architecture_
Although it might be possible to design ad-hoc architectures for OOD detection tasks [19, 35], we prefer to avoid them and make no particular assumption about the model. We only require for a NetFlow \(x_{i}\in\mathbb{R}^{d}\) to have access to its pre-softmax score \(z_{i}\) and to an embedded representation \(e_{i}\in\mathbb{R}^{w}\) produced within the model, like the one generated before the classification layer. Therefore, the architecture can comprise any layer like convolutional, linear, and recurrent ones [30].
We logically divide our model into two parts: (i) an encoder that transforms NetFlows \(x_{i}\) into embeddings \(e_{i}\), and (ii) a classifier that uses \(e_{i}\) to produce a softmax score for each class. The proposed encoder is composed of four linear layers of decreasing size, each activated through a LeakyReLU non-linearity with a slope of 0.15. We also apply dropout after the first three layers. The classifier is a single linear layer that has as many neurons as the number of output classes. We will refer to such a model as Feedforward Neural Network (FNN) and provide in the left part of Fig. 2 a visual representation.
### _Improving the Model Embedding_
Recently, Contrastive Learning has been widely adopted to improve the performance of ML in different tasks [34]. In our work, we propose to use Contrastive Learning to make embeddings learned by our FNN more discriminative, improving classification tasks [10, 34, 36] and potentially enhancing the effectiveness of OOD detectors. As an example, refer to the two plots on the right of Fig. 2, which represent the embedding spaces produced by our FNN encoder when trained with and without a contrastive learning signal, respectively. It is immediate to see that the projected NetFlows of individual attacks are less scattered and more separated in the Center-Loss plot. These discriminative embeddings may benefit detectors like distance-based ones that assume normality or well-representative prototypes to detect OOD.
Fig. 2: On the left, the architecture of the considered FNN and the holistic view of where the different OOD detectors act. On the right, the 2D embeddings (\(e_{i}\)) created by the encoder inside the decision space of the FNN trained with and without Center-Loss on four traffic types. The lines highlight points of the decision space where the softmax scores produced by the classifier (i.e., the FNN confidence) change.
Although many contrastive methods exist [10, 34, 36], most of them are primarily designed for other ML fields and rely on positive samples and ad-hoc augmentations [34], vague concepts in the NIDS literature. Therefore, we prefer to employ a simpler and more straightforward method: _Center-Loss_ (described in Section III-B). The application of Center-Loss to our setting does not require any particular modifications; however, we need to account for the unique aspects of NIDSs such as heavily imbalanced training sets and noisy data.
To mitigate the effect of imbalanced sets, we adopt a combination of over- and under-sampling as further described in Sec. V-B. This is particularly important because CL works locally on batches, and with heavy imbalance it is likely to have batches containing only NetFlows of the majority benign class, thus focusing too much on improving the benign embeddings and their center. In addition, we propose to apply the CL term of Eq. 3 only to samples correctly classified by the model. This helps in mitigating the effect of noisy labels and data during training, which are common issues in NIDSs [37].
### _Adopted OOD Detectors_
In this work, we consider OOD detectors of different natures that work beside classification models (pre-trained and not) and can be applied to any architecture. Our rationale for selecting detectors reviewed in Sec. II is to choose popular ones in related ML fields whose complexity (theoretical and implementation) is as low as possible. Wherever possible and not penalizing in terms of performance, we prefer to evaluate detectors as originally proposed.
Regarding confidence-based detectors, we adopt the baseline approach proposed in [38] (CONF). This straightforward solution involves applying a threshold to the softmax scores and labeling as OOD all the NetFlows with a score below this threshold. We also adopt Monte Carlo Dropout [17] (MCD) in a similar manner. Instead of relying on a single confidence estimate for a NetFlow \(x_{i}\), we leverage MCD with a switch-off probability of 0.4 to obtain multiple softmax scores. Then, all those \(x_{i}\) for which the standard deviation of their softmax scores exceeds a predefined threshold are flagged as OOD. This allows a less biased estimate about \(x_{i}\).
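A sketch of the MCD detector described above is given below, assuming a classifier returning logits: dropout is kept active at inference time and the standard deviation of the predicted-class softmax score across passes is thresholded. The number of passes and the threshold value are illustrative assumptions.

```
import torch
import torch.nn.functional as F

def mcd_ood_flags(model, x, n_passes=30, threshold=0.1):
    """Flag samples whose predicted-class softmax score varies too much under MC Dropout."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        scores = torch.stack([F.softmax(model(x), dim=1).max(dim=1).values
                              for _ in range(n_passes)])
    model.eval()
    return scores.std(dim=0) > threshold  # True -> flagged as OOD
```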
In addition, we adopt two cutting-edge gradient-based detectors from computer vision, namely ODIN [15] and Mahalanobis [12] (MD), introduced in Sec. III-A. Although there exist improvements over these proposals (see e.g. [19]), we prefer to keep them as originally proposed to avoid potential biases introduced by the assumptions of such improvements. As we demonstrate later, gradient-based detection seems less effective on NetFlows compared to images.
Lastly, we include two distance-based detectors. The first one (SIM) uses the simplified Silhouette [39] to measure the distance between test and training data. For each class \(j\in C\), SIM first extracts a center by averaging the embeddings \(e_{i}\) of training data labeled with \(j\). Then, it uses these centers to compute Silhouette values for testing NetFlows by flagging as OOD those having a maximum value below a threshold. Note that the simplified Silhouette is adopted here to reduce the computational complexity of the standard Silhouette [39]. The second detector is based on the K-Nearest Neighbor (KNN) proposal in [24], where a separate KNN model is fitted on the embeddings \(e_{i}\) of training classes and used to measure Euclidean distances at inference time. This detector works similarly to SIM, but it selects the KNN model to query for measuring the distance from the k\({}^{th}\) nearest neighbor based on the class predicted by the FNN. If such distance is above a threshold, the NetFlow is OOD. After preliminary analysis, we set \(k=25\) and \(\alpha=100\%\) in all our experiments. We refer the reader to [24] for more detailed explanations.
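For illustration, the class-conditioned KNN detector can be sketched as below, assuming scikit-learn and pre-computed training/test embeddings; the threshold is extracted from validation data as discussed next.

```
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_class_knn(train_emb, train_labels, k=25):
    """Fit one nearest-neighbor index per known class on the training embeddings."""
    return {c: NearestNeighbors(n_neighbors=k).fit(train_emb[train_labels == c])
            for c in np.unique(train_labels)}

def knn_ood_flags(knn_by_class, test_emb, pred_labels, threshold):
    """Flag a NetFlow as OOD if its k-th neighbor in the predicted class is too far."""
    flags = np.zeros(len(test_emb), dtype=bool)
    for c, knn in knn_by_class.items():
        idx = pred_labels == c
        if idx.any():
            dist, _ = knn.kneighbors(test_emb[idx])   # Euclidean by default
            flags[idx] = dist[:, -1] > threshold      # distance to the k-th neighbor
    return flags
```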
All these detectors rely on thresholds extracted on ID NetFlows, except ODIN and MD, which also require OOD data. Details on threshold extraction are provided in Sec. V-B.
## V Experimental Setup
### _Datasets and Preprocessing_
**Datasets.** In our experiments, we train models on benign traffic and specific attacks from one dataset. Then, we evaluate such models on the remaining attacks from the same dataset as well as on attacks from another one. Thus, we selected two similar labeled datasets: _IDS2017_[40] comprises synthetic traffic and common attacks like DoS (D) and DDoS (DD), while _IDS2018_[40] contains more attack variants and was created in a larger network. The list of their attacks is reported in Tab. IV. The traffic of these datasets is transformed into NetFlows with the CICFlowMeter [41], where each NetFlow is described by a set of more than 80 features. We purposely chose these datasets as they contain roughly the same attack families and their traffic comes from consecutive years, and hence should not differ much. By training on some attacks and testing on all the others from both datasets, we can logically simulate all the cases described in Fig. 1. With a single dataset, it is hard to cover situations like those described in Case 2 of Fig. 1, as inducing shifts in known training attacks requires artificial manual crafting of NetFlows. In contrast, with a second dataset comprising the same attacks, we can simulate situations of Case 2 without explicit manual intervention. As an example, we use the D-hulk traffic of IDS2018 for training and test detectors on the "shifted" D-hulk traffic of IDS2017. Lastly, note that solely including more diverse datasets does not help in better modeling Case 2.
**Preprocessing.** We have established with a simple feature selection procedure a common set of 20 features for both our datasets from the 80+ generated by the CICFlowMeter. Before applying our procedure, we log-scale continuous features, leave unaltered integer ones, and encode in one-hot port numbers by considering three intervals: well-known, registered, and ephemeral ports. Then, our feature selection procedure starts by considering each dataset per se and identifies the most important features with a Random Forest analysis [42]. On each dataset, we apply the following steps:
1. Remove IPs and quasi-constant (variance <0.05) features.
2. Keep one arbitrary feature among those having a pairwise Pearson correlation coefficient higher than 0.8.
3. Fit a large Random Forest (200 trees with 20 as maximum depth) on the remaining features and evaluate Gini and Permutation importance [42].
4. Rank the features based on the normalized sum of Gini and Permutation importance.
Once the features are ranked by importance, our procedure automatically selects those that are among the 20 most important in both datasets. To guarantee a satisfactory detection performance, we additionally include the top-7 features of each dataset. The final set of features is reported in Tab. I. Note that for open-source datasets that do not provide all our features, Zeek with a customized script can be used to generate the required NetFlow features.
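The ranking steps 1-4 can be sketched with scikit-learn as follows; column handling, the number of permutation repeats, and other minor details are placeholders rather than our exact implementation.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def rank_features(X: pd.DataFrame, y):
    # 1. Drop quasi-constant features (variance < 0.05); IPs are assumed already removed.
    X = X.loc[:, X.var() > 0.05]
    # 2. Keep one arbitrary feature among pairs with |Pearson r| > 0.8.
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.8).any()])
    # 3. Fit a large Random Forest and compute Gini and permutation importance.
    rf = RandomForestClassifier(n_estimators=200, max_depth=20, n_jobs=-1).fit(X, y)
    gini = rf.feature_importances_
    perm = permutation_importance(rf, X, y, n_repeats=5).importances_mean
    # 4. Rank by the normalized sum of the two importances.
    score = gini / gini.sum() + perm / perm.sum()
    return pd.Series(score, index=X.columns).sort_values(ascending=False)
```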
### _Model, Training, and Tuning details_
**Architecture.** In our FNN, we use linear layers of decreasing size in the encoder: the first contains 128 neurons, the second 64, the third 32, and the fourth 2. All dropout layers switch off neurons with a probability of 0.3. The classifier, in turn, contains as many neurons as there are classes in the training set. Note that we restrict the encoder to produce embeddings in a 2D space to easily plot them. We verified offline that this restriction does not limit the model's classification performance, as theoretically supported by the universal approximation theorem [30].
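A minimal PyTorch version of this architecture could look as follows; the choice of ReLU activations is our assumption, since the text only fixes the layer sizes and the dropout probability.

```python
import torch.nn as nn

class FNN(nn.Module):
    def __init__(self, n_features: int, n_classes: int, p_drop: float = 0.3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(32, 2),                      # 2D embeddings e_i, easy to plot
        )
        self.classifier = nn.Linear(2, n_classes)  # one neuron per training class

    def forward(self, x):
        return self.classifier(self.encoder(x))    # logits

    def embed(self, x):
        return self.encoder(x)                     # embeddings used by CL and the detectors
```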
**Training.** We train our FNN on scenarios extracted from IDS2018, the larger and more comprehensive dataset, where a scenario comprises all the benign traffic and three attacks (4 classes). Refer to Tab. II for the list of the scenarios and their attacks. Every training scenario is split 70/30 in a stratified manner. We use 70% of the traffic for training two separate models - one with Center-Loss and one with Cross-Entropy. The remaining 30% is used for validation purposes and detector tuning. All the models are trained for \(25\) epochs with the Adam optimizer [30], batch size of 512, and learning rate at 0.0005. The model producing the best F1-score on the validation traffic (always above 99% in all our scenarios) is saved for testing. Regarding Center-Loss, we use a separate Adam optimizer, a learning rate of 0.0001, and a weighting factor \(\lambda=1\) (see Sec. III-B). In addition, we use over- and under-sampling to make batches with roughly the same amount of NetFlows for each class. This is achieved by sampling with repetition a NetFlow with probability inversely proportional to the frequency of its class in the training set.
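The class-balancing step can be sketched with a weighted sampler as below; this is an illustrative equivalent of the described inverse-frequency sampling, not necessarily our exact implementation.

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def balanced_loader(X, y, batch_size=512):
    y = np.asarray(y)
    class_counts = np.bincount(y)
    weights = 1.0 / class_counts[y]                 # inverse class frequency
    sampler = WeightedRandomSampler(weights.tolist(),
                                    num_samples=len(y),
                                    replacement=True)
    dataset = TensorDataset(torch.as_tensor(X, dtype=torch.float32),
                            torch.as_tensor(y, dtype=torch.long))
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```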
**Metrics and Detector Tuning.** Since our objective is to determine whether unknown malicious traffic can be identified as OOD, we evaluate detectors based on the True Positive Rate (TPR). In this context, a true positive refers to a NetFlow of an unknown attack labeled as OOD. We specifically avoid using the F1-score because our detectors are tuned to maintain a low False Positive Rate (FPR) of 5% on ID traffic. However, we do utilize the F1-score when assessing the performance of detector combinations, as the rejected ID traffic may exceed 5%. All the detectors selected in Sec. IV-C rely on pre-defined rejection thresholds. To set such thresholds, we followed a common practice in the literature, ensuring that 95% of the ID validation traffic (malicious included) is not rejected [12, 15, 19, 24]. The only exceptions are ODIN and MD, for which we also used OOD attacks to extract the threshold and select \(\epsilon\in\{0.0001,0.001,0.005,0.01,0.05,0.1,0.5\}\) with \(T=20\). Specifically, we took advantage of attacks not used in our evaluation, like infiltration and attacks with a few NetFlows, and used them along with validation traffic to tune these detectors as in [15]. Note that we exclude infiltration from the evaluation as it is neither well classified by the FNN nor well detected by OOD techniques with our features, i.e., an example of Case 4 in Fig. 1. We also see that using hard-to-discriminate attacks improves the detection capability of ODIN and MD. Other parameters, such as the mean and covariance matrix in MD, the centers in SIM, and the KNN models, were extracted from the training set.
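The percentile-based threshold extraction can be sketched as follows; the `higher_means_ood` flag is our shorthand for distinguishing confidence-like scores (reject below) from distance-like scores (reject above).

```python
import numpy as np

def extract_threshold(id_scores, higher_means_ood=False, keep=0.95):
    """Reject at most (1 - keep) of the ID validation traffic."""
    if higher_means_ood:                          # distance-like scores (e.g., KNN, SIM)
        return np.quantile(id_scores, keep)
    return np.quantile(id_scores, 1.0 - keep)     # confidence-like scores (e.g., CONF)
```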
## VI Results & Analysis
In this section, we evaluate the detectors chosen in Sec. IV-C to identify previously unknown intrusions as OOD. Remember that these detectors work beside the ML-based NIDS, i.e., the FNN described in Sec. IV-A, which is trained to classify attacks of specific scenarios. These scenarios comprise only a few training attacks and are designed to test detectors under different circumstances, such as those presented in Fig. 1. The specific scenarios adopted are outlined in Tab. II. Although it is hard to precisely pinpoint which situation of Fig. 1 occurs in each scenario, we designed them so as to logically cover all of them. Every scenario comprises training attacks characterized by distinctive aspects. Within the pool of unknown testing attacks, encompassing all attacks not encountered during training, there are both fairly similar and dissimilar ones. These testing attacks should end up in different regions of the FNN's decision space, effectively simulating the situations depicted in Fig. 1.
### _Detecting Unknown Attacks with OOD detectors_
We begin by assessing detectors and their combination when applied to models trained in a _multi-class_ setting both with and without Center-Loss (CL).
**Single Detector Results.** We first examine the performance of individual detectors. Tab. III presents in each horizontal section the TPR of detectors (columns) on a distinct scenario. Each cell contains the TPR of a detector on an unknown attack when applied to the FNN trained with and without Center Loss (TPR\({}_{CL}\) / TPR\({}_{CE}\)). The last row of a section (Total TPR) reports the global TPR, irrespective of the attack types.
Overall, we observe that all the unknown attacks are detected to some extent in their traffic. The best OOD detector appears to be KNN, followed by MCD and CONF, while other detectors exhibit lower average performance. Specifically, we see that ODIN and MD have generally lower TPRs than confidence-based detectors, contrary to what was discovered in computer vision [12, 15, 19]. This suggests that controlled perturbations rigidly derived from the gradient do not always benefit detection as expected. We suspect that NetFlow features, which do not have a bounded and discrete domain as pixels, may require more flexible per-feature perturbations that better conform to the domain of features. This might help in pushing ID NetFlows toward their class, better enlarging the gap between ID and OOD scores as in computer vision.
Furthermore, we find that applying CL to multi-class models does not always improve OOD detection. Although the embeddings produced with CL are in general more discriminative, this benefits detectors such as CONF, MD, and SIM, but not KNN as much. Our explanation is that a multi-class FNN already produces semantically rich embeddings, reducing the effect of CL. In addition, training with CL may sometimes force the model to produce embeddings closer to known classes than they would be without CL (refer to the right of Fig. 2 for a graphical comparison). This may benefit the assumptions of certain detectors, such as the representativeness of the mean and covariance matrix in MD and of the centers in SIM. However, tighter neighborhoods may negatively impact the performance of KNN in certain situations. Consequently, we conclude that CL is a simple method to enhance OOD detection in NIDSs [24, 27], although its effectiveness varies across detectors.
**Ensembles Results.** We proceed by aggregating detectors into two ensembles to improve overall performance and evaluate their complementarity. For this analysis, we use the previously considered scenarios and assess the ensembles' performance on unknown attacks also from the IDS2017 dataset.
To measure the maximum amount of unknown traffic that can be rejected, we use a simple ensemble (ENS\({}_{1}\)) that flags a NetFlow as OOD if at least one detector predicts it as such. This ensemble comprises all the detectors applied to the FNN trained with and without CL, resulting in a total of 12 combinations. The second ensemble (ENS\({}_{2}\)) consists of three detectors and flags a NetFlow as OOD if at least one predicts OOD. We use the CONF detector applied to the CL-trained FNN, along with the KNN and ODIN detectors applied to the FNN trained with Cross-Entropy. The goal of ENS\({}_{2}\) is to show that these detectors complement each other. We remark that these ensembles have been specifically designed to increase the detection (TPR) of unknown attacks.
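Both ensembles reduce to a simple OR rule over their members' decisions, as sketched below.

```python
import numpy as np

def ensemble_is_ood(flags):
    """flags: list of boolean arrays, one per (model, detector) combination.
    A NetFlow is OOD if at least one member detector rejects it."""
    return np.logical_or.reduce(flags)
```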
Tab. IV presents the TPR of the ensembles on attacks from the two datasets (horizontal sections). The last two rows report the total TPR and total F1-Score on all the attacks from
both datasets. Remember that true positives refer to unknown attacks labeled as OOD while false positives correspond to benign NetFlows mistakenly marked as OOD. We exclude training attacks as the FNN detects them correctly.
We first highlight that ENS\({}_{1}\) achieves almost perfect TPRs in both datasets, indicating there are complementary detectors in our set. However, this ensemble strategy also increases the false positives, as remarked by the consistent gap between total TPR and F1-Score in all three scenarios. In fact, the false positive rate on benign validation traffic goes from 5% of single detectors (as resulting from the tuning described in Sec. V-B) up to 36% with the ensemble.
On the other hand, ENS\({}_{2}\) achieves total TPRs similar to those of ENS\({}_{1}\) but consistently better F1-Scores. This improvement is attributed to significantly reduced false positive rates, which are halved compared to those of ENS\({}_{1}\). We observed that CONF and KNN contribute the most to this ensemble, aligning with the findings in Tab. III, while ODIN gives a smaller but nevertheless important contribution. Overall, ENS\({}_{2}\) proves to be a superior ensemble that incorporates complementary detectors. This highlights the relevance of combining detectors of different natures (e.g., confidence-, distance-, and gradient-based) applied to models trained with different strategies. By doing so, it is possible to fortify the defense against the situations described in Fig. 1.
Additionally, we remark that detecting attacks from other datasets appears to be a relatively easier task, indicating experimental bias [43]. Although we verified the similarity of individual feature distributions between datasets, patterns extracted from IDS2018 differ from those of IDS2017. This is evident from the almost perfect rejection of IDS2017 attacks, the rejection of attack types included in the training set from the 2018 data (such as FTP and D-hulk in Scenario 1), and the high rejection rates (above 70%) for IDS2017 benign traffic. Note that the benign traffic of IDS2017 comes from a different network, and detectors are expected to reject benign traffic observed on a different network. In general, there is a need for a methodology that enables a better integration of traffic from different networks (datasets) for the purpose of OOD detection in NIDSs, a topic we will cover in future work.
Finally, we also conducted experiments by training on IDS2017 attacks and testing on IDS2018. In this regard, we only observed lower detection rates, for both individual detectors and ensembles, on certain unknown attacks of IDS2017 such as Bot. However, the performance on attacks from IDS2018 (Bot included) was almost perfect. This discrepancy vouches once again for the necessity of a better integration methodology. Due to space limitations, we do not report these extensive results.
### _Better Embedding, Better Detection_
Many supervised datasets in network intrusion detection contain information about the specific type of attack each NetFlow belongs to. Typically, this information is ignored as the task is treated as a binary classification one. However, we do demonstrate herein that leveraging the richer semantics of multi-class models can improve OOD detection and that Contrastive Learning can serve a similar goal. To this end, we compare the overall _multi-class_ results from the previous section with those of the same detectors applied to models trained in a _binary task_, which is obtained by grouping NetFlows of training attacks into a single malicious class. We retrain a binary FNN with and without CL for each scenario of Tab. II, and evaluate OOD detectors on unknown attacks.
**Single Detector Comparison.** We first compare the results of individual detectors on unknown attacks of IDS2018. In Tab. V, the top section displays the total TPR of detectors applied to binary models, with and without CL (TPR\({}_{CL}\) / TPR\({}_{CE}\)). The bottom section, instead, presents the reduction in rejections computed by subtracting the multi-class total TPRs from the binary ones, with and without CL.
The results in the top section highlight that CL consistently improves all the TPRs in the binary case, proving more effective than in multi-class settings. Therefore, the importance of CL and Contrastive Learning is more pronounced for ML-based NIDSs trained on binary tasks, since multi-class training already makes embeddings more discriminative.
In the bottom section, we generally observe that detectors applied to the binary FNN without CL have significantly reduced (\(\downarrow\)) TPRs compared to the multi-class case. This underlines that the semantic information induced by multi-class training yields better embedding spaces for detecting unknown attacks. Detectors applied to the binary FNN with CL, instead, have either similar detection rates or less pronounced reductions, suggesting that CL provides roughly the same enhancement regardless of the model's training regime.
Therefore, we conclude that better embeddings, such as those obtained from multi-class models or Contrastive Learning methods, enhance OOD detection.
**Ensemble Comparison.** Lastly, we present the overall results of the two ensembles described in Sec. VI-A in the binary case. Remember that \(\text{ENS}_{1}\) comprises all the combinations of binary models and detectors, while \(\text{ENS}_{2}\) combines CONF with the CL-trained FNN, as well as KNN and ODIN coupled with the FNN trained without CL. For this comparison, we consider the two ensembles made with the binary FNNs and also those created with the multi-class FNNs. Fig. 3 plots for each scenario of Tab. II the total TPR and F1-Score on unknown attacks from both IDS2018 and IDS2017.
Overall, we observe that ensembles of detectors applied to binary models still yield superior detection, but not as much as in the multi-class case. This suggests that combining OOD detectors is more effective when applied to models with semantically richer embeddings, such as those produced in multi-class settings. Furthermore, the better F1-scores of \(\text{ENS}_{2}\) with respect to \(\text{ENS}_{1}\) in both the binary and multi-class cases indicate that \(\text{ENS}_{2}\) detectors positively complement each other. This demonstrates again the importance of leveraging OOD detectors of different natures, as they enable a broader coverage of unusual and potentially harmful regions of the model's decision space (see Fig. 1).
## VII Conclusion
In this work, we analyzed the ability of existing OOD techniques to detect traffic of unknown intrusions. We used a standard feedforward neural network as the ML-based NIDS and trained it on subsets of attacks in binary and multi-class settings, optionally applying a Contrastive Learning signal. Then, we coupled these models with a set of six OOD techniques relying on different strategies to identify unknown attacks extracted from the same dataset and from a separate one (network).
Our findings reveal that existing OOD detectors constitute a valid means to identify portions of unknown attacks, although their effectiveness varies compared to other ML fields. Furthermore, we highlighted that employing training strategies such as multi-class supervision and Contrastive Learning improves the performance of most tested OOD detectors. Lastly, we demonstrated that combining detectors relying on different strategies leads to superior performance, especially when applied to differently trained models.
While our study has provided some insights into the potential of adopting OOD techniques for network intrusion detection, we acknowledge that there is still much to cover. Notably, one of the limitations of our work is the lack of a methodology that allows a more realistic integration of unknown attacks extracted from diverse datasets (networks). As many datasets offer only limited coverage of cyberattacks, this methodology is of utmost importance to comprehensively assess OOD techniques. Additionally, we recognize the prospective value of a visualization tool derived from our plotting strategy used for Fig. 2 to inspect models' decision spaces. Such a tool could prove beneficial for network inspection in practical use cases and aid the categorization of attacks in the context of Fig. 1.
Therefore, in future works, we will focus on these points and also explore the influence of different features on the efficacy of OOD detectors. Furthermore, we intend to improve less effective detectors, like ODIN and MD, and evaluate others.
|
2301.00608
|
Sustained heating of the chromosphere and transition region over a
sunspot light bridge
|
Sunspot light bridges (LBs) exhibit a wide range of short-lived phenomena in
the chromosphere and transition region. In contrast, we use here data from the
Multi-Application Solar Telescope (MAST), the Interface Region Imaging
Spectrograph (IRIS), Hinode, the Atmospheric Imaging Assembly (AIA), and the
Helioseismic and Magnetic Imager (HMI) to analyze the sustained heating over
days in an LB in a regular sunspot. Chromospheric temperatures were retrieved
from the MAST Ca II and IRIS Mg II lines by nonlocal thermodynamic
equilibrium inversions. Line widths, Doppler shifts, and intensities were
derived from the IRIS lines using Gaussian fits. Coronal temperatures were
estimated through the differential emission measure, while the coronal magnetic
field was obtained from an extrapolation of the HMI vector field. At the
photosphere, the LB exhibits a granular morphology with field strengths of
about 400 G and no significant electric currents. The sunspot does not
fragment, and the LB remains stable for several days. The chromospheric
temperature, IRIS line intensities and widths, and AIA 171 \AA and 211 \AA
intensities are all enhanced in the LB with temperatures from 8000 K to 2.5 MK.
Photospheric plasma motions remain small, while the chromosphere and transition
region indicate predominantly red-shifts of 5-20 km/s with occasional
supersonic downflows exceeding 100 km/s. The excess thermal energy over the LB
is about 3.2x10^26 erg and matches the radiative losses. It could be supplied
by magnetic flux loss of the sunspot (7.5x10^27 erg), kinetic energy from the
increase in the LB width (4x10^28 erg), or freefall of mass along the coronal
loops (6.3x10^26 erg).
|
Rohan E. Louis, Shibu K. Mathew, A. Raja Bayanna, Christian Beck, Debi P. Choudhary
|
2023-01-02T11:47:07Z
|
http://arxiv.org/abs/2301.00608v1
|
# Sustained heating of the chromosphere and transition region over a sunspot light bridge
###### Abstract
Sunspot light bridges (LBs) exhibit a wide range of short-lived phenomena in the chromosphere and transition region. In contrast, we use here data from the Multi-Application Solar Telescope (MAST), the Interface Region Imaging Spectrograph (IRIS), Hinode, the Atmospheric Imaging Assembly (AIA), and the Helioseismic and Magnetic Imager (HMI) to analyze the sustained heating over days in an LB in a regular sunspot. Chromospheric temperatures were retrieved from the MAST Ca ii and IRIS Mg ii lines by nonlocal thermodynamic equilibrium inversions. Line widths, Doppler shifts, and intensities were derived from the IRIS lines using Gaussian fits. Coronal temperatures were estimated through the differential emission measure, while the coronal magnetic field was obtained from an extrapolation of the HMI vector field. At the photosphere, the LB exhibits a granular morphology with field strengths of about 400 G and no significant electric currents. The sunspot does not fragment, and the LB remains stable for several days. The chromospheric temperature, IRIS line intensities and widths, and AIA 171 A and 211 A intensities are all enhanced in the LB with temperatures from 8000 K to 2.5 MK. Photospheric plasma motions remain small, while the chromosphere and transition region indicate predominantly red-shifts of 5--20 km s\({}^{-1}\) with occasional supersonic downflows exceeding 100 km s\({}^{-1}\). The excess thermal energy over the LB is about \(3.2\times 10^{26}\) erg and matches the radiative losses. It could be supplied by magnetic flux loss of the sunspot (\(7.5\times 10^{27}\) erg), kinetic energy from the increase in the LB width (\(4\times 10^{28}\) erg), or freefall of mass along the coronal loops (\(6.3\times 10^{26}\) erg).
Sunspots(1653) -- Solar magnetic fields (1503) -- Solar photosphere(1518) -- Solar chromosphere (1479) -- Solar corona (1483) +
Footnote †: journal: ApJ
Rohan E. Louis, Shibu K. Mathew, A. Raja Bayanna, Christian Beck, and Debi P. Choudhary
## 1 Introduction
Light bridges (LBs) are bright, extended structures seen in the umbral core of sunspots and pores. They exhibit a variety of morphologies resembling umbral dots, penumbral filaments, or quiet Sun granules, depending on their evolutionary phase (Muller, 1979; Katsukawa et al., 2007; Louis et al., 2012). Penumbral LBs consist of small-scale barbs close to the edges of the filament (Rimmele, 2008; Louis et al., 2008), while granular LBs exhibit a dark lane along the central axis (Sobotka et al., 1994; Berger & Berdyugina, 2003; Lites et al., 2004).
LBs are conceived to be manifestations of large-scale magneto-convective structures (Rimmele, 1997, 2004), while Parker (1979) and Choudhuri (1986) claim LBs to be field-free intrusions of hot plasma into the gappy umbral magnetic field. Rueedi et al. (1995), Lites et al.
(1991), and Leka (1997) have shown that the magnetic field within LBs is typically weaker and more inclined in comparison to the adjacent umbra. Jurcak et al. (2006) suggested that the intrusion of hot, weakly magnetized plasma would force the adjacent umbral magnetic field to form a canopy over the LB that could be a source for electric currents. Such a stressed magnetic topology in LBs has often been cited as the driver for a number of reconnection-associated phenomena such as small-scale jets (Louis et al., 2014; Tian et al., 2018), surges (Roy, 1973; Asai et al., 2001; Toriumi et al., 2015; Robustini et al., 2016), strong brightenings and/or ejections (Louis et al., 2008; Shimizu et al., 2009; Louis et al., 2009), as well as flares (Berger & Berdyugina, 2003; Louis & Thalmann, 2021). Enhanced chromospheric activity, which is primarily transient in nature, appears to be an important characteristic of LBs (Louis, 2016).
It has been shown that the fragmentation of sunspots often occurs along LBs (Garcia de La Rosa, 1987; Louis et al., 2012). While LBs signify convective disruption within sunspots with close similarities to the quiet Sun at the photosphere (Sobotka, 1989; Sobotka et al., 1994; Rouppe van der Voort et al., 2010), their properties in the chromosphere and transition region appear distinct with enhanced emission and broad line widths (Rezaei, 2018). It has also been shown that LBs anchored to the penumbra can suppress the formation of coronal loops (Miao et al., 2021), which suggests their possible association with the large-scale magnetic topology of the active region. Recently, Louis et al. (2020) reported the formation of an LB through the large-scale emergence of a nearly horizontal magnetic structure within a regular sunspot. This emergence, which lasted about 13 hr, was accompanied by strong temperature enhancements in the lower chromosphere all along the LB, which were produced by electric currents through ohmic dissipation (Louis et al., 2021). It is, however, unknown if there are other mechanisms that can heat the upper atmosphere of an LB over several days, particularly if the underlying structure has evolved sufficiently to facilitate vigorous convection similar to the quiet Sun. We address the above issue in this article by investigating the source of sustained heating in the chromosphere and the transition region over a granular LB in a regular sunspot over a duration of more than 48 hr. Section 2 describes the observations used. The data analysis is explained in Section 3. The results are presented in Section 4 and discussed in Section 5, while Section 6 provides our conclusions.
## 2 Observations
We study the leading sunspot in NOAA active region (AR) 12741 on 2019 May 14 and 15 when it was at a heliocentric angle of about 17\({}^{\circ}\). We combine observations from several sources, which are described below.
### MAST Data
We utilize imaging spectroscopic observations using the narrowband imager (Mathew, 2009; Raja Bayanna et al., 2014; Mathew et al., 2017) on the 50 cm Multi-Application Solar Telescope (MAST; Venkatakrishnan et al., 2017). The narrowband imager comprises two lithium niobate-based Fabry-Perot (FP) etalons, which are tuned by a kilovolt power supply along with a 16 bit digital-to-analog converter. Both FPs are housed in temperature-controlled ovens that provide a thermal stability of \(\pm 0.2^{\circ}\)C. The FPs have a diameter of 60 mm, a thickness of 226 \(\mu\)m and 577 \(\mu\)m, a reflectivity \(>\)93%, and are polished to an accuracy of \(\lambda/100\). At present, this instrument is being used to observe the photospheric Fe i line at 617.3 nm as well as the Ca ii line at 854.2 nm, although any other wavelength can be observed by using an appropriate prefilter. In order to scan the above lines simultaneously, a dichroic beam splitter is placed after the low-resolution etalon so that only one FP is used for the Ca ii line while both FPs are used for the Fe i line. This arrangement provides a full width at half-maximum (FWHM) and a free spectral range of 170 mA and 7 A, which is sufficient for the broad Ca ii line. A prefilter with an FWHM of 3 A is used to suppress the secondary transmission peaks of the FP. The 2k\(\times\)2k filtergrams have a spatial sampling of about 0\(\farcs\)11 px\({}^{-1}\) across a 200'' field of view (FOV).
On 2019 May 14, we made three spectral scans of AR 12741 in the Ca ii line at 04:20:23 UT, 06:02:06 UT, and 10:37:11 UT with 81 wavelength points covering about \(\pm\)1 A around the line center. The wavelength step was about 25.4 mA and at each step 20 images were acquired. The total scan lasted about 4 min. We only analyzed the first scan as the seeing was variable during the second and third scans.
The filtergrams were corrected for darks, flats, and the field-dependent blue shift caused by the collimated mounting of the FP etalons (see Sect. 2.1 of Cavallini, 2006), which was about 27.4 mA from the center to the edge of the FOV. The instrumental profile along with the prefilter curve was then determined by convolving the reference spectrum from the Fourier Transform Spectrometer (FTS; Kurucz et al., 1984) atlas and matching it to the observed mean quiet Sun spectrum. For this study we select the central 1720\(\times\)1720 pixel region for analysis. In addition to the narrowband filtergrams, G-band and H\(\alpha\) filtergrams with a spatial sampling of
\(0\farcs 2\,\mathrm{px}^{-1}\) were also acquired from time to time. The H\(\alpha\) filtergrams were obtained using a 0.5 A wide Halle filter.
### IRIS Data
Raster scans and slit jaw (SJ) images from the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al., 2014) were also used along with the MAST Ca ii narrowband filtergrams. We primarily use the raster scans taken in the Mg ii k & h lines at 280 nm, the C ii 1334 A and 1335 A lines, and the Si iv 1394 A and 1403 A lines. The Mg ii, C ii, and Si iv lines form at a temperature of 10,000 K, 30,000 K, and 65,000 K, respectively (Tian et al., 2018). The Mg ii k & h lines correspond to the near-ultra violet (NUV) region, while the C ii, and Si iv lines correspond to the far ultra violet (FUV) region. The details of the IRIS datasets are summarized in Table 1. IRIS raster scans use a \(0\farcs 35\) wide slit, a spatial sampling of \(0\farcs 33\,\mathrm{px}^{-1}\) along the slit, and a spectral sampling of 50.9 mA and 25.8 mA in the NUV and FUV, respectively. The SJ images have a spatial sampling of \(0\farcs 33\,\mathrm{px}^{-1}\).
### Hinode Data
The vector magnetic field of the AR was obtained from observations made by the spectropolarimeter (SP; Lites et al., 2001; Ichimoto et al., 2008) of the Solar Optical Telescope (Tsuneta et al., 2008) on board Hinode (Kosugi et al., 2007). Using the fast mode with 4.8 s at each slit position, the SP mapped the AR from 17:49:02-18:21 UT on May 14 and 11:50:05-12:22:24 UT on May 15. The four Stokes parameters of the Fe i lines at 630 nm were recorded by the SP with a spectral sampling, step width, and spatial sampling along the slit of 21.5 mA, \(0\farcs 29\), and \(0\farcs 32\,\mathrm{px}^{-1}\), respectively. The SP FOV was \(152\arcsec\times 164\arcsec\). Routines of the SolarSoft package (Lites & Ichimoto, 2013) were used to reduce the observations to yield Level-1 data. Level-2 data, which comprise two-dimensional (2D) maps of the magnetic field strength, inclination, azimuth, and line-of-sight (LOS) velocity, were used for this study. These products were obtained from an inversion of the Stokes profiles using the MERLIN1 (Lites et al., 2007) inversion code. The \(180\arcdeg\) azimuth disambiguation was carried out using the AMBIG code (Leka et al., 2009) based on the Minimum Energy Algorithm of Metcalf (1994). Following the disambiguation, the inclination and azimuth were transformed to the local reference frame. Table 1 summarizes the parameters from the various instruments.
Footnote 1: MERLIN inversion products are provided by the Community Spectro-polarimetric Analysis Center at the following link – [http://www.csac.hao.ucar.edu/csac](http://www.csac.hao.ucar.edu/csac)
Footnote 2: Code available at www.cora.nwra.com/AMBIG
### Solar Dynamics Observatory Data
The data from the Solar Dynamics Observatory (SDO; Pesnell et al., 2012) consist of images from the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012). We chose the images at a reduced cadence of 10 minutes in the 171 A and 211 A extreme ultra violet (EUV) channels that have a maximum temperature response from the transition region and corona, respectively. In addition, we also utilize observations of the vector magnetic field from SDO's Helioseismic and Magnetic Imager (HMI; Schou et al., 2012) with a cadence of 12 minutes and a spatial sampling of about \(0\farcs 5\,\mathrm{px}^{-1}\).
## 3 Data Analysis
The strategies for inferring the chromospheric temperature and other diagnostics are summarized below.
| Characteristics | MAST | Hinode | IRIS (Raster / SJ) |
| --- | --- | --- | --- |
| Date & Time [UT] | 2019 May 14, 04:20:23 | 2019 May 14, 17:49:02–18:21:21; 2019 May 15, 11:50:05–12:22:24 | 2019 May 14, 00:24:50–01:51:15; 2019 May 15, 11:57:46–12:46:43 |
| Lines / Wavelength [nm] | Ca ii / 854.2 | Fe i / 630 | Mg ii / 280 (SJ 279.6); C ii / 133 (SJ 133.0); Si iv / 140 (SJ 140.0) |
| FOV \(x\times y\) [′′] | 190 × 190 | 152 × 164 | Raster: 112 × 175; SJ: 167 × 175 |
| Spatial sampling \(x/y\) [′′ px\({}^{-1}\)] | (0.11)\({}^{2}\) | 0.29 / 0.32 | Raster: 0.35 / 0.33; SJ: (0.33)\({}^{2}\) |
| Spectral sampling [pm px\({}^{-1}\)] | 2.5 | 2.15 | Raster: 5.09 / 2.58; SJ: – |
| No. of scans/images | 1 | 1 | Raster: 1; SJ: 80 |

Table 1: Summary of observations from different instruments.
### NICOLE Inversions
The Ca ii spectra from the MAST narrowband imager were inverted using the nonlocal thermodynamic equilibrium (NLTE) Inversion COde based on the Lorien Engine (NICOLE; Socas-Navarro et al., 2015). NICOLE inversions were carried out with two cycles, with a maximum of 25 iterations per cycle and using the FALC model (Fontenla et al., 1993) as the initial guess atmosphere. The physical parameters resulting from the first cycle were used as inputs for the second cycle. In the first cycle of the inversion, temperature and LOS velocity were perturbed with two nodes each, along with height-independent micro- and macro-turbulence. In the second cycle, the number of nodes for temperature and LOS velocity was changed to eight and four, respectively, with two nodes for micro-turbulence.
### IRIS2 inversions
To infer the temperature in the upper photosphere and chromosphere from the IRIS Mg ii k & h lines, we used the IRIS Inversion based on Representative profiles Inverted by STiC (IRIS2; Sainz Dalda et al., 2019). IRIS2 recovers the thermodynamic and kinematic properties of the solar chromosphere by comparing the observed spectra to a database of representative profiles, which are averages of profiles sharing the same shape as a function of the wavelength. The atmospheric parameters for these representative profiles have in turn been derived from the STiC code (de la Cruz Rodriguez et al., 2019), which synthesizes spectral lines in NLTE along with partial redistribution of scattered photons. The IRIS2 database incorporates all the observational variants such as location on the solar disk, exposure time, and spatial sampling. The routines for recovering the physical parameters from a given raster scan are made available through the IDL distribution of SolarSoft.
### Parameter Maps from IRIS
In addition to the IRIS2 inversions we carried out single and double Gaussian fits to the Mg ii k line, C ii line at 1334 A and the Si iv line at 1394 A. These fits are used to derive the peak intensity, Doppler shift, and line width over the 2D FOV. The rest wavelength was determined from the average line profile in the smaller umbral core where the peak intensity was less than three times the root-mean-square noise level.
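As an illustration, a single-Gaussian fit of this kind can be sketched with SciPy as follows; variable names are ours, and the width is returned as an FWHM in velocity units, which may differ from the exact width definition used for the figures.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light [km/s]

def gaussian(wl, amp, center, sigma, offset):
    return amp * np.exp(-0.5 * ((wl - center) / sigma) ** 2) + offset

def fit_line(wl, spec, lambda0):
    """Fit one emission profile; wl and lambda0 in Angstrom."""
    p0 = [spec.max() - spec.min(), wl[np.argmax(spec)], 0.05, spec.min()]
    (amp, center, sigma, offset), _ = curve_fit(gaussian, wl, spec, p0=p0)
    peak = amp + offset                              # peak intensity
    v_dop = (center - lambda0) / lambda0 * C_KMS     # Doppler shift [km/s]
    fwhm = 2.355 * abs(sigma) / lambda0 * C_KMS      # line width (FWHM) [km/s]
    return peak, v_dop, fwhm
```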
### Magnetic Field Extrapolation
We used a non-force-free field (NFFF) extrapolation technique (Hu et al., 2010) to infer the magnetic connectivity in and around the sunspot. This method is well suited to the high plasma-\(\beta\) photospheric boundary (Gary, 2001) and has been successfully used in recent studies (Yalim et al., 2020; Louis et al., 2021).
### Chromospheric Radiative Loss
The excess radiative loss in the LB over the quiet Sun (QS) was calculated using the procedure described in Rezaei & Beck (2015, their Sect. 4.7) for the MAST Ca ii line and the IRIS Mg ii k & h lines. Following Neckel & Labs (1984), the intensity of the QS at disk center at the respective central wavelengths was calculated and normalized by the factor arising from the heliocentric angle. The spectra in the QS were then integrated over a 1 A band around the line core and subtracted from the same in the LB to yield the excess radiative loss.
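A schematic NumPy version of this band-integration step is given below; the calibration factor and variable names are placeholders for the absolute-intensity scaling described above.

```python
import numpy as np

def band_integral(wl, spec, line_core, half_band=0.5):
    """Integrate the spectrum over a 1 A band centred on the line core."""
    mask = np.abs(wl - line_core) <= half_band
    return np.trapz(spec[mask], wl[mask])

def excess_radiative_loss(wl, lb_spec, qs_spec, line_core, calib):
    """calib: factor converting normalized counts to absolute intensity
    (quiet-Sun disk-centre value scaled by the heliocentric angle)."""
    return calib * (band_integral(wl, lb_spec, line_core)
                    - band_integral(wl, qs_spec, line_core))
```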
In order to calculate the total radiative loss in the chromosphere across the full spectrum, we scaled the values obtained from the Ca ii 854.2 nm line to the remaining lines in the Ca ii IR triplet, the Ca ii K & H lines as well as hydrogen Lyman \(\alpha\), similar to the procedure described in Rezaei & Beck (2015). The scaling factors were chosen from Yadav et al. (2022, their Table 1) for a region over the polarity inversion line, which is quite similar to the conditions in the LB under study. We assume that the radiative loss in the Ca ii IR line at 854.2 nm is a third of the loss of the whole Ca ii IR triplet.
For a comparison to the observations we also synthesized3 the spectral lines of Ca ii IR, H & K, Mg ii h & k, and Ly\(\alpha\) for a characteristic temperature stratification from a location inside the LB and the QS stratification with the Lightweaver code (Osborne & Milic, 2021). Radiative losses for the synthetic spectra were calculated by the same approach as the difference between LB and QS.
Footnote 3: Calculation courtesy of J. Jenkins/KUL.
## 4 Results
### Evolution of Sunspot in AR 12741
Figure 1 shows the formation and evolution of the LB in the leading sunspot in AR 12741. The leading sunspot appeared on the Eastern limb close to the end of May 6 as a regular, unipolar spot without any conspicuous umbral intrusions. The formation of the LB began in the early part of May 11 with one arm extending from the western section of the umbra-penumbra boundary, eventually reaching the eastern umbral-penumbra boundary in about 8 hr. The onset of convection in the LB was seen in the latter part of May 13, with the subsequent formation of a three-arm structure by May 15, which
remained stable until the sunspot traversed the western limb. During this period the sunspot did not fragment or decay into smaller pores/spots and neither were there any significant changes in the global topology of the AR. The vertical component of the magnetic field in the figure shows that the LB stands out in the umbral background with a relatively weaker amplitude. At HMI's spatial resolution we do not find any indication of opposite polarities in the LB during the lifetime of the sunspot.
### Structure of Magnetic Field in LB
Figure 2 shows the Hinode continuum image of the sunspot wherein the LB is seen to comprise bright, small-scale grains in all three arms with an absence of filamentary structures. At most locations, the continuum intensity in the LB is comparable to that in the quiet Sun. The magnetic field is weakest along the axes of the LB, with minimum values ranging between 400 G and 600 G in the three arms. On the other hand, the field strength along the edges of the LB is about 1800-2000 G, while in the umbra it ranges from about 2200 G to 2700 G. The magnetic field inclination increases as one moves from the edge to the axis of the LB with typical values in the interior of about 35\({}^{\circ}\)-45\({}^{\circ}\) in
Figure 1: Evolution of NOAA AR 12741 as seen from HMI maps of the continuum intensity (top row) and the vertical component of the magnetic field (bottom row). The maps correspond to 00:00 UT.
Figure 3: Magnified view of the LB in the SP data in the continuum intensity (top row) and the vertical component of electrical current density (bottom row) for the FOV marked by the white box in Figure 2.
Figure 2: Hinode maps of the leading sunspot in AR 12741 on 2019 May 14 (left) and May 15 (right). Top to bottom: continuum intensity \(I_{c}\), field strength \(B\), inclination \(\gamma\), and azimuth \(\phi\). The white rectangle corresponds to the FOV shown in Figure 3.
all three arms. As stated earlier, there are no indications of opposite polarities in the LB. The bottom row of Figure 2 reveals that for the most part the azimuth has a smooth radial arrangement in the sunspot, except in the proximity of the LB, which renders three azimuth centers in the respective umbral cores. As the sunspot is of negative polarity, the horizontal magnetic field is predominantly divergent along the LB axis and oriented toward the nearest umbral core.
Figure 3 shows a magnified view of the LB in the continuum intensity and the vertical component of the current density (\(J_{z}\)). The latter is extremely small in the LB, except at the locations along the axis of the LB, where the horizontal magnetic field appears to diverge into its respective umbral core. These locations are confined to individual pixels where \(J_{z}\) can reach up to 0.15-0.25 A m\({}^{-2}\), but these comprise only 3% of the area of the LB. For the majority of the LB, however, the average values of \(J_{z}\) are about 0.02 A m\({}^{-2}\) and do not stand out in the same manner as the currents in the penumbra that are relatively stronger by a factor of six. This obvious absence of strong electric currents in the LB is also seen in the Hinode map acquired on May 15. The absence of currents implies that heating by ohmic dissipation cannot be effective.
### Enhanced, Persistent Brightness over LB
Figure 4 shows the temperature maps of the LB as a function of height as derived from the spectral inversions of the MAST Ca ii and IRIS Mg ii lines on May 14. The enhanced temperature in the LB appears over an extended height range of \(-6.0<\log\tau<-3.5\) with the central junction of the LB being the hottest location. The average temperature at the central junction of the LB is 4960 K, 6430 K, and 7940 K at \(\log\tau=-4\), \(-5\), and \(-6\), respectively, as estimated from the Ca ii line. These values in the LB exceed that of the umbra by 840 K, 1165 K, and 1315 K at the above heights, respectively. The temperature maps from the Mg ii line exhibit similar values at \(\log\tau=-4\) and \(-5\) while at \(\log\tau=-6\) the temperature is enhanced to 8300 K. The 2D vertical cuts across the LB (panels i and j) show that the thermal enhancement in the LB extends down to \(\log\tau=-2\) in the Ca ii line while in the Mg ii line it is relatively higher at around \(\log\tau=-3.5\). The temperature enhancement in the LB arises due to the reduced/suppressed absorption in the Ca ii line with the line core intensity being only 20% smaller than the line wing intensity at about 1 A (Figure 16 in the Appendix). On the other hand, the Mg ii k & h lines comprise strong, compact emission features where the central reversals k3 and h3 are nearly as high as the k2 and h2 emissions.
Figure 5 shows the temperature maps from the IRIS Mg ii line on May 14 and 15 along with the peak intensity and line width of the C ii and Si iv lines in the LB FOV. The temperature enhancement over the LB persists over 36 hr and coincides with the underlying photospheric morphology. A similar characteristic is seen in the peak intensity of the IRIS Si iv line while in the C ii line the LB is more diffused on May 14 than it is on May 15. The intensity in the LB is about 80% of the emission in the opposite polarity network flux region as seen in the Mg ii and Si iv lines while in the C ii it is about 24% and 53% on May 14 and May 15, respectively. The peak intensity of the Si iv line in particular also exhibits structures in the proximity of the LB
Figure 4: Comparison of temperature stratifications derived from the Ca ii and Mg ii lines on May 14. Top row, from left to right: HMI continuum intensity (panel a), temperature derived from the MAST Ca ii line at different heights from \(\log\tau=-3\) to \(-6\) (panels b–h), temperature stratification along cut A and cut B in a 2D \(x\)–\(\log\tau\) display (panels i and j). The black lines in panels i and j correspond to the photospheric continuum intensity along cuts A and B, respectively. Bottom row: same as above but for the temperature stratification derived from the IRIS Mg ii lines. The temperature maps have been scaled to their respective color bars shown above the panels where the values in parentheses correspond to the Mg ii lines.
that are associated with loops rooted at/close to the LB (Figure 17 in the Appendix). The line widths estimated from the single Gaussian fit to the IRIS lines clearly trace the structure of the LB with values of 50 km s\({}^{-1}\), 25 km s\({}^{-1}\), and 30 km s\({}^{-1}\) in the Mg ii, C ii, and Si iv lines, respectively, on May 14. These values remain nearly the same on May 15 for the Mg ii line while there is a marginal increase of about 5 km s\({}^{-1}\) for the C ii and Si iv lines. The enhanced, persistent brightness in the LB can also be seen in the AIA 171 A and 211 A images, which reflect conditions at transition region and coronal temperatures. The enhanced intensity and line widths of the IRIS lines are also observed the following day on May 16, at 03:00 UT. The photospheric LB morphology can thus be traced up to the transition region and corona.
### Velocities with Height in LB
Figure 6 shows the velocity in the LB in the photosphere, chromosphere, and transition region. At the photosphere, the granular LB does not exhibit any strong red- or blueshifts with velocity values ranging between \(\pm 0.35\) km s\({}^{-1}\). The velocities in the penumbra, however, are much stronger with the Evershed flow reaching 2 km s\({}^{-1}\). The chromospheric velocities obtained from the inversion of the MAST Ca ii and IRIS Mg ii k & h lines show that the LB is weakly red-shifted by a few km s\({}^{-1}\) which nearly remains the same
Figure 5: Maps of the temperature, peak intensity, and line width in the LB as a function of height. Top row, from left to right: HMI continuum intensity (panel a), temperature derived from the MAST Ca ii line at \(\log\tau=-4\), \(-5\), \(-6\) (panels b–d), and temperature derived from the IRIS Mg ii line at \(\log\tau=-4\), \(-5\), \(-6\) (panels e–g). The temperature maps have been scaled to the corresponding color bars above the respective panels, with the numbers from the top to the bottom row below the color bar representing \(\log\tau=-6\), \(-5\), and \(-4\), respectively. The temperature color bar for the IRIS Mg ii line is similar, with the numbers in the parentheses corresponding to the observations on 2019 May 15 at 11:57 UT for the maps in the second row. Second row: the same as above on 2019 May 15 at 11:57 UT. Third row: maximum line intensity from the IRIS Mg ii line, C ii line, Si iv line (panels h–j), line width from the IRIS Mg ii line, C ii line, Si iv line (panels k–m), and AIA intensity at 171 Å(panel n) on 2019 May 14. Bottom row: the same as above on 2019 May 15 at 11:57 UT. The black contours correspond to the HMI continuum intensity and outline the LB.
at heights of \(\log\tau<-5\). The velocities obtained from the Gaussian fits to the IRIS lines (panels j-l) reveal that the LB is predominantly redshifted with values of 0.5 km s\({}^{-1}\), 2 km s\({}^{-1}\), and 10 km s\({}^{-1}\) in the Mg ii, C ii, and Si iv lines, respectively, on May 14. The Mg ii line was fitted with a double Gaussian while the C ii, and
Figure 6: Maps of LOS velocity in the LB as a function of height. Top row, from left to right: Hinode continuum intensity (panel a), Hinode LOS velocity (panel b), LOS velocity derived from the MAST Ca ii line at \(\log\tau=-4\), \(-5\), \(-6\) (panels c–e), MAST Ca ii Dopplergram (panel f), LOS velocity derived from the IRIS Mg ii line at \(\log\tau=-4\), \(-5\), \(-6\) (panels g–i), Doppler shift from the IRIS Mg ii line, C ii line, Si iv line (panels j–l) and AIA intensity at 171Å(panel m) on 2019 May 14. The velocity maps have been scaled to the corresponding color bars above the respective panels. The black contours, which outline the LB, correspond to the Hinode continuum intensity. Bottom row: the same on 2019 May 15.
Figure 7: Maps of LOS velocity in AR 12741 as a function of height. From left to right: Hinode continuum intensity (panel a), Hinode LOS Velocity (panel b), MAST Ca ii Dopplergram (panel c), Doppler shift from the IRIS Mg ii line, C ii line, Si iv line (panels d–f) and AIA intensity at 171Å(panel g). The velocity maps have been scaled to the corresponding color bars above the respective panels. The black contours, which outline the LB, correspond to the Hinode continuum intensity. The black squares in panels 1g and 2g indicate the possible location of the outer footpoints of coronal loops that connect to the surroundings of the LB.
Si iv lines were fitted with a single Gaussian. While these redshifts in the IRIS lines persist in the LB on May 15, the values are increased to \(1.5\,\mathrm{km\,s^{-1}}\), \(5\,\mathrm{km\,s^{-1}}\), and \(15\,\mathrm{km\,s^{-1}}\), respectively. The umbra in particular exhibits the saw-tooth pattern typically associated with shocks as seen in the Mg ii and C ii lines.
While the single Gaussian fits to the IRIS lines only show weak redshifts in the LB, an inspection of the spectra emanating from the region in and around the LB reveals supersonic redshifts of about \(150\,\mathrm{km\,s^{-1}}\) at the extended structure just south of the LB or next to it on May 14. These strong redshifts are associated with the large-scale loops ending in the sunspot (panels 7B-7E of Figure 17 in the Appendix). These spectra have been fitted with double Gaussians for all the lines, which are also shown in Figure 17. The high-speed downflows are primarily observed in the Si iv line and to a very small extent in the C ii line (panel 4C). However, the downflows do not appear to persist in time and are greatly reduced as evident in the raster scans on May 14 at 13:21 UT as well as on May 15 at 11:57 UT. Figure 17 also shows the enhanced line width over the LB in all the IRIS lines, which remains a characteristic feature over the course of \(36\,\mathrm{hr}\).
Figure 7 shows the spatial distribution of velocities over the FOV of the AR. As stated earlier, the strongest velocities in the sunspot at the photosphere are associated with the Evershed flow, while in the chromosphere the inverse Evershed flow is observed in the superpenumbral region with values of about \(2\,\mathrm{km\,s^{-1}}\) and \(-1.3\,\mathrm{km\,s^{-1}}\) in the center-side and limb-side regions, respectively (panel 1c). In the IRIS C ii and Si iv the FOV is dominated by redshifts particularly in the network flux region and the arch-filament systems around the leading sunspot with values in the Si iv being the strongest and reaching about \(20\,\mathrm{km\,s^{-1}}\). On the other hand, blueshifts appear sporadically in patches across the arch-filament systems as well as in the large filament east of the sunspot that extends southward, where the velocities are about \(-3\,\mathrm{km\,s^{-1}}\) and \(-10\,\mathrm{km\,s^{-1}}\), respectively, as estimated from the Mg ii line (panel 2d).
Figure 8: Photospheric, chromospheric, transition region, and coronal morphology in and around AR 12741 from 2019 May 14–16. From left to right: G-band, H-\(\alpha\) images from MAST, AIA images in the 171 Å, and 211 Å channels. The white and black arrows represent arch-filament systems and the AR filament, respectively.
We also visually identified the footpoints of the AIA 171 A loops that begin from the LB and the sunspot and possibly end in the opposite polarity network flux region (squares in the panel).
The majority of the footpoints are dominated by redshifts or extremely weak blueshifts apart from square A in panel 1d with a blueshift of about \(-5\) km s\({}^{-1}\). The transition region lines C ii and Si iv show dominantly redshifts all the time. The same trend is seen in the velocity maps on May 15. However, unlike on May 14, the visible loops that begin at or close to LB do not appear to terminate at the opposite polarity network flux region.
### Global Topology and Morphology of AR
We now discuss the large-scale structures in the chromosphere, transition region, and corona in the context of the temporal stability of the LB. Figure 8 shows that there are several small arch-filament systems north of the leading sunspot that connect to the following polarity. In addition, a large filament located east of the sunspot extends southward beyond the MAST H\(\alpha\) FOV and arches around to the west back toward the AR. These arch-filament systems as well as the large AR filament also remain stable over a period of 36 hr. The figure also shows that while the LB remains persistently bright in the AIA 171 A and 211 A images, there are large-scale loops that are always rooted at one end of the LB or close to it, which is seen from May 14 to May 16. In addition, these large-scale loops extend over and above the AR filament as observed on May 14.
The global topology of the AR is further demonstrated by the extrapolation of the photospheric magnetic field using the NFFF technique, as shown in Figure 9 for May 14. The sunspot magnetic field is consistent with a simple bipolar structure without any discernible signatures of twist. Field lines from the eastern penumbral region of the sunspot connect to the opposite polarity network flux region (black dotted rectangle in the figure) while those from the umbra and the western part
Figure 10: Magnified view of magnetic topology around the sunspot LB.
Figure 9: Global magnetic topology of NOAA AR 12741 from a NFFF extrapolation method using the HMI vector magnetic field on May 14 at 04:24 UT. The bottom boundary from left to right corresponds to the vertical component of the magnetic field from HMI, the GONG H-\(\alpha\) filtergram, and an AIA 171Å, image, respectively. The black dotted rectangle encloses magnetic field lines from the sunspot that connect to the network region of opposite polarity.
of the sunspot comprise open field lines. The average height of the closed field lines connecting the sunspot to the following polarity is about 12.3 Mm, while field lines starting from the inner penumbra can reach heights of up to 30 Mm. An estimation of the loop height was independently made when the sunspot was very close to the western limb on May 19. An unsharp masked AIA 193 A image provided a side view of the loops along the sky plane from which a height of about 13 Mm was estimated (see Figure 18 in the Appendix), which is in good agreement with those calculated from the extrapolated field lines. A zoom-in of the LB FOV (Figure 10) shows the field being nearly vertical at the central part while toward the western end the field lines fan out with height. The LB thus seems to be related to features of the AR magnetic topology that govern its large-scale shape and
Figure 11: Evolution of the LB as seen in the transition region and corona. The top, middle, and bottom panels correspond to the HMI continuum intensity and AIA 171Å and 211Å channels, respectively. The small white square in the middle of the FOV represents the LB whose magnified image is shown in the inset in the top right corner of the panel. The larger AIA images are clipped between 20 and 1500 counts in both AIA channels. The insets on the other hand are scaled between 100–900 and 100–1000 counts in the 171 and 211Å channels, respectively.
evolution, at least in the sense of having their apparent footpoints in its vicinity.
Figure 11 shows the temporal evolution of the corona above the AR from May 13 to May 15. On May 13, a B3.5 class flare occurred at 15:02 UT, with the peak at 15:52 UT. The flare involved the eruption of the large AR filament south of the sunspot that was associated with a mass ejection that had a linear speed of 312 km s\({}^{-1}\), a position angle of 234\({}^{\circ}\) and was seen in the Large Angle Spectroscopic Coronagraph (LASCO; Brueckner et al., 1995) C2 FOV at 17:48 UT as obtained from the Cactus (Robbrecht et al., 2009) catalog4. The erupting filament and the associated coronal dimming can be seen in the lower part of panel 2 of Figure 11. The bright ribbon from the flare stretches all along the LB and stands out from the rest of the sunspot (panels 2b and 2c of Figure 11). Similarly, the post flare loops from the ensuing eruption are rooted along the extended network flux region in the following group of sunspots of the AR while the other end is more confined and located at the eastern end of the LB (panel 3 of Figure 11). There were no flares in the AR on May 14; however, small-scale coronal activity was observed over the LB later in the day at around 17:30 UT (panel 8). There were four flares on May 15, which included a C2.0 class flare and three weak B-class flares. However, none of these flares were eruptive and primarily involved the arch filament close to the northeastern periphery of the sunspot where one of the flare ribbons was seen. The other set of ribbons was located in the opposite polarity network flux region.
Footnote 4: [https://wwwbis.sidc.be/cactus/](https://wwwbis.sidc.be/cactus/)
The AIA images clearly show the enhanced intensity over the LB as well as the presence of loops at or close to it, both of which persist over a duration of 3 days. Especially in panels 8-12 of Figure 11, both AIA channels at 171A and 211A closely mimic the photospheric shape of the LB but at transition region heights.
### Energy Budget from Various Mechanisms
In this section we compare the energetics of the various mechanisms/processes that could contribute to the sustained heating over the LB for the duration of the observations.
Figure 12: Emission Measure estimated from the AIA images using the 171, 211, 193, 335, 131, 94 Å channels at different temperatures. The top and bottom panels correspond to 2019 May 14 at 00:24 UT and 2019 May 15 at 11:57 UT. The magnified view of the light bridge is shown in the inset in the top left corner. The thin black contours correspond to the HMI continuum intensity.
_(i) Thermal energy in the EUV:_ The thermal energy emanating in the LB at EUV wavelengths can be estimated as
\[E_{th}=3n_{e}k_{b}Tl^{3}, \tag{1}\]
where \(n_{e}\), \(k_{b}\), \(T\), and \(l\) are the electron density, Boltzmann constant, temperature, and length scale over which the brightening in the LB is observed, respectively. To ascertain the temperature in the LB, we calculate the Differential Emission Measure (Cheung et al., 2015) from the various AIA channels, namely, 171, 211, 193, 335, 131, 94 A. Figure 12 shows the emission measure (EM) at various temperatures, where the LB clearly stands out between \(6.2\leq\log T\leq 6.4\). The electron density \(n_{e}\) can then be estimated as \(n_{e}\approx\sqrt{EM/l}\). The value of the length scale \(l\) is estimated from the volume, using the area of the LB (\(25.8\) Mm\({}^{2}\) and \(35.7\) Mm\({}^{2}\) on May 14 and 15, respectively) and a vertical height of 4 Mm. The electron density \(n_{e}\) was computed using the mean DEM over the LB area and averaged over a temperature range of \(\log T=6.2\) to 6.4. With \(T=2.5\) MK, and \(n_{e}=1.8\times 10^{9}\) cm\({}^{-3}\) (\(2.5\times 10^{9}\) cm\({}^{-3}\)), we obtain \(E_{th}=1.9\times 10^{26}\) (\(3.7\times 10^{26}\) erg) in the LB on May 14 (15).
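As a sanity check, the following minimal Python sketch evaluates Eq. (1) with the quoted inputs; the only assumption beyond the text is that \(l^{3}\) is taken as the LB area times the 4 Mm vertical height, in CGS units.

```python
K_B = 1.380649e-16   # Boltzmann constant [erg K^-1]
MM_CM = 1e8          # 1 Mm in cm

def thermal_energy_euv(area_Mm2, height_Mm, n_e, T):
    """E_th = 3 n_e k_B T V (Eq. 1), with the volume V = area x height."""
    volume = area_Mm2 * height_Mm * MM_CM**3   # cm^3
    return 3.0 * n_e * K_B * T * volume        # erg

print(thermal_energy_euv(25.8, 4.0, 1.8e9, 2.5e6))  # May 14: ~1.9e26 erg
print(thermal_energy_euv(35.7, 4.0, 2.5e9, 2.5e6))  # May 15: ~3.7e26 erg
```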
_(ii) Thermal Energy in the Visible and NUV:_ The enhancement in internal energy can be computed from the temperature stratification derived from the inversion of the MAST Ca ii and IRIS Mg ii lines following Beck et al. (2013b).
\[\Delta E_{int}=\frac{R}{\mu(\gamma-1)}\Delta A\sum_{i=z0}^{z1}\rho_{i}\Delta z_{i}\sum_{j,k}\left(T_{i,j,k}^{lb}-\overline{T}_{i}^{umb}\right), \tag{2}\]
where \(R=8.31\) J mol\({}^{-1}\) K\({}^{-1}\), \(\mu=1.3\) g mol\({}^{-1}\), \(\gamma=5/3\), \(\rho\) is the gas density, \(\Delta A\) is the area of the pixel, \(T^{lb}\) is the temperature in the LB, and \(\overline{T}^{umb}\) is the average temperature in the umbra. The summation with index \(i\) is carried out from \(z0\) to \(z1\), which are the geometric heights at \(\log\tau=-4\) and \(\log\tau=-6\), respectively. The values of the geometrical height and gas density \(\rho_{i}\) at different optical depth points were taken from the Harvard Smithsonian Reference Atmosphere (HSRA; Gingerich et al., 1971). Indices \(j\) and \(k\) correspond to the spatial domain, while \(\Delta z_{i}\) is the geometric height spacing between adjacent optical depth points. Equation 2 yields \(4.7\times 10^{25}\) erg and \(8.6\times 10^{25}\) erg for the thermal energy from the Ca ii and Mg ii lines, respectively. \(E_{th}\) is the enhancement in thermal energy over the surroundings of the LB, not the total energy. As it persists for days in a stationary way and the chromospheric relaxation time is on the order of a few minutes, the excess energy losses that lead to the enhancement must be replenished all the time by a continuing heating process.
_(iii) Kinetic energy associated with LB expansion:_ Figure 13 shows the temporal evolution of the sunspot flux and area, both of which decrease nearly linearly with time. The area of the LB on the other hand shows an increase with time, with a linear fit yielding a value of \(5.7\) Mm\({}^{2}\)day\({}^{-1}\). Using the width of the LB of \(3.3\) Mm on May 15, we can associate the linear expansion speed (\(v_{frag}\)) of the LB to the kinetic energy as
\[E_{kin}=0.5Ad\rho v_{frag}^{2}, \tag{3}\]
where \(\rho\) is the photospheric density, \(A\) is the area of the LB, and \(d\) is the depth to which the convective structure extends, which we assume to be 6 Mm (Rempel et al., 2009). Using the above area increase and width of the LB, we obtain a value of \(20\) m s\({}^{-1}\) for \(v_{frag}\). We express the area and the density as functions of depth, namely, \(\rho(z)=\rho_{s}\exp\left(z/\tau_{\rho}\right)\) and \(A(z)=A_{s}\exp\left(-z/\tau_{B}\right)\), where the suffix \(s\) stands for the surface/photosphere and the scale heights in the two expressions correspond to the density and magnetic field. The values for \(\tau_{\rho}\) and \(\tau_{B}\) are 0.5 Mm and 2 Mm, respectively, while \(\rho_{s}\) and \(A_{s}\) are \(10^{-7}\) g cm\({}^{-3}\) and \(35.7\) Mm\({}^{2}\), respectively. The kinetic energy can then be expressed as
\[E_{kin}=\int_{0}^{d}0.5A(z)\rho(z)v_{frag}^{2}dz, \tag{4}\]
Figure 13: Temporal evolution of the magnetic flux and area of the sunspot and light bridge. The y-axes on the left and right correspond to the flux and area, respectively. The light bridge flux and area have been enhanced by a factor 5 for better visibility.
Using the above values, we obtain \(E_{kin}=3.9\times 10^{28}\) erg. For comparison, the convective energy of solar granulation with an _rms_ velocity of 0.5 km s\({}^{-1}\)(Beck et al., 2009, 2013) over the same area as the LB and using Eqn. 4, is about \(2.4\times 10^{31}\) erg, which is nearly 3 orders of magnitude larger than the one from the expansion in the LB.
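A short numerical check of Eq. (4) under the stated profiles is sketched below (CGS units); the granulation comparison simply replaces \(v_{frag}\) by the 0.5 km s\({}^{-1}\) rms velocity over the same area.

```python
import numpy as np

MM = 1e8                           # 1 Mm in cm
rho_s, A_s = 1e-7, 35.7 * MM**2    # photospheric density [g cm^-3], LB area [cm^2]
tau_rho, tau_B = 0.5 * MM, 2 * MM  # density and field scale heights [cm]
d = 6 * MM                         # depth of the convective column [cm]

def e_kin(v_cm_s):
    """Trapezoidal evaluation of E_kin = int_0^d 0.5 A(z) rho(z) v^2 dz (Eq. 4)."""
    z = np.linspace(0.0, d, 200_001)
    f = 0.5 * A_s * np.exp(-z / tau_B) * rho_s * np.exp(z / tau_rho) * v_cm_s**2
    return np.sum((f[:-1] + f[1:]) * np.diff(z) / 2.0)

print(e_kin(20e2))    # v_frag = 20 m/s          -> ~3.9e28 erg
print(e_kin(0.5e5))   # granular rms = 0.5 km/s  -> ~2.4e31 erg
```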
_(iv) Energy related to magnetic flux loss/gain:_ As seen earlier, the sunspot loses magnetic flux at a rate of \(\phi_{t}=1.8\times 10^{19}\) Mx hr\({}^{-1}\), which was derived from a linear fit to the flux curve in Figure 13. Similarly, the rate of flux increase in the LB is about \(3.1\times 10^{18}\) Mx hr\({}^{-1}\). The magnetic flux in the LB increases because its area increases and the HMI data show a non-zero magnetic flux at those places. The energy related to the loss of flux can be expressed as
\[E_{flux}=\frac{1}{8\pi}\frac{(\phi_{t}t)^{2}}{h}, \tag{5}\]
where \(t\) is the time scale over which the flux lost/gained can supply the energy (\(\sim\)10 min) and \(h\) is the chromospheric heating height scale (\(\sim\)500 km; Chitta et al., 2018). The loss of flux in the sunspot provides an energy of \(7.5\times 10^{27}\) erg, while that gained in the LB is about \(2.1\times 10^{26}\) erg.
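These numbers follow directly from Eq. (5); the short sketch below evaluates it with the quoted rates, \(t=10\) min, and \(h=500\) km (CGS units, result in erg), recovering the quoted energies to within the rounding of the fitted flux rates.

```python
import math

def e_flux(phi_rate_Mx_hr, t_hr=10.0 / 60.0, h_cm=5e7):
    """E_flux = (phi_t * t)^2 / (8 pi h) (Eq. 5), result in erg."""
    phi = phi_rate_Mx_hr * t_hr           # flux lost/gained over time t [Mx]
    return phi**2 / (8.0 * math.pi * h_cm)

print(e_flux(1.8e19))   # sunspot flux-loss rate -> ~7e27 erg
print(e_flux(3.1e18))   # LB flux-gain rate      -> ~2.1e26 erg
```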
_(v) Free fall energy:_ The free fall energy of plasma draining down a loop from a height \(h\) in the corona can be expressed as
\[E_{fall}=\rho Ag_{\odot}h^{2}, \tag{6}\]
where \(A\) is the area of the LB, and \(\rho\) is the coronal gas density. The loop height \(h\) as derived from the extrapolations is about 12.3 Mm. Using \(\rho=5\times 10^{-11}\) kg m\({}^{-3}\), \(g_{\odot}=273.7\) m s\({}^{-2}\), and \(A\) of 25.8 Mm\({}^{2}\) and 35.7 Mm\({}^{2}\) on May 14 and 15, respectively, \(E_{fall}\) is estimated to be about \(6.3\times 10^{26}\) erg and \(8.7\times 10^{26}\) erg on May 14 and 15, respectively.
The energy estimates from the different physical mechanisms that could be the sources are summarized in Table 2.
### Estimates of Radiative Losses
The top panel of Figure 14 shows Ca ii spectra and their synthetic fits from different regions in the FOV, i.e., the umbra, LB, QS, and magnetic network. The excess radiative loss in the LB with respect to the QS is about 0.14 kW m\({}^{-2}\) ster\({}^{-1}\) as derived from the calculation described in Sect. 3.5. In comparison, the ex
\begin{table}
\begin{tabular}{c|c c} \hline \hline
\multirow{2}{*}{Mechanism} & \multicolumn{2}{c}{Energy (erg)} \\ \cline{2-3}
 & May 14 & May 15 \\ \hline \hline
Thermal Energy in EUV & \(1.9\times 10^{26}\) & \(3.7\times 10^{26}\) \\ \hline
Thermal Energy in UV \& & \(8.6\times 10^{25}\) & – \\
Visible & \(4.7\times 10^{25}\) & – \\ \hline
Total Thermal Energy & \(3.2\times 10^{26}\) & – \\ \hline
Total Chromospheric & & \\
Radiative Loss\({}^{\rm a}\) & & \\ \hline \hline
Kinetic Energy & \(3.9\times 10^{28}\) & – \\ \hline
Magnetic Flux & \(7.5\times 10^{27}\) [S] & – \\
 & \(2.1\times 10^{26}\) [LB] & \\ \hline
Freefall & \(6.3\times 10^{26}\) & \(8.7\times 10^{26}\) \\ \hline \hline
\end{tabular}
\({}^{\rm a}\) See Sect. 4.7
\end{table}
Table 2: Summary of energy estimates from different mechanisms for May 14 and May 15.
Figure 14: Top panel: MAST Ca ii spectra from different locations in the FOV. The symbols and the solid lines correspond to the observed and synthetic profiles, respectively. Bottom panel: observed IRIS Mg ii k & h lines for the same locations. The dashed vertical lines mark the wavelength region within which the excess radiative losses were calculated. Their values for the network (NW) and the LB are given inside the panels.
cess loss in the network region is about 1.6 times higher at 0.23 kW m\({}^{-2}\) ster\({}^{-1}\). The excess radiative losses in the LB over the QS as estimated from the Mg ii k & h lines are 0.37 kW m\({}^{-2}\) ster\({}^{-1}\) and 0.28 kW m\({}^{-2}\) ster\({}^{-1}\), respectively (bottom panel of Figure 14). The spectral synthesis of the LB and QS temperature stratifications yields excess radiative losses in the LB that are a factor of 2-3 smaller than for the observations (second row in Table 3) with, e.g., 0.1 kW m\({}^{-2}\) ster\({}^{-1}\) for Mg ii h and 0.09 kW m\({}^{-2}\) ster\({}^{-1}\) for Ca ii IR at 854 nm. The difference to the observations is presumably caused by the assumed density stratification in the synthesis and the lack of 3D radiative transfer. The direct observations can be taken to be more accurate in this context.
The factors for the radiative loss in the LB over the QS for the Ca ii K & H lines and Ly\(\alpha\) are 0.65, 0.46, and 0.08 times the Ca ii IR triplet (3\(\times\)Ca ii IR at 854 nm), respectively, using the values from the third row of Table 3. Adding the contributions from all the spectral lines in the first row of Table 3 and integrating over the solid angle of 4\(\pi\), the total chromospheric radiative loss in the LB is about 19.7 kW m\({}^{-2}\) in excess of the QS. In terms of energy the above value translates into 3\(\times\)10\({}^{26}\) erg using the LB area of 25.8 Mm\({}^{2}\) and a time scale of 1 min, where the latter takes the chromospheric relaxation time (Beck et al., 2008) into account, given that the heating in the LB is persistent over days and thus needs to be replenished continuously. For comparison, the total thermal energy in the EUV (1.9\(\times\)10\({}^{26}\) erg), UV (8.6\(\times\)10\({}^{25}\) erg), and visible (4.7\(\times\)10\({}^{25}\) erg) on May 14 (refer Table 2), is about 3.2\(\times\)10\({}^{26}\) erg together, which gives a close match to the energy in the radiative losses that are a sink of energy.
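The bookkeeping behind these two numbers can be reproduced directly from the first row of Table 3, treating Ca ii IR as three times the 854 nm value and using the stated LB area and 1 min relaxation time; a minimal sketch follows (the small difference from 19.7 kW m\({}^{-2}\) reflects rounding of the tabulated losses).

```python
import math

# Excess radiative losses of the LB over the QS [kW m^-2 ster^-1] (Table 3, row 1)
losses = {"Mg II k": 0.37, "Mg II h": 0.28, "Ca II K": 0.27, "Ca II H": 0.19,
          "Ca II IR triplet": 3 * 0.14, "Ly alpha": 0.03}

per_ster = sum(losses.values())   # ~1.56 kW m^-2 ster^-1
total = 4 * math.pi * per_ster    # ~19.6 kW m^-2, integrated over the full solid angle

area_m2 = 25.8e12                 # LB area on May 14 [m^2]
t_relax = 60.0                    # chromospheric relaxation time [s]
energy_erg = total * 1e3 * area_m2 * t_relax * 1e7
print(total, energy_erg)          # ~19.6 kW m^-2, ~3e26 erg
```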
Figure 15 shows representative spectra of the Ca ii 854.2 nm line from a few other phenomena such as an Ellerman bomb (EB), a flare ribbon, and a case of ohmic heating in an LB for comparison, whose radiative losses are also listed in Table 3. The radiative loss in an EB is about 1.5 kW m\({}^{-2}\) ster\({}^{-1}\)(Rezaei & Beck, 2015), while that of a flare ribbon is about 0.89 kW m\({}^{-2}\) ster\({}^{-1}\). Similarly, ohmic dissipation in an LB comprises about 0.35 kW m\({}^{-2}\) ster\({}^{-1}\)(Louis et al., 2021), which is about 2.5 times greater than the value obtained for the LB under investigation. The current case of continuous long-term heating is thus at the lower end of the energy range of more short-lived and dynamic events.
## 5 Discussion
### LB Properties and Evolution
The granular LB that formed in a regular, unipolar sunspot remained stable until the AR traversed the solar limb. During this time, the host sunspot did not fragment nor were there any large-scale changes in the magnetic structure of the AR. Measurable changes were seen in the flux of the sunspot and the area of the LB, which decreased and increased by \(1.7\times 10^{21}\) Mx and 22 Mm\({}^{2}\), respectively, over the course of four days. While overturning convection is prevalent in the LB at photospheric heights, the LB is heated over a large temperature range, from 8000 K to 2.5 MK, spanning the chromosphere to the low corona, and this heating is sustained for more than two days. The signatures of this heating are seen in the temperature maps of the chromospheric Ca ii and Mg ii spectral lines, the peak amplitude and line width of the IRIS C ii and Si iv lines, which form at temperatures of 30,000 K and 65,000 K, respectively, and the emission measure derived from the different AIA channels. The enhanced intensities and line widths of the IRIS lines in the LB are in good agreement with Rezaei (2018). The persistent heating over the LB is counterintuitive as the underlying structure would radiate the majority, if not all, of its energy once it has evolved into a strongly convective region inside the sunspot. We now discuss the possible mechanisms that can provide the necessary energy to sustain the enhanced temperature over the LB for more than two days.
### LB Energy Budget
_Thermal Energy and Radiative Losses_--An estimate of the thermal energy in the visible, NUV, and EUV yields about \(3.2\times 10^{26}\) erg using the values on May 14 as shown in Table 2. The chromospheric temperature in the LB is enhanced compared to its immediate umbral surroundings and even with respect to QS conditions. The en
Figure 15: Illustrative spectra of different phenomena (solid lines). The black dashed line represents the QS profile. The vertical red dotted lines at \(\pm 2\) Å are the wavelength regions within which the radiative loss for the EB was calculated, while the vertical black dotted lines correspond to the range used for the flare ribbon and ohmic dissipation.
-hanced temperature persists over a few days in a similar way. The heating process has thus lifted the temperature to a new, higher energetic equilibrium that is maintained in time.
The estimate of the chromospheric instantaneous excess energy losses over the LB area yields \(3\times 10^{26}\) erg, which gives a close match to the predicted losses from the temperature excess. This confirms the presence of a new stationary equilibrium at a higher energy level than, e.g., in the QS. In comparison to typical values for other chromospheric heating events such as ohmic dissipation (Louis et al., 2021), flare ribbons (Yadav et al., 2022) or an Ellerman bomb (Rezaei and Beck, 2015), the current radiative losses are found to be at the low end of the range. Apart from ohmic dissipation, the other types of heating events are generally impulsive and short-lived, with time scales of only a few tens of minutes and spatially localized heating sources from reconnection (Georgoulis et al., 2002) or particle beams (Kleint et al., 2016). While flare ribbons can cover areas similar to the current LB, the heating process that causes its enhancement must be both spatially extended and long-lived, albeit at a 6-10 times lower heating rate than for the more impulsive events.
_LB Expansion and Freefall Acceleration_--The kinetic energy associated with the LB expansion is 1-2 orders of magnitude higher than the net thermal energy. This mechanism is related to photospheric dynamics that can lead to the buildup of energy in the corona. Mackay et al. (2011) showed that photospheric footpoint motions in a decaying AR could store enough free magnetic energy in the corona to compensate for the radiative losses. Using idealized numerical models, Hurlburt et al. (2002) showed that by directly coupling compressible magneto-convection at the bottom layer to a low plasma-\(\beta\) region above, the overlying corona could be heated by the Poynting flux emerging from the upper boundary. The LB expansion continues on the same temporal and spatial scales as the heating process over days. The heating seems to be localized above the LB area also in the higher atmosphere, which indicates a correlation with the photospheric spatial pattern. It is unclear, however, how the photospheric mechanical energy would be transported upward and deposited locally in the chromosphere and transition region in the current case.
Freefall acceleration of plasma along loops (Schad et al., 2021) connecting the opposite polarity network flux to the LB or close to it in the sunspot could also provide sufficient energy to sustain the temperature over the LB. The long-lived presence of loops at or near the LB and the persistent brightness in the AIA channels suggest that this could also be a likely possibility. However, with the exception of a few locations in the umbra next to the LB, we do not see any strong redshifts in the C ii or Si iv lines that would indicate plasma draining down the loop from a height of 12 Mm. Coronal rain was also found to be often rather intermittent in nature without continuous flows (Antolin et al., 2012; Li et al., 2021).
Furthermore, we do not unambiguously detect blueshifts at the other end of the loops, which would provide evidence for a siphon flow (Cargill and Priest, 1980; Straus et al., 2015; Prasad et al., 2022). The LB along with the network flux region are redshifted, which raises questions on how any flow would be sustained for a period of days. While the Doppler shift measurements in the LB do not reveal high-speed downflows all the time, the enhanced intensity as well as line width are maintained in the LB for over two days.
_Magnetic Flux Losses and Field-related Heating_--The energy corresponding to the magnetic flux loss of the sunspot or the apparent gain of magnetic flux above the LB also matches the energy requirements from
\begin{table}
\begin{tabular}{c|l|c|c c c c c c c} \hline
No. & Type & \(\Theta\) (\({}^{\circ}\)) & Reference & \multicolumn{6}{c}{Excess Radiative Loss \(\Phi_{rad}\) (kW m\({}^{-2}\) ster\({}^{-1}\))} \\ \cline{4-10}
 & & & Mg ii k & Mg ii h & Ca ii K & Ca ii H & Ca ii IR & L\(\alpha\) & H\(\alpha\) \\ \hline
\multirow{2}{*}{1.} & Sustained Heating in LB & \multirow{2}{*}{17} & \multirow{2}{*}{Current Study} & 0.37\({}^{\rm o}\) & 0.28\({}^{\rm o}\) & 0.27\({}^{\rm s}\) & 0.19\({}^{\rm s}\) & 0.14\({}^{\rm o}\) & 0.03\({}^{\rm s}\) \\
 & Spectral Synthesis in LB & & & 0.1 & 0.08 & 0.1 & 0.1 & 0.09 & 0.01 \\ \hline
2. & Polarity Inversion Line & 52 & Yadav et al. (2022) & 0.28 & 0.25 & 0.6 & 0.43 & 0.31 & 0.07 \\ \hline
3. & Ohmic Dissipation in LB & 13 & Louis et al. (2021) & & & & & 0.35 & \\ \hline
4. & Flare Ribbon & 52 & Yadav et al. (2022) & 2.23 & 1.98 & 1.76 & 1.34 & 0.78 & 0.57 & \\
5. & Flare Ribbon & 29 & IBIS DST 2014/10/24 & & & & 0.89 & & 2.17 \\ \hline
6. & Ellerman Bomb & 70 & Rezaei and Beck (2015) & & & & 11 & 1.5 & 4 \\ \hline \end{tabular}
\({}^{\rm o}\): derived from observations; \({}^{\rm s}\): derived from scaling factors from the second row
\end{table}
Table 3: Estimates of radiative losses in different phenomena in excess of the QS. Ca ii IR refers to the spectral line at 854.2 nm.
the radiative losses and the thermal energy. They occur continuously on the same time scale as the persistent LB heating. The total loss of magnetic flux of the sunspot would, however, not match the required localized heating with a preferred occurrence above the LB area, while the apparent gain of magnetic flux above the LB area would require a nearly complete conversion of magnetic to thermal energy to balance the radiative losses. Harra and Abramenko (2012) showed that magnetic flux dispersal at the photosphere is important for the release of nonthermal energy in the corona for a decaying AR that was devoid of flux emergence. The photospheric interactions of a bipole containing a flux of \(2\times 10^{18}\) Mx with an overlying field via cancellation, emergence, and relative photospheric motions could dissipate about \(1.3\)-\(3.2\times 10^{26}\) erg in the corona over a time interval of 100 minutes (Meyer et al., 2012). Using the magnetofrictional approach of Yang et al. (1986), Meyer et al. (2013) evolved the corona through a sequence of nonpotential, quasi-static equilibria using photospheric LOS magnetograms at the bottom boundary. They found that the storage of energy and its subsequent dissipation in the quiet corona occurred at a mean rate of \(8.7\times 10^{4}\) erg cm\({}^{-2}\) s\({}^{-1}\), and produced dark and bright features similar to those in EUV images. The so-called braiding of magnetic flux by random photospheric motions (Parker, 1972; Peter et al., 2004) could set in at the boundary layer between the presumably field-free overturning convection in the LB and the surrounding umbral magnetic fields. For the case of the current LB, the resulting heating would, however, have to set in very low in the atmosphere starting at chromospheric levels.
_Wave Heating--_As the LB exhibits convective motions that are possibly rooted quite deep, magneto-acoustic waves could dissipate a part of their energy in the higher atmospheric layers (Ulmschneider et al., 1978; Kalkofen, 2007; Khomenko and Cally, 2012; Kayshap et al., 2018). The acoustic energy flux estimated from a number of chromospheric lines shows that at least in the quiet Sun, it is able to balance the radiative losses at heights between 900 and 2200 km (Abbasvand et al., 2020, 2020). However, in active regions the acoustic flux balances only 10-30% of the radiative losses (Abbasvand et al., 2021). This is also in agreement with previous studies by Beck et al. (2009). Based on the above, and coupled with the lack of time series IRIS observations with high temporal cadence, one can argue that the residual acoustic flux would only have a minor, if not negligible, contribution in heating the LB to transition region and coronal temperatures. On the other hand, Alfven waves have been proposed as possible energy transporters that could heat the upper atmospheric layers (Osterbrock, 1961; Stein, 1981; van Ballegooijen et al., 2011; Sakaue and Shibata, 2020) with direct evidence for an energy deposit in the chromosphere in Grant et al. (2018). Thus, we cannot rule out the possibility of Alfven waves heating the chromosphere and transition region above the LB, with the caveat that the nearly vertical smooth magnetic field would make a mode conversion and subsequent energy deposit difficult.
_Ohmic Dissipation--_Ohmic dissipation in the LB could arise from electric currents due to the presence of weak magnetic fields inside the sunspot. Recently, Louis et al. (2020) reported the bodily emergence of horizontal magnetic fields along an LB that comprised strong blueshifts all along the LB lasting for a period of 13 hr. The emergence of flux rendered strong electric currents leading to ohmic dissipation that was accompanied by large temperature enhancements in the chromosphere above the LB (Louis et al., 2021). A similar observation of blueshifts and chromospheric emission was seen during the emergence of a small-scale, bipolar loop in a granular LB (Louis et al., 2015). However, the granular LB analyzed here did not comprise any strong or significant velocities in the photosphere, and the photospheric currents were very weak or negligible. For ohmic dissipation to play a role in the chromosphere and above, the currents at the bottom boundary have to be the strongest for them to be significant in the higher layers, which is not the case here.
_Transient Chromospheric Events--_LBs are known to exhibit a range of transient phenomena, including jets (Louis et al., 2014), brightenings (Louis et al., 2008), and flares (Louis and Thalmann, 2021). There were several confined, albeit weak, flares originating in the AR on May 15 and an eruptive flare on May 13, which resulted in one of the flare ribbons, although compact, being co-spatial with the LB. However, there were no flares associated with the LB or the AR on May 14. The spatial coincidence of one of the flare ribbons along the LB and the ensuing post flare loops extending to the LB indicate the connectivity of the LB to the large-scale topology of the AR, which was destabilized by the erupting filament below. The association of an LB with the large-scale topology of an AR has also been observed by Guo et al. (2010), where repetitive surges from an LB led to the eruption of an adjacent filament. While the flares could play an important role in depositing energy in the higher layers of the LB, which would subsequently heat the lower layers, the strength of the flares and the rapid radiative cooling would not explain the persistent brightness of the LB over 48 hr. In addition, the IRIS SJ images do not indicate any discernible, small-scale,
reconnection-driven events during the raster scans, although we do not rule out the possibility that these could have been missed during the data gap. Even if there were small-scale ejections, they would be localized and would not explain the heating over the entire extent of the LB.
The case of sustained heating over a granular LB is not uncommon. Berger & Berdyugina (2003) reported a constant brightness enhancement over a granular LB using the 1600 A channel of the Transition Region and Coronal Explorer (TRACE; Handy et al., 1999). A C2.0 flare was also observed wherein one of the ribbons was co-spatial with the LB similar to the observations reported in this study. The authors attributed the persistent brightness to the stressed magnetic configuration at the LB that could lead to reconnection and energy dissipation. As stated above, the lack of electric currents rules out ohmic dissipation as the source(s) of heating over the LB.
### Possible Heating Process(es)
The energy estimates associated with the loss/gain of magnetic flux, increase in the LB area, and freefall acceleration exceed the thermal energy in the LB that matches the radiative losses. However, it remains unclear if one or a combination of the above processes are the primary source of heating over the LB. Only some of the possible processes match the necessary temporal (long duration) and spatial patterns (concentration on LB area). The heating rate is found to be lower than for other impulsive chromospheric heating events. A process related to the photospheric mechanical energy from either the LB expansion or the overturning convection inside the LB and its interaction with bordering magnetic fields seems to be more likely because a continuous driver over days is needed.
As pointed out by Rezaei (2018), LBs are multithermal structures, with diverse heating mechanisms supplying momentum and energy to different layers of the solar atmosphere. It remains an open question whether such a persistent heating over a large height range in a granular LB is indeed a generic phenomenon. In the current study, we could especially not determine whether the energy that heats the LB region at different heights is converted from other energy sources and deposited locally, or results from either an upward or downward energy transfer through the outer solar atmosphere.
## 6 Conclusions
The LB under investigation evolved sufficiently to exhibit overturning convection without fragmenting the regular, unipolar sunspot. Despite the absence of any large-scale flux emergence or apparent changes in the magnetic topology of the AR, the LB was associated with strong heating spanning a temperature range of 8000 K to 2.5 MK, which was maintained for more than two days. In addition to the persistent brightness, large-scale coronal loops are always rooted at or close to the LB. The estimated thermal energy from the EUV, NUV, and visible spectral regions is about \(3.2\times 10^{26}\) erg and lines up with estimates of the chromospheric radiative losses. The continued heating could be accounted for by one, or a combination of the following processes, namely, loss of magnetic flux, kinetic energy from the lateral expansion of the LB or overturning convection inside it, and freefall acceleration of plasma along coronal loops. The absence of strong electric currents in the LB rules out heating by ohmic dissipation. Further studies are needed to determine if such sustained heating is a general characteristic of sunspot LBs or whether the LB in this study is a rare exception.
###### Acknowledgements.
_The 0.5 m Multi-Application Solar Telescope is operated by the Udaipur Solar Observatory, Physical Research Laboratory, Dept. of Space, Govt. of India. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ(Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). The data in this article are courtesy of NASA/SDO and the AIA science team. SDO is a mission for NASA's Living With a Star (LWS) Program. Hinode SOT/SP Inversions were conducted at NCAR in the framework of the Community Spectro-polarimetric Analysis Center (CSAC; http:www.csac.hao.ucar.edu). We thank Ms. Bireddy Ramya and Ms. Anisha Kulhari at Udaipur Solar Observatory for assisting in the telescope operation and image acquisition. R.E.L. would like to thank Graham Barnes at NWRA for providing the latest version of the AMBIG code and Avjicet Prasad at the University of Oslo for carrying out the NFFF extrapolation. We thank J. Jenkins for help with the synthesis. One of the authors Debi Prasad Choudhary was supported by the National Science Foundation with grant number AGS 2050340. We would like to thank the referee for his/her suggestions.
|
2307.15835
|
Mean Estimation with User-level Privacy under Data Heterogeneity
|
A key challenge in many modern data analysis tasks is that user data are
heterogeneous. Different users may possess vastly different numbers of data
points. More importantly, it cannot be assumed that all users sample from the
same underlying distribution. This is true, for example in language data, where
different speech styles result in data heterogeneity. In this work we propose a
simple model of heterogeneous user data that allows user data to differ in both
distribution and quantity of data, and provide a method for estimating the
population-level mean while preserving user-level differential privacy. We
demonstrate asymptotic optimality of our estimator and also prove general lower
bounds on the error achievable in the setting we introduce.
|
Rachel Cummings, Vitaly Feldman, Audra McMillan, Kunal Talwar
|
2023-07-28T23:02:39Z
|
http://arxiv.org/abs/2307.15835v1
|
# Mean Estimation with User-level Privacy under Data Heterogeneity
###### Abstract
A key challenge in many modern data analysis tasks is that user data are heterogeneous. Different users may possess vastly different numbers of data points. More importantly, it cannot be assumed that all users sample from the same underlying distribution. This is true, for example in language data, where different speech styles result in data heterogeneity. In this work we propose a simple model of heterogeneous user data that allows user data to differ in both distribution and quantity of data, and provide a method for estimating the population-level mean while preserving user-level differential privacy. We demonstrate asymptotic optimality of our estimator and also prove general lower bounds on the error achievable in the setting we introduce.
## 1 Introduction
Many practical problems in statistical data analysis and machine learning deal with the setting in which each user generates multiple data points. In such settings the distribution of each user's data may be somewhat different and, furthermore, users may possess vastly different numbers of samples. This issue is one of the key challenges in federated learning (Kairouz et al., 2021), leading to considerable interest in models and algorithms that address it.
As an example, consider the task of next-word prediction for a keyboard. Different users typing on a keyboard may have different styles of writing or focus on different topics, leading to different distributions. There are aspects of the language that are common to all users, and likely additional aspects of style that are common to large groups of users. Thus while each user has their own data distribution, there are commonalities between the distributions, and additional commonalities amongst distributions corresponding to particular subsets of users. Modeling and learning such relationships between users' distributions is crucial for building a better global model for all users, as well as for personalizing models for users.
The focus of this work is on differentially private algorithms for such settings. We assume that there is an unknown global meta-distribution \(\mathcal{D}\). For each user \(i\), a personal data distribution \(\mathcal{D}_{i}\) is chosen randomly from \(\mathcal{D}\) (for example, by sampling a set of parameters that define \(\mathcal{D}_{i}\)). Each user then receives some number \(k_{i}\) of i.i.d. samples from \(\mathcal{D}_{i}\). The goal is to solve an analysis task relative to \(\mathcal{D}\), with an eye towards better modeling of each \(\mathcal{D}_{i}\) even when \(k_{i}\) is small. This abstract setting can model many practical settings where the relationships between the \(\mathcal{D}_{i}\)'s take different forms. Indeed the standard loss in federated learning is the (unweighted) average over users of a per-user loss function (Kairouz et al., 2021, Sec. 3.3.2), which corresponds to learning when the underlying distribution is \(\mathcal{D}\). Little theoretical work has been done in this setting and even the most basic statistical tasks are poorly understood. Thus we start by focusing on the fundamental problem of mean estimation. Specifically, in our model, \(\mathcal{D}\) is a distribution on the interval \([0,1]\) with unknown mean \(p\) and unknown variance \(\sigma_{p}^{2}\). Further, we assume that \(\mathcal{D}_{i}\) is simply a Bernoulli distribution with mean \(p_{i}\sim\mathcal{D}\).
While the general \(\mathcal{D}_{i}\) setting is of interest, the Bernoulli case captures a variety of interesting use cases. For example, each sample from the Bernoulli distribution could represent whether or not the user has clicked
on an ad. Another common example is model evaluation, where the user produces a Bernoulli sample by engaging or not engaging with a feature (e.g., phone keyboard next word suggestion, crisis helpline link, search engine knowledge panels, sponsored link in search results, etc.). As a concrete example, a language model is used to make the next word suggestions on a phone keyboard. A new version of this model would be first tested to measure the average suggestion acceptance rate over users. Each user would thus generate a set of independent Bernoulli r.v.'s with each individual mean \(p_{i}\) corresponding to the model accuracy for the specific user. Heterogeneity comes from different users typing differently (and hence model accuracy varying across users) and using the keyboard with different frequency. Note that the distribution of model accuracies among users is the meta distribution \(\mathcal{D}\) in our work. More generally, measuring the average accuracy of a classification model among a large group of users is an important task in itself. Such models are deployed in privacy-sensitive applications such as health and finance. The resulting statistics may need to be shared with third parties or other teams within a company, raising potential user privacy concerns.
Our main contribution is a differentially private algorithm that estimates the mean of \(\mathcal{D}\) in this heterogeneous setting. We first study this question in an idealized setting, where the variance of \(\mathcal{D}\) is known, and no privacy constraints. Here the optimal non-private estimator for \(p_{i}\) is simple and linear: it is a weighted linear combination of the individual user means with weights that depend on the \(k_{i}\)'s and on \(\sigma_{p}\). The variance of this estimate is \(\sigma_{ideal}^{2}\approx(\sum_{i}\min(k_{i},\sigma_{p}^{-2}))^{-1}\). This expression has a natural interpretation: this is the variance from using \(\min(k_{i},\sigma_{p}^{-2})\) samples from user \(i\) and averaging all the Bernoulli samples thus obtained. We then design a differentially private estimator for \(p\). We show that under mild assumptions, there is no asymptotic price to privacy (and to not knowing \(\sigma_{p}\)). That is, our differentially private estimator has variance \(\tilde{O}(\sigma_{ideal}^{2})\). For some intuition, note that the restriction on using at most \(\sigma_{p}^{-2}\) samples from each user ensures that the estimator is not too affected by their individual mean \(p_{i}\). Interestingly, the estimator achieving this bound in the private setting is non-linear. Further, we show that \(\sigma_{ideal}^{2}\) is close to the best achievable variance, under some mild technical conditions.
Our technical results highlight several of the challenges associated with ensuring user-level privacy when data is heterogeneous. For example, in the heterogeneous setting, the optimal choice of weights for each user contribution depends on properties of \(\mathcal{D}\) that also need to be estimated from the data. Further, we show a novel approach to proving lower bounds for private statistical estimation in the heterogenous setting. Our approach builds on the proof of the Cramer-Rao lower bound in statistics, and we show how privacy terms can be incorporated in this approach to show near optimality of our algorithms for nearly every setting of \(k_{i}\)'s. These tools and insights should be useful for modeling and designing algorithms for more involved data analysis tasks.
We note that the optimal algorithm for this problem was not known prior to this work, even in the special case where all \(\mathcal{D}_{i}\)'s are identical (or, equivalently, \(\sigma_{p}^{2}=0\)) but users hold different numbers of samples. In the absence of privacy constraints, this setting poses no additional complexity over the case where each user has a single data point, since the data points all come from the same distribution. However, with the requirement of user-level differential privacy, even this special case appears to require many of the technical tools developed in this work (see Section 4.3 for a detailed discussion).
We aim to help foster similar model-driven exploration in other settings. There have been attempts to handle heterogeneity by phrasing the problem as meta-learning or multi-task learning (Kairouz et al., 2021, Sec 3.3.3.3). These works rely on implicit assumptions about the different distributions. Our goal is to start with a more principled approach that makes explicit the assumptions on the relationship between different distributions and use that to derive algorithms. For example, if we were to model the \(\mathcal{D}_{i}\)'s as having means coming from a mixture of Gaussians, the estimation of cluster means would be a necessary step in an EM-type algorithm. Our choice of \(\mathcal{D}_{i}\)'s being Bernoulli is meant to capture discrete distribution learning problems that have been extensively studied in private federated settings. Our techniques are general and extend naturally to real-valued random variables where, e.g., \(\mathcal{D}_{i}\) is a Gaussian with mean \(p_{i}\) and known variance. While we make minimal assumptions on \(\mathcal{D}\), our results asymptotically match the lower bounds for the case of \(\mathcal{D}\) being Gaussian with known variance. Our techniques also have natural extensions to higher dimensions.
Summary of our results: Our main results involve three estimators: an idealized (non-realisable) estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\) that assumes that the mean and variance of \(\mathcal{D}\) are known to the algorithm, an estimator \(\widehat{p}_{\epsilon}\) that is private with respect to the user's samples, but not with respect to each user's number of samples \(k_{i}\), and finally an estimator \(\widehat{p}_{\epsilon}^{\text{priv }k}\) that is private with respect to both the samples _and_ the number of samples. Let \(\widehat{p}_{i}\) be the mean of the \(k_{i}\) samples from user \(i\). The estimators \(\widehat{p}_{\epsilon}\) and \(\widehat{p}_{\epsilon}^{\text{priv }k}\) both require as input initial, less accurate \((\epsilon,\delta)\)-DP mean and variance estimators \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\). The main results of this paper can be (informally) summarised as follows:
* **Near optimality of \(\widehat{p}_{\epsilon}^{\text{ideal}}\) [Theorem 5.1].** For any parameterized family of distributions \(p\mapsto\mathcal{D}_{p}\), such that the Fisher information of \(\widehat{p}_{i}\) is inversely proportional to the variance of \(\widehat{p}_{i}\) for all \(i\), each \(\widehat{p}_{i}\) is sufficiently-well concentrated (e.g. sub-Gaussian) and \(p\in[1/3,2/3]\), we have that \(\widehat{p}_{\epsilon}^{\text{ideal}}\) is minimax optimal, up to logarithmic (in \(n\)) factors, among all unbiased estimators of \(p\). The estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\) itself is not unbiased, but it has very low bias. The proof of this result involves a Cramer-Rao style argument which may be of independent interest. This result allows us to use \(\widehat{p}_{\epsilon}^{\text{ideal}}\) as a yardstick by which to compare \(\widehat{p}_{\epsilon}\) and \(\widehat{p}_{\epsilon}^{\text{priv }k}\).
* **Near optimality of \(\widehat{p}_{\epsilon}\) [Theorem 4.1].** Assume there exists mean and variance estimators, \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\), such that when run with a constant fraction (say \(n/10\)) of the users, \(\texttt{mean}_{\epsilon,\delta}\) returns a sufficiently good estimate of \(p\) (roughly no worse than the estimate from any single user, and implies a constant multiplicative approximation to \(p(1-p)\)), and when run with \(\log n/\epsilon\) users, \(\texttt{variance}_{\epsilon,\delta}\) returns a constant multiplicative approximation to \(\sigma_{p}^{2}\). If the maximum \(k_{i}\) and median \(k_{i}\) are within a factor of \((n\epsilon/\log n)-1\), then the variance of \(\widehat{p}_{\epsilon}\), with \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\) as the inputted initial estimators, is within a constant factor of the variance of \(\widehat{p}_{\epsilon}^{\text{ideal}}\). The conditions on \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\) are not particularly stringent and such estimators exist, for example, when \(\mathcal{D}\) is a truncated Gaussian distribution with mean bounded away from \(0\) or \(1\) and sufficiently small variance.
* **Near Optimality of \(\widehat{p}_{\epsilon}^{\text{priv }k}\) [Theorem 4.3].** Under slightly more stringent conditions on \(\mathcal{D}\) and the assumption that the maximum \(k_{i}\) and median \(k_{i}\) are within a factor of \(O(n\epsilon^{2}/\log n)\), we extend the upper bounds to the case when \(k_{i}\)'s are also considered private information. The conditions are again satisfied, for example, by truncated Gaussian distributions with mean bounded away from \(0\) or \(1\) and sufficiently small variance.
* **Lower bound in terms of \(k_{i}\) [Corollary 5.6].** Finally, we show that for any sequence \(k_{1},\cdots,k_{n}\) and variance \(\sigma_{p}^{2}\) there exists \(k^{*}\) and a family of distributions \(p\mapsto\mathcal{D}_{p}\) such that the minimax optimal error among all unbiased estimators of \(p\), for \(p\) in the range \([1/3,2/3]\), is lower bounded by \[\tilde{\Omega}\left(\min\left\{\sqrt{\frac{k^{*}}{\epsilon^{2}}+\sum_{i=1}^{n }\min\{k_{i},k^{*}\}}\frac{\sigma_{p}}{(\sum_{i=1}^{n}\min\{k_{i},\sqrt{k_{i}} k^{*}\})^{2}},\frac{\sigma_{p}}{\sqrt{n}}\right\}\right).\]
We note that our main algorithmic results require concentration of the meta-distribution \(\mathcal{D}\). In practice, this is not an unreasonable assumption. For example, in the case of model evaluation, it may be reasonable to assume that a general model has similar accuracy for the vast majority of users, or formally, that the model accuracy is well-concentrated.
### Related Work
Frequency estimation in the example-level privacy model has been well-studied in the central (Dwork et al., 2006; Dwork and Roth, 2014) and local models (Hsu et al., 2012; Erlingsson et al., 2014; Chen et al., 2020; Acharya and Sun, 2019; Acharya et al., 2019). Similarly, private mean estimation has been well studied in both central (Dwork et al., 2006; Hardt and Talwar, 2010) and local models (Duchi et al., 2018; Duchi and Rogers, 2019; Bhowmick et al., 2019) of privacy. These works have focused on providing example-level privacy (rather than user-level) in settings with homogeneous data, i.e., i.i.d. samples.
Liu et al. (2020) recently studied the problem of learning discrete distributions in the homogeneous cases (same distribution and same number of samples per user) with user-level differential privacy, and Levy et al. (2021) extended such results to other statistical tasks. These works also consider the setting with different number of samples per user although only via a reduction to same number of samples by discarding the data of users that have less than the median number of samples and effectively only using the median number of samples from all the other users. This approach can be asymptotically suboptimal for many natural distributions of \(k_{i}\)'s and is also likely to be worse in practice. Previously, McSherry and Mironov (2009) showed how to build a (user-level) differentially private recommendation system, and McMahan et al. (2018) showed how to train a language model with user-level differential privacy.
User-level differential privacy in the context of heterogeneous data distributions has been studied in the constant \(k_{i}\) setting Ozkara et al. (2022). Much of the complexity in our setting arises from variation in the \(k_{i}\) values, which makes it challenging to maintain user-level privacy while leveraging the additional data points from users with a large number of data points.
The challenges to optimization due to data heterogeneity have also been studied; Zhou and Cong (2018); Hanzely and Richtarik (2020), and Eichner et al. (2019) study the approach of using different models for different groups from a convex optimization point-of-view.
Mathematically, similar issues are addressed in meta-analysis (Borenstein et al., 2021; Wikipedia contributors, 2021), where the heterogeneity comes from different studies instead of different users. The non-private approach of inverse variance weighting that we recap in Section 3 is standard in that context.
## 2 Model and Preliminaries
Let \(\mathcal{D}\) be a distribution on \([0,1]\) with (unknown) mean \(p\) and variance \(\sigma_{p}^{2}\). We assume a population of \(n\in\mathbb{N}\) users, where each user \(i\in[n]\) has a hidden variable \(p_{i}\sim\mathcal{D}\) and \(k_{i}\in\mathbb{N}\) samples \(x_{i}^{1},\ldots,x_{i}^{k_{i}}\sim_{i.i.d.}Ber(p_{i})\). That is, the samples of user \(i\) are i.i.d. from a Bernoulli distribution with parameter \(p_{i}\), which we will denote \(\mathcal{D}_{i}=\)Ber\((p_{i})\). Assume without loss of generality that individuals are sorted by their \(k_{i}\), so that \(k_{1}\geq\cdots\geq k_{n}\). The hidden variables \(p_{i}\) of each user are unknown to the analyst. In the non-private setting, the samples \(x_{i}^{j}\) and \(k_{i}\) will be accessible to the analyst. In the private setting, access to these data is constrained.
The analyst's goal is to estimate the population mean \(p\) with an estimator of minimum variance in a manner that is differentially private with respect to user data (\(p_{i}\) and \(\{x_{i}^{j}\}\)). Each user provides their own estimate of their \(p_{i}\) to the analyst based on their data \(x_{i}\): \(\widehat{p}_{i}=\frac{1}{k_{i}}\sum_{j=1}^{k_{i}}x_{i}^{j}\). The analyst can then aggregate these (possibly along with other information) into her estimate of \(p\).
Let us first give some intuition for the distribution of these \(\widehat{p}_{i}\). Let \(\mathcal{D}(k)\) be the distribution that first samples \(p_{i}\sim\mathcal{D}\), then samples \(x_{1},\cdots,x_{k}\sim Ber(p_{i})\) and finally outputs \(\widehat{p}_{i}=\frac{1}{k}\sum_{i=1}^{k}x_{i}\). The following lemma (proven in Appendix A) shows that the variance of \(\widehat{p}_{i}\) is larger than \(\sigma_{p}^{2}\) and transitions from \(p(1-p)\) to \(\sigma_{p}^{2}\) as \(k\) increases (equivalently as \(\widehat{p}_{i}\) concentrates around \(p_{i}\)).
**Lemma 2.1**.: _For all distributions \(\mathcal{D}\) supported on \([0,1]\) with mean \(p\) and variance \(\sigma_{p}^{2}\), \(\sigma_{p}^{2}\leq p(1-p)\). Further, \(\mathbb{E}[\mathcal{D}(k)]=p\) and \(\mathrm{Var}(\mathcal{D}(k))=\frac{1}{k}p(1-p)+\left(1-\frac{1}{k}\right) \sigma_{p}^{2}\)._
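A quick Monte-Carlo sanity check of Lemma 2.1 is sketched below; the choice of \(\mathcal{D}\) as a Beta distribution is purely illustrative and not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, k, trials = 4.0, 6.0, 10, 1_000_000       # D = Beta(4, 6), for illustration only

p = a / (a + b)                                  # mean of D
sigma_p2 = a * b / ((a + b)**2 * (a + b + 1))    # variance of D

p_i = rng.beta(a, b, trials)                     # per-user means p_i ~ D
p_hat = rng.binomial(k, p_i) / k                 # \hat p_i from k Bernoulli(p_i) samples

print(p_hat.var())                               # empirical Var(D(k))
print(p * (1 - p) / k + (1 - 1 / k) * sigma_p2)  # prediction from Lemma 2.1
```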
We assume that \(k_{i}\) and \(p_{i}\) are independent, so the amount of data an individual has is independent of her data distribution. This is crucial for the problem setup: in order for learning from the heterogeneous population to be advantageous, there must be a common meta-distribution shared across all individuals in the population, rather than a separate meta-distribution for each fixed \(k_{i}\). If \(k_{i}\) and \(p_{i}\) can be arbitrarily correlated, then the meta-distribution for each value of \(k_{i}\) can be different. Hence, the best solution in that setting is to learn on each sub-population (where the sub-populations are defined by their value of \(k_{i}\)) separately. While this assumption is natural in some settings, it is unlikely to hold in others - for example, different writing styles that are more or less verbose. In future work, it may be interesting to explore how various heterogeneity assumptions affect learning algorithms.
### Differential Privacy
Differential privacy (DP) [Dwork et al., 2006] informally limits the inferences that can be made about an individual as a result of computations on a large dataset containing their data. This privacy guarantee is achieved algorithmically by randomizing the computation to obscure small changes in the dataset. The definition of differential privacy requires a _neighbouring relation_ between datasets. If two datasets \(D\) and \(D^{\prime}\) are neighbours under the neighbouring relation, then differences between these two datasets should be hidden by the private algorithm.
**Definition 2.2** ( \((\epsilon,\delta)\)-Differential Privacy [Dwork et al., 2006]).: Given \(\epsilon\geq 0\), \(\delta\in[0,1]\) and a neighbouring relation \(\sim\), a randomized mechanism \(\mathcal{M}:\mathfrak{D}\ \rightarrow\ \mathcal{Y}\) from the set of datasets to an output space \(\mathcal{Y}\) is \((\epsilon,\delta)\)-_differentially private_ if for all neighboring datasets \(D\sim D^{\prime}\in\mathfrak{D}\), and all events \(E\subseteq\mathcal{Y}\),
\[\Pr[\mathcal{M}(D)\in E]\leq e^{\epsilon}\cdot\Pr[\mathcal{M}(D^{\prime})\in E ]+\delta,\]
where the probabilities are taken over the random coins of \(\mathcal{M}\). When \(\delta=0\), we may refer to this as \(\epsilon\)_-differential privacy_.
When each user has a single data point, the neighbouring relation is typically defined as: \(D\) and \(D^{\prime}\) are neighbours if they differ on the data of a single individual, i.e., a single data point. In our setting where users have multiple data points, we must distinguish between _user-level_ and _event-level_ DP. The former considers \(D\) and \(D^{\prime}\) neighbours if they differ on all data points associated with a single user, whereas the latter considers \(D\) and \(D^{\prime}\) neighbours only if they differ on a _single_ data point, regardless of the number of data points contributed by that user. Naturally, user-level DP provides substantially stronger privacy guarantees, and is often more challenging to achieve from a technical perspective. In this work, we will provide user-level DP guarantees.
Further, when defining user-level DP where users have heterogeneous quantities of data, we also need to distinguish between settings where the number of data points held by each user is protected information, and settings where it is publicly known. We'll refer to the former as _private \(k\) user-level differential privacy_, where the entry that differs between neighboring databases can have arbitrarily different number of data points, and the latter as _public-size user-level differential privacy_, where the amount of data held by each user is the same in neighboring databases. Formally, let \(D_{i}=\{x_{i}^{1},\cdots,x_{i}^{k_{i}}\}\) be the data of user \(i\) for each \(i\in[n]\). For private \(k\) user-level differential privacy, we say \(D\) and \(D^{\prime}\) are neighbours if there exists an index \(i\) such that for all \(j\in[n]\backslash\{i\}\), \(D_{j}=D^{\prime}_{j}\). For public-size user-level differential privacy, we say \(D\) and \(D^{\prime}\) are neighbours if they are neighbours under private \(k\) user-level differential privacy and additionally \(|D_{i}|=|D^{\prime}_{i}|\) for all \(i\in[n]\).
One standard tool for achieving \(\epsilon\)-differential privacy is the _Laplace Mechanism_. For a given function \(f\) to be evaluated on a dataset \(D\), the Laplace Mechanism first computes \(f(D)\) and then adds Laplace noise which depends on the _sensitivity_ of \(f\), defined for real-valued functions as
\[\Delta f=\max_{D,D^{\prime}\text{ neighbors}}|f(D)-f(D^{\prime})|.\]
The Laplace Mechanism outputs \(\mathcal{M}_{L}(D,f,\epsilon)=f(D)+\text{Lap}(\Delta f/\epsilon)\), and is \((\epsilon,0)\)-DP.
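For concreteness, a minimal sketch of the Laplace Mechanism for releasing the mean of bounded values is given below; the sensitivity bound assumes each user contributes a single value in \([0,1]\), so this is example-level (not user-level) privacy and is intended only to illustrate the mechanism.

```python
import numpy as np

def laplace_mean(values, epsilon, rng=None):
    """epsilon-DP release of the mean of n values in [0, 1]; sensitivity is 1/n."""
    if rng is None:
        rng = np.random.default_rng()
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    sensitivity = 1.0 / len(values)
    return float(values.mean() + rng.laplace(scale=sensitivity / epsilon))

print(laplace_mean([0.2, 0.7, 0.4, 0.9], epsilon=1.0))
```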
Differential privacy satisfies _robustness to post-processing_, meaning that any function of a DP mechanism will retain the same privacy guarantee. DP also _composes adaptively_, meaning that if an \((\epsilon_{1},\delta_{1})\)-DP mechanism and an \((\epsilon_{2},\delta_{2})\)-DP mechanism are both applied to the _same dataset_, then the entire process is \((\epsilon_{1}+\epsilon_{2},\delta_{1}+\delta_{2})\)-DP. _Parallel composition_ of DP mechanisms says that if DP mechanisms are applied to disjoint datasets, then composition is not required. That is, if an \((\epsilon_{1},\delta_{1})\)-DP mechanism and an \((\epsilon_{2},\delta_{2})\)-DP mechanism are each applied to _disjoint datasets_, then the entire process is \((\max\{\epsilon_{1},\epsilon_{2}\},\max\{\delta_{1},\delta_{2}\})\)-DP with respect to both datasets together.
## 3 A Non-Private Estimator
We begin by illustrating the procedure for computing an optimal estimator \(\widehat{p}\) in the non-private setting. The general structure of the estimator will be the same in both the private and non-private settings. The
analyst will compute the population-level mean estimate \(\widehat{p}\) as a weighted linear combination of the user-level estimates \(\widehat{p}_{i}\).1 The key question is how to derive the weights so that individuals with more reliable estimates (i.e., larger \(k_{i}\)) have more influence over the final result.
Footnote 1: In the non-private setting, this restriction is without loss of generality since the optimal estimator takes this form. In the private setting this is still near-optimal; see Section 5 for more details.
**Input:** number of users \(n\), number of samples held by each user \((k_{1},\ldots,k_{n}\)_s.t._\(k_{i}\geq k_{i+1})\), user-level estimates \((\widehat{p}_{1},\cdots,\widehat{p}_{n})\).
```
1:Initial Estimates
2:\(\widehat{p}^{\text{initial}}=\frac{10}{n}\sum_{i=9n/10+1}^{n}x_{i}^{1}\)\(\triangleright\) Initial mean estimate
3:\(\widehat{\sigma}_{p}^{2}=\frac{1}{\log n(\log n-1)}\sum_{i,j\in[\log n]}( \widehat{p}_{i}-\widehat{p}_{j})^{2}\)\(\triangleright\) Initial variance estimate
4:Defining weights
5:for\(i=\log n\) to \(9n/10\)do
6: Compute \(\widehat{\sigma}_{i}^{2}=\frac{1}{k_{i}}(\widehat{p}^{\text{initial}}-( \widehat{p}^{\text{initial}})^{2})+(1-\frac{1}{k_{i}})\widehat{\sigma}_{p}^{2}\).\(\triangleright\) Estimate individual variances
7:\(\widehat{w_{i}}=\frac{1/\widehat{\sigma}_{i}^{2}}{\sum_{j=\log n}^{9n/10}1/\widehat{\sigma}_{j}^{2}}\)\(\triangleright\) Compute normalised weights
8:Final Estimate
9:return\(\widehat{p}=\sum_{i=\log n}^{9n/10}\widehat{w_{i}}\widehat{p}_{i}\)\(\triangleright\) Final estimate
```
**Algorithm 1** Non-private Heterogeneous Mean Estimation \(\widehat{p}\)
Let \(\sigma_{i}^{2}\) be the variance of \(\widehat{p}_{i}\). In an idealized setting where the \(\sigma_{i}^{2}\) are all known, the analyst can minimize the variance of the estimator by weighting each user's estimate \(\widehat{p}_{i}\) proportionally to the inverse variance of their estimate. The weights are then normalised to ensure the estimate is unbiased. This approach yields the following estimator, which is optimal in the non-private setting (Hartung et al., 2008):
\[\widehat{p}^{\text{ideal}}=\sum_{i=1}^{n}w_{i}^{*}\widehat{p}_{i}\text{ where }w_{i}^{*}=\frac{1/\sigma_{i}^{2}}{\sum_{j=1}^{n}1/\sigma_{j}^{2}}. \tag{1}\]
In practice, the \(\sigma_{i}^{2}\)s are unknown, so the analyst must rely on estimates to assign weights. Fortunately, the user-level variance \(\sigma_{i}^{2}\) can be expressed as a function of \(k_{i}\) and the population statistics \(p\) and \(\sigma_{p}^{2}\), as shown in Lemma 2.1:
\[\sigma_{i}^{2}=\tfrac{1}{k_{i}}(p-p^{2})+(1-\tfrac{1}{k_{i}})\sigma_{p}^{2}. \tag{2}\]
Now, \(p\) and \(\sigma_{p}^{2}\) are also unknown but since they are population statistics, we can use simple estimators to obtain initial estimates. These initial statistics can then be used to define the weights, resulting in a refined estimate of the mean \(p\). Specifically, as outlined in Algorithm 1, we split users into three groups. The \(\log n\) individuals with the most data are used to produce an estimate of \(\text{Var}(\mathcal{D}(k_{\log n}))\), which serves as a proxy for \(\sigma_{p}^{2}\). The \(1/10\)th of individuals with the least data are used to produce an initial estimate of the mean \(p\). The remaining \(9n/10-\log n\) individuals are used to produce the final estimate. We split the individuals into separate groups to ensure the initial estimates and the final estimate are independent so we can easily obtain variance bounds on the final estimate. The specific sizes of the three groups are heuristic; the exact fraction \(1/10\) is not necessary. Under some mild conditions on \(\mathcal{D}\), and if \(n\) is large enough, the error incurred by \(\widehat{p}\) is within a constant factor of the error incurred by the ideal estimator \(\widehat{p}^{\text{ideal}}\).2
Footnote 2: This can be observed by viewing the non-private setting as a simplified version of the setting studied in Section 5, which proves near-optimality of (truncated) linear estimators for this problem.
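The following Python sketch mirrors the structure of Algorithm 1. It is illustrative only: for brevity the initial mean is computed from the user-level means rather than from one raw sample per user, the pairwise variance estimator is written with explicit normalisation, and all function and variable names are our own.

```python
import numpy as np

def nonprivate_hetero_mean(user_means, k):
    """Sketch of Algorithm 1 (non-private heterogeneous mean estimation).

    user_means : user-level estimates p_hat_i, ordered so that k[0] >= k[1] >= ...
    k          : number of samples k_i held by each user (same order).
    """
    user_means = np.asarray(user_means, dtype=float)
    k = np.asarray(k, dtype=float)
    n = len(user_means)
    m = max(int(np.log(n)), 2)     # users with the most data -> variance estimate
    low = int(0.9 * n)             # users with the least data -> initial mean estimate

    p_init = user_means[low:].mean()                        # initial mean estimate
    top = user_means[:m]
    pair_diffs = [(a - b) ** 2 for ai, a in enumerate(top)
                  for bi, b in enumerate(top) if ai != bi]  # pairwise variance estimate
    sigma_p2 = np.sum(pair_diffs) / (2 * m * (m - 1))

    i = np.arange(m, low)                                   # remaining users -> final estimate
    sigma_i2 = (p_init - p_init ** 2) / k[i] + (1 - 1 / k[i]) * sigma_p2
    w = (1 / sigma_i2) / np.sum(1 / sigma_i2)               # normalised inverse-variance weights
    return float(np.sum(w * user_means[i]))
```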
## 4 A Framework for Private Estimators
We now turn to our main result, which is a framework for designing differentially private estimators for the mean \(p\) of the meta-distribution \(\mathcal{D}\). We discussed in Section 3 the need for initial estimates of \(p\) and \(\sigma_{p}^{2}\) to
weight the contributions of the users. In the non-private setting, there are canonical, optimal choices of these estimators; the empirical mean and empirical variance. In the private setting, these choices are not canonical, and different estimators may perform better in different settings. There is a considerable literature exploring various mean and variance estimators for the homogeneous, single-data-point-per-user setting. As such, we leave the choice of the specific initial mean and variance estimators as parameters of the framework. This allows us to focus on the nuances of the heterogeneous setting, not addressed in prior work. In Section 6, we give a specific pair of private mean and variance estimators that provably perform well in our framework.
We will define three estimators: an ideal estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\) (only implementable if all the \(\sigma_{i}^{2}\) are known), a realisable estimator \(\widehat{p}_{\epsilon}\) in the public-size user-level DP setting, and a realisable estimator \(\hat{p}_{\epsilon}^{\text{priv }k}\) in the private \(k\) user-level DP setting. The main result in the public-size user-level DP setting (Theorem 4.1) is that under some mild conditions and assuming \(n\) is sufficiently large, there exists an \((\epsilon,\delta)\)-DP estimator \(\widehat{p}_{\epsilon}\) (Algorithm 2) such that for some constant \(C\),
\[\text{Var}(\widehat{p}_{\epsilon})\leq C\cdot\text{Var}(\widehat{p}_{\epsilon }^{\text{ideal}}).\]
In Section 4.4, we extend this result to the case where \(k_{i}\)s are private and unknown to the analyst. We will maintain the optimality of the estimator (up to logarithmic factors), under slightly more restrictive conditions (Theorem 4.3).
### The Complete Information Private Estimator
As in Section 3, we begin with a discussion of the ideal estimator if the \(\sigma_{i}\) were known. This ideal private estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\) has a similar form to \(\widehat{p}^{\text{ideal}}\) with some crucial differences. The first main distinction is that Laplace noise is added to achieve DP, where the standard deviation of the noise must be scaled to the sensitivity of the statistic. A natural solution would be to add noise directly to the non-private estimator \(\widehat{p}^{\text{ideal}}\), but the sensitivity of this statistic is too high. In fact, the worst-case sensitivity of \(\widehat{p}^{\text{ideal}}\) is \(1\), which would result in noise that completely masks the signal. Thus, the first change we make is to limit the weight of any individual's contribution by setting
\[w_{i}=\tfrac{\min\{1/\sigma_{i}^{2},T/\sigma_{i}\}}{\sum_{j=1}^{n}\min\{1/ \sigma_{j}^{2},T/\sigma_{j}\}}\]
for some truncation parameter \(T\). Analogous to the weights used in Section 3, this choice of \(w_{i}\) is still inversely proportional to \(\sigma_{i}^{2}\) up to an upper limit that depends on the truncation parameter \(T\), and then normalized to ensure the weights sum to \(1\) so the estimator is unbiased. Intuitively, the parameter \(T\) controls the trade-off between variance of the weighted sum of individual estimates (which is minimized by assigning high weight to low variance estimators) and variance of the noise added for privacy (which is minimized by assigning roughly equal weight to all users).
We make one final modification to lower the sensitivity of the statistic. Inspired by the Gaussian mean estimator of Karwa and Vadhan (2018), we truncate the individual contributions \(\widehat{p}_{i}\) into a sub-interval of \([0,1]\). The truncation intervals \([a_{i},b_{i}]\) are chosen to be as small as possible (to reduce the sensitivity and hence the noise added for privacy), while simultaneously ensuring that \(\widehat{p}_{i}\in[a_{i},b_{i}]\) with high probability (to avoid truncating relevant information for the estimation). In order to achieve this, we need a tail bound on the distribution \(\mathcal{D}\). To maintain generality for now, we assume there exists a known function \(f_{\mathcal{D}}^{k}(n,\sigma_{p}^{2},\beta)\) that gives high-probability concentration guarantees of \(\widehat{p}_{i}\) around \(p\), and is defined such that
\[\Pr\left(\forall i,|\widehat{p}_{i}-p|\leq f_{\mathcal{D}}^{k_{i}}(n,\sigma_{p }^{2},\beta)\right)\geq 1-\beta.\]
Appendix F presents a more detailed discussion of the structure of these concentration functions and how they may be estimated if they are unknown to the analyst.
We can now describe the full information, or _ideal_ estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\):
\[\widehat{p}_{\epsilon}^{\text{ideal}}=\sum_{i=1}^{n}w_{i}^{*}[\widehat{p}_{i} ]_{a_{i}}^{b_{i}}+\text{Lap}(\tfrac{\max_{i}w_{i}^{*}[b_{i}-a_{i}]}{\epsilon}), \tag{3}\]
where \([\widehat{p}_{i}]_{a_{i}}^{b_{i}}\) denotes the projection of \(\widehat{p}_{i}\) onto the interval \([a_{i},b_{i}]\) and
\[a_{i}=p-f_{\mathcal{D}}^{k_{i}}(n,\sigma_{p}^{2},\beta),\quad b_{i}=p+f_{ \mathcal{D}}^{k_{i}}(n,\sigma_{p}^{2},\beta),\ \ \text{and}\ \ w_{i}^{*}=\tfrac{\min\{1/\sigma_{i}^{2},T^{*}/\sigma_{i}\}}{\sum_{j=1}^{n} \min\{1/\sigma_{j}^{2},T^{*}/\sigma_{j}\}}. \tag{4}\]
We would like to choose the truncation parameter \(T^{*}\) to minimise the variance of the resulting estimator:
\[\text{Var}(\widehat{p}_{\epsilon}^{\text{ideal}})=\sum_{i=1}^{n}(w_{i}^{*})^{ 2}\text{Var}([\widehat{p}_{i}]_{a_{i}}^{b_{i}})+\max_{i}\tfrac{(w_{i}^{*})^{2} |b_{i}-a_{i}|^{2}}{\epsilon^{2}}. \tag{5}\]
Although we do not know \(\text{Var}([\widehat{p}_{i}]_{a_{i}}^{b_{i}})\) exactly, we do know that \([\widehat{p}_{i}]_{a_{i}}^{b_{i}}=\widehat{p}_{i}\) with high probability, and thus we can approximate \(\text{Var}([\widehat{p}_{i}]_{a_{i}}^{b_{i}})\) with \(\sigma_{i}^{2}\). Throughout the remainder of the paper, we will assume that \(\beta\) is chosen such that \(\tfrac{1}{2}\sigma_{i}^{2}\leq\text{Var}([\widehat{p}_{i}]_{a_{i}}^{b_{i}}).\) Thus, we will approximate the optimal truncation parameter by
\[T^{*} =\arg\min_{T}\sum_{i=1}^{n}(w_{i}^{*})^{2}\sigma_{i}^{2}+\max_{i}\frac{(w_{i}^{*})^{2}|b_{i}-a_{i}|^{2}}{\epsilon^{2}}\] \[=\arg\min_{T}\tfrac{1}{(\sum_{j=1}^{n}\min\{1/\sigma_{j}^{2},T/\sigma_{j}\})^{2}}(\sum_{i=1}^{n}\min\{1/\sigma_{i}^{2},T^{2}\}+\max_{i}\tfrac{\min\{1/\sigma_{i}^{4},T^{2}/\sigma_{i}^{2}\}|b_{i}-a_{i}|^{2}}{\epsilon^{2}}). \tag{6}\]
We'll show in Section 5 that under some conditions on the Fisher information of \(\mathcal{D}(k)\), \(\widehat{p}_{\epsilon}^{\text{ideal}}\) is optimal up to logarithmic factors among all private unbiased estimators for heterogeneous mean estimation.
**Example 1**.: _As a simple example, suppose that \(p\in(\tfrac{1}{3},\tfrac{2}{3})\), \(\sigma_{p}=1/\sqrt{n}\), and \(k_{i}=\lceil\tfrac{n}{i}\rceil\). In this case, an asymptotically optimal non-private estimator averages all the \(\sum k_{i}=O(n\log n)\) available samples. It can be shown that this gives us an unbiased estimator with standard deviation \(\Theta(\tfrac{1}{\sqrt{n\log n}})\). A naive sensitivity-based noise addition method will give us privacy error \(O(\tfrac{1}{\varepsilon\log n})\), since the weight of the first user in this average is \(\Theta(1/\log n)\). Our truncation-based algorithm will truncate the \(i\)th user's contribution to a range of width \(\sqrt{\tfrac{\log n}{k_{i}}}\approx\sqrt{\tfrac{i\log n}{n}}\). Applying our algorithm would then give us privacy error \(\Theta(\tfrac{1}{\varepsilon\sqrt{n\log n}})\). In other words, for constant \(\varepsilon\), privacy does not have an asymptotic cost. We remark that in this case, any uniform weighted average will incur asymptotically larger standard deviation \(\Omega(\tfrac{1}{\sqrt{n}})\)._
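The ideal estimator can be summarised in a short sketch. The code below assumes the complete-information setting: the true \(p\), the \(\sigma_{i}\), and the truncation half-widths \(f_{\mathcal{D}}^{k_{i}}\) are all passed in directly, and \(T^{*}\) is found by a simple search over the natural candidate thresholds \(1/\sigma_{i}\); the names and the search shortcut are ours, not the paper's.

```python
import numpy as np

def ideal_private_estimate(p_hat, sigma, half_width, p, epsilon, rng=None):
    """Sketch of the ideal estimator of Eqs. (3)-(6), given complete information."""
    rng = np.random.default_rng() if rng is None else rng
    p_hat = np.asarray(p_hat, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    width = 2 * np.asarray(half_width, dtype=float)          # |b_i - a_i|

    def variance_proxy(T):                                    # objective of Eq. (6)
        u = np.minimum(1 / sigma ** 2, T / sigma)             # unnormalised weights
        stat = np.sum(np.minimum(1 / sigma ** 2, T ** 2))
        priv = np.max(u ** 2 * width ** 2) / epsilon ** 2
        return (stat + priv) / np.sum(u) ** 2

    T_star = min(1 / sigma, key=variance_proxy)               # search over candidate thresholds
    u = np.minimum(1 / sigma ** 2, T_star / sigma)
    w = u / np.sum(u)

    clipped = np.clip(p_hat, p - width / 2, p + width / 2)    # truncate to [a_i, b_i]
    sensitivity = np.max(w * width)
    return float(np.sum(w * clipped) + rng.laplace(scale=sensitivity / epsilon))
```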
### Realizable Private Heterogeneous Mean Estimation
Our goal in this section is to design a realizable estimator \(\widehat{p}_{\epsilon}\) that is competitive with the ideal estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\). As in the non-private setting, we divide the individuals into three groups. The first group, consisting of the \(n/10\) individuals with the lowest \(k_{i}\), will be used to compute the initial mean estimate \(\widehat{p}_{\epsilon}^{\text{initial}}\). The \(L\) individuals with the largest \(k_{i}\) will be used to compute the initial variance estimate \(\widehat{\sigma}_{p}^{2}\). These will respectively be computed using private subroutines \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\), which each provide event-level DP, as they each operate on only a single point from each user. These initial estimates will be plugged into expressions to compute \(\widehat{\sigma}_{i}^{2}\), \(\widehat{a_{i}}\), and \(\widehat{b_{i}}\) for the remaining individuals \(L+1\leq i\leq 9n/10\). As in the non-private setting, the specific sizes of these groups are heuristic. The important thing is that the sizes of the first two groups are large enough that the resulting mean and variance estimates are sufficiently accurate, and the last group contains \(\Theta(n)\) users whose \(k_{i}\) is above the median.
Since the estimate \(\widehat{p}_{\epsilon}^{\text{initial}}\) used in \(\widehat{a_{i}}\) and \(\widehat{b_{i}}\) may have additional error up to \(\alpha\) (which will depend on the additive accuracy guarantee of \(\texttt{mean}_{\epsilon,\delta}\)), we shift these estimates by an additive \(\alpha\) to account for this error. Next, all of these intermediate estimates and the user-level mean estimates \(\widehat{p}_{i}\) from users \(L+1\leq i\leq 9n/10\) will be used to compute the optimal weight cutoff \(\widehat{T}^{*}\), the optimal weights \(\widehat{w}_{i}^{*}\) for each user \(L+1\leq i\leq 9n/10\), and finally the estimator \(\widehat{p}_{\epsilon}\) as a weighted sum of the truncated user-level estimates \([\widehat{p}_{i}]_{\widehat{a_{i}}}^{\widehat{b_{i}}}\) plus Laplace noise. This procedure is presented in full detail in Algorithm 2.
For the remainder of this section, we turn to establishing the accuracy requirements of \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\) that ensure that the variance of \(\widehat{p}_{\epsilon}\) is within a constant factor of the variance of \(\widehat{p}_{\epsilon}^{\text{ideal}}\).
**Theorem 4.1**.: _For any \(\epsilon>0\), \(\delta\in[0,1]\), \(\alpha>0\), \(\beta\in[0,1]\), \(n\in\mathbb{N}\), \(0\leq L\leq 3n/5\), \((\epsilon,\delta)\)-DP mean estimator mean\({}_{\epsilon,\delta}\), \((\epsilon,\delta)\)-DP variance estimator variance\({}_{\epsilon,\delta}\), and sequence \((k_{1},\ldots,k_{n}\) s.t. \(k_{i}\geq k_{i+1})\), Algorithm 2 is \((\epsilon,\delta)\)-DP. If,_
**Input parameters:** privacy parameters \(\epsilon>0\), \(\delta\in[0,1]\), desired high probability bound \(\beta\in[0,1]\), number of users \(n\), an \((\epsilon,\delta)\)-DP mean estimator \(\texttt{mean}_{\epsilon,\delta}\), error guarantee on \(\texttt{mean}_{\epsilon,\delta}\)\(\alpha>0\), an \((\epsilon,\delta)\)-DP variance estimator \(\texttt{variance}_{\epsilon,\delta}\), number of samples for variance estimator \(L\), and number of samples held by each user \((k_{1},\ldots,k_{n}\)\(s.t.\)\(k_{i}\geq k_{i+1})\).
**Input data:** User-level estimates \((\widehat{p}_{1},\cdots,\widehat{p}_{n})\)
```
1:Initial Estimates
2:\(\widehat{p}_{\epsilon}^{\text{initial}}=\texttt{mean}_{\epsilon,\delta}(x_{9n/10+1}^{1},\cdots,x_{n}^{1})\)\(\triangleright\) Initial mean estimate
3:\(\widehat{\sigma}_{p}^{2}=\texttt{variance}_{\epsilon,\delta}(\widehat{p}_{1}, \cdots,\widehat{p}_{L})\)\(\triangleright\) Initial variance estimate
4:Defining weights and truncation
5:for\(i=L+1\) to \(9n/10\)do
6: Compute \(\widehat{\sigma}_{i}^{2}=\frac{1}{k_{i}}(\widehat{p}_{\epsilon}^{\text{ initial}}-(\widehat{p}_{\epsilon}^{\text{initial}})^{2})+(1-\frac{1}{k_{i}}) \widehat{\sigma}_{p}^{2}\). \(\triangleright\) Estimate individual variances
7:\(\widehat{a}_{i}=\widehat{p}_{\epsilon}^{\text{initial}}-\alpha-f_{\mathcal{D}}^{k_{i}}(n,\widehat{\sigma}_{p}^{2},\beta)\)
8:\(\widehat{b}_{i}=\widehat{p}_{\epsilon}^{\text{initial}}+\alpha+f_{\mathcal{D}}^{k_{i}}(n,\widehat{\sigma}_{p}^{2},\beta)\)\(\triangleright\) Estimate truncation parameters
9:\(\widehat{T}^{*}=\arg\min_{T}\frac{\sum_{i=L+1}^{9n/10}\min\{\frac{1}{\widehat{\sigma}_{i}^{2}},T^{2}\}+\max_{L+1\leq i\leq 9n/10}\frac{\min\{1/\widehat{\sigma}_{i}^{4},T^{2}/\widehat{\sigma}_{i}^{2}\}|\widehat{b}_{i}-\widehat{a}_{i}|^{2}}{\epsilon^{2}}}{(\sum_{j=L+1}^{9n/10}\min\{1/\widehat{\sigma}_{j}^{2},T/\widehat{\sigma}_{j}\})^{2}}\)
10:\(\triangleright\) Compute weight truncation
11:for\(i=L+1\) to \(9n/10\)do
12:\(\widehat{w}_{i}^{*}=\frac{\min\{1/\widehat{\sigma}_{i}^{2},\widehat{T}^{*}/\widehat{\sigma}_{i}\}}{\sum_{j=L+1}^{9n/10}\min\{1/\widehat{\sigma}_{j}^{2},\widehat{T}^{*}/\widehat{\sigma}_{j}\}}\)\(\triangleright\) Compute weights
13:Final Estimate
14:\(\Lambda=\max_{i\in[L+1,9n/10]}\frac{\min\{1/\widehat{\sigma}_{i}^{2},\widehat{T}^{*}/\widehat{\sigma}_{i}\}|\widehat{b}_{i}-\widehat{a}_{i}|}{\sum_{j=L+1}^{9n/10}\min\{1/\widehat{\sigma}_{j}^{2},\widehat{T}^{*}/\widehat{\sigma}_{j}\}}\)\(\triangleright\) Compute sensitivity
15:Sample \(Y\sim\text{Lap}\left(\frac{\Lambda}{\epsilon}\right)\)\(\triangleright\) Sample noise added for privacy
16:return\(\widehat{p}_{\epsilon}=\sum_{i=L+1}^{9n/10}\widehat{w}_{i}^{*}[\widehat{p}_{i}]_{\widehat{a}_{i}}^{\widehat{b}_{i}}+Y\)\(\triangleright\) Final estimate
```
**Algorithm 2** Private Heterogeneous Mean Estimation \(\widehat{p}_{\epsilon}\)
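A compact Python sketch of Algorithm 2 is given below. The private subroutines `mean_fn` and `var_fn` are assumed to be supplied externally (any \((\epsilon,\delta)\)-DP mean and variance estimators satisfying the accuracy conditions will do), `f_D` stands for the concentration half-width \(f_{\mathcal{D}}^{k}\), the grid search over \(T\) replaces the exact minimisation in Step 9, and for brevity the initial mean is computed from the user-level means of the last group rather than from one raw sample per user; all interface names are ours.

```python
import numpy as np

def private_hetero_mean(user_means, k, epsilon, delta, beta, alpha, L,
                        mean_fn, var_fn, f_D, rng=None):
    """Sketch of Algorithm 2 (public-size user-level DP heterogeneous mean estimation)."""
    rng = np.random.default_rng() if rng is None else rng
    user_means = np.asarray(user_means, dtype=float)
    k = np.asarray(k, dtype=float)                    # ordered so that k[0] >= k[1] >= ...
    n = len(user_means)
    low = int(0.9 * n)

    p_init = mean_fn(user_means[low:], epsilon, delta)       # initial DP mean estimate
    sigma_p2 = var_fn(user_means[:L], epsilon, delta)        # initial DP variance estimate

    idx = np.arange(L, low)                                   # users used in the final estimate
    sigma_i2 = (p_init - p_init ** 2) / k[idx] + (1 - 1 / k[idx]) * sigma_p2
    sigma_i = np.sqrt(sigma_i2)
    half = alpha + np.array([f_D(int(ki), n, sigma_p2, beta) for ki in k[idx]])
    a, b = p_init - half, p_init + half

    def variance_proxy(T):                                    # objective of Step 9
        u = np.minimum(1 / sigma_i2, T / sigma_i)
        stat = np.sum(np.minimum(1 / sigma_i2, T ** 2))
        priv = np.max(u ** 2 * (b - a) ** 2) / epsilon ** 2
        return (stat + priv) / np.sum(u) ** 2

    T_star = min(1 / sigma_i, key=variance_proxy)
    u = np.minimum(1 / sigma_i2, T_star / sigma_i)
    w = u / np.sum(u)

    clipped = np.clip(user_means[idx], a, b)                  # truncate each p_hat_i to [a_i, b_i]
    lam = np.max(w * (b - a))                                 # sensitivity of the weighted sum
    return float(np.sum(w * clipped) + rng.laplace(scale=lam / epsilon))
```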
The final assumption ensures that the \(L\) users with the most data cannot estimate the mean of the meta-distribution alone. In the setting where these \(L\) users can give a very accurate estimate of the mean, we conjecture that there is little benefit in incorporating the data of the remaining users. If this assumption does not hold, then an estimator that better utilizes only the top \(\log n\) users may be optimal. The strictness of this condition depends on the sample complexity of estimating the variance of \(\mathcal{D}(k)\). We'll see in Section 6.2 that for well-behaved distributions like Gaussians, the sample complexity for obtaining a constant multiplicative approximation of \(\text{Var}(\mathcal{D}(k))\) is \(O(\log(1/\beta)/\epsilon)\). Thus, for sufficiently well-behaved distributions, up to logarithmic factors, this condition simply requires that the number of data points held by the user with the most data is at most \(n\) times the number of data points of the median user. If \(n\) is large, then this is unlikely to be a limiting factor.
The first two conditions of Theorem 4.1 ensure that the mean and variance estimates are sufficiently accurate to use in the remainder of the algorithm. Notice that the initial estimates do not need to be especially accurate. In fact, provided \(p\) is not too close to \(0\) or \(1\), the DP mean estimator that simply adds
noise to the sample mean achieves sufficient accuracy (see Lemma 6.1 for details). In Section 6, we also give a DP variance estimator that achieves the desired accuracy guarantee using only \(L=\log n/\epsilon\) samples, under some mild conditions (Lemma 6.4). Thus the set of mean and variance estimators that satisfy the accuracy requirements of Theorem 4.1 are non-empty. We note that the constants \(1/2\), \(3/2\) and \(8\) in Theorem 4.1 are not intrinsic; any constant multiplicative factors will suffice. We also note that the specific sizes of the three groups outlined in Algorithm 2 are heuristic and can be varied to ensure that the initial estimator achieves the required accuracy.
A full proof of Theorem 4.1 is given in Appendix B; we present intuition and a proof sketch here.
The main distinction between \(\widehat{p}_{\epsilon}^{\text{ideal}}\) and \(\widehat{p}_{\epsilon}\) is the use of the output of the estimators \(\texttt{mean}_{\epsilon,\delta}\) and \(\texttt{variance}_{\epsilon,\delta}\) to estimate \(\sigma_{i}^{2}\), \(a_{i}\) and \(b_{i}\). Thus, the main component of the proof of Theorem 4.1 is to show that the conditions stated in the theorem are enough to ensure that \(\widehat{\sigma_{i}}^{2}\), \(\widehat{a}_{i}\) and \(\widehat{b}_{i}\) are sufficiently accurate.
**Lemma 4.2**.: _Given \(\widehat{p}_{\epsilon}^{\text{initial}}\), \(\widehat{\sigma}_{p}^{2}\), and \(k_{i}\), define \(\widehat{\sigma_{i}}^{2}=\frac{1}{k_{i}}\widehat{p}_{\epsilon}^{\text{initial}}(1-\widehat{p}_{\epsilon}^{\text{initial}})+\frac{k_{i}-1}{k_{i}}\widehat{\sigma}_{p}^{2}\). Under the conditions of Theorem 4.1, for all \(i>L\), we have \(\widehat{\sigma_{i}}^{2}\in\left[\frac{1}{2}\sigma_{i}^{2},9.5\sigma_{i}^{2}\right]\) and \(|\widehat{b}_{i}-\widehat{a}_{i}|\leq 4|b_{i}-a_{i}|\)._
A detailed proof of Lemma 4.2 is presented in Appendix B. Lemma 4.2 implies that the individual variance estimates used in the weights and the truncation parameters are accurate up to constant multiplicative factors. The main remaining ingredient is to show that using only a subset of the population in the final estimate affects the performance by at most a multiplicative factor. Under the assumption that \(\frac{k_{\max}}{k_{\text{med}}}\leq\frac{n/2-L}{L}\), where \(\sigma_{k_{\max}}^{2}=\text{Var}(\widehat{p}_{1})\) and \(\sigma_{k_{\text{med}}}^{2}=\text{Var}(\widehat{p}_{n/2})\), we have
\[\sigma_{k_{\text{med}}}^{2} =\tfrac{1}{k_{\text{med}}}p(1-p)+(1-\tfrac{1}{k_{\text{med}}}) \sigma_{p}^{2}\] \[\leq\tfrac{n/2-L}{L}\tfrac{1}{k_{\max}}p(1-p)+(1-\tfrac{1}{k_{ \max}})\sigma_{p}^{2}\] \[\leq\tfrac{n/2-L}{L}\sigma_{k_{\max}}^{2}. \tag{7}\]
We use this to show that for any truncation parameter \(T\),
\[\sum_{i=1}^{n}\min\{\tfrac{1}{\sigma_{i}^{2}},\tfrac{T}{\sigma_{i}}\}\leq 4 \sum_{i=L+1}^{9n/10}\min\{\tfrac{1}{\sigma_{i}^{2}},\tfrac{T}{\sigma_{i}}\}.\]
Using this, along with the bounds on estimated quantities from Lemma 4.2, we show that with high probability, the variance of our estimator \(\widehat{p}_{\epsilon}\) is within a constant factor of \(\text{Var}(\widehat{p}_{\epsilon}^{\text{ideal}})\), as given in Equation (5):
\[\text{Var}(\widehat{p}_{\epsilon}) =\tfrac{\sum_{i=L+1}^{9n/10}\min\{\tfrac{1}{\widehat{\sigma}_{i}^{4}},\tfrac{\widehat{T}^{*2}}{\widehat{\sigma}_{i}^{2}}\}\sigma_{i}^{2}+\max_{i}\tfrac{\min\{\tfrac{1}{\widehat{\sigma}_{i}^{4}},\tfrac{\widehat{T}^{*2}}{\widehat{\sigma}_{i}^{2}}\}|\widehat{b}_{i}-\widehat{a}_{i}|^{2}}{\epsilon^{2}}}{(\sum_{j=L+1}^{9n/10}\min\{1/\widehat{\sigma}_{j}^{2},\widehat{T}^{*}/\widehat{\sigma}_{j}\})^{2}} \tag{8}\] \[\leq O(\text{Var}(\widehat{p}_{\epsilon}^{\text{ideal}})).\]
We remark that this framework is amenable to being performed in a federated manner if one has private federated mean and variance estimators. Steps (6) - (8) and Step (12) can be performed locally. Steps (9) and the final sum in Step (16) would need to be altered to fit the federated framework. We will see in Section 4.4 that it is sufficient to replace Step (9) with an estimate of \(\frac{1}{\sigma_{L}}\) (the inverse standard deviation of the user with the \(L\)-th most data). The final step is then a simple addition with output perturbation, which can be performed in a federated manner (e.g., McMahan et al. (2017); Kairouz et al. (2021)).
### Special Case: The constant \(p_{i}\) case.
In the previous section, we considered the setting where there was heterogeneity both in the users' distributions (i.e., the \(p_{i}\)s were not constant) and in the number of data points that they each held (i.e., the \(k_{i}\)s were not constant). In the absence of variation in the \(p_{i}\), each user is sampling from the same distribution \(\text{Ber}(p)\). When privacy is not a concern, this setting reduces to the single-data-point-per-user setting where the sample size is increased to \(\sum_{i=1}^{n}k_{i}\). However, under the constraint of user-level differential privacy, this setting is distinct from the single-data-point-per-user setting, since we need to protect the entirety of each user's data set. In fact, much of the complexity of Algorithm 2 is required even in this simpler case. In particular, the truncated inverse variance weighting is still required in this case when there is variation in the \(k_{i}\). Indeed, the only step of Algorithm 2 that is not required is Step 3, since we already know that \(\sigma_{p}^{2}=0\). Since there is no variance in \(\mathcal{D}\), the high probability bound \(f_{\mathcal{D}}^{k_{i}}(n,\widehat{\sigma_{p}^{2}},\beta)\) is just due to the randomness in the binomial distribution \(\text{Bin}(k_{i},p)\), which comes from averaging \(k_{i}\) samples drawn from \(\text{Ber}(p)\).
When \(\sigma_{p}^{2}=0\), \(\sigma_{i}\) has the simple formula \(\sigma_{i}=\sqrt{\frac{p(1-p)}{k_{i}}}\), and we can directly translate between the truncation threshold \(T\) (which caps \(1/\sigma_{i}\)) and a truncation cap \(k\) on \(k_{i}\) via \(T=\sqrt{\frac{k}{p(1-p)}}\). Further, if we assume that all the \(k_{i}\) are large enough (\(\min k_{i}\geq 2\ln(1/\delta)/p\)), then we also have the simple formula \(f_{\mathcal{D}}^{k_{i}}(n,\widehat{\sigma_{p}^{2}},\beta)=\sqrt{\frac{3p\ln(2/\beta)}{k_{i}}}\). We can plug these into Equation (6) (recall that \(T^{*}\) is defined as the truncation threshold that minimizes the variance of \(\widehat{p}_{\epsilon}^{\text{ideal}}\)) to obtain the following formula for the variance of \(\widehat{p}_{\epsilon}^{\text{ideal}}\), and hence of \(\widehat{p}_{\epsilon}\):
\[\min_{k}\frac{p(1-p)\sum_{i=1}^{n}\min\{k_{i},k\}+6p\ln(2/\beta)\frac{k}{\epsilon ^{2}}}{(\sum_{j=1}^{n}\min\{k_{i},\sqrt{k_{i}k}\})^{2}}. \tag{9}\]
Even in the private setting, one can reduce to the single-data-point-per-user setting by reducing the sample size by a factor of 2, and forcing the \(n/2\) users with the most data points to produce their estimate \(\hat{p}_{i}\) using only \(k_{\text{med}}\) (the median \(k_{i}\)) data points. Then each estimate \(\hat{p}_{i}\) is a sample from the same distribution and we can compute their mean. To the best of our knowledge, all the prior work in the private literature that handles variations in \(k_{i}\) follows this formula. However, not only does this algorithm reduce the sample size by a factor of 2, it also unnecessarily hinders the contribution of users with many data points. As a simple example, suppose that all the users have a single data point, except for \(\sqrt{n}\) users, which have \(n\) data points. Then the algorithm which forces \(n/2\) of the users to use the median number of data points has an error rate of \(\Theta(\frac{1}{n}+\frac{1}{n^{2}\epsilon^{2}})\) assuming that \(p\) is bounded away from 0 and 1. Letting \(k=n\) in Equation (9) implies that the truncated inverse variance weighted algorithm in the previous section is better able to utilise the data of the users with high \(k_{i}\)s, resulting in an error rate of \(O(\frac{1}{n^{3/2}}+\frac{1}{n^{2}\epsilon^{2}})\).
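The trade-off in Equation (9) is easy to explore numerically. The sketch below evaluates the bound over candidate caps \(k\) for the example above (most users hold one point, \(\sqrt{n}\) users hold \(n\) points); the value \(p=1/2\), the failure probability, and the grid of caps are illustrative choices of ours.

```python
import numpy as np

def eq9_bound(k, eps, p=0.5, beta=0.05):
    """Evaluate the variance bound of Eq. (9) by searching over truncation caps (sketch)."""
    k = np.asarray(k, dtype=float)
    best = np.inf
    for cap in np.unique(k):
        num = p * (1 - p) * np.sum(np.minimum(k, cap)) \
              + 6 * p * np.log(2 / beta) * cap / eps ** 2
        den = np.sum(np.minimum(k, np.sqrt(k * cap))) ** 2
        best = min(best, num / den)
    return best

n = 10_000
k = np.concatenate([np.full(int(np.sqrt(n)), n), np.ones(n - int(np.sqrt(n)))])
print(eq9_bound(k, eps=1.0))   # on the order of n^{-3/2} + (n eps)^{-2}, as discussed above
```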
### Extension: private \(k\) user-level differential privacy setting
Let us now turn to our problem in the private \(k\) user-level differential privacy setting, where the \(k_{i}\)s are considered private and require formal privacy protections. We will need to add considerably more machinery to Algorithm 2 to make it private under this stronger notion of privacy. Under public-size user-level privacy, the quantities \(\hat{T}^{*}\) (the weight truncation parameter) and \(\Lambda\) (the sensitivity of the final estimate) in Algorithm 2 do not pose privacy concerns since they only depend on the private data \(\widehat{p}_{i}\) through the \(\widehat{p}_{\epsilon}^{\text{initial}}\) and \(\widehat{\sigma}_{i}^{2}\), which are both produced differentially privately. However, both these quantities depend on the \(k_{i}\) directly, and hence care needs to be taken when using them under private \(k\) user-level DP.
In Algorithm 3, we outline the extension of Algorithm 2 to satisfy private \(k\) user-level differential privacy. It is different to Algorithm 2 in two main ways: the method for truncating the weights and the method for computing the scale of the noise needed to maintain privacy.
The first significant change in Algorithm 3 is how the sensitivity parameter \(\Lambda\) is chosen. The final statistic is more sensitive under the view of private \(k\) user level privacy; the weight of every user can change as a result of a single user changing the amount of data they hold (due to the resulting change in the normalisation constant). Rather than an upper bound on the global sensitivity, \(\Lambda\) as defined in Algorithm 3, is, with high probability, an upper bound on the _local_ sensitivity of all databases that lie in a neighbourhood of \(D\). Given a function \(f\) from the set of databases to \(\mathbb{R}\), and a database \(D\), the _local sensitivity_ of \(f\) at \(D\) is defined by \(\texttt{LS}(f;D)=\max_{D^{\prime}\text{ neighbour of }D}|f(D)-f(D^{\prime})|.\) We use a standard framework from the differential privacy literature called propose-test-release (PTR) [Dwork and Lei, 2009] to privately verify that \(\Lambda\) is indeed an upper bound on the local sensitivity of all databases in a neighbourhood of \(D\), which allows us to safely add noise proportional to \(\Lambda\) to privatise the final statistic. A database \(D^{\prime}\) is said to be a \(\kappa\)-neighbour of \(D\) if it differs from \(D\) on the data of at most \(\kappa\) data subjects, and if it contains the same number of data subjects.
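To make the propose-test-release pattern concrete, here is a generic sketch. The helper `local_sens_within(data, kappa)`, which returns the largest local sensitivity over all datasets within distance \(\kappa\) of `data`, is assumed to be available (computing it efficiently is problem-specific, as in Algorithm 3); all names are illustrative.

```python
import numpy as np

def propose_test_release(data, statistic, local_sens_within, lam,
                         epsilon, delta, max_kappa=1000, rng=None):
    """Generic propose-test-release: propose a sensitivity bound `lam`, privately test
    how far `data` is from a dataset whose neighbourhood violates the bound, and
    release with Laplace noise of scale lam / epsilon only if the test passes."""
    rng = np.random.default_rng() if rng is None else rng
    # Largest kappa such that every dataset within distance kappa has local sensitivity <= lam.
    kappa = 0
    while kappa < max_kappa and local_sens_within(data, kappa + 1) <= lam:
        kappa += 1
    noisy_kappa = kappa + rng.laplace(scale=1.0 / epsilon)
    if noisy_kappa < np.log(1 / delta) / epsilon:
        return None                      # test failed: fall back to a default private release
    return statistic(data) + rng.laplace(scale=lam / epsilon)
```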
**Input parameters:** Privacy parameters \(\epsilon>0\), \(\delta\in[0,1]\), desired high probability bound \(\beta\in[0,1]\), number of users \(n\), an \((\epsilon,\delta)\)-DP mean estimator \(\texttt{mean}_{\epsilon,\delta}\), error guarantee on \(\texttt{mean}_{\epsilon,\delta}\)\(\alpha>0\), an \((\epsilon,\delta)\)-DP variance estimator \(\texttt{variance}_{\epsilon,\delta}\), number of samples for variance estimator \(L\), an upper bound on the total number of data points held by a single user \(k_{\max}\), an \(\epsilon\)-DP estimator of the \(\ell\)th order statistic \(\texttt{EM}_{\epsilon}(:\ell,k_{\max})\).
**Input data:** Number of samples held by each user \((k_{1},\ldots,k_{n}\)\(s.t.\)\(k_{i}\geq k_{i+1})\), and user-level estimates \((\widehat{p}_{1},\cdots,\widehat{p}_{n})\).
```
1:Initial Estimates
2:\(\widehat{p}_{\epsilon}^{\text{initial}}=\texttt{mean}_{\epsilon,\delta}(x_{9n/10+1}^{1},\cdots,x_{n}^{1})\)\(\triangleright\) Initial mean estimate
3:\(\widehat{\sigma}_{p}^{2}=\texttt{variance}_{\epsilon,\delta}(\widehat{p}_{1}, \cdots,\widehat{p}_{L})\)\(\triangleright\) Initial variance estimate
4:Compute Sensitivity Proposal
5:\(\widehat{k_{L}}=\texttt{EM}_{\epsilon}(k_{1},\cdots,k_{n};L,k_{\max})\)\(\triangleright\) Compute \(L\)-th order statistic
6:for\(i\in[L+1,9n/10]\)do
7:\(\widehat{k}_{i}=\min\{k_{i},\widehat{k_{L}}\}\)
8:\(\widehat{\sigma_{i}}^{2}=\frac{1}{\widehat{k}_{i}}(\widehat{p}^{\text{initial}}_{\epsilon}-(\widehat{p}^{\text{initial}}_{\epsilon})^{2})+(1-\frac{1}{\widehat{k}_{i}})\widehat{\sigma_{p}}^{2}\).
9:\(v_{i}=\frac{1}{\widehat{\sigma_{i}}^{2}}\)\(\triangleright\) Compute truncated, unnormalised weights
10:\(\widehat{\sigma_{\min}}^{2}=\frac{1}{\widehat{k_{L}}}(\widehat{p}^{\text{initial}}_{\epsilon}-(\widehat{p}^{\text{initial}}_{\epsilon})^{2})+(1-\frac{1}{\widehat{k_{L}}})\widehat{\sigma_{p}}^{2}\).
11:\(\widehat{N}=\sum_{i=L+1}^{9n/10}v_{i}+\text{Lap}\left(\frac{1}{\epsilon\widehat{\sigma_{\min}}^{2}}\right)-\frac{1}{\epsilon\widehat{\sigma_{\min}}^{2}}\ln(2\delta)\)\(\triangleright\) Compute noisy normalisation term
12:\(\Lambda=12\frac{f_{\mathcal{D}}^{\widehat{k_{L}}}(n,\widehat{\sigma_{p}}^{2},\beta)}{\widehat{\sigma_{\min}}^{2}\widehat{N}}\)\(\triangleright\) Compute local sensitivity proposal
13:Propose-Test-Release on \(\mathcal{M}(\cdot\,;\widehat{k_{L}},n,\widehat{p}^{\text{initial}}_{\epsilon}, \widehat{\sigma}_{p}^{2},\alpha)\)
14:\(D_{T}=\{(\widehat{p}_{i},k_{i})\}_{i\in[L+1:9n/10]}\)
15:\(\kappa^{*}=\arg\max\{\kappa\in\mathbb{N}\mid\forall D^{\prime}\text{ s.t. }D^{\prime}\text{ is a $\kappa$-neighbor of }D_{T},\texttt{LS}(\mathcal{M}(\cdot;\widehat{k_{L}},9n/10-L,\widehat{p}^{ \text{initial}}_{\epsilon},\widehat{\sigma}_{p}^{2},\alpha);D^{\prime})\leq \Lambda\}\)
16:\(\triangleright\) Compute distance to high sensitivity dataset
17:\(\tilde{\kappa}=\kappa^{*}+\text{Lap}(1/\epsilon)\)
18:if\(\tilde{\kappa}<\frac{\log(1/\delta)}{\epsilon}\)then
19:return\(\widehat{p}^{\text{priv $k$}}_{\epsilon}=\widehat{p}^{\text{initial}}_{\epsilon}\)\(\triangleright\) Return initial estimate if proposed local sensitivity too small
20:else
21: Sample \(Y\sim\text{Lap}\left(\frac{\Lambda}{\epsilon}\right)\)\(\triangleright\) Sample noise added for privacy
22:return\(\widehat{p}^{\text{priv $k$}}_{\epsilon}=\mathcal{M}(D_{T};\widehat{k_{L}},9n/10-L, \widehat{p}^{\text{initial}}_{\epsilon},\widehat{\sigma}_{p}^{2},\alpha)+Y\)\(\triangleright\) Final estimate
```
**Algorithm 3** Private Heterogeneous Mean Estimation \(\hat{p}^{\text{priv $k$}}_{\epsilon}\)
Next, the function \(\mathcal{M}\) as described in Algorithm 4 incorporates the truncation of weights in a slightly different (but nearly equivalent) manner from Algorithm 2; otherwise it is the same as Algorithm 2, without the addition of noise. Observe that choosing a truncation parameter \(T\) is equivalent to choosing an integer \(k\) such that \(T=1/\text{Var}(\mathcal{D}(k))\), so \(\widehat{k_{L}}\) plays the role in Algorithm 3 that \(T^{*}\) plays in Algorithm 2. The statistic \(\widehat{k_{L}}\) is a private estimate of the \(L\)-th order statistic of the set \(\{k_{1},\cdots,k_{n}\}\). Since the only users that participate in the final estimate (and hence have their data truncated) all have \(k_{i}<k_{L}\), this algorithm attempts to find the smallest truncation parameter such that no data are actually truncated. We will show that provided either \(\epsilon\) is not too small or the ratio \(k_{\max}/k_{\text{med}}\) is not too large, this level of truncation is sufficient. There are several existing algorithms in the literature that can be used to privately estimate the \(L\)-th order statistic \(\widehat{k_{L}}\). A simple algorithm [13, 14, 15, 16] that estimates the order statistic using the standard differential privacy framework called the Exponential Mechanism (EM) [15] is sufficient up to a constant factor. For a full description of this algorithm, as well as its accuracy guarantees, see [14].
In order for this algorithm to produce accurate results, we need an upper bound on the maximum number of data points a single user can have; we will call this number \(k_{\max}\).
**Theorem 4.3**.: _For any \(\epsilon>0\), \(\delta\in[0,1]\), \(\beta\in[0,1]\), \(n\in\mathbb{N}\), \(\alpha>0\), \(L\in[n]\), \((\epsilon,\delta)\)-DP mean estimator \(\texttt{mean}_{\epsilon,\delta}\), \((\epsilon,\delta)\)-DP variance estimator \(\texttt{variance}_{\epsilon,\delta}\), \(k_{\max}\in\mathbb{N}\), and \(\epsilon\)-DP estimator of the \(\ell\)th order statistic \(\texttt{EM}_{\epsilon}(\cdot;\ell,k_{\max})\), Algorithm 3 is \((3\epsilon,2\delta)\)-DP. Let \(\Upsilon=\frac{\log(1/\delta)}{\epsilon}+\frac{\ln(1/\delta)\ln(1/\beta)}{\epsilon}\). If the conditions of Theorem 4.1 hold and_
* \(\frac{k_{\max}}{k_{\text{med}}}\leq\min\left\{\frac{\log\frac{\alpha}{\beta}}{ \log\frac{n^{\frac{1}{\delta}}}{\beta}}\frac{n-\Upsilon-1}{2},\frac{n-1}{2( \Upsilon+1)},\frac{\epsilon^{2}(n/2-L-1)}{\log^{2}(n/\beta)},\frac{(n/4-1) \epsilon}{3\ln(2/\delta)}\right\}\)_,_
* _for all_ \(k\leq k_{\max}\)_,_ \(\max\{\alpha,\sigma_{k}\}\leq f_{\mathcal{D}}^{k}(n,\widehat{\sigma_{p}}^{2}, \beta)\leq 2\sigma_{k}\sqrt{\log(n/\beta)}\)_, where_ \(\sigma_{k}^{2}=\text{Var}(\mathcal{D}(k))\)__
* _for any set_ \(I\subset[n]\)_, with probability_ \(1-\beta\)_,_ \(\left|\frac{\sum_{i\in I}v_{i}\widehat{p}_{i}}{\sum_{i\in I}v_{i}}-p\right| \leq 2\sqrt{\text{Var}\left(\frac{\sum_{i\in I}v_{i}\widehat{p}_{i}}{\sum_{ i\in I}v_{i}}\right)\log(1/\beta)}\)_,_
_then with probability \(1-4\beta\), \(\text{Var}(\hat{p}_{\epsilon}^{\text{priv }k})\leq\tilde{O}\left(\text{Var}(\widehat{p})\right)\)_
Theorem 4.3 implies that under some mild conditions, the variance of \(\hat{p}_{\epsilon}^{\text{priv }k}\) is within a constant factor of the variance of \(\widehat{p}\), the non-private realisable estimator. While the conditions of this theorem may seem intimidating, they are not particularly stringent for reasonable parameter settings.
* **Conditions on L.** In Section 4.2, when discussing the conditions of Theorem 4.1, we discussed that \(L=\tilde{O}(1/\epsilon)\) is sufficient for learning a constant multiplicative approximation to \(\sigma_{p}^{2}\) for sufficiently well-behaved distributions. We'll give such an example estimator in Section 6.2. If we increase \(L\) to \(O(\log(n)/\epsilon)\) then the third condition in Theorem 4.1 (which we still need to satisfy) becomes only slightly more restrictive, and we can satisfy the first condition of Theorem 4.3 provided \(k_{\max}\) and \(1/\beta\) are both polylogarithmic in \(n\).
* **Conditions on \(k_{\max}/k_{\text{med}}\).** Up to logarithmic factors, the required upper bound on the ratio \(k_{\max}/k_{\text{med}}\) is \(\tilde{O}(\epsilon^{2}n)\). For moderate values of \(\epsilon\), this condition is unlikely to be prohibitive in practice, although it is more restrictive than the upper bound of \(\tilde{O}(\epsilon n)\) that was required in Theorem 4.1.
* **Concentration bounds.** The final two conditions are concentration bounds, essentially requiring \(\mathcal{D}(k)\) to be sub-Gaussian. This condition is technically absent from Theorem 4.1, although a similar condition is required in order to design a private variance estimation algorithm with sufficiently good accuracy.
The proof that Algorithm 3 is \((3\epsilon,2\delta)\)-DP is fairly routine; details can be found in the appendix. There are two main differences between Algorithm 3 and Algorithm 2 that affect the utility: the replacement of the optimal truncation with truncation based on \(\widehat{k_{L}}\), and the use of propose-test-release (PTR) to determine the level of noise added to the final estimate. We will control the impact of these two factors separately.
Let us consider the impact of changing the truncation parameter. Set \(T_{L}=\frac{1}{\hat{\sigma_{\min}}^{2}}\). Assuming the PTR component of the algorithm does not fail, the variance of \(\hat{p}_{\epsilon}^{\text{priv }k}\) can be written as two terms, namely the variance that exists in the non-private setting, and the additional noise due to privacy:
\[\text{Var}(\hat{p}_{\epsilon}^{\text{priv }k})=\underbrace{\frac{\sum_{i=L+1}^{9n/10}\min\left\{\frac{T_{L}^{2}}{\widehat{\sigma}_{i}^{2}},\frac{1}{\widehat{\sigma}_{i}^{4}}\right\}\text{Var}([\widehat{p}_{i}]_{\widehat{a}_{i}}^{\widehat{b}_{i}})}{\left(\sum_{i=L+1}^{9n/10}\min\left\{\frac{T_{L}}{\widehat{\sigma}_{i}},\frac{1}{\widehat{\sigma}_{i}^{2}}\right\}\right)^{2}}}_{\text{non-private term}}+\underbrace{\left(\frac{12f_{\mathcal{D}}^{\widehat{k_{L}}}(n,\widehat{\sigma}_{p}^{2},\beta)}{\epsilon\,\widehat{\sigma}_{\min}^{2}\widehat{N}}\right)^{2}}_{\text{private term}}.\]
The truncation has opposite effects on each of these terms. As \(T\) decreases, the private variance term decreases while the non-private variance term increases. Suppose we set \(T_{L}=1/\text{Var}(\mathcal{D}(k_{L+K}))\) for some \(K\in[-\frac{1}{2}L,\frac{1}{2}L]\). If \(K\) is negative, no truncation occurs and the non-private term is optimal. Even if \(K\) is positive, only a small number of data points are truncated, so the non-private term is still close to its optimal value. However, setting the truncation parameter this large means that the private term is larger than necessary. We show that even though the private term may be larger than it would be with the optimal truncation, under the conditions of the theorem, the non-private term dominates the variance anyway.
Let us now consider the impact of the use of propose-test-release (PTR). The two factors relevant to how the PTR component of Algorithm 3 affects the utility are the scale of \(\Lambda/\epsilon\) and the probability that the proposed sensitivity is too small, resulting in the algorithm ending in line (19) rather than line (22). The impact of the former is easy to analyse since the noise added is simply output perturbation. In order to show that the PTR ends in line (22) with high probability, we need to show that with high probability (over the randomness in the samples), \(\kappa^{*}\) as defined in line (15) is large enough. Since this claim is in essence about \(\mathcal{M}(\cdot;k_{\max},n,\hat{p},\hat{\sigma_{p}}^{2})\), we will state this claim in the notation of Algorithm 4.
**Lemma 4.4**.: _Given \(k_{\max}\in\mathbb{N}\), \(n\in\mathbb{N}\), \(\hat{p}\in[0,1]\), \(\hat{\sigma_{p}}^{2}\in[0,1]\) and \(k_{1},\cdots,k_{n}\), let \(\Upsilon=\frac{\log(1/\delta)}{\epsilon}+\frac{\ln(1/\delta)\ln(1/\beta)}{\epsilon}\), if the conditions of Theorem 4.3 hold and \(D=\{(\widehat{p}_{i},k_{i})\}_{i=1}^{n}\) is a dataset such that \(\widehat{p}_{i}\sim\mathcal{D}(k_{i})\), then with probability \(1-\beta\), for any \(D^{\prime}\) that is a \(\kappa\)-neighbour of \(D\) for \(0\leq\kappa\leq\Upsilon\), we have_
\[\texttt{LS}(\mathcal{M}(\cdot;k_{\max},m,\hat{p},\hat{\sigma_{p}}^{2},\alpha) ;D^{\prime})\leq 12\frac{v_{k_{\max}}f_{\mathcal{D}}^{k_{\max}}(n,\hat{ \sigma_{p}}^{2},\beta)}{\sum_{i=1}^{n}v_{i}}.\]
## 5 Near Optimality and Lower Bounds
In Section 4, we showed that the variance of our realisable private estimator \(\widehat{p}_{\epsilon}\) was within a constant factor of that of the complete-information estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\). In this section, we will show that in fact, \(\widehat{p}_{\epsilon}\) performs as well (up to logarithmic factors) as the true optimal private estimator. We'll also give a lower bound on the performance of the optimal estimator in terms of the \(k_{i}\). This will give us some intuition into the types of distributions of \(k_{i}\)'s that benefit from this refined analysis.
### Minimax Optimality of \(\widehat{p}_{\epsilon}\)
The goal of this section is to show that the estimator \(\widehat{p}_{\epsilon}\) discussed in Section 4.2 is minimax optimal up to logarithmic factors among the class of unbiased estimators. In light of Theorem 4.1, it suffices to show that the estimator \(\widehat{p}_{\epsilon}^{\text{ideal}}\) defined by Equations (3), (4), and (6) is minimax optimal up to logarithmic factors. Let \(\mathcal{P}\) be a parameterized family of distributions \(p\mapsto\mathcal{D}_{p}\), where \(\mathbb{E}[\mathcal{D}_{p}]=p\) and \(\mathcal{D}_{p}\) is supported on \([0,1]\). For \(p\in[0,1]\) and \(k\in\mathbb{N}\), let \(\phi_{p,k}\) be the probability density function of \(\mathcal{D}_{p}(k)\). In this section, we will return to the public-size user-level differential privacy setting. Hence, we will let \(k_{1},\cdots,k_{n}\) be fixed.
Our lower bound will show that the estimation error must consist of a statistical term and a privacy term. Such a lower bound thus must generalize a statistical lower bound. We will rely on the Cramer-Rao approach to proving statistical lower bounds; as we show, it is particularly amenable to incorporating a privacy term. This approach relates the variance of any unbiased estimator of the mean of a distribution to the inverse of the Fisher information; the proof naturally extends to the case where we are given samples from a set of distributions with the same mean but different variances, as is the case in our setting. For many distributions of interest, e.g., Gaussian and Bernoulli, the Fisher information of a single sample is the inverse of the variance, and we make that assumption for \(\mathcal{D}_{p}\). We also assume that \(\mathcal{D}_{p}\) has sub-Gaussian tails. Thus, as long as the set of permissible meta-distributions includes distributions with this property, e.g., includes truncated Gaussians, our lower bound applies.
**Theorem 5.1**.: _Let \(\mathcal{P}\) be a parameterized family of distributions \(p\mapsto\mathcal{D}_{p}\) and suppose that for all \(p\in[0,1]\) and \(k\in\mathbb{N}\), the Fisher information of \(\phi_{p,k}\) is inversely proportional to the variance, \(\mathrm{Var}(\mathcal{D}_{p}(k))\):_
\[\int(\tfrac{\partial}{\partial p}\log\phi_{p,k}(x))^{2}\phi_{p,k}(x)dx=O(\tfrac {1}{\mathrm{Var}(\mathcal{D}_{p}(k))}), \tag{10}\]
_and for all \(p\), \(n>0\), \(k\in\mathbb{N}\) and \(\beta\in[1/3,2/3]\), \(f^{k}_{\mathcal{D}_{p}}(n,\sigma_{p}^{2},\beta)=\tilde{O}(\mathrm{Var}( \mathcal{D}_{p}(k)))\), then_
\[\min_{M,\ \mathrm{unbiased}}\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}(k_{i}),M}(M)] =\tilde{O}\left(\max_{p\in[1/3,2/3]}\big{[}\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}(k_{i}),M}(\widehat{p}^{\mathrm{ideal}}_{\epsilon})\big{]}\right)\] \[=\tilde{O}\left(\min_{T}\frac{\sum_{i=1}^{n}\min\{1/\sigma_{i}^{2},T^{2}\}+\max_{i}\frac{\min\{1/\sigma_{i}^{4},T^{2}/\sigma_{i}^{2}\}|b_{i}-a_{i}|^{2}}{\epsilon^{2}}}{(\sum_{j=1}^{n}\min\{1/\sigma_{j}^{2},T/\sigma_{j}\})^{2}}\right).\]
_Further, under the conditions of Theorem 4.1,_
\[\max_{p\in[1/3,2/3]}\big{[}\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}( k_{i}),M}(\widehat{p}_{\epsilon})\big{]}=\tilde{O}\left(\min_{M,\ \mathrm{unbiased}}\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\forall i\in[n],x_{i}\sim \mathcal{D}(k_{i}),M}(M)]\right).\]
Theorem 5.1 says the estimator \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\) has variance only a logarithmic factor worse than the variance of the optimal unbiased estimator. Due to the truncation of the \(\hat{p}_{i}\), the estimator \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\) is not unbiased, although the bias can be made polynomially small by widening the truncation interval so truncation does not occur with high probability. The theorem can also be slightly extended to include estimators with polynomially small bias. This small bias assumption seems to be inherent in the Cramer-Rao style proof that we use.
We will prove Theorem 5.1 in three steps. The following class of noisy linear estimators, NLE, will act as an intermediary in our proof. The notation \(\sigma_{i}^{2}\) denotes \(\mathrm{Var}(x_{i})\), which accounts for the randomness in generating \(x_{i}\).
\[\texttt{NLE}=\Big{\{}M_{\texttt{NLE}}(\mathbf{x};\mathbf{w})= \sum_{i=1}^{n}w_{i}x_{i}+\mathrm{Lap}(\tfrac{\max_{i}w_{i}\sigma_{i}}{ \epsilon})\bigm{|}w_{i}\in[0,1],\sum_{i=1}^{n}w_{i}=1\Big{\}}.\]
Similar to \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\), this class of estimators is not realizable since we only have access to an estimate of \(\sigma_{i}^{2}=\mathrm{Var}(\mathcal{D}_{p}(k_{i}))\). Additionally, the estimators in NLE are not necessarily \(\epsilon\)-DP.
To prove Theorem 5.1, we will first show that the weights used in \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\) define the optimal weight vector among the estimators in NLE. Then, we'll show that (up to constant factors) the minimax optimal estimator among unbiased estimators lies in NLE. Finally, we'll show that the variance of \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\) is at most a logarithmic factor worse than its not-quite-private counterpart in NLE. This completes the proof of the near minimax optimality of \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\), and hence \(\widehat{p}_{\epsilon}\).
The first step is shown in Lemma 5.2, which shows that the weights used in \(\widehat{p}^{\mathrm{ideal}}_{\epsilon}\) are optimal (i.e., variance-minimizing) among all estimators in the set NLE.
**Lemma 5.2**.: _Given \(\widehat{p}_{i}\sim\mathcal{D}_{p}(k_{i})\) with variance \(\sigma_{i}^{2}\) for all \(i\in[n]\) and \(w\in[0,1]^{n}\) such that \(\sum_{i=1}^{n}w_{i}=1\), let \(\widehat{p}=\sum_{i=1}^{n}w_{i}\widehat{p}_{i}+\mathrm{Lap}(\tfrac{\max_{i}w_{i} \sigma_{i}}{\epsilon})\). The variance of \(\widehat{p}\) is minimized by the following weights:_
\[\tilde{w_{i}}^{*}=\frac{\min\{1/\sigma_{i}^{2},T/\sigma_{i}\}}{\sum_{j=1}^{n} \min\{1/\sigma_{j}^{2},T/\sigma_{j}\}}\]
_for some \(T\)._
Since the threshold \(T^{*}\) in \(\widehat{p}^{\rm ideal}_{\epsilon}\) was chosen to minimize \(\mathrm{Var}(\widehat{p}^{\rm ideal}_{\epsilon})\), we know that the weights \(w_{i}^{*}\) in \(\widehat{p}^{\rm ideal}_{\epsilon}\) are optimal. The proof of Lemma 5.2 can be found in Appendix D. The main component of the proof is showing that under the constraint of differential privacy, no individual's contribution should be too heavily weighted.
Now, let us turn to the second - and main - component of the proof of Theorem 5.1. Lemma 5.3 formalises the statement that an estimator inside the class NLE is minimax optimal among unbiased estimators. That is, for any unbiased estimator \(M\), there exists an estimator \(M_{\tt ML}\in\texttt{NLE}\) with lower worst-case variance.
**Lemma 5.3**.: _Let \(\mathcal{P}\) be a parameterized family of distributions \(p\mapsto\mathcal{D}_{p}\) and suppose that \(M:[0,1]^{n}\to[0,1]\) is an \(\epsilon\)-DP estimator such that for all \(p\in[1/3,2/3]\), if_
1. \(M\) _is unbiased,_ \(\mu_{M}(p)=p\)__
2. _the Fisher information of_ \(\phi_{p,k_{i}}\) _is inversely proportional to the variance_ \[\int(\tfrac{\partial}{\partial p}\log\phi_{p,k_{i}}(x_{i}))^{2}\phi_{p,k_{i}} (x_{i})dx_{i}=O(\tfrac{1}{\mathrm{Var}(\mathcal{D}_{p}(k_{i}))}),\]
_then there exists an estimator \(M_{\tt ML}\in\texttt{NLE}\) such that_
\[\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}(k_{i}),M_{\tt ML}}(M_{\tt ML})]\leq O\left(\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\forall i \in[n],x_{i}\sim\mathcal{D}(k_{i}),M}(M)]\right).\]
A detailed proof of Lemma 5.3 can be found in Appendix D, but let us give a brief sketch of the proof here. Given an estimator \(M_{\tt ML}\in\texttt{NLE}\), the variance of \(M_{\tt ML}\) can be bounded as
\[\mathrm{Var}(M_{\tt ML})\leq\sum_{i=1}^{n}w_{i}^{2}\mathrm{Var}(\mathcal{D}(k_ {i}))+O(\tfrac{\max w_{i}\sigma_{i}}{\epsilon})^{2}. \tag{11}\]
That is, it can be decomposed as the variance contribution of each individual coordinate, and the variance contribution of the additional noise due to privacy. Lemma 5.4 (proved in Appendix D) shows that the variance of any estimator \(M\) can be lower bounded by a similar decomposition. Since this involves considering the impact of each coordinate individually, the following notation will be useful. Given an estimator \(M\), vector \(\boldsymbol{q}\in[0,1]^{n}\) and set \(I\subset[n]\), let \(\mu_{M}(x_{[n]\setminus I};\boldsymbol{q})=\mathbb{E}_{\forall i\in I,x_{i}\sim\mathcal{D}_{q_{i}}(k_{i}),M}[M(x_{1},\cdots,x_{n})]\) be the expectation over only the randomness in \(I\) and \(M\). Note that in this notation, user \(i\) is sampling from a meta-distribution with mean \(q_{i}\), which may be different for each user. We will abuse notation slightly to let \(\mu_{M}(\boldsymbol{q})=\mu_{M}(\emptyset;\boldsymbol{q})\), and for \(p\in[0,1]\), we will let \(\mu_{M}(x_{[n]\setminus I};p)=\mu_{M}(x_{[n]\setminus I};(p,\cdots,p))\). When the estimator \(M\) is clear from context, we will omit it.
**Lemma 5.4**.: _For any randomised mechanism \(M:[0,1]^{n}\to[0,1]\),_
\[\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}_{p}(k_{i}),M}( M)=\mathbb{E}_{\forall i\in[n],x_{i}\sim\mathcal{D}_{p}(k_{i}),M}[(M(x_{1},...,x_{n})-\mu(p))^{2}]\] \[\qquad\geq\sum_{i=1}^{n}\mathbb{E}_{x_{i}\sim\mathcal{D}_{p}(k_{ i})}[(\mu(x_{i};p)-\mu(p))^{2}]+\mathbb{E}_{\forall i\in[n],x_{i}\sim \mathcal{D}_{p}(k_{i}),M}[(M(x_{1},...,x_{n})-\mu(x_{1},...,x_{n};p))^{2}] \tag{12}\]
In Equation (12), the first term is the sum of contributions to the variance of the individual terms \(x_{i}\), and the second term is the contribution to the variance of the noise added for privacy. Now we want to define a weight vector \(\mathbf{w}\) such that the terms in Equation (12) are lower bounded by the corresponding terms in Equation (11). The key component of the proof is the observation that if we let
\[w_{i}(p)=\tfrac{\partial}{\partial q_{i}}\mu(\boldsymbol{q})\bigm{|}_{ \boldsymbol{q}=(p,\cdots,p)} \tag{13}\]
then we can show that there exists a constant \(c\) such that
\[\mathbb{E}_{x_{i}\sim\mathcal{D}_{p}(k_{i})}[(\mu(x_{i};p)-\mu(p))^{2}]\geq c \cdot w_{i}(p)^{2}\mathrm{Var}(\mathcal{D}_{p}(k_{i})). \tag{14}\]
This controls the contribution of each individual coordinate to the variance of \(M\). It remains only to control the contribution of the noise due to privacy. We show that there exists \(x_{i}\), \(x^{\prime}_{i}\) such that
\[|\mu(x_{i};p)-\mu(x^{\prime}_{i};p)|\geq\Omega(w_{i}(p)\cdot\sqrt{\mathrm{ Var}(\mathcal{D}_{p}(k_{i}))}),\]
which we show implies that,
\[\mathbb{E}_{\forall i\in[n],x_{i}\sim\mathcal{D}_{p}(k_{i}),M}[(M(x_{1},\cdots,x_{ n})-\mu(x_{1},\cdots,x_{n};p))^{2}]\geq\Omega(\tfrac{w_{i}(p)^{2}\mathrm{Var}( \mathcal{D}_{p}(k_{i}))}{\epsilon^{2}}). \tag{15}\]
Intuitively, the worst-case \(|\mu(x_{i};p)-\mu(x_{i}^{\prime};p)|\) plays an analogous role to the sensitivity, since it captures the impact of changing one user's data. Since \(M\) is an \(\epsilon\)-DP mechanism and \(|\mu(x_{i};p)-\mu(x_{i}^{\prime};p)|\) is at least \(\Omega(w_{i}(p)\cdot\sqrt{\mathrm{Var}(\mathcal{D}_{p}(k_{i}))})\), we show that it must include noise with standard deviation of at least this magnitude over \(\epsilon\). This is consistent with, e.g., the Laplace Mechanism that adds noise with standard deviation \(\Theta(\Delta f/\epsilon)\).
Combining Lemma 5.4 with Equations (14) and (15) gives that the variance of \(M\) is at least,
\[\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}_{p}(k_{i}),M}(M)\geq\sum_{i=1}^{n}c\cdot w_{i}(p)^{2}\mathrm{Var}(\mathcal{D}_{p}(k_{i}))+\Omega\left(\max_{i}\tfrac{w_{i}(p)^{2}\mathrm{Var}(\mathcal{D}_{p}(k_{i}))}{\epsilon^{2}}\right).\]
Finally, we must create a corresponding \(M_{\texttt{NL}}\in\texttt{NLE}\) for comparison, using the same weights. Since \(\sum_{i=1}^{n}w_{i}(p)\) as defined in Equation (13) need not equal \(1\), these weights will need to be normalized to sum to \(1\) to create an estimator in NLE. We need to show this normalisation does not substantially increase the variance of the resulting estimator. In order to show this, we show that there exists a \(p^{*}\in[1/3,2/3]\) such that \(\sum_{i=1}^{n}w_{i}(p^{*})\geq 1\), since normalizing the estimator by a factor of \(\tfrac{1}{\sum_{i=1}^{n}w_{i}(p^{*})}\) will affect the variance by a factor of \(\tfrac{1}{(\sum_{i=1}^{n}w_{i}(p^{*}))^{2}}\), and thus if \(\sum_{i=1}^{n}w_{i}(p^{*})\geq 1\), then this will decrease variance. This desired fact follows from the definition of \(w_{i}\), and the fact that \(M\) is unbiased. Now, if we define
\[M_{\texttt{NL}}(\mathbf{x})=\frac{\sum_{i=1}^{n}w_{i}(p^{*})x_{i}+\mathrm{ Lap}(\tfrac{\max_{i}w_{i}(p^{*})\sqrt{\mathrm{Var}(\mathcal{D}_{p}(k_{i}))}}{ \epsilon})}{\sum_{i=1}^{n}w_{i}(p^{*})},\]
then \(M_{\texttt{NL}}\in\texttt{NLE}\) and \(\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}_{p}(k_{i}),M_{\texttt{NL}} }(M_{\texttt{NL}})=\Theta\left(\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{ D}_{p}(k_{i}),M}(M)\right)\).
The final component needed for the proof of Theorem 5.1 is a translation from the estimators in NLE, which are not \(\epsilon\)-DP, to corresponding \(\epsilon\)-DP estimators. For any weight vector \(\mathbf{w}\), we can define an \(\epsilon\)-DP estimator by truncating the data points \(x_{i}\) and calibrating the noise appropriately:
\[M_{\texttt{TNL}}(x_{1},\cdots,x_{n};\mathbf{w})=\sum_{i=1}^{n}w_{i}[x_{i}]_{p- f_{\mathcal{D}}^{k_{i}}(n,\sigma_{p}^{2},\beta)}^{p+f_{\mathcal{D}}^{k_{i}}(n, \sigma_{p}^{2},\beta)}+\mathrm{Lap}(\tfrac{\max_{i}2w_{i}f_{\mathcal{D}}^{k_{ i}}(n,\sigma_{p}^{2},\beta)}{\epsilon}).\]
Provided \(f_{\mathcal{D}}^{k_{i}}(n,\sigma_{p}^{2},\beta)\approx\mathrm{Var}(\mathcal{ D}(k_{i}))\), the estimators \(M_{\texttt{TNL}}\) have approximately the same variance as the corresponding element of NLE, but are slightly biased. This is formalized in the following lemma.
**Lemma 5.5**.: _For any distribution \(\mathcal{D}\), \(n>0\) and \(\beta\in[0,1]\), if for all \(k_{i}\), \(f_{\mathcal{D}}^{k_{i}}(n,\sigma_{p}^{2},\beta)=\tilde{O}(\mathrm{Var}(\mathcal{D}(k_{i})))\), then for any \(\mathbf{w}\in[0,1]^{n}\) such that \(\sum_{i=1}^{n}w_{i}=1\), we have \(\mathrm{Var}(M_{\texttt{TNL}}(\ \cdot\ ;\mathbf{w}))=\tilde{O}(\mathrm{Var}(M_{\texttt{NL}}(\ \cdot\ ;\mathbf{w})))\). Further, the bias of \(M_{\texttt{TNL}}\) is at most \(\beta\)._
Finally, we have the tools to prove the main theorem in this section, Theorem 5.1:
\[\min_{M\text{ unbiased}}\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\mathcal{ D}_{p}}(M)] =\Omega(\min_{M\in\texttt{NLE}}\max_{p\in[1/3,2/3]}[\mathrm{Var}_{ \mathcal{D}_{p}}(M)])\] \[=\Omega(\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\mathcal{D}_{p}}(p_{ \epsilon}^{\texttt{NLE}})])\] \[=\tilde{\Omega}(\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\mathcal{D}_{ p}}(\widehat{p}_{\epsilon}^{\text{ideal}})])\] \[=\tilde{\Omega}(\max_{p\in[1/3,2/3]}[\mathrm{Var}_{\mathcal{D}_{ p}}(\widehat{p}_{\epsilon})])\]
where \(p_{\epsilon}^{\texttt{NLE}}\in\texttt{NLE}\) has the same weights as \(\widehat{p}_{\epsilon}^{\text{ideal}}\). The equalities follow from Lemmas 5.3, 5.2, 5.5, and Theorem 4.1, respectively.
### Minimax Lower Bound on Estimation Rate
In addition to establishing the near optimality of \(\widehat{p}_{\epsilon}\), we will also give a lower bound on minimax rate of estimation in terms of the parameters \(k_{1},\cdots,k_{n}\) and \(\sigma_{p}^{2}\). Note that we can view the truncation of the weights \(w_{i}\) as establishing an effective upper bound on \(k_{i}\). Given \(k_{1},\cdots,k_{n}\in\mathbb{N}\), and \(\epsilon>0\), let
\[k^{*}=\arg\min_{k}\frac{k+\sum_{i=1}^{n}\min\{k_{i},k\}}{(\sum_{i=1}^{n}\min\{k_{i},k\})^{2}}. \tag{16}\]
Intuitively, in the case that \(\sigma_{p}=0\), we want to use as many samples as possible, but one user contributing many samples leads to larger sensitivity and thus a higher privacy cost. Limiting the number of samples per user to \(k_{\max}\) allows us to limit the sensitivity to be about \(w_{\max}(1/\sqrt{k_{\max}})\). Since \(w_{i}\) is proportional to the number of samples used, capping each user at \(k^{*}\) samples amounts to choosing the threshold that minimises the variance of the resulting estimator.
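A brute-force evaluation of this cap, using as the objective the ratio \((k+\sum_{i}\min\{k_{i},k\})/(\sum_{i}\min\{k_{i},k\})^{2}\) of Eq. (16), could look as follows (a sketch with our own names; it simply scans all candidate caps).

```python
import numpy as np

def optimal_cap(k_values):
    """Sketch: brute-force the per-user sample cap k* of Eq. (16) by scanning
    candidate caps k and minimizing (k + sum_i min(k_i, k)) / (sum_i min(k_i, k))^2."""
    k_values = np.asarray(k_values)
    best_k, best_obj = None, np.inf
    for k in range(1, int(k_values.max()) + 1):
        capped = np.minimum(k_values, k).sum()
        obj = (k + capped) / capped**2
        if obj < best_obj:
            best_k, best_obj = k, obj
    return best_k, best_obj

print(optimal_cap([1, 2, 2, 5, 100, 1000]))
```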
**Corollary 5.6**.: _Given \(k_{1},\cdots,k_{n}\in\mathbb{N}\), and \(\sigma_{p}\), there exists a family of distributions \(\mathcal{D}_{p}\) such that_
\[\min_{M\text{ unbiased}}\max_{p\in[1/3,2/3]}\mathrm{Var}_{\forall i\in[n],x_{i}\sim\mathcal{D}_{p}(k_{i})}[M(x_{1},\cdots,x_{n})]\geq\tilde{\Omega}\left(\min_{k^{*}}\left\{\frac{k^{*}+\sum_{i=1}^{n}\min\{k_{i},k^{*}\}}{(\sum_{i=1}^{n}\min\{k_{i},\sqrt{k_{i}}k^{*}\})^{2}},\frac{\sigma_{p}^{2}}{n}\right\}\right).\]
Corollary 5.6 is proved in two parts, using two different families of distributions \(\mathcal{D}_{p}\). The first family is where \(\sigma_{p}^{2}=0\), so \(\mathcal{D}_{p}(k)=\text{Bin}(k,p)\) for all \(k\in[n]\). For this family, we know that the minimax error is obtained by the mechanism \(\widehat{p}_{\epsilon}^{\text{ideal}}\). Calculating the variance of \(\widehat{p}_{\epsilon}^{\text{ideal}}\) on this family, we obtain the first term of the minimum. The second family is the family of truncated Gaussian distributions (truncated so that \(\mathcal{D}\) is supported on \([0,1]\)). The variance of the optimal estimator for this family would be lower bounded by \(\sigma_{p}^{2}/n\), even if each user was given a sample directly from \(\mathcal{D}\), rather than from \(\mathcal{D}(k)\). Thus, using a reduction to the case of simply estimating \(p\) given \(n\) samples from \(\mathcal{D}\), we obtain the second term in the minimum.
## 6 Example Initial Estimators
In this section we give example initial mean and variance estimation procedures that can be used in the framework described in Section 4. For both estimators, we show that they satisfy the conditions of Theorem 4.1, and thus can be used as initial estimators in Algorithm 2, assuming all other technical conditions are satisfied. This also immediately implies that the set of initial mean and variance estimators which satisfy the conditions of Theorem 4.1 is non-empty.
We note again that the estimators described in this section are examples of estimators that achieve the conditions of Theorem 4.1, and that any private mean and variance estimators that satisfy these conditions could be used instead. As discussed in Section 4.2, one may choose to use different estimators of these initial quantities in different settings (for example, if local differential privacy is required or if different distributional assumptions are known).
### Initial Mean Estimation
We will begin with the initial mean estimation procedure \(\texttt{mean}_{\epsilon,\delta}\) used to compute \(\widehat{p}_{\epsilon}^{\text{initial}}\). We consider the simplest mean estimation subroutine, where the analyst collects a single data point from the \(n/10\) users with the smallest \(k_{i}\), then privately computes the empirical mean of these points using the Laplace Mechanism. The following lemma shows that this process is differentially private and satisfies the accuracy conditions of Theorem 4.1, i.e., that with high probability, \(\widehat{p}_{\epsilon}^{\text{initial}}\) is close to \(p\) and \(\widehat{p}_{\epsilon}^{\text{initial}}(1-\widehat{p}_{\epsilon}^{\text{initial}})\) is close to \(p(1-p)\).
**Lemma 6.1**.: _Fix any \(\epsilon>0\) and let \(\widehat{p}_{\epsilon}^{\text{initial}}=\texttt{mean}_{\epsilon,\delta}(x_{(9n/10)+1}^{1},\cdots,x_{n}^{1})=\frac{1}{n/10}\sum_{i=(9n/10)+1}^{n}x_{i}^{1}+\mathrm{Lap}\left(\frac{10}{\epsilon n}\right)\). Then \(\texttt{mean}_{\epsilon,\delta}\) is \((\epsilon,0)\)-differentially private, \(\mathbb{E}[\widehat{p}_{\epsilon}^{\text{initial}}]=p\) and if \(p\geq\frac{20\log(1/\beta)}{n}\), then for \(n\) sufficiently large,_
\[\Pr[|\widehat{p}_{\epsilon}^{\text{initial}}-p|\geq\alpha]\leq\beta\text{ for }\alpha=2\max\left\{\sqrt{\frac{12\widehat{p}_{\epsilon}^{\text{initial}}\log(4/\beta)}{n/10}+\frac{36\log^{2}(4/\beta)}{n^{2}/100}}+\frac{6\log(4/\beta)}{n/10},\frac{\log(2/\beta)}{\epsilon n/10}\right\}\leq f_{\mathcal{D}}^{k_{i}}(n,\sigma_{p}^{2},\beta).\]
_Further, if \(\min\{p,1-p\}\geq 12\max\left\{\frac{3\log(4/\beta)}{n/10},\frac{\log(2/\beta)}{ \epsilon n/10}\right\}\) then with probability \(1-\beta\), \(\widehat{p}_{\epsilon}^{\mathrm{initial}}\in[\frac{1}{2}p,\frac{3}{2}p]\) and \(\widehat{p}_{\epsilon}^{\mathrm{initial}}(1-\widehat{p}_{\epsilon}^{\mathrm{ initial}})\in[\frac{p(1-p)}{2},\frac{3p(1-p)}{2}]\)._
The concentration bound follows from noticing that \(\mathcal{D}=\mathrm{Ber}(p)\) and using the concentration of binomial random variables. The full proof is in Appendix E.
Note that the expression of \(\alpha\) depends only on quantities known to the analyst - including \(\widehat{p}_{\epsilon}^{\mathrm{initial}}\), which will be observed as output - so that \(\alpha\) can be computed directly for use in Algorithm 2. Although our presentation of Algorithm 2 requires \(\alpha\) to be specified up front as input to the algorithm, it could equivalently be computed internally by the algorithm as a function of \(\widehat{p}_{\epsilon}^{\mathrm{initial}}\) and other input parameters.
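For concreteness, a minimal sketch of \(\texttt{mean}_{\epsilon,\delta}\) as described in Lemma 6.1 might look as follows (the names are ours; the only data used is one \(\{0,1\}\) sample from each of the \(n/10\) users with the smallest \(k_{i}\)).

```python
import numpy as np

def initial_mean(first_samples, k_values, epsilon, rng=None):
    """Sketch of mean_eps: average one {0,1} sample from each of the n/10 users
    with the smallest k_i, then add Laplace noise of scale 10/(eps*n)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(k_values)
    m = n // 10                                   # number of users used
    idx = np.argsort(k_values)[:m]                # users with the smallest k_i
    empirical = np.mean(np.asarray(first_samples)[idx])
    # one user changes the average of m points by at most 1/m = 10/n
    return empirical + rng.laplace(scale=(1.0 / m) / epsilon)

rng = np.random.default_rng(1)
p, n = 0.3, 2000
k = rng.integers(1, 100, size=n)
x1 = rng.binomial(1, p, size=n)                   # each user's first sample
print(initial_mean(x1, k, epsilon=1.0, rng=rng))
```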
### Initial Variance Estimation
We now turn to our variance estimation procedure \(\mathtt{variance}_{\epsilon,\delta}\) for estimating \(\sigma_{p}^{2}\). Let us first provide some background on privately estimating the standard deviation of well-behaved distributions. Lemma 6.2 guarantees the existence of a differentially private algorithm for estimating standard deviation within a small constant factor with high probability, as long as the sample size is sufficiently large. The following is a slight generalisation of the estimation of the standard deviation of a Gaussian given by Karwa and Vadhan (2018).
**Lemma 6.2** (DP standard deviation estimation).: _For all \(n\in\mathbb{N}\), \(\sigma_{\min}<\sigma_{\max}\in[0,\infty],\epsilon>0,\delta\in(0,\frac{1}{n}], \beta\in(0,1/2),\zeta>0,\) there exists an \((\epsilon,\delta)\)-differentially private algorithm \(\mathcal{M}\) that satisfies the following: if \(x_{1},\ldots,x_{n}\) are i.i.d. draws from a distribution \(P\) which has standard deviation \(\sigma\in[\sigma_{\min},\sigma_{\max}]\) and absolute central third moment \(\rho=\mathbb{E}[|x-\mu(P)|^{3}]\) such that \(\frac{\rho}{\sigma^{3}}\leq\zeta\), then if \(n\geq c\zeta^{2}\min\{\frac{1}{\epsilon}\ln(\frac{\ln\sigma_{\max}^{2}}{ \beta}),\frac{1}{\epsilon}\ln(\frac{1}{\delta\beta})\}\), (where \(c\) is a universal constant), then \(\mathcal{M}\) produces an estimate \(\widehat{\sigma}\) of the standard deviation such that \(\Pr_{x_{1},\ldots,x_{n}\sim P,\mathcal{M}}(\sigma^{2}\leq\widehat{\sigma}^{2 }\leq 8\sigma^{2})\geq 1-\beta\)._
The proof of Lemma 6.2 is given formally in Appendix E.1, along with a detailed description of the algorithm \(\mathcal{M}\). The remaining omitted proofs in this section are in Appendix E. We note that the interval \([\sigma_{\min},\sigma_{\max}]\) can be set fairly large without much impact on the sample complexity, in the case that little is known about \(\sigma\) a priori.
In order to estimate \(\sigma_{p}^{2}\), we will use the estimator promised by Lemma 6.2 on the data of the \(L=\log n/\epsilon\) users with the largest \(k_{i}\). Let \(k=k_{\log n/\epsilon}\), so the top \(\log n/\epsilon\) individuals all have at least \(k\) data points. We will have these individuals report \(\widehat{p}_{i}^{k}:=\frac{1}{k}\sum_{j=1}^{k}x_{j}^{i}\), which is the empirical mean of their first \(k\) data points. Thus, we are running the estimator promised in Lemma 6.2 on \(\mathcal{D}(k)\) with \(\log n/\epsilon\) data points. In order to utilise Lemma 6.2, we first need to ensure that \(\mathcal{D}(k)\) satisfies the moment condition that \(\rho/\sigma^{3}\) is bounded, which is shown in Lemma 6.3.
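The construction of the reports \(\widehat{p}_{i}^{k}\) can be sketched as below (our own names; the private standard-deviation estimator of Lemma 6.2 is treated as an assumed black-box subroutine and is not implemented here).

```python
import numpy as np

def variance_reports(samples_by_user, k_values, epsilon):
    """Sketch: form the reports p_hat_i^k for the L = log(n)/eps users with the
    largest k_i, where k is the smallest k_i among those users and each report
    is the empirical mean of that user's first k samples."""
    n = len(k_values)
    L = max(1, int(np.log(n) / epsilon))
    top = np.argsort(k_values)[-L:]               # indices of the L largest k_i
    k = int(np.min(np.asarray(k_values)[top]))    # k = k_{log n / eps}
    reports = np.array([np.mean(samples_by_user[i][:k]) for i in top])
    return reports, k

# toy usage with synthetic Bernoulli data; the reports would then be handed to
# the (eps, delta)-DP standard-deviation estimator of Lemma 6.2 (not shown here)
rng = np.random.default_rng(2)
n, p = 500, 0.5
k_vals = rng.integers(5, 200, size=n)
data = [rng.binomial(1, p, size=ki) for ki in k_vals]
reports, k = variance_reports(data, k_vals, epsilon=1.0)
print(len(reports), k, reports.var())
```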
**Lemma 6.3**.: _For \(k\in\mathbb{N}\), suppose \(p\in[\frac{1}{k},1-\frac{1}{k}]\), \(\sigma_{p}\geq\frac{1}{k}\), \(k\geq 2\), and there exists \(\gamma>0\) such that \(\frac{\rho_{\mathcal{D}}}{\sigma_{p}^{3}}\leq\gamma\) where \(\rho_{\mathcal{D}}\) denotes the absolute central third moment of \(\mathcal{D}\). Then \(\frac{\rho_{\mathcal{D}(k)}}{\mathrm{Var}(\mathcal{D}(k))^{3/2}}\leq 8(3 \sqrt{3}+\gamma)\)._
With this result, we can apply Lemma 6.2 to our setting to privately achieve an estimate \(\widehat{\sigma}_{p,k}^{2}\) that is close to the true population-level variance \(\sigma_{p}^{2}\), as shown in Lemma 6.4. Note that as \(k\) grows large, the allowable range for \(p\) approaches the full support \([0,1]\) and the allowable standard deviation \(\sigma_{p}\) approaches any non-negative number.
Lemma 6.4 combines the two previous results to show that Lemma 6.2 can be applied to the individual reports \(\widehat{p}_{i}^{k}\) from the top \(\log n/\epsilon\) users, and the resulting variance estimate will satisfy the accuracy conditions of Theorem 4.1.
**Lemma 6.4**.: _Given \(\sigma_{\min}<\sigma_{\max}\in[0,\infty],\epsilon>0,\delta\in(0,\frac{1}{n}],\beta\in(0,1/2)\), and \(\zeta>0\), let \(\mathcal{M}\) be the \((\epsilon,\delta)\)-differentially private mechanism given by Lemma 6.2, and let \(\widehat{\sigma}_{p,k}^{2}=\mathcal{M}(\widehat{p}_{1}^{k},\cdots,\widehat{p}_{\log n/\epsilon}^{k})\), where \(\widehat{p}_{1}^{k},\cdots,\widehat{p}_{\log n/\epsilon}^{k}\sim\mathcal{D}(k)\). If there exists \(\zeta>0\) such that \(\frac{\rho_{\mathcal{D}}}{\sigma_{p}^{3}}\leq\zeta\) where \(\rho_{\mathcal{D}}=\mathbb{E}_{x\sim\mathcal{D}}[|x-p|^{3}]\), \(\sqrt{\frac{1}{k}p(1-p)+\frac{k-1}{k}\sigma_{p}^{2}}\in[\sigma_{\min},\sigma_{\max}]\), \(\sigma_{p}>\frac{1}{k}\), \(p\in\left[\frac{1}{k},1-\frac{1}{k}\right]\), and \(\log n\geq c(8(3\sqrt{3}+\zeta))^{2}\min\{\ln(\frac{\ln(\frac{\rho_{\max}}{\beta})}{\beta}),\ln(\frac{1}{\delta\beta})\}\), then with probability \(1-\beta\), \(\widehat{\sigma}_{p,k}^{2}\in[\mathrm{Var}(\mathcal{D}(k)),8\mathrm{Var}(\mathcal{D}(k))]\)._
|
2302.11255
|
Quasiprobability distribution of work in the quantum Ising model
|
A complete understanding of the statistics of the work done by quenching a
parameter of a quantum many-body system is still lacking in the presence of an
initial quantum coherence in the energy basis. In this case, the work can be
represented by a class of quasiprobability distributions. Here, we try to
clarify the genuinely quantum features of the process by studying the work
quasiprobability for an Ising model in a transverse field. We consider both a
global and a local quench, by focusing mainly on the thermodynamic limit. We
find that, while for a global quench there is a symmetric non-contextual
representation with a Gaussian probability distribution of work, for a local
quench we can get quantum contextuality as signaled by a negative fourth moment
of the work. Furthermore, we examine the critical features related to a quantum
phase transition and the role of the initial quantum coherence as useful
resource.
|
Gianluca Francica, Luca Dell'Anna
|
2023-02-22T10:07:49Z
|
http://arxiv.org/abs/2302.11255v3
|
# Quasiprobability distribution of work in the Ising model
###### Abstract
A complete understanding of the statistics of the work done by quenching a parameter of a quantum many-body system is still lacking in the presence of an initial quantum coherence in the energy basis. In this case, the work can be represented by a class of quasiprobability distributions. Here, we try to clarify the genuinely quantum features of the process by studying the work quasiprobability for an Ising model in a transverse field. We consider both a global and a local quench, by focusing mainly on the thermodynamic limit. We find that, while for a global quench there is a symmetric non-contextual representation with a Gaussian probability distribution of work (apart from subdominant terms), for a local quench we can get quantum contextuality as signaled by a negative fourth moment of the work. Furthermore, we examine the universal features related to a quantum phase transition and the role of the initial quantum coherence as useful resource.
## I Introduction
In the last years out-of-equilibrium processes generated by quenching a parameter of a closed quantum system have been extensively investigated: Outstanding experiments of this kind have been realized with ultra-cold atoms [1; 2; 3], and theoretical problems concerning many-body systems have been examined, such as thermalization and integrability [4; 2], the universality of the dynamics across a critical point [5] and the statistics of the work done [6]. In particular, the work statistics can be described in terms of the two-projective measurement scheme [7] if the initial state is incoherent, i.e., there is no initial quantum coherence in the energy basis. In contrast, when the initial state is not incoherent there may not be a probability distribution for the work done as proven by a no-go theorem [8]. This is related to the quantum contextuality as discussed in Ref. [9]. In simple terms, the problem is similar to looking for a probability distribution in phase space for a quantum particle in a certain quantum state. Since position and momentum are not compatible observables, in general we get a quasiprobability, e.g., the well-known Wigner quasiprobability [10]. Concerning the work, which in a thermally isolated quantum system is equal to the energy change of the system, the role of position and momentum is played by the initial and final Hamiltonian of the system. Several attempts have been made to describe the work statistics, among these, quasiprobabilities have been defined in terms of full-counting statistics [11] and weak values [12], which can be viewed as particular cases of a more general quasiprobability introduced in Ref. [13]. In general, if some fundamental conditions need to be satisfied, the work will be represented by a class of quasiprobability distributions [14]. Determining the possible representations of the work has a fundamental importance: If there is some quasiprobability that is a non-negative probability, there can be a non-contextual classical representation of the protocol, i.e., the process can be not genuinely quantum.
Here, we focus on the statistics of the work done by quenching a parameter of a many-body system starting from a nonequilibrium state having coherence in the energy basis. Although some investigations of the coherence effects have already been carried out, e.g., in Refs. [15] and [16] the full-counting statistics and weak values quasiprobabilities have been examined, the work statistics remains rather unexplored, especially in many-body systems. Thus, after discussing the statistics of work and quantum contextuality in general in Sec. II, we focus on an Ising model, which we introduce in Sec. III. Our aim is to derive some general features of global and local quenches that appear in the thermodynamic limit thanks to the initial coherence. Furthermore, we are interested in clarifying which universal features of the work are related to a quantum phase transition: Although several studies have been performed for initial incoherent states (e.g., for the Ising model see Refs. [17] and [18]), the initial coherence also plays a role, as found in Ref. [15], which is not yet entirely clear. Thus, we focus on a global quench starting from a coherent Gibbs state in Sec. IV, where we show that, unlike a system of finite size, in the thermodynamic limit the symmetric quasiprobability representation of the work tends to be non-contextual; in particular we get a Gaussian probability distribution, even if there are also other quasiprobabilities that take negative values. Concerning the effects of the quantum phase transition, we find that the coherent contribution to the average work is a universal function. In contrast, for a local quench, since the work is not extensive, there are initial states such that all the quasiprobabilities of the class can take negative values, as signaled by a negative fourth moment of the work (see Sec. V). Then, these processes remain genuinely quantum also in the thermodynamic limit. Furthermore, we also try to clarify the role of initial quantum coherence as a useful resource for work extraction in Sec. VI, showing that, even when the protocol tends to be non-contextual, the initial coherence still plays an active role. In the end, we summarize and further discuss our results in Sec. VII.
## II Work statistics
We consider a quantum quench, so that the system is initially in the state \(\rho_{0}\) and the time evolution is described by the unitary operator \(U_{t,0}\) which is generated by the time-dependent Hamiltonian \(H(\lambda_{t})\) where the control parameter \(\lambda_{t}\) is changed in the time interval \([0,\tau]\). In detail, \(U_{t,0}=\mathcal{T}\,\mathbf{e}^{-i\int_{0}^{t}H(\lambda_{t})ds}\), where \(\mathcal{T}\) is the time order operator and the Hamiltonian can be expressed as
\(\sum_{k}E_{k}(\lambda_{t})|E_{k}(\lambda_{t})\rangle\langle E_{k}(\lambda_{t})|\) where \(|E_{k}(\lambda_{t})\rangle\) is the eigenstate with eigenvalue \(E_{k}(\lambda_{t})\) at the time \(t\). For brevity, we define \(E_{i}=E_{i}(\lambda_{0})\) and \(E_{k}^{\prime}=E_{k}(\lambda_{\tau})\). The average work \(\langle w\rangle\) done on the system in the time interval \([0,\tau]\) can be identified with the average energy change
\[\langle w\rangle=\mathrm{Tr}\left\{(H^{(H)}(\lambda_{\tau})-H(\lambda_{0})) \rho_{0}\right\}\,, \tag{1}\]
where, given an operator \(A(t)\) we define the Heisenberg time evolved operator \(A^{(H)}(t)=U_{t,0}^{\dagger}A(t)U_{t,0}\). In general, the work performed in the quench can be represented by a quasiprobability distribution of work. If we require that (W1) the quasiprobability distribution reproduces the two-projective measurement scheme in the case of initial incoherent states (i.e., for states \(\rho_{0}\) such that \(\rho_{0}=\Delta(\rho_{0})\), where we have defined the dephasing map \(\Delta(\rho_{0})=\sum_{i}|E_{i}\rangle\langle E_{i}|\rho_{0}|E_{i}\rangle \langle E_{i}|\)), (W2) the average calculated with respect to the quasiprobability is equal to Eq. (1), and (W3) the second moment is equal to
\[\langle w^{2}\rangle=\mathrm{Tr}\left\{(H^{(H)}(\lambda_{\tau})-H(\lambda_{0} ))^{2}\rho_{0}\right\}\,, \tag{2}\]
the quasiprobability distribution belongs to a defined class [14; 13], i.e., it takes the form
\[p_{q}(w) = \sum_{k,j,i}\mathrm{Re}\{\langle E_{i}|\rho_{0}|E_{j}\rangle \langle E_{j}|U_{\tau,0}^{\dagger}|E_{k}^{\prime}\rangle\langle E_{k}^{\prime }|U_{\tau,0}|E_{i}\rangle\} \tag{3}\] \[\times\delta(w-E_{k}^{\prime}+qE_{i}+(1-q)E_{j})\,,\]
where \(q\) is a real parameter. Our aim is to investigate this quasiprobability for a many-body system. We can focus on the characteristic function which is defined as \(\chi_{q}(u)=\langle e^{iuw}\rangle\) and reads
\[\chi_{q}(u)=\frac{1}{2}\left(X_{q}(u)+X_{1-q}(u)\right)\,,\]
where we have defined
\[X_{q}(u)=\mathrm{Tr}\left\{e^{-iuqH(\lambda_{0})}\rho_{0}e^{-iu(1-q)H(\lambda _{0})}e^{iuH^{(H)}(\lambda_{\tau})}\right\}\,. \tag{4}\]
The moments of work are \(\langle w^{n}\rangle=(-i)^{n}\partial_{u}^{n}\chi_{q}(0)\), and the higher moments for \(n>2\) depend on the particular representation. In particular we get
\[\langle w^{n}\rangle=(-i)^{n}\partial_{u}^{n}\chi_{q}(0)=\frac{(-i)^{n}\partial_{u}^{n}X_{q}(0)}{2}+\frac{(-i)^{n}\partial_{u}^{n}X_{1-q}(0)}{2}\,, \tag{5}\]
where (see Appendix A)
\[(-i)^{n}\partial_{u}^{n}X_{q}(0)=\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}\sum_{l=0}^{n-k}\binom{n-k}{l}q^{n-k-l}(1-q)^{l}\mathrm{Tr}\left\{\rho_{0}H(\lambda_{0})^{l}(H^{(H)}(\lambda_{\tau}))^{k}H(\lambda_{0})^{n-k-l}\right\}\,. \tag{6}\]
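To make the definitions above concrete, the following minimal sketch enumerates the quasiprobability of Eq. (3) for a sudden quench (\(U_{\tau,0}=I\)) on a small random example and checks normalization and the first moment against Eq. (1); the variable names are ours and the example is purely illustrative.

```python
import numpy as np

def work_quasiprobability(rho0, H0, H1, q):
    """Sketch: enumerate the terms of Eq. (3) for a sudden quench (U = I),
    returning the support points w and the (real) quasiprobability weights."""
    E0, V0 = np.linalg.eigh(H0)          # initial eigenbasis |E_i>
    E1, V1 = np.linalg.eigh(H1)          # final eigenbasis |E'_k>
    rho_E = V0.conj().T @ rho0 @ V0      # <E_i| rho0 |E_j>
    overlap = V1.conj().T @ V0           # <E'_k | E_i>
    ws, ps = [], []
    for k in range(len(E1)):
        for i in range(len(E0)):
            for j in range(len(E0)):
                amp = rho_E[i, j] * overlap[k, j].conj() * overlap[k, i]
                ws.append(E1[k] - q * E0[i] - (1 - q) * E0[j])
                ps.append(amp.real)
    return np.array(ws), np.array(ps)

# toy check on a random qubit: weights sum to 1 and the mean matches Tr{(H1-H0) rho0}
rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H0 = (A + A.conj().T) / 2
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H1 = (B + B.conj().T) / 2
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi.conj())
w, pq = work_quasiprobability(rho0, H0, H1, q=0.5)
print(pq.sum(), (pq * w).sum(), np.trace((H1 - H0) @ rho0).real)
```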
We can now ask whether there is a classical representation, i.e., a non-contextual hidden-variable model which satisfies the conditions on the reproduction of the two-projective-measurement scheme, the average and the second moment. To introduce the concept of contextuality at an operational level (see, e.g., Refs. [9; 19]), we consider a set of preparation procedures \(P\) and measurement procedures \(M\) with outcomes \(k\), so that we will observe \(k\) with probability \(p(k|P,M)\). We aim to reproduce the statistics by using a set of states \(\zeta\) that are randomly distributed in the set \(\mathcal{Z}\) with probability \(p(\zeta|P)\) every time the preparation \(P\) is performed. If, for a given \(\zeta\), we get the outcome \(k\) with probability \(p(k|\zeta,M)\), we are able to reproduce the statistics if
\[p(k|P,M)=\int_{\mathcal{Z}}p(\zeta|P)p(k|\zeta,M)d\zeta\,, \tag{7}\]
and the protocol is called universally non-contextual if \(p(\zeta|P)\) is a function of the quantum state alone, i.e., \(p(\zeta|P)=p(\zeta|\rho_{0})\), and \(p(k|\zeta,M)\) depends only on the positive operator-valued measurement element \(M_{k}\) associated to the corresponding outcome of the measurement \(M\), i.e., \(p(k|\zeta,M)=p(k|\zeta,M_{k})\). In our case, the outcome \(k\) corresponds to the work \(w_{k}\), and if the protocol is non-contextual the work distribution can be expressed as
\[p(w)=\sum_{k}p(k|P,M)\delta(w-w_{k})\,, \tag{8}\]
where \(p(k|P,M)\) is given by Eq. (7) with \(p(\zeta|P)=p(\zeta|\rho_{0})\) and \(p(k|\zeta,M)=p(k|\zeta,M_{k})\), so that for a negative quasiprobability of work we cannot have a non-contextual protocol. Thus, a process that cannot be reproduced within any non-contextual protocol will exhibit genuinely non-classical features. If all the quasiprobabilities in the class take negative values, the protocol is contextual, whereas if there is a quasiprobability which is non-negative, there can be a non-contextual representation. We recall that for an initial incoherent state \(\rho_{0}=\Delta(\rho_{0})\), we get the two-projective measurement scheme that is non-contextual [9]. In contrast, the presence of initial quantum coherence in the energy basis can lead to a contextual protocol. Let us investigate the effects of the initial quantum coherence by considering an Ising model in a transverse field.
## III Model
We consider a chain of \(L\) spin 1/2 described by the Ising model in a transverse field with Hamiltonian
\[H(\lambda)=-\lambda\sum_{i=1}^{L}\sigma_{i}^{z}-\sum_{i=1}^{L}\sigma_{i}^{x} \sigma_{i+1}^{x}\,, \tag{9}\]
where we have imposed periodic boundary conditions \(\sigma_{L+1}^{\alpha}=\sigma_{1}^{\alpha}\), and \(\sigma_{i}^{\alpha}\) with \(\alpha=x,y,z\) are the Pauli matrices on the site \(i\). We note that the parity \(P=\prod_{i=1}^{L}\sigma_{i}^{z}\) is a symmetry of the model, i.e., it commutes with the Hamiltonian. The Hamiltonian can be diagonalized by performing the Jordan-Wigner transformation
\[a_{i}=\left(\prod_{j<i}\sigma_{j}^{z}\right)\sigma_{i}^{-}\,, \tag{10}\]
where the fermionic operators \(a_{i}\) satisfy the anti-commutation relations \(\{a_{i},a_{j}^{\dagger}\}=\delta_{i,j}\), \(\{a_{i},a_{j}\}=0\). We get the Hamiltonian of fermions
\[H(\lambda) = -\lambda\sum_{i=1}^{L}(2a_{i}^{\dagger}a_{i}-1)-\sum_{i=1}^{L-1}(a _{i}^{\dagger}-a_{i})(a_{i+1}+a_{i+1}^{\dagger}) \tag{11}\] \[+P(a_{L}^{\dagger}-a_{L})(a_{1}+a_{1}^{\dagger})\,, \tag{12}\]
where the parity reads \(P=e^{i\pi N}\) and \(N=\sum_{i=1}^{L}a_{i}^{\dagger}a_{i}\) is the number operator. We consider the projector \(P_{\pm}\) on the sector with parity \(P=\pm 1\), then the Hamiltonian reads
\[H(\lambda)=P_{+}H_{+}(\lambda)P_{+}+P_{-}H_{-}(\lambda)P_{-}\,. \tag{13}\]
For the sector with odd parity \(P=-1\), we get the Kitaev chain
\[H_{-}(\lambda)=-\lambda\sum_{i=1}^{L}(2a_{i}^{\dagger}a_{i}-1)-\sum_{i=1}^{L}( a_{i}^{\dagger}-a_{i})(a_{i+1}+a_{i+1}^{\dagger}) \tag{14}\]
with periodic boundary conditions \(a_{L+1}=a_{1}\). We perform a Fourier transform \(a_{j}=1/\sqrt{L}\sum_{k}e^{-ikj}a_{k}\), where \(k=2\pi n/L\) with \(n=-(L-1)/2,\ldots,(L-1)/2\) for \(L\) odd and \(n=-L/2+1,\ldots,L/2\) for \(L\) even. Thus, the Hamiltonian reads
\[H_{-}(\lambda)=\sum_{k}\Psi_{k}^{\dagger}\left[-(\lambda+\cos k)\sigma^{z}+ \sin k\sigma^{y}\right]\Psi_{k}\,, \tag{15}\]
where we have defined the Nambu spinor \(\Psi_{k}=(a_{k},a_{-k}^{\dagger})^{T}\). In particular the Hamiltonian can be written as \(H_{-}(\lambda)=\sum_{k}\Psi_{k}^{\dagger}\vec{d}_{k}\cdot\vec{\sigma}\Psi_{k}\), which, in diagonal form, reads
\[H_{-}(\lambda)=\sum_{k}\epsilon_{k}\left(\alpha_{k}^{\dagger}\alpha_{k}-\frac{1}{2}\right)=\sum_{k}\epsilon_{k}\alpha_{k}^{\dagger}\alpha_{k}+E_{-}\,, \tag{16}\]
where \(E_{-}=-\sum_{k}\epsilon_{k}/2\). In detail we have performed a rotation with respect to the \(x\)-axis with an angle \(\theta_{k}\) between \(\vec{d}_{k}\) and the \(z\)-axis, corresponding to the Bogoliubov transformation \(\alpha_{k}=\cos(\theta_{k}/2)a_{k}-i\sin(\theta_{k}/2)a_{-k}^{\dagger}\), where \(\epsilon_{k}=2||\vec{d}_{k}||\), or more explicitly,
\[\epsilon_{k}=2\sqrt{(\lambda+\cos k)^{2}+\sin^{2}k}\,. \tag{17}\]
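As a quick numerical illustration of the diagonalization, the dispersion of Eq. (17) and the Bogoliubov angle can be evaluated as follows (a sketch with our own variable names; the vector \(\vec{d}_{k}=(0,\sin k,-(\lambda+\cos k))\) is read off from Eq. (15), and the momenta are those of the even-parity sector for \(L\) even).

```python
import numpy as np

def dispersion(lmbda, k):
    """Single-particle energies eps_k = 2*sqrt((lambda + cos k)^2 + sin^2 k) of Eq. (17)."""
    return 2.0 * np.sqrt((lmbda + np.cos(k))**2 + np.sin(k)**2)

def bogoliubov_angle(lmbda, k):
    """Angle theta_k between d_k = (0, sin k, -(lambda + cos k)) and the z-axis."""
    return np.arctan2(np.sin(k), -(lmbda + np.cos(k)))

L = 8
k_even = 2.0 * np.pi * (np.arange(L) - L / 2 + 0.5) / L   # momenta 2*pi*(n - 1/2)/L of K_+
print(dispersion(0.5, k_even))                            # gap closes at |lambda| -> 1, k -> pi
print(bogoliubov_angle(0.5, np.pi / 2))
```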
For the sector with even parity \(P=1\), we get the Hamiltonian \(H_{+}(\lambda)\) which is equal to the one in Eq. (14) with antiperiodic boundary conditions \(a_{L+1}=-a_{1}\), thus the only difference is in the momenta \(k\) which are \(k=2\pi(n-1/2)/L\). Let us consider \(L\) even. Thus, in the even parity sector, \(k\in K_{+}\), for each \(k\) there is \(-k\), and the eigenstates of the Hamiltonian are the states
\[\alpha_{k_{1}}^{\dagger}\cdots\alpha_{k_{2m}}^{\dagger}|\tilde{0}_{+}\rangle \tag{18}\]
with \(k_{i}\in K_{+}\) and \(|\tilde{0}_{+}\rangle\) the vacuum state of \(\alpha_{k}\) with \(k\in K_{+}\). In contrast, in the odd parity sector, \(k\in K_{-}\), for each \(k\) there is \(-k\) except for \(k=0\) and \(\pi\). For \(\lambda<-1\) we get \(\alpha_{0}=a_{0}\) and \(\alpha_{\pi}=a_{\pi}\), for \(\lambda>1\) we get \(\alpha_{0}=a_{0}^{\dagger}\) and \(\alpha_{\pi}=a_{\pi}^{\dagger}\), and for \(|\lambda|<1\) we get \(\alpha_{0}=a_{0}^{\dagger}\) and \(\alpha_{\pi}=a_{\pi}\). Then, for \(|\lambda|>1\) the vacuum state \(|\tilde{0}_{-}\rangle\) of \(\alpha_{k}\) with \(k\in K_{-}\) has even parity, and the eigenstates of the Hamiltonian are the states
\[\alpha_{k_{1}}^{\dagger}\cdots\alpha_{k_{2m+1}}^{\dagger}|\tilde{0}_{-}\rangle \tag{19}\]
with \(k_{i}\in K_{-}\). Conversely, for \(|\lambda|<1\) the vacuum state \(|\tilde{0}_{-}\rangle\) of \(\alpha_{k}\) with \(k\in K_{-}\) has odd parity since has the fermion \(a_{0}\) but not \(a_{\pi}\), and the eigenstates of the Hamiltonian are the states
\[\alpha_{k_{1}}^{\dagger}\cdots\alpha_{k_{2m}}^{\dagger}|\tilde{0}_{-}\rangle \tag{20}\]
with \(k_{i}\in K_{-}\). Then, for \(|\lambda|<1\) both the states \(|\tilde{0}_{+}\rangle\) and \(|\tilde{0}_{-}\rangle\) are eigenstates of the Hamiltonian with energies \(E_{+}\) and \(E_{-}\), so that the ground-state is two-fold degenerate in the thermodynamic limit. Thus, at the points \(\lambda=\pm 1\) we get a second-order quantum phase transition.
## IV Global quench
We begin by focusing on a sudden global quench of the transverse field \(\lambda\), i.e., \(\lambda\) is suddenly changed from the value \(\lambda_{0}\) to \(\lambda_{\tau}\), so that \(\tau\to 0\) and \(U_{\tau,0}=I\). To investigate the role of initial quantum coherence, we focus on a coherent Gibbs state
\[|\Psi_{G}(\beta)\rangle=\frac{1}{\sqrt{Z}}\sum_{j}e^{-\beta E_{j}/2+i\theta_{j} }|E_{j}\rangle\,, \tag{21}\]
where \(Z=Z(\lambda_{0})\) and \(Z(\lambda)\) is the partition function defined as \(Z(\lambda)=\mathrm{Tr}\left\{e^{-\beta H(\lambda)}\right\}\). Of course, the incoherent part of the state \(|\Psi_{G}(\beta)\rangle\) is \(\Delta(|\Psi_{G}(\beta)\rangle\langle\Psi_{G}(\beta)|)=\rho_{G}(\beta)\), where \(\rho_{G}(\beta)\) is the Gibbs state \(\rho_{G}(\beta)=e^{-\beta H(\lambda_{0})}/Z\). With the aim of calculating the characteristic function for an arbitrary size \(L\), from Eq. (4), by using the relations \(\sum_{s}P_{s}=I\), \(P_{s}^{2}=P_{s}\), \([P_{s},H(\lambda)]=0\) and \([P_{s},H_{\pm}(\lambda)]=0\), it is easy to see that
\[X_{q}(u)=\sum_{s}\mathrm{Tr}\left\{e^{-iuqH_{s}(\lambda_{0})}P_{s}\rho_{0}P_{s }e^{-iu(1-q)H_{s}(\lambda_{0})}e^{iuH_{s}^{(H)}(\lambda_{\pm})}\right\}\,. \tag{22}\]
We get \(P_{s}\rho_{0}P_{s}=P_{s}\rho_{0}^{s}\), where for the Gibbs state \(\rho_{0}^{s}=e^{-\beta H_{s}(\lambda_{0})}/Z\) and for the coherent Gibbs state \(\rho_{0}^{s}=|\Psi_{G}^{s}\rangle\langle\Psi_{G}^{s}|\). In particular, we get
\[|\Psi_{G}^{s}\rangle=\frac{1}{\sqrt{Z}}\otimes_{k\in K_{s}}\left(e^{\frac{\beta\epsilon_{k}}{4}}|\tilde{0}_{k}\rangle+e^{-\frac{\beta\epsilon_{k}}{4}+i\phi_{k}}|\tilde{1}_{k}\rangle\right)\,, \tag{23}\]
where we consider a phase such that \(\phi_{-k}=\phi_{k}\), with \(|\tilde{n}_{k}\rangle=(\alpha_{k}^{\dagger})^{n_{k}}|\tilde{0}_{k}\rangle\), where \(\epsilon_{k}=\epsilon_{k}(\lambda_{0})\), \(\alpha_{k}=\alpha_{k}(\lambda_{0})\) and \(|\tilde{0}_{k}\rangle\) is the vacuum state for the fermion \(\alpha_{k}\). As shown in Appendix B, we get
\[X_{q}(u)=\frac{1}{2}\sum_{s}\left(X_{q}^{s}(u)+\eta_{s}X_{q}^{\prime s}(u)\right)\,, \tag{24}\]
where we have defined \(\eta_{s}=s\langle\tilde{0}_{s}|e^{i\pi N}|\tilde{0}_{s}\rangle\) which is \(\eta_{+}=1\) and \(\eta_{-}=-1\) for \(|\lambda_{0}|>1\) and \(\eta_{-}=1\) for \(|\lambda_{0}|<1\), and
\[X_{q}^{s}(u)=\frac{1}{Z}\prod_{k\in K_{s},k\geq 0}X_{q}^{(k)}(u)\,. \tag{25}\]
In detail, for \(k>0\) and \(k\neq\pi\), we get
\[X_{q}^{(k)}(u)=X_{q}^{(k),th}(u)+X_{q}^{(k),coh}(u)\,, \tag{26}\]
where \(X_{q}^{(k),th}(u)\) is the incoherent contribution, which reads
\[X_{q}^{(k),th}(u) =2\bigg{(}\cos((u-i\beta)\epsilon_{k})\cos(u\epsilon_{k}^{\prime })+\sin((u-i\beta)\epsilon_{k})\] \[\quad\times\sin(u\epsilon_{k}^{\prime})\hat{d}_{k}\cdot\hat{d}_{k }^{\prime}+1\bigg{)} \tag{27}\]
and \(X_{q}^{(k),coh}(u)\) is the coherent contribution, which reads
\[X_{q}^{(k),coh}(u)=-2i\sin(u\epsilon_{k}^{\prime})\sin(u(2q-1)\epsilon_{k}-2 \phi_{k})(\hat{d}_{k}\times\hat{d}_{k}^{\prime})_{x}\,, \tag{28}\]
where, for brevity we have defined \(\epsilon_{k}^{\prime}=\epsilon_{k}(\lambda_{r})\), \(\vec{d}_{k}=\vec{d}_{k}(\lambda_{0})\) and \(\vec{d}_{k}^{\prime}=\vec{d}_{k}(\lambda_{r})\). Furthermore, we have
\[X_{q}^{\prime s}(u)=\frac{1}{Z}\prod_{k\in K_{s},k\geq 0}X_{q}^{\prime(k)}(u) \tag{29}\]
with
\[X_{q}^{\prime(k)}(u)=X_{q}^{(k)}(u)-4\,. \tag{30}\]
In contrast, for \(k=0\) and \(k=\pi\), we get
\[X_{q}^{(0,\pi)}(u) =2\cosh\left(\frac{\beta\epsilon_{0,\pi}-iu(s_{0,\pi}\epsilon_{0,\pi}^{\prime}-\epsilon_{0,\pi})}{2}\right)\,, \tag{31}\] \[X_{q}^{\prime(0,\pi)}(u) =2\sinh\left(\frac{\beta\epsilon_{0,\pi}-iu(s_{0,\pi}\epsilon_{0,\pi}^{\prime}-\epsilon_{0,\pi})}{2}\right)\,, \tag{32}\]
where \(s_{\pi}=-1\) if \(|\lambda_{0}|<1\) and \(\lambda_{r}>1\) or \(|\lambda_{r}|<1\) and \(\lambda_{0}>1\), whereas \(s_{\pi}=1\), and \(s_{0}=-1\) if \(|\lambda_{0}|<1\) and \(\lambda_{r}<-1\) or \(|\lambda_{r}|<1\) and \(\lambda_{0}<-1\), whereas \(s_{0}=1\), while the partition function is
\[Z=\frac{1}{2}\sum_{s}\prod_{k\in K_{s}}2\cosh(\beta\epsilon_{k}/2)+\eta_{s} \prod_{k\in K_{s}}2\sinh(\beta\epsilon_{k}/2)\,. \tag{33}\]
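A minimal numerical sketch of these mode-by-mode contributions is given below (our own names; combining the factors as in Eq. (34), the product of \(X_{q}^{(k)}(u)/Z_{k}^{2}\) over the positive momenta reproduces the thermodynamic-limit \(X_{q}(u)\), and at \(u=0\) it equals one).

```python
import numpy as np

def mode_vectors(lmbda, k):
    """Unit vector d_hat_k from d_k = (0, sin k, -(lambda + cos k)) and energy eps_k."""
    d = np.array([0.0, np.sin(k), -(lmbda + np.cos(k))])
    norm = np.linalg.norm(d)
    return d / norm, 2.0 * norm

def X_q_mode(u, k, lam0, lam_tau, beta, phi, q):
    """Per-mode factor X_q^(k)(u) = X_q^(k),th(u) + X_q^(k),coh(u) of Eqs. (26)-(28)."""
    d0, e0 = mode_vectors(lam0, k)
    d1, e1 = mode_vectors(lam_tau, k)
    th = 2.0 * (np.cos((u - 1j * beta) * e0) * np.cos(u * e1)
                + np.sin((u - 1j * beta) * e0) * np.sin(u * e1) * np.dot(d0, d1) + 1.0)
    cross_x = d0[1] * d1[2] - d0[2] * d1[1]          # (d_hat x d_hat')_x
    coh = -2j * np.sin(u * e1) * np.sin(u * (2 * q - 1) * e0 - 2 * phi) * cross_x
    return th + coh, e0

def X_q_bulk(u, lam0, lam_tau, beta, phi, q, L=400):
    """Product over positive momenta k = pi*(2m-1)/L of X_q^(k)(u)/Z_k^2, cf. Eq. (34)."""
    val = 1.0 + 0j
    for m in range(1, L // 2 + 1):
        k = np.pi * (2 * m - 1) / L
        Xk, e0 = X_q_mode(u, k, lam0, lam_tau, beta, phi, q)
        val *= Xk / (2.0 * np.cosh(beta * e0 / 2.0)) ** 2
    return val

print(X_q_bulk(0.0, lam0=0.5, lam_tau=1.5, beta=1.0, phi=np.pi / 4, q=0.5))  # = 1 at u = 0
```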
If the initial quantum coherence does not contribute, i.e., \(X_{q}^{(k),coh}(u)=0\), we get \(X_{q}^{(k)}(u)=X_{q}^{(k),th}(u)\) and the characteristic function is the one of the initial Gibbs state \(\rho_{\rm G}(\beta)\). We get \(X_{q}^{(k),coh}(u)=0\) for \(q=1/2\) and \(\phi_{k}=n\pi/2\), and in this case the quasiprobability is non-negative, in particular it is equivalent to the two-projective-measurement scheme which is non-contextual. For \(q=1/2\) the initial quantum coherence contributes only for \(\phi_{k}\neq n\pi/2\) with \(n\) integer. In this case the quasiprobability can take negative values. However, in the thermodynamic limit the negativity of the quasiprobability is always subdominant for \(q=1/2\), and we get a Gaussian probability distribution of work. In simple terms this is a consequence of the extensiveness of the work due to the global quench. To prove it, we note that in the thermodynamic limit we get \(Z=\prod_{k\in K_{s}}Z_{k}\) with \(Z_{k}=2\cosh(\beta\epsilon_{k}/2)\), then
\[X_{q}(u)=\prod_{k\in K_{s},k\geq 0}\frac{X_{q}^{(k)}(u)}{Z_{k}^{2}}\,. \tag{34}\]
Basically, in the thermodynamic limit the model is equivalent to the system of fermions with Hamiltonian \(H_{+}\). We can write \(X_{q}(u)=\exp(Lg_{q}(u))\), where
\[g_{q}(u)=\frac{1}{2\pi}\int_{0}^{\pi}\ln\left(\frac{X_{q}^{(k)}(u)}{Z_{k}^{2}} \right)dk \tag{35}\]
is intensive, so that the work is extensive, i.e., \(\langle w^{n}\rangle\sim L^{n}\). Then, as \(L\to\infty\) we can consider
\[X_{q}(u)\sim e^{L\left(\partial_{u}g_{q}(0)u+\frac{1}{2}\partial_{u}^{2}g_{q}(0)u^{2}\right)} \tag{36}\]
since in the calculation of the Fourier transform of \(X_{q}(u)\) the dominant contribution of the integral is near \(u=0\), so that we can expand \(g_{q}(u)\) in Taylor series about \(u=0\), and thus the neglected terms in Eq. (36) do not contribute in the asymptotic formula of the quasiprobability \(p_{q}(w)\). We note that, although the characteristic function \(\chi_{q}(u)\) depends on \(q\), the first two moments do not depend on \(q\). In particular, we note that the relative fluctuations of work scale as \(\sigma_{w}/\langle w\rangle\sim 1/\sqrt{L}\), where we have defined the variance \(\sigma_{w}^{2}=\langle w^{2}\rangle-\langle w\rangle^{2}\). By noting that \(\partial_{u}g_{q}(0)\) does not depend on \(q\) and \(\partial_{u}^{2}g_{1-q}(0)=\partial_{u}^{2}g_{q}^{*}(0)\), we get the quasiprobability of work
\[p_{q}(w)\sim\frac{1}{\sqrt{2\pi}}{\rm Re}\left(\frac{e^{-\frac{(w-\bar{w})^{2}}{2v_{q}}}}{\sqrt{v_{q}}}\right), \tag{37}\]
where \(\bar{w}=-i\partial_{u}g_{q}(0)L\) and \(v_{q}=-\partial_{u}^{2}g_{q}(0)L\). In particular the average work is \(\langle w\rangle=\bar{w}\) and the variance \(\sigma_{w}^{2}\) is the real part of \(v_{q}\), i.e., \(v_{q}=\sigma_{w}^{2}+ir_{q}\). As shown in Fig. 1, for \(q\neq 1/2\) the asymptotic formula of the quasiprobability can take negative values due to the presence of the imaginary part \(r_{q}\). In contrast, for \(q=1/2\), we get \(\chi_{1/2}(u)=X_{1/2}(u)\), from which \(\sigma_{w}^{2}=-\partial_{u}^{2}g_{1/2}(0)L\), i.e., \(r_{1/2}=0\), and thus we get the Gaussian probability distribution
\[p_{1/2}(w)\sim\frac{e^{-\frac{(w-\bar{w})^{2}}{2\sigma_{w}^{2}}}}{\sqrt{2\pi}\sigma_{w}}\,. \tag{38}\]
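The asymptotic parameters entering Eqs. (37) and (38) can be estimated numerically, e.g., by finite differences of \(g_{q}(u)\) as sketched below (our own names; the per-mode factor of the previous sketch is restated here so that the snippet is self-contained, and at \(q=1/2\) the imaginary part \(r_{q}\) of \(v_{q}\) is expected to vanish).

```python
import numpy as np

def _dhat_eps(lmbda, k):
    d = np.array([0.0, np.sin(k), -(lmbda + np.cos(k))])
    return d / np.linalg.norm(d), 2.0 * np.linalg.norm(d)

def g_q(u, lam0, lam_tau, beta, phi, q, n_k=400):
    """Numerical g_q(u) of Eq. (35): (1/(2*pi)) * int_0^pi ln(X_q^(k)(u) / Z_k^2) dk."""
    ks = (np.arange(n_k) + 0.5) * np.pi / n_k
    acc = 0.0 + 0j
    for k in ks:
        d0, e0 = _dhat_eps(lam0, k)
        d1, e1 = _dhat_eps(lam_tau, k)
        th = 2.0 * (np.cos((u - 1j * beta) * e0) * np.cos(u * e1)
                    + np.sin((u - 1j * beta) * e0) * np.sin(u * e1) * np.dot(d0, d1) + 1.0)
        coh = -2j * np.sin(u * e1) * np.sin(u * (2 * q - 1) * e0 - 2 * phi) \
              * (d0[1] * d1[2] - d0[2] * d1[1])
        acc += np.log((th + coh) / (2.0 * np.cosh(beta * e0 / 2.0)) ** 2)
    return acc * (np.pi / n_k) / (2.0 * np.pi)

def asymptotic_parameters(L, lam0, lam_tau, beta, phi, q, h=1e-4):
    """w_bar = -i*g'(0)*L and v_q = -g''(0)*L by central finite differences."""
    g = lambda u: g_q(u, lam0, lam_tau, beta, phi, q)
    w_bar = (-1j * (g(h) - g(-h)) / (2 * h) * L).real
    v_q = -(g(h) - 2 * g(0.0) + g(-h)) / h ** 2 * L
    return w_bar, v_q          # Re(v_q) = sigma_w^2, Im(v_q) = r_q (zero at q = 1/2)

print(asymptotic_parameters(L=200, lam0=0.5, lam_tau=1.5, beta=1.0, phi=np.pi / 4, q=0.5))
```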
It is worth noting that the protocol tends to be non-contextual. To prove it, we consider the operator
\(\Delta H=H^{(H)}(\lambda_{r})-H(\lambda_{0})\), and the probability distribution
\[p(\Delta E)=\sum_{\mu}\langle\Delta E_{\mu}|\rho_{0}|\Delta E_{\mu}\rangle\delta( \Delta E-\Delta E_{\mu})\,, \tag{39}\]
where \(|\Delta E_{\mu}\rangle\) is the eigenstate of \(\Delta H\) with eigenvalue \(\Delta E_{\mu}\). Of course \(p(\Delta E)\) is non-contextual, and it is easy to see that \(p_{1/2}(w)\sim p(w)\) as \(L\to\infty\). In particular for the quench considered, we have \(\Delta H=(\lambda_{r}-\lambda_{0})S_{x}\), where \(S_{x}=\sum_{j=1}^{L}\sigma_{j}^{z}\), so that the symmetric representation for \(q=1/2\) tends to be equivalent to the probability distribution of the transverse magnetization \(S_{x}\). We emphasize that for small sizes \(L\) the quasiprobability at \(q=1/2\) can take negative values, but for large \(L\) it is well described by the Gaussian probability distribution in Eq. (38) (see Fig. 2). The negativity of the quasiprobability \(p_{q}(w)\) for \(q\neq 1/2\) can be characterized by the integral
\[\mathcal{N}\equiv\int|p_{q}(w)|dw\sim\frac{(\sigma_{w}^{4}+r_{q}^{2})^{\frac{ 1}{4}}}{\sigma_{w}}\,, \tag{40}\]
which is equal to one if \(p_{q}(w)\geq 0\). In our case, \(\mathcal{N}=1\) implies that \(r_{q}=0\) and thus \(p_{q}(w)\geq 0\). However, we note that \(\mathcal{N}=1\) does not imply in general that \(p_{q}(w)\geq 0\) (see Appendix C). In the end we note that the effects related to the negativity of the quasiprobability start to affect the statistics from the fourth moment, which reads \(\langle w^{4}\rangle\sim\tilde{w}^{4}+6\tilde{w}^{2}\sigma_{w}^{2}+3\sigma_{w}^{4}-3r_{q}^{2}\). In contrast, the first three moments do not depend on \(r_{q}\); explicitly, they read \(\langle w\rangle=\tilde{w}\), \(\langle w^{2}\rangle=\tilde{w}^{2}+\sigma_{w}^{2}\) and \(\langle w^{3}\rangle\sim\tilde{w}^{3}+3\tilde{w}\sigma_{w}^{2}\). In particular, the kurtosis is \(\text{Kurt}\equiv\langle(w-\langle w\rangle)^{4}\rangle/\sigma_{w}^{4}\sim 3-3r_{q}^{2}/\sigma_{w}^{4}\), which is always smaller than 3 if \(r_{q}\neq 0\), i.e., the distribution is more 'flat' than the normal one. We note that if \(\tilde{w}\neq 0\), since \(\tilde{w}\sim L\) and \(\sigma_{w}^{2}\sim L\), the fourth moment is always positive. On the other hand, for \(\tilde{w}=0\), the fourth moment reads \(\langle w^{4}\rangle\sim 3\sigma_{w}^{4}-3r_{q}^{2}\) and becomes negative for \(r_{q}>\sigma_{w}^{2}\), so that in this regime the negativity for \(q\neq 1/2\) will be strong. To conclude our investigation concerning the global quench, we note that the average work reads
\[\tilde{w}=\frac{(\lambda_{0}-\lambda_{r})L}{\pi}\int_{0}^{\pi}\frac{(\lambda_ {0}+\cos k)\sinh(\beta\epsilon_{k})+\sin k\sin(2\phi_{k})}{\epsilon_{k}\cosh^ {2}(\beta\epsilon_{k}/2)}dk \tag{41}\]
and the variance reads
\[\sigma_{w}^{2} =\frac{(\lambda_{0}-\lambda_{r})^{2}L}{\pi}\int_{0}^{\pi}\frac{1 }{\cosh^{4}(\beta\epsilon_{k}/2)}\Bigg{(}\cosh^{2}(\beta\epsilon_{k}/2)\] \[\times\cosh(\beta\epsilon_{k})-\frac{2}{\epsilon_{k}^{2}}\big{(} \sin k\sin(2\phi_{k})+(\lambda_{0}+\cos k)\] \[\times\sinh(\beta\epsilon_{k})\big{)}^{2}\Bigg{)}dk\;. \tag{42}\]
Both \(\tilde{w}\) and \(\sigma_{w}^{2}\) are not regular at \(|\lambda_{0}|=1\) for \(\phi_{k}=\phi\neq n\pi/2\) due to the presence of a quantum phase transition (see Fig. 3). Furthermore, concerning the negativity of the quasiprobability of work, we have
\[r_{q}=\frac{2(1-2q)(\lambda_{r}-\lambda_{0})L}{\pi}\int_{0}^{\pi}\frac{\sin k \cos(2\phi_{k})}{\cosh^{2}(\beta\epsilon_{k}/2)}dk\,, \tag{43}\]
Figure 2: The histogram of the work distribution. We put \(L=10\) in the top panel, \(L=50\) in the bottom panel, \(q=1/2\), \(\beta=1\), \(\lambda_{r}=1.5\), \(\lambda_{0}=0.5\) and \(\phi_{k}=\pi/4\). The red line corresponds to the Gaussian distribution probability in Eq. (38). The histograms are calculated by using the characteristic function of Eq. (24).
which is regular. We deduce that the protocol admits a non-contextual description, i.e., \(r_{q}=0\), for any \(q\) and \(\phi_{k}=(2n+1)\pi/4\) or for \(q=1/2\). In particular, for \(q\neq 1/2\) we can have an interference effect due to the imaginary part \(r_{q}\), which is larger for \(\lambda_{0}\approx\lambda_{\tau}\), so that the quasiprobability takes negative values. In the end, to investigate the universal features of the work that can be related to the presence of the quantum phase transition, we introduce the energy scale \(J\) such that the Hamiltonian reads
\[H_{J}(\lambda)=-J\lambda\sum_{i=1}^{L}\sigma_{i}^{z}-J\sum_{i=1}^{L}\sigma_{i} ^{x}\sigma_{i+1}^{x}\,. \tag{44}\]
We focus on \(\lambda_{0}\approx 1\) and we start to consider the average work given by Eq. (41) multiplied by \(J\). Then, we change variable \(k^{\prime}=\pi-k\) in the integral and we define \(\kappa=k^{\prime}/a\), and the renormalized couplings \(J=c/(2a)\) and \(\lambda_{0}=1-mca\). In the scaling limit \(a\to 0\), we get
\[\tilde{w}\sim\frac{J(\lambda_{0}-\lambda_{\tau})aL}{2\pi}\int_{0}^{\pi\over a }\frac{\kappa\sin(2\phi_{\pi})-cm\sinh(\beta c\omega_{\kappa})}{\omega_{ \kappa}\cosh^{2}(\beta c\omega_{\kappa}/2)}d\kappa\,, \tag{45}\]
where \(\omega_{\kappa}=\sqrt{\kappa^{2}+c^{2}m^{2}}\). We note that the integral extended to the interval \([0,\infty)\) does not converge. Thus the integral is not determined only by small \(\kappa\), and the behavior is not universal. Similarly, concerning the variance \(\sigma_{w}^{2}\), the integral extended to the interval \([0,\infty)\) does not converge, so that it is not universal. However, the coherent contribution to the average work defined as
\[\tilde{w}_{coh}=\tilde{w}-\tilde{w}_{th}\,, \tag{46}\]
where \(\tilde{w}_{th}\) is the average work corresponding to the initial state \(\rho_{0}=\rho_{G}(\beta)\), is given by the term proportional to \(\sin(2\phi_{\pi})\) in Eq. (45), i.e.,
\[\tilde{w}_{coh}\sim\frac{J(\lambda_{0}-\lambda_{\tau})aL}{2\pi}\int_{0}^{\pi \over a}\frac{\kappa\sin(2\phi_{\pi})}{\omega_{\kappa}\cosh^{2}(\beta c\omega _{\kappa}/2)}d\kappa\,. \tag{47}\]
In this case we can extend the integral to the interval \([0,\infty)\), so that the coherent contribution \(\tilde{w}_{coh}\) is universal. From Eq. (47), by noting that
\[\int_{0}^{\infty}\frac{y}{\sqrt{1+y^{2}}\cosh^{2}(x\sqrt{1+y^{2}}/2)}dy=\frac {4}{(1+e^{|x|})|x|}\,, \tag{48}\]
the coherent contribution to the average work can be expressed as
\[\tilde{w}_{coh}\sim\frac{(\lambda_{0}-\lambda_{\tau})\sin(2\phi_{\pi})L}{\pi \beta}g_{FD}(\beta mc^{2})\,, \tag{49}\]
where we have defined the Fermi-Dirac distribution \(g_{FD}(x)=1/(1+e^{|x|})\) and \(mc^{2}=2J(1-\lambda_{0})\). In the end, let us consider the limit of high temperatures \(\beta\to 0\), so that we get
\[g_{q}(u)=\frac{1}{2\pi}\int_{0}^{\pi}\ln\frac{1}{2}\Big{(}\cos(u\epsilon_{k})\cos(u\epsilon_{k}^{\prime})+\sin(u\epsilon_{k})\sin(u\epsilon_{k}^{\prime})\hat{d}_{k}\cdot\hat{d}_{k}^{\prime}+1-i\sin(u\epsilon_{k}^{\prime})\sin(u(2q-1)\epsilon_{k}-2\phi_{k})(\hat{d}_{k}\times\hat{d}_{k}^{\prime})_{x}\Big{)}dk\,. \tag{50}\]
For \(\phi_{k}=\phi\), we get the closed form of the derivatives
\[\partial_{u}g_{q}(0) = -\frac{i(\lambda_{\tau}-\lambda_{0})}{2\pi|\lambda_{0}|}\sin(2 \phi)(1+|\lambda_{0}|-|1-|\lambda_{0}||)\,, \tag{51}\] \[\partial_{u}^{2}g_{q}(0) = -(\lambda_{\tau}-\lambda_{0})^{2}\Bigg{(}1-\frac{1}{8\lambda_{0}^ {2}}\left(1+\lambda_{0}^{2}-(1+|\lambda_{0}|)|1-|\lambda_{0}||\right)\] (52) \[\times\sin^{2}(2\phi)\Bigg{)}-\frac{4i}{\pi}(\lambda_{\tau}- \lambda_{0})(1-2q)\cos(2\phi)\,,\]
from which it is evident that the work statistics is not regular at \(|\lambda_{0}|=1\) for \(\phi\neq n\pi/2\). Of course in this limit we can extract the work \(W_{ex}=-\langle w\rangle\), equal to
\[W_{ex}=\frac{(\lambda_{\tau}-\lambda_{0})L}{2\pi|\lambda_{0}|}\sin(2\phi)(1+| \lambda_{0}|-|1-|\lambda_{0}||)\,, \tag{53}\]
only because of the presence of the initial coherence, otherwise for an initial Gibbs state we will get \(\langle w\rangle=0\).
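As a consistency check, Eq. (53) can be compared against a direct numerical integration of Eq. (41) at small \(\beta\) (a sketch with our own names; the two printed values should agree closely).

```python
import numpy as np

def w_extracted_closed(L, lam0, lam_tau, phi):
    """Closed form of Eq. (53) for the work extracted in the limit beta -> 0."""
    return (lam_tau - lam0) * L / (2.0 * np.pi * abs(lam0)) * np.sin(2 * phi) \
           * (1 + abs(lam0) - abs(1 - abs(lam0)))

def w_extracted_numeric(L, lam0, lam_tau, phi, beta=1e-4, n_k=4000):
    """Minus the average work of Eq. (41), integrated with a midpoint rule at small beta."""
    k = (np.arange(n_k) + 0.5) * np.pi / n_k
    eps = 2.0 * np.sqrt((lam0 + np.cos(k)) ** 2 + np.sin(k) ** 2)
    integrand = ((lam0 + np.cos(k)) * np.sinh(beta * eps)
                 + np.sin(k) * np.sin(2 * phi)) / (eps * np.cosh(beta * eps / 2.0) ** 2)
    return -(lam0 - lam_tau) * L / np.pi * integrand.sum() * np.pi / n_k

print(w_extracted_closed(100, 0.5, 1.5, np.pi / 4),
      w_extracted_numeric(100, 0.5, 1.5, np.pi / 4))
```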
## V Local quench
Things change drastically when the work is non-extensive, e.g., for a local quench. We focus on the case of a sudden quench in the transverse field, i.e., the initial Hamiltonian is \(H=H(\lambda_{0})\) and we perform a sudden quench of the transverse field at a site \(l\), so that the final Hamiltonian is \(H^{\prime}=H(\lambda_{0})-\epsilon\sigma_{l}^{z}\). Since we are interested only in large sizes \(L\), we describe the model with the corresponding fermionic Hamiltonian \(H_{+}\). Here we are interested in how contextuality can emerge in a local quench, thus we focus on the states \(|\Psi_{1}(\beta)\rangle\) and \(|\Psi_{2}(\beta)\rangle\), which are defined as
\[|\Psi_{1}(\beta)\rangle=\frac{e^{\frac{\beta}{4}\sum_{k}\epsilon_{k}}}{\sqrt{ Z_{1}}}\exp\left(\sum_{k}e^{-\frac{\beta\epsilon_{k}}{2}+i\phi_{k}}\alpha_{k}^{ \dagger}\right)|\tilde{0}_{+}\rangle \tag{54}\]
and
\[|\Psi_{2}(\beta)\rangle=\frac{e^{\frac{\beta}{2}\sum_{k}\epsilon_{k}}}{\sqrt{Z_{2}}}\Bigg{(}1+\sum_{k}e^{-\frac{\beta\epsilon_{k}}{2}+i\phi_{k}}\alpha_{k}^{\dagger}+\frac{1}{2}\sum_{k,k^{\prime}}s_{k,k^{\prime}}e^{-\frac{\beta(\epsilon_{k}+\epsilon_{k^{\prime}})}{2}+i(\phi_{k}+\phi_{k^{\prime}})}\alpha_{k}^{\dagger}\alpha_{k^{\prime}}^{\dagger}\Bigg{)}|\tilde{0}_{+}\rangle\,, \tag{55}\]
where \(s_{k,k^{\prime}}=1\) if \(k>k^{\prime}\), \(s_{k,k^{\prime}}=-1\) if \(k<k^{\prime}\) and \(s_{k,k}=0\), and \(Z_{1}\) and \(Z_{2}\) are normalization factors such that \(Z\sim Z_{2}\sim Z_{1}\) as \(\beta\to\infty\). Indeed, \(|\Psi_{G}(\beta)\rangle\sim|\Psi_{2}(\beta)\rangle\sim|\Psi_{1}(\beta)\rangle\) as \(\beta\to\infty\). In general, for these initial states, the function \(X_{q}(u)\) can be calculated with the help of Grassmann variables (see Appendix E). While for the initial state \(|\Psi_{1}(\beta)\rangle\) we find that the fourth moment of work is positive, for the initial state \(|\Psi_{2}(\beta)\rangle\) it can be negative for \(\beta\) small enough (see Fig. 4). This suggests that to get a contextual protocol with a negative fourth moment we need to start from an initial state which involves at least pairs of quasiparticles. This result is corroborated by considering states like \(|\Psi_{1}(\beta)\rangle\) but with random coefficients, for which we get a non-negative fourth moment for the local quench.
## VI Initial quantum coherence
To conclude we investigate further the role of initial coherence by focusing on an initial state \(\rho_{0}\) with a thermal incoherent part, i.e., \(\Delta(\rho_{0})=\rho_{G}(\beta)\). In general, we have the equality (see Ref. [13])
\[\left\langle e^{-\beta w-C}\right\rangle=e^{-\beta\Delta F}\,, \tag{56}\]
where \(\Delta F=F(\lambda_{r})-F(\lambda_{0})\) is the change in the equilibrium free energy, where \(F(\lambda)=-\beta^{-1}\ln Z(\lambda)\), and \(C\) is the random quantum coherence that has the probability distribution
\[p_{c}(C)=\sum_{i,n}R_{n}\left|\langle E_{i}|R_{n}\rangle\right|^{2}\delta(C+ \ln\langle E_{i}|\rho_{0}|E_{i}\rangle-\ln R_{n})\,, \tag{57}\]
where we have considered the decomposition \(\rho_{0}=\sum_{n}R_{n}|R_{n}\rangle\langle R_{n}|\). In detail, the average of \(C\) is the relative entropy of coherence \(\langle C\rangle=S(\Delta(\rho_{0}))-S(\rho_{0})\), where \(S(\rho)\) is the von Neumann entropy defined as \(S(\rho)=-\mathrm{Tr}\left\{\rho\ln\rho\right\}\), and we have the equality \(\left\langle e^{-C}\right\rangle=1\). In particular, from Eq. (56), we get the inequality \(\left\langle w\right\rangle\geq\Delta F-\beta^{-1}\langle C\rangle\), and we note that Eq. (56) reduces to the Jarzynski equality [20], \(\left\langle e^{-\beta w}\right\rangle=e^{-\beta\Delta F}\), when \(\rho_{0}=\rho_{G}(\beta)\). From Eq. (56) we get
\[\Delta F=\beta^{-1}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n!}\kappa_{n}(s)\,, \tag{58}\]
where \(\kappa_{n}(s)\) is the \(n\)th cumulant of \(s=\beta w+C\), which of course can be expressed in terms of expectation values of work and coherence: The cumulants \(\kappa_{n}(C)\) of \(C\) cancel in the sum due to the equality \(\left\langle e^{-C}\right\rangle=1\), and only work cumulants \(\kappa_{n}(w)\) (e.g., the variance \(\sigma_{w}^{2}\)) and correlation terms (e.g., the covariance \(\sigma_{w,C}=\left\langle wC\right\rangle-\left\langle w\right\rangle\langle C\rangle\)) are present. For instance, if work and coherence are uncorrelated, we get \(\kappa_{n}(s)=\beta^{n}\kappa_{n}(w)+\kappa_{n}(C)\) and so \(\Delta F=\sum_{n=1}^{\infty}(-1)^{n+1}\beta^{n-1}\kappa_{n}(w)/n!\) and the coherence does not appear. If we consider a Gaussian probability distribution for the random variable \(s\), we get
\[\Delta F=\left\langle w\right\rangle-\frac{\beta\sigma_{w}^{2}}{2}-\sigma_{w, C}\,. \tag{59}\]
For a given free energy change \(\Delta F\), from Eq. (59) we see that the average work extracted \(W_{ex}=-\left\langle w\right\rangle\) in the process increases as the work fluctuations become weak, i.e., the variance \(\sigma_{w}^{2}\) decreases, and as work and coherence become strongly negatively correlated, i.e., \(\sigma_{w,C}<0\) decreases, which clarifies the role of initial quantum coherence as a useful resource. However, we note that Eq. (59) cannot be exactly satisfied for a global quench because we also have to take into account higher work cumulants and correlations, which will contribute to the series in Eq. (58). Indeed, only the coarse-grained probability distribution of \(s\) tends to a Gaussian profile for large \(L\) (e.g., see Fig. 2 for the work). In particular, if we focus on the high temperature limit \(\beta\to 0\), Eq. (58) reduces to
\[\left\langle w\right\rangle=\Delta F+\sum_{k=1}^{\infty}\frac{i^{k+1}}{k!} \partial_{t}^{k}\partial_{u}G(0,0)\,, \tag{60}\]
where we have defined the function \(G(u,t)=\ln\langle e^{iuw+itC}\rangle\). The derivatives are correlation terms, e.g., \(\partial_{t}\partial_{u}G(0,0)=-\sigma_{w,C}\), \(\partial_{t}^{2}\partial_{u}G(0,0)=2i\langle C\rangle\sigma_{w,C}-i\sigma_{w,C^{2}}\) and \(\partial_{t}^{3}\partial_{u}G(0,0)=3(2\langle C\rangle^{2}-\langle C^{2}\rangle)\sigma_{w,C}-3\langle C\rangle\sigma_{w,C^{2}}+\sigma_{w,C^{3}}\). For the initial state \(\rho_{0}=\eta|\Psi_{G}(0)\rangle\langle\Psi_{G}(0)|+(1-\eta)\rho_{G}(0)\), we get the characteristic function of the coherence
\[\left\langle e^{itC}\right\rangle=D^{it}\left(\left(\eta+\frac{1-\eta}{D} \right)^{it+1}+(D-1)\left(\frac{1-\eta}{D}\right)^{it+1}\right)\,, \tag{61}\]
where \(D\) is the dimension of the Hilbert space. Furthermore, by considering
\[\left\langle e^{iuw+itC}\right\rangle=\mathrm{Tr}\{\rho_{0}e^{it\ln\rho_{0}}e^{-iuH/2-it\ln\Delta(\rho_{0})/2}e^{iuH^{\prime}}e^{-iuH/2-it\ln\Delta(\rho_{0})/2}\}\,, \tag{62}\]
where for brevity we have defined \(H=H(\lambda_{0})\) and \(H^{\prime}=H^{(H)}(\lambda_{r})\), we get
\[-i\partial_{u}G(0,t)=\frac{\left(\eta+\frac{1-\eta}{D}\right)^{it+1}w_{1}+(D-1 )\left(\frac{1-\eta}{D}\right)^{it+1}w_{2}}{\left(\eta+\frac{1-\eta}{D}\right) ^{it+1}+(D-1)\left(\frac{1-\eta}{D}\right)^{it+1}}\,, \tag{63}\]
where \(w_{1}=\left\langle\Psi_{G}(0)|(H^{\prime}-H)|\Psi_{G}(0)\right\rangle\) is the average work done starting from the coherent Gibbs state, which can be expressed as \(w_{1}=\left(\left\langle w\right\rangle-(1-\eta)\Delta F\right)/\eta\), and \(w_{2}=D\Delta F-w_{1}\), where \(\Delta F=\mathrm{Tr}\left\{H(\lambda_{r})-H(\lambda_{0})\right\}/D\). Thus, the terms in Eq. (60) can be obtained by calculating the derivatives of
Eq. (63) with respect to \(t\). We note that for the Ising model we get \(\Delta F=0\), so that in this limit the work extracted, i.e., Eq. (53) multiplied by \(\eta\), completely comes from the correlations between work and coherence. Of course, the same situation occurs for a cyclic change of any Hamiltonian, i.e., such that \(H(\lambda_{\tau})=H(\lambda_{0})\).
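The properties of the random coherence \(C\) used above, namely \(\langle e^{-C}\rangle=1\) and \(\langle C\rangle=S(\Delta(\rho_{0}))-S(\rho_{0})\), can be verified numerically from the distribution of Eq. (57); a minimal sketch on a random qubit state (our own names, purely illustrative) is given below.

```python
import numpy as np

def coherence_distribution(rho0, H):
    """Support points C_{i,n} = ln R_n - ln <E_i|rho0|E_i> and weights R_n |<E_i|R_n>|^2
    of the coherence distribution p_c(C) in Eq. (57)."""
    E, V = np.linalg.eigh(H)                     # energy basis |E_i>
    R, W = np.linalg.eigh(rho0)                  # rho0 = sum_n R_n |R_n><R_n|
    diag = np.real(np.einsum('ai,ab,bi->i', V.conj(), rho0, V))   # <E_i|rho0|E_i>
    overlaps = np.abs(V.conj().T @ W) ** 2       # |<E_i|R_n>|^2
    Cs, ws = [], []
    for i in range(len(E)):
        for n_ in range(len(R)):
            if R[n_] > 1e-12:
                Cs.append(np.log(R[n_]) - np.log(diag[i]))
                ws.append(R[n_] * overlaps[i, n_])
    return np.array(Cs), np.array(ws)

# toy check: <e^{-C}> = 1 and <C> = S(Delta(rho0)) - S(rho0) for a random qubit state
rng = np.random.default_rng(4)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho0 = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.eye(2) / 2
H = np.diag([0.0, 1.0])                          # diagonal H: energy basis = computational basis
C, w = coherence_distribution(rho0, H)
S = lambda p: -np.sum(p[p > 1e-12] * np.log(p[p > 1e-12]))
rel_ent_coherence = S(np.real(np.diag(rho0))) - S(np.linalg.eigvalsh(rho0))
print(np.sum(w * np.exp(-C)), np.sum(w * C), rel_ent_coherence)
```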
## VII Conclusions
We investigated the effects of the initial quantum coherence in the energy basis on the work done by quenching a transverse field of a one-dimensional Ising model. The work can be represented by considering a class of quasiprobability distributions. To study how the work statistics changes with increasing system size, we calculated the exact formula of the characteristic function of work for an arbitrary size by imposing periodic boundary conditions. Then, we focused on the thermodynamic limit, and we showed that, by neglecting subdominant terms, for the symmetric value \(q=1/2\) we get a Gaussian probability distribution of work, and so a non-contextual protocol. However, for \(q\neq 1/2\), the quasiprobability of work can take negative values depending on the initial state. In contrast, for a local quench there are initial states such that any quasiprobability representation in the class is contextual, as signaled by a negative fourth moment. Concerning the universal features, we showed that the coherent contribution to the average work is a universal function when the initial state is a coherent Gibbs state. In the end, beyond the fundamental purposes of the paper, it would be interesting to understand whether contextuality can be related to some advantage from a thermodynamic point of view; however, further investigations are needed in this direction. In particular, although the protocol tends to be non-contextual in the thermodynamic limit for a global quench, the initial quantum coherence can still be a useful resource for the work extraction in the protocol when it is correlated with the work.
## Acknowledgements
The authors acknowledge financial support from the project BIRD 2021 "Correlations, dynamics and topology in long-range quantum systems" of the Department of Physics and Astronomy, University of Padova.
## Appendix A Work moments
Let us derive a closed formula for the work moments. We define \(H=H(\lambda_{0})\) and \(H^{\prime}=H^{(H)}(\lambda_{\tau})\). The nth work moment can be calculated as
\[\langle w^{n}\rangle=(-i)^{n}\partial_{u}^{n}\chi_{q}(0)=\frac{(-i)^{n}\partial_{u}^{n}X_{q}(0)}{2}+\frac{(-i)^{n}\partial_{u}^{n}X_{1-q}(0)}{2} \tag{64}\]
To calculate \((-i)^{n}\partial_{u}^{n}X_{q}(0)\), we note that
\[X_{q}(u)=\mathrm{Tr}\left\{\rho_{0}(u)e^{iuH^{\prime}}\right\} \tag{65}\]
where we have defined
\[\rho_{0}(u)=e^{-iuqH}\rho_{0}e^{-iu(1-q)H} \tag{66}\]
Then
\[(-i)^{n}\partial_{u}^{n}X_{q}(u)=\sum_{k=0}^{n}\binom{n}{k}\mathrm{Tr}\left\{\left((-i)^{n-k}\partial_{u}^{n-k}\rho_{0}(u)\right)H^{\prime k}e^{iuH^{\prime}}\right\} \tag{67}\]
where we have noted that \((-i)^{k}\partial_{u}^{k}e^{iuH^{\prime}}=H^{\prime k}e^{iuH^{\prime}}\). It is easy to see that
\[(-i)^{n}\partial_{u}^{n}\rho_{0}(u)=(-1)^{n}\sum_{k=0}^{n}\binom{n}{k}(qH)^{ n-k}\rho_{0}(u)((1-q)H)^{k} \tag{68}\]
from which
\[(-i)^{n}\partial_{u}^{n}X_{q}(0)=\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}\sum_{l=0}^{n-k}\binom{n-k}{l}q^{n-k-l}(1-q)^{l}\mathrm{Tr}\left\{H^{n-k-l}\rho_{0}H^{l}H^{\prime k}\right\} \tag{69}\]
## Appendix B Quasiproability of work
We consider two different initial states, a Gibbs state \(\rho_{G}=e^{-\beta H(\lambda_{0})}/Z\), and a coherent Gibbs state \(|\Psi_{G}\rangle\). In particular, for \(\phi_{k}=0\), the state \(|\Psi_{G}^{s}\rangle\) in Eq. (23) reads
\[|\Psi_{G}^{s}\rangle=\frac{1}{\sqrt{Z}}\otimes_{k\in K_{s}}\left(e^{\frac{\beta\epsilon_{k}}{4}}|\tilde{0}_{k}\rangle+e^{-\frac{\beta\epsilon_{k}}{4}}|\tilde{1}_{k}\rangle\right) \tag{70}\]
It can be expressed as
\[|\Psi_{G}^{+}\rangle=\frac{1}{\sqrt{Z}}\left(\otimes_{k\in K_{+},k>0}|\Psi_{k}\rangle\right)\,, \tag{71}\] \[|\Psi_{G}^{-}\rangle=\frac{1}{\sqrt{Z}}\left(\otimes_{k\in K_{-},k>0}|\Psi_{k}\rangle\right)\otimes|\Psi_{0}\rangle\otimes|\Psi_{\pi}\rangle\,, \tag{72}\]
where \(|\Psi_{k}\rangle=\left(|\tilde{0}_{k}\rangle+e^{-\frac{\beta\epsilon_{k}}{2}}|\tilde{1}_{k}\rangle\right)\otimes\left(e^{\frac{\beta\epsilon_{k}}{2}}|\tilde{0}_{-k}\rangle+|\tilde{1}_{-k}\rangle\right)\). Thus, by noting that \(P_{s}=(I+se^{i\pi N})/2\), and \(e^{i\pi N}=\langle\tilde{0}_{s}|e^{i\pi N}|\tilde{0}_{s}\rangle e^{i\pi\sum_{k\in K_{s}}\alpha_{k}^{\dagger}\alpha_{k}}\), we get
\[X_{q}(u)=\frac{1}{2}\sum_{s}\left(\mathrm{Tr}\left\{e^{-iuqH_{s}(\lambda_{0})}\rho_{0}^{s}e^{-iu(1-q)H_{s}(\lambda_{0})}e^{iuH_{s}^{(H)}(\lambda_{\tau})}\right\}+\eta_{s}\mathrm{Tr}\left\{e^{-iuqH_{s}(\lambda_{0})}e^{i\pi\sum_{k\in K_{s}}\alpha_{k}^{\dagger}\alpha_{k}}\rho_{0}^{s}e^{-iu(1-q)H_{s}(\lambda_{0})}e^{iuH_{s}^{(H)}(\lambda_{\tau})}\right\}\right) \tag{73}\]
where we have defined \(\eta_{s}=s\langle\tilde{0}_{s}|e^{i\pi N}|\tilde{0}_{s}\rangle\). Let us focus on the first term in the sum over \(s\), which is
\[X_{q}^{s}(u)=\mathrm{Tr}\left\{e^{-iuqH_{s}(\lambda_{0})}\rho_{0}^{s}e^{-iu(1- q)H_{s}(\lambda_{0})}e^{iuH_{s}^{(H)}(\lambda_{\tau})}\right\} \tag{74}\]
Then, e.g., for \(s=-\), to evaluate the trace we can consider the basis formed by the vectors \(|\{n_{k}\}\rangle=(\delta_{k>0}|n_{k}n_{-k}\rangle)\otimes|n_{0}\rangle\otimes|n_{\pi}\rangle\), with \(n_{k}=0,1\), where \(|n_{k}n_{-k}\rangle=(a_{k}^{\dagger})^{n_{k}}(a_{-k}^{\dagger})^{n_{-k}}|0_{k}0_{-k}\rangle\), and \(|0_{k}\rangle\) is the vacuum state for the fermion \(a_{k}\). Of course \(\{|n_{k}n_{-k}\rangle\}\) generates a dynamically invariant subspace, and in this subspace the Hamiltonian \(H_{\rm s}(\lambda)\) acts as the matrix \(H_{\rm k}(\lambda)\) such that
\[H_{\rm k}(\lambda)|0_{k}0_{-k}\rangle = 2(\lambda+\cos k)|0_{k}0_{-k}\rangle-2i\sin k|1_{k}1_{-k}\rangle \tag{21}\] \[H_{\rm k}(\lambda)|1_{k}1_{-k}\rangle = -2(\lambda+\cos k)|1_{k}1_{-k}\rangle+2i\sin k|0_{k}0_{-k}\rangle\] (22) \[H_{\rm k}(\lambda)|0_{k}1_{-k}\rangle = 0\] (23) \[H_{\rm k}(\lambda)|1_{k}0_{-k}\rangle = 0 \tag{24}\]
However, it is convenient to consider the initial eigenstates \(|\tilde{n}_{k}\tilde{n}_{-k}\rangle\) such that
\[H_{\rm k}(\lambda_{0})|\tilde{n}_{k}\tilde{n}_{-k}\rangle=(\epsilon_{k}n_{k}+ \epsilon_{k}(n_{-k}-1))|\tilde{n}_{k}\tilde{n}_{-k}\rangle \tag{25}\]
For our two initial states it is equal to
\[X_{q}^{\rm s}(u)=\frac{1}{Z}\prod_{k\in K_{s},\,k\geq 0}X_{q}^{(k)}(u) \tag{26}\]
For the Gibbs state, for \(k>0\) and \(k\neq\pi\), we have
\[X_{q}^{(k)}(u) = \sum_{n_{k},n_{-k}}e^{(-iu-\beta)(\epsilon_{k}n_{k}+\epsilon_{k}(n_{-k}-1))} \tag{27}\] \[\times\langle\tilde{n}_{k}\tilde{n}_{-k}|U_{\tau,0}^{\dagger}e^{iuH_{\rm k}(\lambda_{\tau})}U_{\tau,0}|\tilde{n}_{k}\tilde{n}_{-k}\rangle\]
To evaluate \(X_{q}^{(k)}(u)\), we note that
\[e^{iuH_{\rm k}(\lambda_{\tau})}=e^{-iu\epsilon_{k}^{\prime}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}}\oplus I=(\cos(u\epsilon_{k}^{\prime})I-i\sin(u\epsilon_{k}^{\prime})\tilde{d}_{k}^{\prime}\cdot\vec{\tau})\oplus I \tag{28}\]
where \(\tilde{d}_{k}^{\prime}=\tilde{d}_{k}(\lambda_{\tau})\), \(\epsilon_{k}^{\prime}=\epsilon_{k}(\lambda_{\tau})\), \(\tau_{3}=|0_{k}0_{-k}\rangle\langle 0_{k}0_{-k}|-|1_{k}1_{-k}\rangle\langle 1_{k}1_{-k}|\), and so on. We have to calculate
\[\langle\tilde{n}_{k}\tilde{n}_{-k}|U_{\tau,0}^{\dagger}e^{iuH_{\rm k}(\lambda_{\tau})}U_{\tau,0}|\tilde{n}_{k}\tilde{n}_{-k}\rangle=\cos(u\epsilon_{k}^{\prime})\] \[-i\sin(u\epsilon_{k}^{\prime})\langle\tilde{n}_{k}\tilde{n}_{-k}|U_{\tau,0}^{\dagger}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}U_{\tau,0}|\tilde{n}_{k}\tilde{n}_{-k}\rangle \tag{29}\]
with \((n_{k},n_{-k})=(0,0)\) and \((n_{k},n_{-k})=(1,1)\), while \(\langle\tilde{0}_{k}\tilde{1}_{-k}|U_{\tau,0}^{\dagger}e^{iuH_{\rm k}(\lambda_{\tau})}U_{\tau,0}|\tilde{0}_{k}\tilde{1}_{-k}\rangle=\langle\tilde{1}_{k}\tilde{0}_{-k}|U_{\tau,0}^{\dagger}e^{iuH_{\rm k}(\lambda_{\tau})}U_{\tau,0}|\tilde{1}_{k}\tilde{0}_{-k}\rangle=1\). In particular, since \(\tilde{d}_{k}^{\prime}\cdot\vec{\tau}\) is traceless, we get \(\langle\tilde{0}_{k}\tilde{0}_{-k}|U_{\tau,0}^{\dagger}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}U_{\tau,0}|\tilde{0}_{k}\tilde{0}_{-k}\rangle+\langle\tilde{1}_{k}\tilde{1}_{-k}|U_{\tau,0}^{\dagger}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}U_{\tau,0}|\tilde{1}_{k}\tilde{1}_{-k}\rangle=0\), from which we get \(X_{q}^{(k)}(u)=X_{q}^{(k),th}(u)\) with
\[X_{q}^{(k),th}(u)=2\bigg{(}\cos((u-i\beta)\epsilon_{k})\cos(u\epsilon_{k}^{\prime})+\sin((u-i\beta)\epsilon_{k})\] \[\times\sin(u\epsilon_{k}^{\prime})\langle\tilde{0}_{k}\tilde{0}_{-k}|U_{\tau,0}^{\dagger}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}U_{\tau,0}|\tilde{0}_{k}\tilde{0}_{-k}\rangle+1\bigg{)} \tag{30}\]
In contrast, for the coherent Gibbs state, for \(k>0\) and \(k\neq\pi\) we get
\[X_{q}^{(k)}(u)=\langle\Psi_{k}(q-1)|U_{\tau,0}^{\dagger}e^{iuH_{\rm k}(\lambda_{\tau})}U_{\tau,0}|\Psi_{k}(q)\rangle \tag{31}\]
where
\[|\Psi_{k}(q)\rangle=\Big{(}|\tilde{0}_{k}\rangle+e^{-iuq\epsilon_{k}-\frac{\beta\epsilon_{k}}{2}}|\tilde{1}_{k}\rangle\Big{)}\otimes\Big{(}e^{iuq\epsilon_{k}+\frac{\beta\epsilon_{k}}{2}}|\tilde{0}_{-k}\rangle+|\tilde{1}_{-k}\rangle\Big{)} \tag{32}\]
Thus, we get
\[X_{q}^{(k)}(u) = 2\bigg{(}\cos((u-i\beta)\epsilon_{k})\cos(u\epsilon_{k}^{\prime})-\frac{i}{2}\sin(u\epsilon_{k}^{\prime}) \tag{33}\] \[\times\langle\tilde{\Psi}_{k}(q-1)|U_{\tau,0}^{\dagger}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}U_{\tau,0}|\tilde{\Psi}_{k}(q)\rangle+1\bigg{)}\]
where \(|\tilde{\Psi}_{k}(q)\rangle=e^{iuq\epsilon_{k}+\beta\epsilon_{k}/2}|\tilde{0}_{k}\tilde{0}_{-k}\rangle+e^{-iuq\epsilon_{k}-\beta\epsilon_{k}/2}|\tilde{1}_{k}\tilde{1}_{-k}\rangle\). We get
\[X_{q}^{(k)}(u)=X_{q}^{(k),th}(u)+X_{q}^{(k),coh}(u) \tag{34}\]
where the coherent contribution is
\[X_{q}^{(k),coh}(u) = -2i\sin(u\epsilon_{k}^{\prime}){\rm Re}\big{(}e^{-iu(2q-1)\epsilon_{k}} \tag{35}\] \[\times\langle\tilde{0}_{k}\tilde{0}_{-k}|U_{\tau,0}^{\dagger}\tilde{d}_{k}^{\prime}\cdot\vec{\tau}U_{\tau,0}|\tilde{1}_{k}\tilde{1}_{-k}\rangle\big{)}\]
To calculate the second term in the sum over \(s\) in Eq. (19), we note that
\[e^{i\pi\sum_{k\in K_{s}}\alpha_{k}^{\dagger}\alpha_{k}} = (-1)^{\frac{L}{2}}e^{i\pi\sum_{k\in K_{s}}\big{(}\alpha_{k}^{\dagger}\alpha_{k}-\frac{1}{2}\big{)}}\] \[= (-1)^{\frac{L}{2}}e^{iq\pi\sum_{k\in K_{s}}\big{(}\alpha_{k}^{\dagger}\alpha_{k}-\frac{1}{2}\big{)}} \tag{36}\] \[\times e^{i(1-q)\pi\sum_{k\in K_{s}}\big{(}\alpha_{k}^{\dagger}\alpha_{k}-\frac{1}{2}\big{)}}\]
then the second term is \(\eta_{s}X_{q}^{\prime s}(u)\), where \(X_{q}^{\prime s}(u)\) is obtained by multiplying \(X_{q}^{s}(u)\) by \((-1)^{\frac{L}{2}}\) and by performing the substitution \(u\epsilon_{k}\mapsto u\epsilon_{k}-\pi\), so that
\[X_{q}^{\prime s}(u)=\frac{1}{Z}\prod_{k\in K_{s},\,k\geq 0}X_{q}^{\prime(k)}(u) \tag{37}\]
with
\[X_{q}^{\prime(k)}(u)=-X_{q}^{(k)}(u)\Big{|}_{u\epsilon_{k}\mapsto u\epsilon_{k}-\pi}\]
Let us focus on the thermodynamic limit. For \(|\lambda_{0}|>1\), we get \(Z\sim\prod_{k\in K_{k}}Z_{k}\) with \(Z_{k}=2\cosh(\beta\epsilon_{k}/2)\), and \(X_{q}(u)\sim X_{q}^{\star}(u)\). Then \(X_{q}(u)\) is the product of the characteristic functions having quasiprobability distributions
\[p_{q}^{(k)}(w)=\frac{1}{Z_{k}^{2}}\int\frac{e^{-iuw}}{2\pi}X_{q}^{(k)}(u)du \tag{108}\]
thus the quasiprobability distribution of work reads
\[p_{q}(w) =\frac{1}{2}\int\left(\prod_{k>0}p_{q}^{(k)}(w_{k})+\prod_{k>0}p_ {1-q}^{(k)}(w_{k})\right)\] \[\times\delta\left(w-\sum_{k>0}w_{k}\right)\prod_{k>0}dw_{k} \tag{109}\]
We note that the average work can be calculated as
\[\langle w\rangle=-i\partial_{u}\chi_{q}(0)=-i\sum_{k>0}\frac{1}{Z_{k}^{2}} \partial_{u}X_{q}^{(k)}(0) \tag{110}\]
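As a quick numerical illustration of this relation (a sketch we add for concreteness; the toy two-outcome characteristic function below is not the Ising-model one), the identity \(\langle w\rangle=-i\partial_{u}\chi(0)\) can be checked with a central finite difference:

```python
import numpy as np

# Illustrative check: chi(u) = <exp(i u w)> for a toy two-outcome work
# distribution p(w1) = p, p(w2) = 1 - p; then <w> = -i * d/du chi(u) at u = 0.
w1, w2, p = 2.0, -1.0, 0.3
chi = lambda u: p * np.exp(1j * u * w1) + (1 - p) * np.exp(1j * u * w2)

du = 1e-6
avg_w_numeric = (-1j * (chi(du) - chi(-du)) / (2 * du)).real   # central difference
avg_w_exact = p * w1 + (1 - p) * w2
print(avg_w_numeric, avg_w_exact)   # both ~ -0.1
```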
On the other hand, for \(|\lambda_{0}|<1\), we get \(Z\sim\prod_{k\in K_{k}}Z_{k}+\prod_{k\in K_{k}}Z_{k}^{\prime}\) with \(Z_{k}^{\prime}=2\sinh(\beta\epsilon_{k}/2)\), and \(X_{q}(u)\sim X_{q}^{\star}(u)+X_{q}^{\prime\star}(u)\) from which
\[X_{q}(u)=\gamma\prod_{k>0}\frac{X_{q}^{(k)}(u)}{Z_{k}^{2}}+(1-\gamma)\prod_{k >0}\frac{X_{q}^{\prime(k)}(u)}{Z_{k}^{\prime 2}} \tag{111}\]
where \(\gamma=(\prod_{k>0}Z_{k}^{2})/Z\). To evaluate the quasiprobability distribution of work numerically in the thermodynamic limit, we divide the \(w\)-axis into intervals \(I_{n}=[w_{n}-\Delta w/2,w_{n}+\Delta w/2]\) of width \(\Delta w\), centered at \(w_{n}=n\Delta w\) with \(n\) integer. Then, we can determine the histogram by calculating the probability
\[p_{n}=\int_{I_{n}}p_{q}(w)dw=\frac{\Delta w}{2\pi}\int\chi_{q}(u)\mathrm{sinc}\left(\frac{u\Delta w}{2}\right)e^{-iuw_{n}}du \tag{100}\]
where \(\mathrm{sinc}(x)=\sin(x)/x\). To calculate the integral we can focus on the interval \([-2\pi K/\Delta w,2\pi K/\Delta w]\) with \(K\) large enough. Of course \(p_{q}(w_{n})\approx p_{n}/\Delta w\) for \(\Delta w\) small enough.
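As an illustration of this reconstruction, the following minimal numerical sketch recovers the histogram from a characteristic function; the Gaussian test case, the bin width, and the cutoff \(K\) are our own illustrative choices, not quantities from the model above.

```python
import numpy as np

# Recover p_n = (dw/2pi) * Int chi(u) sinc(u dw/2) exp(-i u w_n) du on a
# truncated u-interval, using a Gaussian work distribution as a test case.
mu, sigma = 1.0, 0.5
chi = lambda u: np.exp(1j * u * mu - 0.5 * sigma**2 * u**2)

dw, K = 0.1, 5
u = np.linspace(-2 * np.pi * K / dw, 2 * np.pi * K / dw, 40001)
base = chi(u) * np.sinc(u * dw / (2 * np.pi))      # np.sinc(x) = sin(pi x)/(pi x)
w_grid = np.arange(-1.0, 3.0, dw)                  # bin centres w_n
p = np.array([np.trapz(base * np.exp(-1j * u * wn), u) for wn in w_grid]).real
p *= dw / (2 * np.pi)
print(p.sum())                                     # ~1; p/dw approximates the pdf
```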
## Appendix C Negativity
To prove that \(\mathcal{N}=1\) does not imply in general that \(p_{q}(w)\geq 0\), we write \(p_{q}(w)=p(w)+\delta p(w)\) where \(p(w)\geq 0\), \(\int p(w)dw=1\) and \(\int\delta p(w)dw=0\). If \(p_{q}(w)\geq 0\) for \(w\in I\) and \(p_{q}(w)<0\) for \(w\in I^{\prime}\), then \(\delta p(w)<0\) for \(w\in I^{\prime}\) and \(I=I_{+}\cup I_{-}\) such that \(\delta p(w)\geq 0\) for \(w\in I_{+}\) and \(\delta p(w)<0\) for \(w\in I_{-}\). Then, from \(\mathcal{N}=1\), we get the condition \(p(I)-p(I^{\prime})+\delta p(I_{+})+\delta p(I_{-})-\delta p(I^{\prime})=1\), where \(p(I)=\int_{I}p(w)dw\) and so on, thus we get the system
\[\begin{cases}p(I)+p(I^{\prime})=1\\ p(I)\geq 0\\ p(I^{\prime})\geq 0\\ \delta p(I_{+})+\delta p(I_{-})+\delta p(I^{\prime})=0\\ \delta p(I_{+})\geq 0\\ \delta p(I_{-})<0\\ \delta p(I^{\prime})<0\\ p(I)-p(I^{\prime})+\delta p(I_{+})+\delta p(I_{-})-\delta p(I^{\prime})=1 \end{cases} \tag{101}\]
which admits as solution any \(p(w)\) such that \(0\leq p(I)<1\) and \(p(I^{\prime})=1-p(I)\) and any \(\delta p(w)\) such that \(\delta p(I_{+})>(1-p(I)+p(I^{\prime}))/2\), \(\delta p(I_{-})=(1-2\delta p(I_{+})-p(I)+p(I^{\prime}))/2\) and \(\delta p(I^{\prime})=-\delta p(I_{-})-\delta p(I_{+})\).
## Appendix D Imaginary part \(r_{q}\)
For an arbitrary quench, from Eq. (101), we get
\[r_{q} =\frac{(2q-1)L}{2\pi}\int_{0}^{\pi}\frac{\epsilon_{k}\epsilon_{k }^{\prime}}{\cosh^{2}(\beta\epsilon_{k}/2)}(\cos(2\phi_{k})(\hat{d}_{k}\times \hat{d}_{k}^{\prime})_{x}\] \[\quad+\sin(2\phi_{k})(\hat{d}_{k}\times\hat{d}_{k}^{\prime})_{y})dk \tag{102}\]
which is zero for \(q=1/2\). Let us now focus on an arbitrary initial state \(|\Psi\rangle\) instead of \(|\Psi_{G}\rangle\); we then get the states
\[|\Psi_{k}\rangle=\sum_{n_{k},n_{-k}}c_{n_{k}n_{-k}}|\tilde{n}_{k}\tilde{n}_{- k}\rangle \tag{103}\]
from which \(X_{q}^{(k)}(u)\) can be calculated from Eq. (100) with
\[|\Psi_{k}(q)\rangle =c_{00}e^{iuq\epsilon_{k}}|\tilde{0}_{k}\tilde{0}_{-k}\rangle+c_ {11}e^{-iuq\epsilon_{k}}|\tilde{1}_{k}\tilde{1}_{-k}\rangle\] \[\quad+c_{01}|\tilde{0}_{k}\tilde{1}_{-k}\rangle+c_{10}|\tilde{1} _{k}\tilde{0}_{-k}\rangle \tag{104}\]
Thus, \(X_{q}^{(k)}(u)\) reads
\[X_{q}^{(k)}(u)=X_{no}^{(k)}(u)+\delta X_{q}^{(k)}(u) \tag{105}\]
where \(X_{no}^{(k)}(u)\) does not depend on \(q\), and
\[\delta X_{q}^{(k)}(u) =-2i\sin(u\epsilon_{k}^{\prime})\mathrm{Re}\left(c_{00}^{*}c_{11} e^{-iu(2q-1)\epsilon_{k}}\left(i(\hat{d}_{k}\times\hat{d}_{k}^{\prime})_{x}\right.\right.\] \[\quad\left.\left.+(\hat{d}_{k}\times\hat{d}_{k}^{\prime})_{y} \right)\right) \tag{106}\]
Then, \(\partial_{u}^{2}\delta X_{q}^{(k)}(0)\) is imaginary and \(\partial_{u}^{2}\delta X_{q}^{(k)}(0)\propto(1-2q)\). Similarly, it is easy to see that \(\partial_{u}^{2}X_{no}^{(k)}(0)\) is real. Furthermore, since \(\partial_{u}X_{q}^{(k)}(0)\) is imaginary, \(r_{q}\) is obtained by calculating an integral with respect to \(k\) of \(\partial_{u}^{2}\delta X_{q}^{(k)}(0)\) divided by a certain normalization factor, so that we get \(r_{q}\propto(1-2q)\), which is zero for \(q=1/2\). This is in agreement with the fact that it is always possible to get a non-contextual description for \(q=1/2\). Of course, this result is quite general and does not depend on the particular system, but follows whenever, for a global quench, \(X_{q}(u)\) in the thermodynamic limit is given by Eq. (36) for a certain intensive function \(g_{q}(u)\).
## Appendix E General quadratic form in Fermi operators
We consider the initial Hamiltonian
\[H=\sum_{i,j}\left(a_{i}^{\dagger}A_{ij}a_{j}+\frac{1}{2}\left(a_{i}^{\dagger}B_{ij}a_{j}^{\dagger}+H.c.\right)\right)-\frac{1}{2}\sum_{i}A_{ii} \tag{107}\]
where \(A\) and \(B\) are real matrices such that \(A^{T}=A\) and \(B^{T}=-B\). The Hamiltonian can be diagonalized by performing the transformation
\[\alpha_{k}=\sum_{i}g_{ki}a_{i}+h_{ki}a_{i}^{\dagger} \tag{108}\]
so that
\[H=\sum_{k}\epsilon_{k}\left(\alpha_{k}^{\dagger}\alpha_{k}-\frac{1}{2}\right) \tag{109}\]
In detail the matrices \(g\) and \(h\) are such that \(\phi=g+h\) and \(\psi=g-h\), where \(\phi\) and \(\psi\) are orthogonal matrices such that \(\psi^{T}\epsilon\phi=A+B\), where \(\epsilon\) is the diagonal matrix with entries \(\epsilon_{k}\). The final time-evolved Hamiltonian is \(H^{\prime}\) with matrices \(A^{\prime}\) and \(B^{\prime}\), and will be diagonalized by performing the transformation
\[\alpha_{k}^{\prime}=\sum_{i}g_{ki}^{\prime}a_{i}+h_{ki}^{\prime}a_{i}^{\dagger} \tag{110}\]
so that
\[H^{\prime}=\sum_{k}\epsilon_{k}^{\prime}\left(\alpha_{k}^{\prime\dagger}\alpha_{k}^{\prime}-\frac{1}{2}\right) \tag{111}\]
Let us proceed with our investigation by considering the initial state
\[|\Psi_{1}\rangle=\frac{e^{\frac{\beta}{4}\sum_{k}\epsilon_{k}}}{\sqrt{Z_{1}}}\exp\left(\sum_{k}e^{-\frac{\beta\epsilon_{k}}{2}}\alpha_{k}^{\dagger}\right)|\tilde{0}\rangle \tag{100}\]
We will consider \(\beta\rightarrow\infty\) so that \(Z_{1}\sim Z=\prod_{k}2\cosh(\beta\epsilon_{k}/2)\) and \(|\Psi_{1}\rangle\sim|\Psi_{G}\rangle\). We aim to calculate
\[X_{q}(u)=\langle\Psi_{1}|e^{-iu(1-q)H}e^{iuH^{\prime}}e^{-iuqH}|\Psi_{1}\rangle \tag{101}\]
We consider the vacuum state \(|\tilde{0}^{\prime}\rangle\) of the fermions \(\alpha_{k}^{\prime}\), we get the relation
\[|\tilde{0}\rangle=Ke^{\frac{1}{2}\sum_{k,k^{\prime}}G_{kk^{\prime}}\alpha_{k}^{\prime\dagger}\alpha_{k^{\prime}}^{\prime\dagger}}|\tilde{0}^{\prime}\rangle \tag{102}\]
where \(G\) is the solution of the equation \(\tilde{g}G+\tilde{h}=0\), with \(\tilde{g}=gg^{\prime T}+hh^{\prime T}\) and \(\tilde{h}=gh^{\prime T}+hg^{\prime T}\). In particular,
\[\alpha_{k}=\sum_{k^{\prime}}\tilde{g}_{kk^{\prime}}\alpha_{k^{\prime}}^{\prime}+\tilde{h}_{kk^{\prime}}\alpha_{k^{\prime}}^{\prime\dagger} \tag{103}\]
We get
\[X_{q}(u)=|K|^{2}\frac{e^{(\beta+iu)\sum_{k}\epsilon_{k}/2-iu\sum_{k}\epsilon_{k}^{\prime}/2}}{Z_{1}}\langle\tilde{0}^{\prime}|\exp\left(-\frac{1}{2}\sum_{k,k^{\prime}}G_{kk^{\prime}}\right.\] \[\left.\times\alpha_{k}^{\prime}\alpha_{k^{\prime}}^{\prime}\right)\exp\left(\sum_{k}u_{k}\alpha_{k}^{\prime}+v_{k}\alpha_{k}^{\prime\dagger}\right)\exp\left(\sum_{k}u_{k}^{\prime}\alpha_{k}^{\prime\dagger}+v_{k}^{\prime}\alpha_{k}^{\prime}\right)\] \[\times\exp\left(\frac{1}{2}\sum_{k,k^{\prime}}\tilde{G}_{kk^{\prime}}\alpha_{k}^{\prime\dagger}\alpha_{k^{\prime}}^{\prime\dagger}\right)|\tilde{0}^{\prime}\rangle \tag{104}\]
where \(\tilde{G}_{kk^{\prime}}=G_{kk^{\prime}}e^{iu(\epsilon_{k}^{\prime}+\epsilon_{ k^{\prime}}^{\prime})}\) and
\[u_{k} =\sum_{k^{\prime}}e^{-(\beta/2+iu(1-q))\epsilon_{k^{\prime}}}\tilde{g}_{kk^{\prime}} \tag{105}\] \[v_{k} =\sum_{k^{\prime}}e^{-(\beta/2+iu(1-q))\epsilon_{k^{\prime}}}\tilde{h}_{kk^{\prime}} \tag{106}\] \[u_{k}^{\prime} =\sum_{k^{\prime}}e^{-(\beta/2+iuq)\epsilon_{k^{\prime}}+iu\epsilon_{k}^{\prime}}\tilde{g}_{kk^{\prime}} \tag{107}\] \[v_{k}^{\prime} =\sum_{k^{\prime}}e^{-(\beta/2+iuq)\epsilon_{k^{\prime}}-iu\epsilon_{k}^{\prime}}\tilde{h}_{kk^{\prime}} \tag{108}\]
We note that
\[\exp\left(\sum_{k}u_{k}\alpha_{k}^{\prime}+v_{k}\alpha_{k}^{\prime \prime}\right)\exp\left(\sum_{k}u_{k}^{\prime}\alpha_{k}^{\prime\prime}+v_{k} ^{\prime}\alpha_{k}^{\prime}\right)=1\] \[+\sum_{k,k^{\prime}}u_{k}u_{k^{\prime}}^{\prime}\alpha_{k}^{ \prime\prime}\alpha_{k^{\prime}}^{\prime\prime}+u_{k}v_{k^{\prime}}^{\prime} \alpha_{k}^{\prime}\alpha_{k^{\prime}}^{\prime}+v_{k}u_{k^{\prime}}^{\prime} \alpha_{k}^{\prime\prime}\alpha_{k^{\prime}}^{\prime\prime}-v_{k}^{\prime}v_{k ^{\prime}}\alpha_{k}^{\prime}\alpha_{k^{\prime}}^{\prime\prime}\] \[+\sum_{k}v_{k}v_{k}^{\prime}+\cdots \tag{109}\]
where we have omitted terms linear in the Fermi operators. Then, the overlap in Eq. (104) can be easily calculated by using the coherent states \(|\xi\rangle\) such that \(\alpha_{k}^{\prime}|\xi\rangle=\xi_{k}|\xi\rangle\). By using the identity \(\int d\xi^{*}d\xi e^{-\sum_{k}\xi_{k}^{*}\xi_{k}}|\xi\rangle\langle\xi|=1\), we get
\[X_{q}(u)\sim|K|^{2}\frac{e^{(\beta+iu)\sum_{k}\epsilon_{k}/2-iu \sum_{k}\epsilon_{k}^{\prime}/2}}{Z_{1}}\bigg{[}\int d\xi^{*}d\xi\] \[\times\exp\left(-\frac{1}{2}\sum_{kk^{\prime}}G_{kk^{\prime}}\xi _{k}\xi_{k^{\prime}}+\sum_{k,k^{\prime}}(u_{k}u_{k^{\prime}}^{\prime}\xi_{k} \xi_{k}^{*}+u_{k}u_{k^{\prime}}^{\prime}\xi_{k}\xi_{k^{\prime}}\xi_{k^{\prime}}\right.\] \[\left.+v_{k}u_{k^{\prime}}^{\prime}\xi_{k}^{*}\xi_{k^{\prime}}^{*} -v_{k^{\prime}}^{\prime}\theta_{k^{\prime}}\xi_{k}\xi_{k^{\prime}}^{*}\right)- \sum_{k}\xi_{k}^{*}\xi_{k}+\frac{1}{2}\sum_{k,k^{\prime}}\tilde{G}_{kk^{\prime}} \xi_{k}^{*}\xi_{k^{\prime}}^{*}\bigg{)}\] \[+\sum_{k}v_{k}v_{k}^{\prime}\int d\xi^{*}d\xi\exp\left(-\frac{1}{ 2}\sum_{k,k^{\prime}}G_{kk^{\prime}}-\sum_{k}\xi_{k}^{*}\xi_{k}\right.\] \[\left.+\frac{1}{2}\sum_{k,k^{\prime}}\tilde{G}_{kk^{\prime}}\xi_{ k}^{*}\xi_{k^{\prime}}^{*}\right)\bigg{]} \tag{110}\]
By performing the integral, we get
\[X_{q}(u)\sim Ce^{iu\sum_{k}(\epsilon_{k}-\epsilon_{k}^{\prime})/2}\bigg{(}\sqrt{ \det(\Gamma(u))}+\sqrt{\det(\Gamma_{0}(u))}\sum_{k}v_{k}v_{k}^{\prime}\bigg{)} \tag{111}\]
where
\[\Gamma_{0}(u)=\left(\begin{array}{cc}G&-I\\ I&-\tilde{G}\end{array}\right) \tag{112}\]
and
\[\Gamma(u)=\left(\begin{array}{cc}G-M_{1}&-I-M_{2}\\ I+M_{2}^{T}&-\tilde{G}-M_{3}\end{array}\right)=\Gamma_{0}(u)+M(u) \tag{113}\]
where \(M_{1,kk^{\prime}}=u_{k}v_{k^{\prime}}^{\prime}-u_{k^{\prime}}v_{k}^{\prime}\), \(M_{2,kk^{\prime}}=u_{k}u_{k^{\prime}}^{\prime}-v_{k}^{\prime}v_{k^{\prime}}\) and \(M_{3,kk^{\prime}}=v_{k}u_{k^{\prime}}^{\prime}-v_{k^{\prime}}u_{k}^{\prime}\). The constant \(C\) can be determined by requiring that \(X_{q}(0)=1\). The exact expression of \(X_{q}(u)\) can be obtained by expanding \(\sqrt{\det(\Gamma(u))}\) at the first order in \(M(u)\), i.e.,
\[X_{q}(u) =Ce^{iu\sum_{k}(\epsilon_{k}-\epsilon_{k}^{\prime})/2}\sqrt{\det(\Gamma_{0}(u))}\bigg{(}1+\frac{1}{2}\text{Tr}\left\{\Gamma_{0}^{-1}(u)M(u)\right\}\] \[\quad+\sum_{k}v_{k}v_{k}^{\prime}\bigg{)} \tag{114}\]
Concerning the coherent Gibbs state, we get
\[|\Psi_{G}\rangle\sim\frac{e^{\frac{\beta}{4}\sum_{k}\epsilon_{k}}}{\sqrt{Z}}\left(1+\sum_{k}e^{-\frac{\beta\epsilon_{k}}{2}}\alpha_{k}^{\dagger}+\sum_{k\leq k^{\prime}}e^{-\frac{\beta(\epsilon_{k}+\epsilon_{k^{\prime}})}{2}}\alpha_{k}^{\dagger}\alpha_{k^{\prime}}^{\dagger}\right)|\tilde{0}\rangle \tag{115}\]
We define
\[u_{kq} =e^{-(\beta/2+iu(1-q))\epsilon_{k}}\tilde{g}_{kq} \tag{116}\] \[v_{kq} =e^{-(\beta/2+iu(1-q))\epsilon_{k}}\tilde{h}_{kq} \tag{117}\]
\[\int d\theta\theta_{i}\theta_{j}e^{-\frac{1}{2}\theta^{T}\Gamma_{0} \theta}\sim\frac{1}{\epsilon}\left(\sqrt{\det(\Gamma_{0}-\epsilon X_{ij})}- \sqrt{\det(\Gamma_{0})}\right) \tag{108}\]
by evaluating the limit \(\epsilon\to 0\), we get Eq. (107). Similarly, we have the identity
\[\int d\theta\theta_{i}\theta_{j}\theta_{k}\theta_{l}e^{-\frac{1}{2}\theta^{T}\Gamma_{0}\theta}=-\frac{1}{2}\mathrm{Tr}\left\{\Gamma_{0}^{-1}X_{ij}\Gamma_{0}^{-1}X_{kl}\right\}\sqrt{\det(\Gamma_{0})}\] \[+\frac{1}{4}\mathrm{Tr}\left\{\Gamma_{0}^{-1}X_{ij}\right\}\mathrm{Tr}\left\{\Gamma_{0}^{-1}X_{kl}\right\}\sqrt{\det(\Gamma_{0})} \tag{109}\]
To prove it, we consider that
\[\int d\theta\theta_{i}\theta_{j}\theta_{k}\theta_{l}e^{-\frac{1}{2}\theta^{T}\Gamma_{0}\theta}=\frac{1}{\epsilon}\Bigg{(}\int d\theta\theta_{i}\theta_{j}(1+\epsilon\theta_{k}\theta_{l})e^{-\frac{1}{2}\theta^{T}\Gamma_{0}\theta}\] \[-\int d\theta\theta_{i}\theta_{j}e^{-\frac{1}{2}\theta^{T}\Gamma_{0}\theta}\Bigg{)} \tag{110}\]
which, in the limit \(\epsilon\to 0\) can be evaluated with the help of the identity in Eq. (107). We get
\[\int d\theta\theta_{i}\theta_{j}\theta_{k}\theta_{l}e^{-\frac{1}{2}\theta^{T}\Gamma_{0}\theta}\sim\frac{1}{2\epsilon}\Bigg{(}\mathrm{Tr}\left\{\Gamma_{0}^{-1}X_{ij}\right\}\sqrt{\det(\Gamma_{0})}\] \[-\mathrm{Tr}\left\{(\Gamma_{0}-\epsilon X_{kl})^{-1}X_{ij}\right\}\sqrt{\det(\Gamma_{0}-\epsilon X_{kl})}\Bigg{)} \tag{111}\]
by evaluating the limit \(\epsilon\to 0\), we get Eq. (107). In the end, we consider the initial state in Eq. (106), which is
\[|\Psi_{2}\rangle=\frac{e^{\frac{\beta}{4}\sum_{k}\epsilon_{k}}}{\sqrt{Z_{2}}}\left(1+\sum_{k}e^{-\frac{\beta\epsilon_{k}}{2}}\alpha_{k}^{\dagger}+\frac{1}{2}\sum_{k,k^{\prime}}s_{k,k^{\prime}}e^{-\frac{\beta(\epsilon_{k}+\epsilon_{k^{\prime}})}{2}}\alpha_{k}^{\dagger}\alpha_{k^{\prime}}^{\dagger}\right)|\tilde{0}\rangle \tag{112}\]
By using the identities in Eqs. (107) and (110), we get
\[X_{q}(u) = Ce^{\mu\sum_{k}(\epsilon_{k}-\epsilon_{k}^{\prime})/2}\sqrt{ \det(\Gamma_{0})}\Bigg{(}1+\frac{1}{2}\mathrm{Tr}\left\{\Gamma_{0}^{-1}(u)M( u)\right\} \tag{113}\] \[+\sum_{k}v_{k}v_{k}^{\prime}+\frac{1}{2}\mathrm{Tr}\left\{\Gamma _{0}^{-1}(u)(V(u)-V^{\prime}(u))\right\}\] \[+\frac{1}{2}\mathrm{Tr}\left\{V_{2}-V_{2}^{\prime}\right\}-\frac{ 1}{4}\mathrm{Tr}\left\{V_{2}\right\}\mathrm{Tr}\left\{V_{2}^{\prime}\right\}\] \[-\frac{1}{4}\mathrm{Tr}\left\{V_{2}\right\}\mathrm{Tr}\left\{ \Gamma_{0}^{-1}(u)V^{\prime}(u)\right\}\] \[-\frac{1}{4}\mathrm{Tr}\left\{V_{2}^{\prime}\right\}\mathrm{Tr} \left\{\Gamma_{0}^{-1}(u)V(u)\right\}-\frac{1}{2}\mathrm{Tr}\left\{V_{3}V_{1} ^{\prime}\right\}\] \[+\frac{1}{2}\mathrm{Tr}\left\{\Gamma_{0}^{-1}(u)V^{\prime\prime}( u)\right\}+\frac{1}{2}\mathrm{Tr}\left\{\Gamma_{0}^{-1}(u)V(u)\Gamma_{0}^{-1}(u)V^{ \prime}(u)\right\}\] \[-\frac{1}{4}\mathrm{Tr}\left\{\Gamma_{0}^{-1}(u)V(u)\right\} \mathrm{Tr}\left\{\Gamma_{0}^{-1}(u)V^{\prime}(u)\right\}\Bigg{)}\]
where we have defined
\[V^{\prime\prime}(u)=\left(\begin{array}{cc}V_{2}V_{1}^{\prime}+V_{1}^{ \prime}V_{2}^{T}&V_{2}V_{2}^{\prime}-V_{1}^{\prime}V_{3}\\ V_{3}V_{1}^{\prime}-V_{2}^{\prime}V_{2}^{T}&V_{3}V_{2}^{\prime}+V_{2}^{\prime}V _{3}\end{array}\right) \tag{114}\]
If we introduce a relative phase \(\phi_{k}\), we have to multiply \(u_{kq}\) and \(v_{kq}\) by \(e^{-i\phi_{k}}\) and \(u_{kq}^{\prime}\) and \(v_{kq}^{\prime}\) by \(e^{i\phi_{k}}\).
If \(A\) and \(B\) are complex matrices, we get \(g\) and \(h\) complex. In this case the same formulas hold, with \(\tilde{g}=gg^{\prime\dagger}+hh^{\prime T}\) and \(\tilde{h}=gh^{\prime T}+hg^{\prime\dagger}\), and in \(\Gamma_{0}\) in Eq. (105) we have \(G^{*}\) instead of \(G\), while in \(u_{k}\), \(v_{k}\), \(u_{kq}\) and \(v_{kq}\) we have \(\tilde{g}^{*}\) and \(\tilde{h}^{*}\) instead of \(\tilde{g}\) and \(\tilde{h}\).
## Appendix F Measuring the characteristic function
The characteristic function can be measured as observed in Ref. [13]. Here we note that the detector can be a qubit in the initial state \(\rho_{D}(t_{i})\) with Hamiltonian \(H_{D}=\omega|e\rangle\langle e|\). We
consider the interactions with the system described by \(H_{I}=-(\delta_{e}|e\rangle\langle e|+\delta_{g}|g\rangle\langle g|)H(\lambda_{0})\) and \(H^{\prime}_{I}=-(\delta^{\prime}_{e}|e\rangle\langle e|+\delta^{\prime}_{g}|g\rangle\langle g|)H(\lambda_{\tau})\), where \(|g\rangle\) is the ground-state of the qubit and \(|e\rangle\) is the excited state. The total system is in the initial state \(\rho_{D}(t_{i})\otimes\rho_{0}\) at the initial time \(t_{i}=-t_{D}\); in the time interval \((-t_{D},0)\) the time-evolution is generated by the total Hamiltonian \(H_{tot}=H(\lambda_{0})+H_{D}+H_{I}\). Then, in the time interval \((0,\tau)\) the qubit and the system do not interact and the quench is performed. Finally, in the time interval \((\tau,\tau+t^{\prime}_{D})\) the time-evolution is generated by the total Hamiltonian \(H^{\prime}_{tot}=H(\lambda_{\tau})+H_{D}+H^{\prime}_{I}\). The coherence of the qubit at the final time \(t_{f}=\tau+t^{\prime}_{D}\) reads
\[\langle e|\rho_{D}(t_{f})|g\rangle=\langle e|\rho_{D}(t_{i})|g\rangle e^{-i\omega(t_{f}-t_{i})}\mathrm{Tr}\bigg{\{}e^{-i(1-\delta_{e})t_{D}H(\lambda_{0})}\rho_{0}\] \[\times e^{i(1-\delta_{g})t_{D}H(\lambda_{0})}U^{\dagger}_{\tau,0}e^{i(\delta^{\prime}_{e}-\delta^{\prime}_{g})t^{\prime}_{D}H(\lambda_{\tau})}U_{\tau,0}\bigg{\}} \tag{11}\]
from which we can determine \(X_{q}(u)\).
|
2308.04165
|
Univalent Functions involving Generalized Hypergeometric Series
|
The main objective of the present article is to make interconnection between
the Generalized Hypergeometric series and some subclasses of normalized analytic
functions with positive (Taylor's) coefficients in the open unit disc
$\mathbb{D} =\{z:\, |z|<1\}$.
|
K. Chandrasekran, D. J. Prabhakaran
|
2023-08-08T09:56:30Z
|
http://arxiv.org/abs/2308.04165v1
|
# Univalent functions involving generalized hypergeometric series
###### Abstract.
The main objective of the present article is to make an interconnection between the Generalized Hypergeometric series and some subclasses of normalized analytic functions with positive (Taylor's) coefficients in the open unit disc \(\mathbb{D}=\{z:\,|z|<1\}\).
Key words and phrases:Generalized Hypergeometric Series, Univalent Functions, Starlike Functions, Convex Functions and Alexander Integral Operator.
Final version as of **08-08-2023**.
For the function \(f(z)\) given by (1) in \(\mathcal{A}\) and \(g\in\mathcal{A}\) given by \(g(z)=z+\sum_{n=2}^{\infty}\,b_{n}\,z^{n}\) (both \(f\) and \(g\) are analytic functions in \(\mathbb{D}\)), the _convolution (Hadamard product)_ of \(f\) and \(g\) is defined by
\[f(z)*g(z)=z+\sum_{n=2}^{\infty}\,a_{n}\,b_{n}\,z^{n},z\in\mathbb{D}.\]
The subclass \(\mathcal{V}\) of \(\mathcal{A}\) consists of functions of the form
\[f(z)=z+\sum_{n=2}^{\infty}\,a_{n}\,z^{n},\,z\in\mathbb{D},\,\,\mbox{with}\,\,a _{n}\geq 0,\,n\in\mathbb{N},\,n\geq 2.\]
In [11], Uralegaddi et al. introduced the following two classes of \(\mathcal{S}\), which are stated as:
The class \(\mathcal{M}(\alpha)\) of _starlike functions of order \(\alpha\)_, with \(1<\alpha\leq\frac{4}{3}\), is defined by
\[\mathcal{M}(\alpha)=\left\{f\in\mathcal{A}:\Re\left(\frac{zf^{\prime}(z)}{f(z )}\right)<\alpha,\,z\in\mathbb{D}\right\}\]
The class \(\mathcal{N}(\alpha)\) of _convex functions of order \(\alpha\)_, with \(1<\alpha\leq\frac{4}{3}\), is defined by
\[\mathcal{N}(\alpha)=\left\{f\in\mathcal{A}:\Re\left(1+\frac{zf^{\prime\prime} (z)}{f^{\prime}(z)}\right)<\alpha,\,z\in\mathbb{D}\right\}=\,\,\left\{f\in \mathcal{A}:zf^{\prime}(z)\in\mathcal{M}(\alpha)\right\}\]
In this work, we consider the two subclasses \(\mathcal{M}(\lambda,\alpha)\) and \(\mathcal{N}(\lambda,\alpha)\) of \(\mathcal{S}\) and discuss some inclusion properties based on the generalized hypergeometric function. These two subclasses were introduced by Bulboaca and Murugusundaramoorthy [3] and are stated as follows:
**Definition 1.1**.: [3] _For some \(\alpha\,\left(1<\alpha\leq\frac{4}{3}\right)\) and \(\lambda\,\left(0\leq\lambda<1\right)\), the subclass \(\mathcal{M}(\lambda,\alpha)\) of \(\mathcal{S}\), consisting of functions of the form (1), is defined by_
\[\mathcal{M}(\lambda,\alpha) = \left\{f\in\mathcal{A}:\Re\left(\frac{zf^{\prime}(z)}{(1-\lambda )f(z)+\lambda z\,f^{\prime}(z)}\right)<\alpha,\,z\in\mathbb{D}\right\}\]
**Definition 1.2**.: [3] _For some \(\alpha\,\left(1<\alpha\leq\frac{4}{3}\right)\) and \(\lambda\,\left(0\leq\lambda<1\right)\), the subclass \(\mathcal{N}(\lambda,\alpha)\) of \(\mathcal{S}\), consisting of functions of the form (1), is defined by_
\[\mathcal{N}(\lambda,\alpha) = \left\{f\in\mathcal{A}:\Re\left(\frac{f^{\prime}(z)+zf^{\prime \prime}(z)}{f^{\prime}(z)+\lambda z\,f^{\prime\prime}(z)}\right)<\alpha,\,z\in \mathbb{D}\right\}\]
Also, let \(\mathcal{M}^{*}(\lambda,\alpha)\equiv\mathcal{M}(\lambda,\alpha)\cap\mathcal{V}\) and \(\mathcal{N}^{*}(\lambda,\alpha)\equiv\mathcal{N}(\lambda,\alpha)\cap\mathcal{V}\).
**Definition 1.3**.: [8] _A function \(f\in\mathcal{A}\) is said to be in the class \(\mathcal{R}^{\tau}(A,B)\), with \(\tau\in\mathbb{C}\backslash\{0\}\) and \(-1\leq B\leq A\leq 1\), if it satisfies the inequality_
\[\left|\frac{f^{\prime}(z)-1}{(A-B)\tau-B[f^{\prime}(z)-1]}\right|<1,z\in \mathbb{D}\]
Dixit and Pal [8] introduced the class \(\mathcal{R}^{\tau}(A,B)\) stated in Definition 1.3. If we substitute \(\tau=1\), \(A=\beta\) and \(B=-\beta\), \((0<\beta\leq 1)\) in Definition 1.3, then we obtain the class of functions \(f\in\mathcal{A}\) satisfying the inequality
\[\left|\frac{f^{\prime}(z)-1}{f^{\prime}(z)+1}\right|<\beta,\,z\in\mathbb{D}\]
which was studied by Padmanabhan [10] and others subsequently.
**Lemma 2**.: [9] _For some \(\alpha\,(1<\alpha\leq\frac{4}{3})\) and \(\lambda\,(0\leq\lambda<1)\), if \(f\in\mathcal{V}\), then \(f\in\mathcal{M}^{*}(\lambda,\alpha)\) if and only if_
\[\sum_{n=2}^{\infty}\,[n-(1+n\lambda-\lambda)\alpha]a_{n} \leq \alpha-1. \tag{3}\]
**Lemma 4**.: [9] _For some \(\alpha\,(1<\alpha\leq\frac{4}{3})\) and \(\lambda\,(0\leq\lambda<1)\), if \(f\in\mathcal{V}\), then \(f\in\mathcal{N}^{*}(\lambda,\alpha)\) if and only if_
\[\sum_{n=2}^{\infty}\,n\,[n-(1+n\lambda-\lambda)\alpha]a_{n} \leq \alpha-1. \tag{5}\]
**Definition 1.4**.: [1] _The generalized hypergeometric series is defined by_
\[{}_{p}F_{q}(z)=_{p}F_{q}\left(\begin{array}{ccc}a_{1},&a_{2},\cdots&a_{p}\\ b_{1},&b_{2},\cdots&b_{q}\end{array};z\right)=\sum_{n=0}^{\infty}\left(\frac{( a_{1})_{n}\cdots(a_{p})_{n}}{(b_{1})_{n}\cdots(b_{q})_{n}(1)_{n}}\right)z^{n}. \tag{6}\]
This series converges absolutely for all \(z\) if \(p\leq q\) and for \(|z|<1\) if \(p=q+1\), and it diverges for all \(z\neq 0\) if \(p>q+1\). For \(|z|=1\) and \(p=q+1\), the series \({}_{p}F_{q}\left(z\right)\) converges absolutely if \(Re(\sum b_{i}-\sum a_{i})>0.\) The series converges conditionally if \(z=e^{i\theta}\neq 1\) and \(-1<Re(\sum b_{i}-\sum a_{i})\leq 0\) and diverges if \(Re(\sum b_{i}-\sum a_{i})\leq-1.\)
We consider the linear operator \(\mathcal{I}_{q}^{p}(f):\mathcal{A}\rightarrow\mathcal{A}\) defined by the convolution product
\[\mathcal{I}_{q}^{p}(f)(z) = z\,_{p}F_{q}\left(z\right)*f(z)=z+\sum_{n=2}^{\infty}A_{n}\,z^{ n},\,z\in\mathbb{D}\]
with \(A_{1}=1\) and for \(n>1\),
\[A_{n} = \left(\frac{(a_{1})_{n-1}\cdots(a_{p})_{n-1}}{(b_{1})_{n-1}\cdots( b_{q})_{n-1}(1)_{n-1}}\right)\,a_{n}.\]
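As a quick numerical illustration of how such coefficients enter the coefficient conditions of Lemmas 2 and 4, the following Python sketch computes the Taylor coefficients of \(z\,{}_{2}F_{1}(a,b;c;z)\) iteratively and evaluates a truncated version of the left-hand side of condition (3); the parameter values, the truncation order, and the choice of \({}_{2}F_{1}\) are ours and purely illustrative, not taken from the paper.

```python
def series_coeffs(a_list, b_list, n_max):
    """Taylor coefficients c_0..c_{n_max} of pFq, computed iteratively via
    c_m = c_{m-1} * prod_i(a_i + m - 1) / (prod_j(b_j + m - 1) * m), c_0 = 1."""
    coeffs = [1.0]
    for m in range(1, n_max + 1):
        num, den = 1.0, float(m)
        for a in a_list:
            num *= a + m - 1
        for b in b_list:
            den *= b + m - 1
        coeffs.append(coeffs[-1] * num / den)
    return coeffs

# Illustrative parameters: check the truncated condition (3) of Lemma 2 for
# f(z) = z * 2F1(a, b; c; z), whose n-th Taylor coefficient is coeff[n - 1].
a, b, c = 0.2, 0.3, 5.0
lam, alpha = 0.25, 1.2
N = 200                                              # truncation order
coeff = series_coeffs([a, b], [c], N)
lhs = sum((n - (1 + n * lam - lam) * alpha) * coeff[n - 1] for n in range(2, N + 1))
print(lhs, "<=", alpha - 1, "->", lhs <= alpha - 1)
```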
Motivated by the results on connections between various subclasses of analytic univalent functions obtained by using hypergeometric functions [4, 5, 6, 7, 11] and Poisson distributions [3], we obtain the necessary and sufficient conditions for the hypergeometric function \({}_{p}F_{q}\left(z\right)\) to be in the classes \(\mathcal{M}^{*}(\lambda,\alpha)\) and \(\mathcal{N}^{*}(\lambda,\alpha)\), and information regarding the image of functions belonging to \(\mathcal{R}^{\tau}(A,B)\) under the convolution operator.
We now outline the work in detail. In Section 2, we obtain the necessary and sufficient conditions on the parameters for the function \(F(z)\) to be in the classes \(\mathcal{M}^{*}(\lambda,\alpha)\) and \(\mathcal{N}^{*}(\lambda,\alpha)\), together with information regarding the image of functions \(F(z)\) belonging to \(\mathcal{R}^{\tau}(A,B)\) under the convolution operator in the open unit disc \(\mathbb{D}\).
In Section 3, we find the necessary and sufficient conditions for the function \(G(z)\) to be in the classes \(\mathcal{M}^{*}(\lambda,\alpha)\) and \(\mathcal{N}^{*}(\lambda,\alpha)\), together with information regarding the image of functions \(G(z)\) belonging to \(\mathcal{R}^{\tau}(A,B)\) under the convolution operator in the open unit disc \(\mathbb{D}\).
At the end of each section, we list only the books and research articles that are used directly to prove our main results in the present work.
|
2302.06729
|
STREET: A Multi-Task Structured Reasoning and Explanation Benchmark
|
We introduce STREET, a unified multi-task and multi-domain natural language
reasoning and explanation benchmark. Unlike most existing question-answering
(QA) datasets, we expect models to not only answer questions, but also produce
step-by-step structured explanations describing how premises in the question
are used to produce intermediate conclusions that can prove the correctness of
a certain answer. We perform extensive evaluation with popular language models
such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models
still lag behind human performance when producing such structured reasoning
steps. We believe this work will provide a way for the community to better
train and test systems on multi-step reasoning and explanations in natural
language.
|
Danilo Ribeiro, Shen Wang, Xiaofei Ma, Henry Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica Ramos, William Wang, Zhiheng Huang, George Karypis, Bing Xiang, Dan Roth
|
2023-02-13T22:34:02Z
|
http://arxiv.org/abs/2302.06729v1
|
# Street: A Multi-Task Structured Reasoning and Explanation Benchmark
###### Abstract
We introduce street, a unified multi-task and multi-domain natural language reasoning and explanation benchmark. Unlike most existing question-answering (QA) datasets, we expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer. We perform extensive evaluation with popular language models such as few-shot prompting GPT-3 and fine-tuned T5. We find that these models still lag behind human performance when producing such structured reasoning steps. We believe this work will provide a way for the community to better train and test systems on multi-step reasoning and explanations in natural language.
## 1 Introduction
A long-term pursuit in Artificial Intelligence is to endow machines with the ability to reason and manipulate premises to reach conclusions and perform tasks. Initially, most reasoning systems performed multi-step operations over symbolic or probabilistic knowledge (Newell & Simon, 1956; McCarthy et al., 1960; Siler & Buckley, 2005), and even though these systems were able to perform complex tasks (Vernon et al., 2007; Metaxiotis et al., 2002; Ribeiro & Forbus, 2021), there were still shortcomings when it comes to encoding such knowledge, learning reasoning rules and dealing with ambiguity (Bell, 1985; Ribeiro et al., 2019). Some recent works in the field of _question-answering_ (QA) have demonstrated that language models can bypass some of these issues and learn to reason directly over natural language (Clark et al., 2020), allowing for more flexible and adaptable reasoning capabilities. Another advantage of performing multi-step reasoning over natural language is that it allows for more inspectable outputs, improving the explainability of models that are otherwise regarded as black box systems (Jain & Wallace, 2019; Rajani et al., 2019; Danilevsky et al., 2020). Despite the recent progress, we notice that there is still a gap in resources for training and evaluating general reasoning capabilities over natural language.
To facilitate research in this direction we propose the _STructured REasoning and Explanation Multi-Task_ benchmark (or street for short), containing a collection of tasks in various domains including quantitative reasoning (math questions), analytical reasoning (logic puzzle questions), and deductive reasoning (common-sense and science questions). We build upon existing QA datasets by adding multi-premise, multi-step, structured explanations in the form of _reasoning graphs_, as depicted in Figure 1. The street benchmark contains 35.8k questions, each of which is accompanied by a reasoning graph, either created by expert annotators or programmatically. When combined, all reasoning graphs contain a total of 151.1k reasoning steps (or textual entailments), of which 14.7k were created by our expert annotators. We carefully selected the tasks such that most of the relevant knowledge required to answer the questions is contained within the question or context themselves. Therefore, we focus on the reasoning problem, with a greater number of reasoning steps (an average of 7.8 reasoning steps per answer) and a more complex reasoning structure than previous datasets. These properties differentiate our work from single-step reasoning such as Natural Language Inference
(NLI) (Bowman et al., 2015; Williams et al., 2018; Zellers et al., 2018) or multi-hop QA (Yang et al., 2018; Chen et al., 2021) that require specific factual knowledge retrieval.
In our proposed evaluation, the models are expected to not only answer the questions, but also generate the reasoning graphs (including the textual intermediate steps) that explains their output answer. With that in mind, we design a few evaluation metrics to verify if the generated reasoning graphs match the expected golden data. We perform extensive evaluation using some popular language models of various sizes, namely T5 (Raffel et al., 2020) and GPT-3 (Brown et al., 2020), either fine-tuning on training data or using few-shot prompting. Our experiments show that even though these models can achieve high solving rates on many of the original QA datasets, they still struggle to generate coherent and relevant reasoning graphs and appear to be far below human performance.
Our main contributions are as follows: (1) We define reasoning graphs, which are structured chains of reasoning in natural language that provide explainability to the output of models on QA tasks. (2) We propose street, a multi-task and multi-domain benchmark containing questions requiring diverse types of reasoning skills. The answers in the dataset contain annotated or generated reasoning graphs. (3) We evaluate the performance of LMs such as fine-tuned T5 and few-shot prompting with GPT-3 on our proposed task. Our results suggest there is still room for improving language models for generating complex multi-step reasoning explanations.
## 2 Task and Data
### Task definition
In the standard definition, a question-answering task \(\mathbf{T}=(C,Q,O,A,R)\) has the following components: an optional context \(C\) such as a passage or problem description; a question \(Q\) that might reference the context; answer options \(O=(o_{1},o_{2},\dots,o_{K})\) in the case of \(K\)-way multiple choice questions; an expected answer \(A\) (where \(A\in O\), if options are present). Some MRC tasks (Ling et al., 2017; Camburu et al., 2018; Rajani et al., 2019; Cobbe et al., 2021) also provide rationales \(R\) as free-form explanations of the answer \(A\).
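To make this tuple concrete, a minimal container can be written as follows; the field names are our own illustration and do not correspond to an official STREET data schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QATask:
    """Minimal container for the QA tuple T = (C, Q, O, A, R) described above."""
    context: Optional[str]                               # C: optional passage / problem description
    question: str = ""                                    # Q
    options: List[str] = field(default_factory=list)      # O: answer options (may be empty)
    answer: Optional[str] = None                          # A: expected answer (one of the options, if any)
    rationale: Optional[str] = None                       # R: optional free-form explanation

example = QATask(
    context="Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May.",
    question="How many clips did Natalia sell altogether in April and May?",
    answer="72",
)
```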
Figure 1: Two examples from our proposed STREET benchmark. The questions are derived from the Grade School Math (GSM8K) and Analytical Reasoning - Law School Admission Test (AR-LSAT) tasks. The QA components (e.g. question, context, and answer options) are broken into _textual logical units_, or TLUs. These TLUs are connected to form a _reasoning graph_. Our proposed benchmark builds upon existing QA datasets by adding structured reasoning explanations that show how one can derive the answer to a given question.
To generate a more fine-grained explanation, we modify the above formulation so that the data also contains a structured, step-by-step explanation of the answer, as depicted in Figure 1. To this end, we define **textual logical units** (TLU), which are essentially sequences of tokens from elements in \(\mathbf{T}\) that define premises that will possibly be referenced in the reasoning steps. More formally, a TLU for a QA component in \(T\in\mathbf{T}\) is a list of the sequence of tokens in \(T\). For instance, given the tokens \(T=(t_{1},t_{2},\ldots,t_{|T|})\), the TLU of \(T\) is defined as the list of spans \(L_{T}=((l_{1},l_{2}),(l_{2}+1,l_{3}),\ldots,(l_{|L_{T}|-1}+1,l_{|L_{T}|}))\) where \(l_{i}\in\{x\mid 1\leq x\leq|T|\}\) and \(l_{i}>l_{j}\) for \(i>j\). Each pair \((l_{i},l_{j})\in L_{T}\) represents the sequence of tokens \((t_{(l_{i})},t_{(l_{i}+1)},\ldots,t_{(l_{j})})\). The TLUs can be extracted automatically from the text by using a simple algorithm, e.g., breaking down paragraphs by punctuation marks. The algorithm we used to create the datasets can be found in Appendix A.5. Note that the TLUs for the rationale \(L_{R}\) can be thought of as a sequence of step-by-step explanations.
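A simplified stand-in for such an extraction step (the actual algorithm is given in Appendix A.5; this sketch only breaks on punctuation and is meant purely as an illustration) could look like:

```python
import re

def extract_tlus(text):
    """Split a QA component into TLUs as inclusive (start, end) token spans,
    breaking after sentence-level punctuation."""
    tokens = text.split()
    spans, start = [], 0
    for i, tok in enumerate(tokens):
        if re.search(r"[.,;?!]$", tok):          # break after punctuation
            spans.append((start, i))
            start = i + 1
    if start < len(tokens):                       # trailing unit without punctuation
        spans.append((start, len(tokens) - 1))
    return tokens, spans

tokens, spans = extract_tlus(
    "Natalia sold clips to 48 of her friends in April, and then she sold half "
    "as many clips in May. How many clips did Natalia sell altogether?")
for k, (i, j) in enumerate(spans, 1):
    print(f"({k})", " ".join(tokens[i:j + 1]))
```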
Given the TLUs, we also define a structured explanation, which we call **reasoning graph**. Each element in \(L_{R}\) (referred here as reasoning steps or intermediate conclusions) will be connected to a set of other TLUs (or premises) that contain sufficient information supporting the reasoning step. The reasoning graph can be defined as a set of vertices and edges \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where the nodes are TLUs such that \(\mathcal{V}\subseteq(L_{C}\cup L_{Q}\cup L_{O}\cup L_{A}\cup L_{R})\) and the edges \(\mathcal{E}\subseteq\mathcal{V}\times(L_{O}\cup L_{A}\cup L_{R})\) are pairs of nodes. The starting node of an edge is a premise of a reasoning step, while the ending node is the output of a reasoning step, i.e., an intermediate conclusion or an answer. Note that in this case the explanation graph \(\mathcal{G}\) is **directed** (edges go from premises to conclusions) and **acyclic** (each conclusion should only be generated once). Our reasoning graph formulation shares some similarities to Entailment Trees from Dalvi et al. (2021). However, our benchmark does not require a pre-assigned corpus of textual facts or a hypothesis (which must be included with the data). Furthermore, reasoning graphs allow for other QA elements (e.g., context, answer options, and expected answer) and represents the reasoning using the less restrictive directed acyclic graphs (a tree data structure can't easily be used to represent the examples from Figure 1).
### Data Source and Annotation
With the goal of testing complex reasoning capabilities, we build upon existing QA datasets in which solutions require multiple reasoning steps. Since our focus is testing reasoning capabilities, we disregard datasets that require domain knowledge to solve the problem. Instead, we focus on the ones containing most of the information within the context, question, and answer options. We categorize the reasoning tasks according to their level of existing structured reasoning steps, which we describe below.
The first category, comprised of the science question dataset AI2 Reasoning Challenge (ARC) (Clark et al., 2018), already contains annotated structured reasoning steps provided by EntailmentBank (Dalvi et al., 2021). EntailmentBank comes with an external source of knowledge (Jansen et al., 2018) from which premises could be retrieved to generate explanations. Since the retrieval of premises is out of the scope of this work, we directly add the gold premises to the context \(C\) of the QA task1.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Task** & **Task** & **\# Original** & **\# Used** & **\# Reasoning** & **Answer** \\
**Name** & **Domain** & **Questions** & **Questions** & **Steps** & **Type** \\ \hline ARC & Science & 7,787 & 1,840 & 5,881 & 4-Way MC \\ SCONE & Processes & 14,574 & 14,574 & 130,482 & State Pred. \\ GSM8K & Math & 8,792 & 1,030 & 4,666 & Number \\ AQUA-RAT & Math & 101,449 & 1,152 & 7,179 & 5-Way MC \\ AR-LSAT & Logic & 2,046 & 500 & 2,885 & 5-Way MC \\ \hline TOTAL & — & 134,648 & 19,096 & 151,093 & — \\ \hline \hline \end{tabular}
\end{table}
Table 1: The different tasks used to create the proposed benchmark. In the answer types, “\(K\)-Way MC” stands for multiple choice answer with \(K\) options.
The second category uses the Sequential Context-Dependent Execution dataset (SCONE) (Long et al., 2016). The questions in SCONE describe a sequence of actions that modify a given toy world (e.g., list of objects and their relative position), and the expected answer is the final world state. We extract the reasoning steps programmatically as this dataset also provide the intermediate world states for each question.
The third category of tasks, namely GSM8K (Cobbe et al., 2021) and AQUA-RAT (Ling et al., 2017) contain **unstructured** rationales (in natural language) showing the chain of reasoning used to solve the given questions. In this category, we further annotate the datasets. First, we split the context, question, answer options, and rationales into TLUs, assigning a number ID to each of them. Afterwards, we ask human annotators to create the structure of the reasoning steps, assigning which premises were used in each entailment step. Note that some entailment steps might not have any premises (e.g., when stating a known fact as in "one hour has 60 minutes").
Finally, the last category is comprised of the AR-LSAT dataset (Zhong et al., 2021), which is a relatively complex reasoning task (transformer-based models are shown to struggle in this task) and does not come with any rationale or explanation. Therefore, we annotate the rationales and reasoning structure from scratch. We ask expert annotators to solve the questions given the context and the answer. While writing down the step-by-step explanations, they are also asked to assign which premises are used to reach each intermediate conclusion.
### Dataset Details
A summary of all datasets in our proposed benchmark can be found in Table 1. Note that not all questions in the original datasets were used due to the significant amount of time required for annotation. Examples of the final data for each dataset can be found in Appendix A.1.
Annotation Details: Each dataset is pre-processed and QA components are broken down into TLUs. For all datasets except ARC and SCONE, we ask human annotators to further annotate data points by first creating multi-step rationales (this is skipped if the dataset already contains rationales), and then connecting each rationale step to the premises that support that conclusion (in the user interface the annotators select a set of numbers for each rationale step). Note that human annotators are given an unlimited amount of time to complete each task, and they are mostly experts with undergraduate or graduate level education, as opposed to randomly selected online workers.
For quality control, we performed two passes for each reasoning step, using a third pass to break ties when needed. As an indicator of annotation quality, we compute the annotation agreement using Fleiss' kappa \(\kappa\) (Fleiss, 1971). Each directed edge in the reasoning graph is regarded as a binary question (edge should be present or not). Finally, the first two annotation passes are used to compute \(\kappa=0.79\), indicating "substantial agreement" among annotators. With a total of 14,730 reasoning steps annotated (for GSM8K, AQUA-RAT, and AR-LSAT), we estimate a total of 1,467 (paid) work hours. Further annotation details can be found in Appendix A.2.
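For reference, Fleiss' kappa on such binary edge judgments can be computed as in the following sketch; the ratings below are toy values, not the paper's annotation data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. `counts` is an (items x categories) array whose entry
    [i, j] is how many raters put item i into category j (same number of
    raters per item)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                       # raters per item
    p_j = counts.sum(axis=0) / counts.sum()         # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 6 candidate edges, 2 annotation passes,
# categories = [edge present, edge absent].
ratings = [[2, 0], [2, 0], [0, 2], [1, 1], [2, 0], [0, 2]]
print(fleiss_kappa(ratings))
```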
Data Statistics: We analyze the data of all combined datasets to obtain some insights into the scale and reasoning complexity. Figure 2 shows the distribution of the "number of reasoning steps" among the data points in each annotated dataset. Note that most of the tasks contain a larger number of
Figure 2: A histogram containing the distribution of data points and the number of reasoning steps for each annotated dataset (training, development, and testing split combined), truncated to a maximum of 15 steps. The distribution varies among datasets, with an average of 7.8 steps across all data points.
reasoning steps compared to previous multi-hop QA datasets (Jhamtani and Clark, 2020; Yang et al., 2018; Geva et al., 2021; Chen et al., 2021). For the most part, multi-hop QA questions only contain up to two "hops" (or reasoning steps), while street has an average of 7.8 steps per question, with 26.4% of the questions containing more than 10 reasoning steps. The number of incoming edges for each node (a.k.a "in-degree" or "valency", which is the number of directed edges with such node as destination) in the reasoning graph is usually between one and three, with questions containing nodes with more than five incoming edges. Further data statistics and dataset analysis are available in Appendices A.3 and A.4.
## 3 Baseline Models
We measure the performance of various models on our multi-task reasoning and explanation benchmark. We show results separately for each of the tasks, where we mainly evaluate (1) the standard QA accuracy (i.e., can the model predict the correct answer?) and (2) the models' ability to generate the reasoning graph. The evaluation metrics will be detailed in section 4.1.
To this end, we use two approaches to solve the structured reasoning task. The first approach is fully supervised, where we fine-tune a T5 model (Raffel et al., 2020), which is similar to other work on reasoning datasets (Tafjord et al., 2021; Dalvi et al., 2021; Ribeiro and Forbus, 2021). The second approach uses the much larger GPT-3 language model (Brown et al., 2020). Instead of fine-tuning GPT-3, we use few-shot prompting. These large language models have been shown to have strong step-by-step reasoning generation capabilities (Wei et al., 2022; Wang et al., 2022) even if just provided with a handful of examples.
### Reasoning Graph Encoding
Following prior work with structured input and output (Chen et al., 2020; Tafjord et al., 2021; Dalvi et al., 2021; Neves Ribeiro et al., 2022), we linearize the reasoning graph such that it can be generated by the language model as a sequence of tokens. First, each one of the candidate premises (i.e., context, question, and answer options) are assigned an id token. Then we sort the reasoning graph's steps according to a valid topological order (i.e., all premises must be part of the linearized string before adding a reasoning step node). For tasks where answer types are multiple-choice or number, the last node will contain the text with the value of the predicted answer, such as "The answer is A)" or "The answer is 15". The text encoding for the GSM8K example in Figure 1 can be seen below:
$question$ = (1) Natalia sold clips to 48 of her friends in April, and then (2) she sold half as many clips in May. (3) How many clips did Natalia sell altogether in April and May?
$proof$ = (1) & (2) -> (4): Natalia sold 48/2 = 24 clips in May; (1) & (3) & (4) -> (5): Natalia sold 48+24 = 72 clips altogether in April and May; (3) & (5) -> (6): The answer is 72;
The SCONE task is a special case where we do not expect the generative models to output the state of every tracked object in the answer node. Instead, the answer is extracted from the intermediate nodes of the reasoning graph (examples are shown in Appendix A.1)
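The linearization itself amounts to a topological sort over the reasoning graph followed by string formatting; a minimal re-implementation (ours, not the released STREET code) is sketched below.

```python
from graphlib import TopologicalSorter

# Node contents and, for each reasoning step, the ids of its premises.
text = {
    1: "Natalia sold clips to 48 of her friends in April, and then",
    2: "she sold half as many clips in May.",
    3: "How many clips did Natalia sell altogether in April and May?",
    4: "Natalia sold 48/2 = 24 clips in May",
    5: "Natalia sold 48+24 = 72 clips altogether in April and May",
    6: "The answer is 72",
}
premises = {4: [1, 2], 5: [1, 3, 4], 6: [3, 5]}

# Topological order guarantees premises appear before their conclusions.
order = TopologicalSorter({n: set(p) for n, p in premises.items()}).static_order()
steps = [f"{' & '.join(f'({p})' for p in premises[n])} -> ({n}): {text[n]}"
         for n in order if n in premises]
print("$proof$ = " + "; ".join(steps) + ";")
```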
### Supervised Training
For full supervision, we fine-tune the T5-large model (770 million parameters) on the training data for each task separately. The model is fine-tuned for up to 30 epochs, and we select the checkpoint with the highest answer accuracy on the development data at the end of each training epoch. The training is done using a machine with four NVIDIA Tesla V100-SXM2 GPUs, and the Hugging Face2 pre-trained T5-model distribution. Further implementation details are available in Appendix C. During inference, we use beam search with a beam size of 5 to generate the reasoning graph and the answer for a given question.
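For illustration, inference with such a fine-tuned checkpoint reduces to standard sequence-to-sequence generation; the checkpoint path below is a placeholder, and details may differ from the exact scripts used for the paper.

```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
# Hypothetical path to a checkpoint fine-tuned on the linearized reasoning graphs.
model = T5ForConditionalGeneration.from_pretrained("path/to/finetuned-checkpoint")

question = ("$question$ = (1) Natalia sold clips to 48 of her friends in April, and then "
            "(2) she sold half as many clips in May. "
            "(3) How many clips did Natalia sell altogether in April and May?")
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=5, max_length=512)  # beam size 5
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))     # linearized reasoning graph
```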
### Few-shot Prompting
For few-shot prompting we use GPT-3 (Brown et al., 2020) by accessing the OpenAI's API 3. The API provides access to a few model variants. For our experiments we use the largest advertised model, namely text-davinci-002 (175B parameters). During inference, we select up to 5 examples (depending on the tasks and models, fewer prompt examples might be provided due to the encoder token size limit) as prompts for the model, following the encoding method from Section 3.1, except we remove the expected reasoning graph from the target question. During generation, we use greedy decoding and do not set any maximum or minimum output size, expecting the model to predict the end of the structured output.
Footnote 3: [https://openai.com/api/](https://openai.com/api/)
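The prompt assembly can be sketched as follows; this is our illustration, and the exact template and in-context examples used with text-davinci-002 are not reproduced verbatim in the paper.

```python
def build_prompt(train_examples, target_question, max_shots=5):
    """Concatenate up to `max_shots` linearized (question, proof) pairs and
    append the target question with an empty proof for the model to complete."""
    shots = []
    for ex in train_examples[:max_shots]:
        shots.append(f"{ex['question']}\n$proof$ = {ex['proof']}\n")
    shots.append(f"{target_question}\n$proof$ =")
    return "\n".join(shots)

# Toy in-context example (not from the benchmark data).
train_examples = [
    {"question": "$question$ = (1) A book costs 4 dollars. (2) How much do 3 books cost?",
     "proof": "(1) & (2) -> (3): 3 books cost 3*4 = 12 dollars; (2) & (3) -> (4): The answer is 12;"},
]
target = "$question$ = (1) A pen costs 2 dollars. (2) How much do 5 pens cost?"
print(build_prompt(train_examples, target))
```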
## 4 Experiments
### Evaluation Metrics
We specify the 3 main categories of evaluation metrics described below to evaluate the answer to the questions and the generated reasoning graph.
Answer Accuracy: This metric measures the ability of generative models to predict the correct answer to a question. Exact match is used to evaluate the models on tasks with _multi-choice_ or _numerical_ answers. For the tasks with state prediction (i.e., SCONE), we use the combined state for each object as the expected answer. The answer accuracy will be an upper bound for the other metrics since any generated reasoning graph with an incorrect answer is also labeled as incorrect.
Reasoning Graph Accuracy:This metric compares the predicted and golden reasoning graphs in terms of both the graph structure and the content of the intermediate conclusion nodes. The comparison between the predicted graph \(\mathcal{G}_{p}=(\mathcal{V}_{p},\mathcal{E}_{p})\) and golden graph \(\mathcal{G}_{g}=(\mathcal{V}_{g},\mathcal{E}_{g})\) starts with aligning the nodes in \(\mathcal{V}_{p}\) and \(\mathcal{V}_{g}\). In this matching, we use the premises as anchors, and the reasoning step nodes are matched according to their ancestors in a topological ordering. Given two matched reasoning step nodes \(v_{p}\in\mathcal{V}_{p}\) and \(v_{g}\in\mathcal{V}_{g}\), we use textual similarity function \(\sigma(text(v_{p}),text(v_{g}))\) to test if two reasoning step nodes are equivalent. The textual similarity function varies depending on the QA Task. More details on the matching algorithm and the different text similarity functions used are available in Appendix D. Note that this is a strict metric, and small deviations from the golden reasoning graph will render the predicted graph incorrect.
Reasoning Graph Similarity:The reasoning graph similarity \(sim(\mathcal{G}_{p},\mathcal{G}_{g})\) is a "softer" metric that compares the predicted and golden reasoning graphs using the graph edit distance function \(\delta(\mathcal{G}_{p},\mathcal{G}_{g})\). The function \(\delta\) uses _insertion_, _deletion_ and _substitution_ as elementary graph edit operators over nodes and edges. The text similarity function \(\sigma\) is used to test if two nodes match. The cost of any edit operation is \(1\). However, if the generated answer is incorrect, the similarity is set to \(0\) (i.e., the edit cost for the "answer node" \(L_{A}\) is \(\infty\)). The Graph Similarity function is normalized (the output is in the range \([0,1]\)) and can be computed as:
\[sim(\mathcal{G}_{p},\mathcal{G}_{g})=1-\left[\frac{\delta(\mathcal{G}_{p}, \mathcal{G}_{g})}{max(|N_{p}|+|E_{p}|,|N_{g}|+|E_{g}|)}\right] \tag{1}\]
In general, computing the graph edit distance can be computationally expensive as the problem is _NP-complete_(Abu-Aisheh et al., 2015). For this reason, we compute an approximation of this value by using the implementation from the networkx4 library.
Footnote 4: [https://networkx.org/](https://networkx.org/)
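A minimal version of this metric can be written with networkx's graph_edit_distance, as in the sketch below; this is a simplified re-implementation under our own assumptions (e.g. a trivial node-match function), and the benchmark's evaluation script may handle node matching and the answer node differently.

```python
import networkx as nx

def reasoning_graph_similarity(g_pred, g_gold, node_match):
    """Normalized graph-edit-distance similarity, following Eq. (1) above."""
    dist = nx.graph_edit_distance(g_pred, g_gold, node_match=node_match)
    denom = max(g_pred.number_of_nodes() + g_pred.number_of_edges(),
                g_gold.number_of_nodes() + g_gold.number_of_edges())
    return 1.0 - dist / denom

# Toy example: the prediction misses one premise edge of the gold graph.
gold = nx.DiGraph([("p1", "c1"), ("p2", "c1"), ("c1", "answer")])
pred = nx.DiGraph([("p1", "c1"), ("c1", "answer")])
same_text = lambda a, b: True   # stand-in for the task-specific similarity sigma
print(reasoning_graph_similarity(pred, gold, node_match=same_text))
```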
### Results
The main experiment results can be found in Table 2. The results for the SCONE task are averaged across the different sub-tasks (namely Alchemy, Scene, and Tangrams). Further experiment results with different model sizes and generalization settings can be found in Appendix B.
There are a few takeaways. First, we notice that T5 [large] (fine-tuned) either outperforms or is on par with GPT-3 [davinci] (few-shot) on ARC and SCONE across all metrics, while the opposite is true for the math domain tasks GSM8K and AQUA-RAT. Both methods perform poorly on AR-LSAT. This result is consistent with the results from Zhong et al. (2021), which shows that language models still struggle with more complex analytical reasoning tasks.
Second, the results of fine-tuned T5 [large] show that the Reasoning Graph Accuracy and Reasoning Graph Similarity are substantially better on both ARC and SCONE (with perfect generation for 17.1% and 60.0% of the correct answers, respectively), while close to zero for the remaining tasks. Both ARC and SCONE are the tasks with best answer accuracy compared to "random guess". This suggests a trend that the higher the answer accuracy for a task, the more likely the model is able to produce a correct reasoning graph explaining their answer.
Third, the GPT-3 [davinci] model struggles with tasks outside of the math domain. Noticeably, the SCONE results are far worse when compared to T5 [large]. This has been observed previously by Wei et al. (2022), where few-shot prompting models can struggle with toy "symbolic reasoning" tasks. This often happens when models are not of sufficient size or if the prompt examples are out of domain (i.e., evaluation examples have more steps than prompt examples).
To better visualize and understand the quality of the generated output, we plot the reasoning graph similarity for each task in Figure 3. This plot only considers outputs in which answers match the golden answers for the QA tasks. For reference, we also estimate the human performance by asking expert annotators to write the reasoning graph from scratch, given only the context, question, and the expected answer (broken down into TLUs) for 100 randomly selected questions from the test set across the different tasks.
| **Model** | **ARC** | **SCONE** | **GSM8K** | **AQUA-RAT** | **AR-LSAT** |
| --- | --- | --- | --- | --- | --- |
| _Answer Accuracy_ | | | | | |
| Random Guess | 25.0 | — | — | 20.0 | 20.0 |
| T5 [large] (fine-tuned) | 93.5 | 69.6 | 10.4 | 28.7 | 28.0 |
| GPT-3 [davinci] (few-shot) | 72.9 | 02.3 | 34.8 | 40.2 | 19.0 |
| _Reasoning Graph Accuracy_ | | | | | |
| T5 [large] (fine-tuned) | 17.1 | 60.0 | 00.7 | 00.0 | 00.0 |
| GPT-3 [davinci] (few-shot) | 01.7 | 01.2 | 00.7 | 00.0 | 00.0 |
| _Graph Similarity_ | | | | | |
| T5 [large] (fine-tuned) | 44.1 | 67.0 | 05.4 | 00.9 | 00.3 |
| GPT-3 [davinci] (few-shot) | 15.1 | 01.9 | 16.0 | 05.2 | 01.1 |

Table 2: The main results on the test set across the different tasks and different evaluation metrics. Numbers are in percentage. The “Random Guess” results are included to facilitate visualization since different tasks have different answer types.
Figure 3: A histogram containing the reasoning graph similarity of baseline models as well as human performance on a randomly selected subset of the test data.
The baseline models perform well compared to human performance on both SCONE and GSM8K. Most noticeably, T5 [large] seems to generate the best structured-explanation outputs for SCONE, which is expected since this task is more formulaic and contains the most training examples. On the other hand, the baseline models perform much worse on ARC, AQUA-RAT and AR-LSAT, with human-generated reasoning graphs having over 200% higher scores compared to baselines. The human results on AQUA-RAT and AR-LSAT are somewhat lower than on the other tasks, primarily due to the diversity of the possible valid outputs (there are multiple ways one can explain an answer). In general, automatic evaluation of generated text is still a challenge (Celikyilmaz et al., 2020) in the field of natural language understanding.
#### 4.2.1 Error Analysis
To better understand the model's mistakes, we manually analyze 100 randomly selected outputs where the answers are incorrect. Since each STREET task has its own peculiarities, we analyze the error patterns for each of them separately.
ARC: In this task, both baseline models seem to struggle with large outputs (i.e., more than 5 reasoning steps), which leads to a common error pattern where generated reasoning graphs do not contain the answer to the question (\(\approx 62\%\)).
SCONE: To our surprise, GPT-3 [davinci] often fails to execute basic operations in this task. It generates an incorrect conclusion in the first reasoning step for \(\approx 83\%\) of the analyzed outputs. This could be due to the limited number of few-shot examples (as a consequence of the input size limit) or because this task is outside of its pre-training data distribution. On the other hand, T5 [large] seems to make fewer mistakes, with \(\approx 74\%\) of all reasoning steps matching the golden data.
GSM8K: Even among incorrect answers, both models are often able to output partially correct proofs. A small percentage of the incorrect steps are due to calculation errors (\(\approx 12\%\) in GPT-3 [davinci] outputs and \(\approx 30\%\) in T5 [large] outputs). In \(\approx 37\%\) of the generated outputs, the models seem to have misunderstood the question or applied the wrong formula in one of the steps.
AQUA-RAT: In this task, both models hallucinate the predicted answer into the last reasoning step (\(\approx 28\%\)), even when it does not follow the step's equation. Similarly to GSM8K, a small percentage of the incorrect steps are due to calculation errors (\(\approx 12\%\)).
AR-LSAT: Both models struggle with this task. Most of the correct answers are due to random chance, and \(\approx 33\%\) of the generated outputs do not contain an answer to the question. In particular, GPT-3 [davinci] often just copies the TLUs from the question without making any meaningful conclusions.
## 5 Related Work
Complex Reasoning in Question-Answering: Modeling complex reasoning is an important challenge and a crucial component of natural language understanding. In the context of question-answering (QA), initial efforts to emulate complex reasoning used symbolic representations and problem solvers (Forbus & De Kleer, 1993; Platonov et al., 2020). With recent advances in pre-trained language models, reasoning over natural language has become more tractable, as these models can more easily handle ambiguity and learn the reasoning rules implicitly (Clark et al., 2018; Tafjord et al., 2021). As one of the simpler forms of reasoning, textual entailment was extensively studied, and many datasets and tasks have been proposed (Bowman et al., 2015; Williams et al., 2018; Zellers et al., 2018). To address multi-step reasoning over language, many multi-hop QA (MHQA) datasets and methods were proposed (Yang et al., 2018; Dua et al., 2019; Xiong et al., 2021; Chen et al., 2021). Common limitations of MHQA that we try to address in this paper include (1) a small number of reasoning steps (usually up to three) and (2) simplistic evaluation, allowing models to correctly predict answers by exploiting spurious correlations (Tang et al., 2021). Datasets such as CLUTRR (Sinha et al., 2019) and RuleTaker D* (Clark et al., 2020) better addressed the multi-step and structured aspects of reasoning with explanations. However, they contain mostly synthetic data and tasks that are relatively easy to solve with current language models.
Explainable Question-Answering: Due to the black-box nature of neural networks (Danilevsky et al., 2020), many approaches were proposed to improve the explainability of QA systems. They include explanation graphs (Jansen et al., 2018), generating free-form natural language explanations (Camburu et al., 2018; Rajani et al., 2019; Ayyubi et al., 2020), and chain-of-reasoning explanations (Jhamtani & Clark, 2020). Most noticeably, Dalvi et al. (2021) introduced the concept of _Entailment Trees_, containing multi-premise textual entailment steps. Entailment Trees differ from our proposed Reasoning Graphs in three main points: (1) they were designed to be used mostly for explanation, not representing the answer to the questions directly, (2) they require an external corpus of premises, and (3) they use the concept of a hypothesis (as a combination of question + answer) that needs to be annotated as input. We believe reasoning graphs are a more flexible representation for explaining answers in the context of QA.
Multi-Task Language Understanding: Benchmarks are an important way to measure progress in natural language understanding. Datasets that contain multiple tasks have the advantage of testing the generality of models. The GLUE (Wang et al., 2018) and SUPER-GLUE (Wang et al., 2019) benchmarks contain tasks such as reading comprehension and natural language inference. The Massive Multitask Language Understanding benchmark (Hendrycks et al., 2021) contains various QA problems extracted from the internet. BIG-Bench (Srivastava et al., 2022) contains over 200 tasks drawing on problems involving linguistics, math, common-sense reasoning, and others. These datasets arguably test the model's ability to perform reasoning to some degree. However, most of their evaluation revolves around answering questions instead of systematically testing the reasoning required to answer such questions.
## 6 Conclusion
We aim to enable machines to perform multi-step reasoning while explaining their answers. We believe that teaching machines how to manipulate premises and reach conclusions can be an important step towards true language understanding. With that in mind, we introduce STREET, a new multi-task reasoning and explanation resource covering various forms of reasoning in the context of question-answering. We hope this benchmark will allow for a more systematic evaluation of the reasoning capabilities of natural language systems. Future avenues of research include exploring reasoning combined with knowledge retrieval, and using supervised models trained on multi-step reasoning data to bootstrap unsupervised learning for multi-step reasoning.
|
2301.11705
|
FedPH: Privacy-enhanced Heterogeneous Federated Learning
|
Federated Learning is a distributed machine-learning environment that allows
clients to learn collaboratively without sharing private data. This is
accomplished by exchanging parameters. However, the differences in data
distributions and computing resources among clients make related studies
difficult. To address these heterogeneous problems, we propose a novel
Federated Learning method. Our method utilizes a pre-trained model as the
backbone of the local model, with fully connected layers comprising the head.
The backbone extracts features for the head, and the embedding vector of
classes is shared between clients to improve the head and enhance the
performance of the local model. By sharing the embedding vector of classes
instead of gradient-based parameters, clients can better adapt to private data,
and communication between the server and clients is more effective. To protect
privacy, we propose a privacy-preserving hybrid method that adds noise to the
embedding vector of classes. This method has a minimal effect on the
performance of the local model when differential privacy is met. We conduct a
comprehensive evaluation of our approach on a self-built vehicle dataset,
comparing it with other Federated Learning methods under non-independent
identically distributed(Non-IID).
|
Kuang Hangdong, Mi Bo
|
2023-01-27T13:32:17Z
|
http://arxiv.org/abs/2301.11705v2
|
# FedPH: Privacy-enhanced Heterogeneous Federated Learning +
###### Abstract
Federated Learning is a distributed machine-learning environment that allows clients to learn collaboratively without sharing private data. This is accomplished by exchanging parameters. However, the differences in data distributions and computing resources among clients make related studies difficult. To address these heterogeneous problems, we propose a novel Federated Learning method. Our method utilizes a pre-trained model as the backbone of the local model, with fully connected layers comprising the head. The backbone extracts features for the head, and the embedding vector of classes is shared between clients to improve the head and enhance the performance of the local model. By sharing the embedding vector of classes instead of gradient-based parameters, clients can better adapt to private data, and communication between the server and clients is more effective. To protect privacy, we propose a privacy-preserving hybrid method that adds noise to the embedding vector of classes. This method has a minimal effect on the performance of the local model when differential privacy is met. We conduct a comprehensive evaluation of our approach on a self-built vehicle dataset, comparing it with other Federated Learning methods under non-independent identically distributed(Non-IID).
Keywords:Heterogeneous Differential privacy Non-IID.
## 1 Introduction
Data is the fuel that powers machine learning. However, in the real world, data is often distributed across various locations, making it impossible to send private data to a central server for model training due to personal privacy concerns and data protection laws [1].
To address these challenges, the concept of Federated Learning was introduced [2], where multiple clients perform machine learning tasks with the help of a central server. Private data is kept local and is never exchanged or transferred.
Federated Learning involves server aggregation and parameter updates [3] and has been successfully applied in various domains such as healthcare [4], mobile internet [5, 6], and finance [7].
The distribution of private data among different clients may result in non-independent and identically distributed (Non-IID) data, leading to data heterogeneity. Federated Learning researchers face a challenge in ensuring that the local model performs well when the local objective is far from the global objective, as the gradient-based aggregation method may not be effective in the presence of data heterogeneity [3]. Various studies have attempted to address this issue, such as FedProx [9], which limits local updates based on the \(L_{2}\) distance between the local and global models, and FedDyn [8], which proposes a dynamic regularizer for each client at each round. However, experiments show that these methods are not effective for Non-IID datasets, as most Federated Learning methods update parameters synchronously based on the gradient space, without considering the possibility that the global model may not perform well with Non-IID datasets. Therefore, Personalized Federated Learning, which personalizes the local model, is crucial [10]. Personalized Federated Learning introduces a new paradigm for collaborative learning by sharing feature embedding vectors.
Model heterogeneity is a significant challenge in Federated Learning due to the inconsistency in local model structures caused by differences in clients' computing resources. However, existing methods are not designed to handle such heterogeneity, as they rely on local model consistency for aggregation. To address this challenge, some researchers, such as Arivazhagan et al. [11], propose using a personalization layer for local models, while others, like Sattler et al. [12], suggest creating different models for various user groups. However, these approaches may not sufficiently account for data heterogeneity, particularly when the private data exhibits feature shifts. To overcome model heterogeneity, it is possible to leverage the sharing of feature-embedding information in addition to data heterogeneity.
Although sharing embedding information is a common method to address model heterogeneity in Federated Learning, it may not provide sufficient data privacy assurances [13]. Local differential privacy has been integrated with Federated Learning to classify images and analyze natural language [14], but reducing the privacy budget does not guarantee improved model performance. To resolve these issues, Sun et al. [15] proposed adding noise to parameters based on their value range, although the gradient explosion issue may occur during backpropagation with fewer clients. In contrast to local differential privacy, we propose a novel privacy-preserving approach that minimizes impact on the local model, ensuring that private data remains local and using multi-key semi-homomorphic encryption and differential privacy to protect data privacy.
These are our primary contributions, in brief:
1. We propose FedPH, an approach that effectively addresses the heterogeneity issue and significantly reduces communication costs by utilizing the pre-trained model as the backbone of the local model and adopting an aggregation approach to communicate embedding information.
2. We propose a novel privacy protection strategy that minimizes the impact on local model performance while ensuring differential privacy.
3. We create a vehicle dataset that considers the influence of diverse weather conditions on vehicle classification. Our results demonstrate that FedPH outperforms baseline methods.
## 2 Related Work
### Federated Learning
McMahan et al. [2] introduced FedAvg, which is a Federated Learning method that consists of four main steps for updating model parameters. In each round, clients initially obtain the global model from the server, then update their local model through gradient descent using their own private data. Next, clients send their updated local model to the server, which aggregates them to create a new global model for the next round.
Many studies have attempted to improve FedAvg to better handle Non-IID data. However, most of these studies focus on distribution bias resulting from either class imbalance or sample size imbalance [8, 9, 16]. Yet, the model's classification accuracy and convergence stability can be severely impacted when private data is distributed across multiple domains, such as with feature shifts [3] in autonomous driving where different environmental distributions (e.g. weather) cause client data to differ from that of other clients. However, the issue is often more nuanced, with label shifts [3] also occurring in widely distributed private data.
To address Non-IID in Federated Learning, Li et al. [17] introduced a normalizing layer to the local model, while Luo et al. [18] proposed Disentangled Federated Learning, which separates cross-invariant and domain-specific attributes into two complementary branches. However, these methods have limitations in accounting for local model heterogeneity and may involve a large number of parameters in the communication process between the server and clients.
### Privacy Preserving
#### 2.2.1 Differential Privacy
Differential privacy is a mathematical definition of privacy that can be used to prove that published data satisfies a certain private property. It is a property of algorithms, not data. For communication based on gradient space, Zhu et al. [19] proposed a method to intercept gradient information and reconstruct the training data. Differential privacy limits the influence of an individual and reduces the attacker's inference ability [20]. The formal definition [21] for differential privacy is defined as
Definition 1: For the adjacent datasets \(D\) and \(D^{\prime}\), all possible outputs are \(O\), and the mechanism \(F\) satisfies
\[\frac{Pr[F(D)\in O]}{Pr[F(D^{\prime})\in O]}\leq e^{\epsilon} \tag{1}\]
To satisfy differential privacy, noise is added to the output of the algorithm \(f\). This noise is proportional to the sensitivity of the output, where sensitivity measures the maximum change in the output due to the inclusion of a single instance of data. The sensitivity \(S_{f}\)[21] of Algorithm \(f\) is defined as
\[S_{f}=\max_{D,D^{\prime}:d(D,D^{\prime})\leq 1}|f(D)-f(D^{\prime})| \tag{2}\]
where \(d(D,D^{\prime})\) represents the distance between two datasets \(D\) and \(D^{\prime}\).
One of the mechanisms to achieve differential privacy is the Gaussian mechanism. The Gaussian mechanism [22] is defined as
\[F(D)=f(D)+N(0,S_{f}^{2}\sigma^{2}) \tag{3}\]
where \(N(0,S_{f}^{2}\sigma^{2})\) is the Gaussian distribution with mean 0 and standard deviation \(S_{f}\sigma\). The Gaussian mechanism applied to a function \(f\) of sensitivity \(S_{f}\) satisfies \((\varepsilon,\delta)\)-differential privacy if \(\varepsilon\) and \(\delta\) satisfy certain conditions [22].
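As a rough illustration (not taken from [22]), the mechanism of Eq. (3) can be implemented by drawing noise whose scale is calibrated to the sensitivity; the classical choice \(\sigma=\sqrt{2\ln(1.25/\delta)}/\varepsilon\) used below is one sufficient condition for \((\varepsilon,\delta)\)-differential privacy when \(\varepsilon<1\), and is an assumption rather than the calibration used in this paper.

```python
# Sketch of the Gaussian mechanism in Eq. (3): F(D) = f(D) + N(0, S_f^2 sigma^2).
# The noise multiplier below uses the classical sufficient condition
# sigma = sqrt(2 ln(1.25/delta)) / epsilon (valid for epsilon < 1); it is an
# illustrative choice, not the calibration from the paper.
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sensitivity * sigma, size=np.shape(value))

# e.g. perturbing a 64-dimensional embedding vector (size illustrative) with S_f = 1:
noisy_vector = gaussian_mechanism(np.zeros(64), sensitivity=1.0, epsilon=5.0, delta=1e-5)
```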
#### 2.2.2 Homomorphic Encryption
An additively homomorphic encryption scheme guarantees the following property:
\[Enc(m_{1})\circ Enc(m_{2})=Enc(m_{1}+m_{2}) \tag{4}\]
where \(\circ\) denotes the "composition" operation on ciphertexts. The scheme is useful for privacy protection because an untrusted server can perform operations directly on encrypted values. A well-known additively homomorphic scheme is the Paillier cryptosystem [23].
Damgard et al. [24] proposes a threshold variant of the Paillier cryptosystem, which allows a group of clients to share a key while ensuring that any subset of clients smaller than a predefined threshold can not decrypt the data.
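The additive property in Eq. (4) can be checked with, for example, the third-party `phe` (python-paillier) package; the snippet below uses the plain (non-threshold) Paillier scheme purely as an illustration, not the threshold variant of [24].

```python
# Illustration of Eq. (4) with the third-party `phe` (python-paillier) package.
# This is the plain Paillier cryptosystem, not the threshold variant of [24].
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
c1 = public_key.encrypt(3.5)
c2 = public_key.encrypt(1.5)
# Enc(m1) o Enc(m2) = Enc(m1 + m2): adding ciphertexts adds the plaintexts
assert abs(private_key.decrypt(c1 + c2) - 5.0) < 1e-9
```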
## 3 FedPH
### Problem Formulation
In this section, we begin with the Federated Learning Framework in general, characterize the issue, and describe the global objective.
#### 3.1.1 General Federated Learning Framework
According to FedAvg, the global objective of the general Federated Learning Framework for \(m\) clients is
\[\min_{w_{1},w_{2},...,w_{m}}\frac{1}{m}\sum_{i=1}^{m}\frac{|D_{i}|}{N}L_{i}(w_ {i};D_{i}) \tag{5}\]
where \(w_{i}\) is the local model parameters for the \(i\)-th client and finally \(w_{1}=w_{2}=...=w_{m}\); \(m\) is the number of clients; \(L_{i}\) is the local loss function for the \(i\)-th client; \(D_{i}\) is the local dataset for the \(i\)-th client; \(|D_{i}|\) is the sample size of the local dataset for the \(i\)-th client; \(N\) is the total number of samples for all clients.
The local models in \(w_{1}=w_{2}=...=w_{m}\) are assumed to be isomorphic, which implies that the clients' computational capabilities are equivalent. If the data distributed in clients are heterogeneous, the local models could not perform well. We suggest a novel Federated Learning approach to address the above issues.
#### 3.2.2 Proposed Federated Learning Framework
Our suggested Federated Learning method permits \(w_{1}\neq w_{2}\neq...\neq w_{m}\), which is different from most Federated Learning methods. FedPH is shown in Figure 1.
The pre-trained backbone is fixed for the \(i\)-th client, and the local dataset \(D_{i}\) is not shared. At least two components make up the local model. (1) Encoder \(r(\cdot;\phi^{*}):x^{d}\to x^{d_{a}}\), The \(i\)-client inputs the raw data \(x^{d}\) to the fixed backbone, and gets the feature vector \(x^{d_{a}}\), which maps the raw data \(x\) of size \(d\) to a feature vector of size \(d_{a}\). (2) Projection \(h(\cdot;\theta_{i}):x^{d_{a}}\to x^{d_{b}}\), The \(i\)-client inputs \(x^{d_{a}}\) to the unfixed network and gets \(x^{d_{b}}\), which is the mapping process for the embedding space.
Figure 1: (1) The server distributes the global embedding vectors to the clients. (2) Using their private data and the global embedding vectors, clients update their local model and local embedding vectors. (3) Clients send their local embedding vectors back to the server. (4) The server updates the global embedding vectors using the received local embedding vectors.
Definition 2: \(r\) represents the embedding function of the backbone. \(x\) represents a sample from the local dataset. \(\phi^{*}\) represents the parameters of the pre-trained backbone. To map the backbone output to another embedding space for \(i\)-client, the projection network \(h\) parameterized by \(\theta_{i}\) is used. The output of the projection network is computed as
\[z(x)=h(r(x,\phi^{*});\theta_{i}) \tag{6}\]
### Method
We propose to share embedding vectors between the server and clients to improve the performance of local models. Compared to sharing information through the gradient space, sharing through embedding vectors has several advantages: (1) it requires fewer parameters than sharing models, making it more computationally and communicationally efficient for privacy protection, (2) it uses the embedding vectors as regularization parameters, reducing the impact of data heterogeneity on local model accuracy, and (3) it does not require isomorphic local models as the embedding vectors are used for aggregation.
#### 3.2.1 Local Embedding Vectors
We decide to use the embedding vectors as information carriers to extract features from private data. The mean of sample projections from the same class \(j\) serves as the representative for the embedding vectors \(C_{i}^{j}\) for the \(i\)-th client.
\[C_{i}^{j}=\frac{1}{|D_{i,j}|}\sum_{(x,y)\in D_{i,j}}z(x) \tag{7}\]
where \(C_{i}^{j}\) denotes the \(j\)-class embedding vector of the \(i\)-th client; \(D_{i}^{j}\) denotes the \(j\)-class samples of the \(i\)-th client. The local embedding vectors are transferred to the server for information aggregation when the \(i\)-th client has finished the calculation locally.
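A minimal PyTorch sketch of Eq. (7) is given below; `backbone` and `head` stand for the fixed encoder \(r(\cdot;\phi^{*})\) and the projection \(h(\cdot;\theta_{i})\), and the function and variable names are illustrative assumptions rather than taken from the paper.

```python
# Sketch of Eq. (7): the local embedding vector of class j is the mean
# projection of the client's class-j samples.  `backbone` and `head` stand for
# r(.; phi*) (frozen) and h(.; theta_i); names are illustrative.
import torch

@torch.no_grad()
def local_embedding_vectors(backbone, head, loader, num_classes, dim):
    sums = torch.zeros(num_classes, dim)
    counts = torch.zeros(num_classes)
    for x, y in loader:
        z = head(backbone(x))                      # z(x) = h(r(x; phi*); theta_i)
        sums.index_add_(0, y, z)                   # accumulate per-class sums
        counts += torch.bincount(y, minlength=num_classes).float()
    return sums / counts.clamp(min=1).unsqueeze(1), counts
```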
#### 3.2.2 Global Embedding Vectors
After receiving the local embedding vector sets \(\{C_{i}\}_{i=1}^{m}\), the server calculates the global prototype as
\[\overline{C}^{j}=\sum_{i=1}^{m}\frac{|D_{i,j}|}{|N_{j}|}\cdot C_{i}^{j} \tag{8}\]
where \(N_{j}\) denotes the set of the \(j\)-class samples among all clients. \(|N_{j}|\) denotes the number of \(N_{j}\). The global embedding vector set denotes as \(\overline{C}=\left\{\overline{C}^{1},\overline{C}^{2}...\right\}\). Through the server, the global embedding vector aggregates the information from the local embedding vectors.
\[N_{j}=\bigcup_{i=1}^{m}D_{i,j} \tag{9}\]
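On the server side, Eq. (8) amounts to a sample-size-weighted average of the received prototypes. A plaintext sketch (with illustrative names) could look as follows; the noisy, encrypted variant is discussed in the following subsection.

```python
# Sketch of Eq. (8): the global embedding vector of class j is the
# |D_{i,j}|-weighted average of the clients' local vectors.  `local_vectors[i]`
# is client i's (num_classes, dim) tensor and `local_counts[i]` its per-class
# sample counts; plaintext aggregation is shown for clarity.
import torch

def aggregate_global_vectors(local_vectors, local_counts):
    weighted = torch.stack([v * c.unsqueeze(1) for v, c in zip(local_vectors, local_counts)])
    totals = torch.stack(list(local_counts)).sum(dim=0)     # |N_j| per class
    return weighted.sum(dim=0) / totals.clamp(min=1).unsqueeze(1)
```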
#### Reducing Noise with THE
The threshold homomorphic encryption (THE) algorithm plays a crucial role in the privacy-preserving hybrid method for noise reduction, as depicted in Figure 2.
Lemma 1: \(f(D)+N(0,S_{f}^{2}\sigma^{2})\) _satisfies \((\varepsilon,\delta)\)-differential privacy._
where the normal distribution \(N(0,S_{f}^{2}\sigma^{2})\) has a mean of \(0\) and a standard deviation of \(S_{f}\sigma\).
Proof: Each client encrypts its message using the THE scheme proposed in [24]. \(t\) specifies the minimum number of honest clients. The threshold is set to \(\overline{t}=m-t+1\), and the noise can be reduced by a factor of \(t-1\). Each client can return \(Enc(C_{i}^{j}+N(0,S_{f}^{2}\frac{\sigma^{2}}{t-1}))\) instead of returning \(Enc(C_{i}^{j}+N(0,S_{f}^{2}\sigma^{2}))\). The server first aggregates and then decrypts. The result is \(\sum_{i=1}^{m}C_{i}^{j}+Y^{j}\) where \(Y^{j}=N(0,S_{f}^{2}\frac{m\sigma^{2}}{t-1})\). Since \(t-1<m\), the noise in the decrypted value is larger than needed to satisfy differential privacy. In addition, the THE scheme guarantees that the aggregate cannot be decrypted even if the maximum number of colluders is \(\overline{t}\).
Figure 2: Compared to centralized differential privacy (CDP), local differential privacy (LDP) typically requires higher noise levels to achieve the same level of privacy protection. To mitigate this issue, we propose a threshold homomorphic encryption (THE) approach that enables LDP with reduced noise levels.
## 4 Local Objective
The local loss is composed of two parts, as illustrated in Figure 3. The first part is the cross-entropy loss used in supervised learning, denoted by \(L_{S}\). The second part is the contrastive loss of the embedding vectors, denoted by \(L_{R}\).
Suppose the \(i\)-client is executing the local training. During local training, the \(i\)-client receives the global embedding vectors from the server and updates the local model as well as the local embedding vectors. We extract the embedding vectors from the raw sample \(x\) according to the local model (\(C^{y}=z(x)=h(r(x,\phi^{*});\theta_{i})\)). Since the global embedding vectors can be better represented, our goal is to reduce the distance between \(C^{y}\) and \(\overline{C}^{j}\) (\(y=j\)) and increase the distance between \(C^{y}\) and \(\overline{C}^{j}\) (\(y\neq j\)). Similar to the NT-Xent loss [25], we define the contrastive loss of embedding vectors as
\[L_{R}=-\log\left(\frac{\exp(dis(C^{y},\overline{C}^{y})/t)}{\exp(dis(C^{y},\overline{C}^{y})/t)+\sum_{j\neq y}\exp(dis(C^{y},\overline{C}^{j})/t)}\right) \tag{10}\]
where \(t\) denotes a temperature parameter and the distance function \(dis(\cdot,\cdot)\) can be \(L_{1}\), \(L_{2}\), or cosine. The loss of a batch \((x,y)\) is computed by
\[L=L_{S}(\omega_{i};(x,y))+\lambda\cdot L_{R}(\phi^{*};\theta_{i};\overline{C} ;(x,y)) \tag{11}\]
where \(\lambda\) is a hyper-parameter to control the weight of embedding vector contrastive loss. The local objective is to minimize
\[\min E_{(x,y)\sim D_{i}}[L_{S}(\omega_{i};(x,y))+\lambda\cdot L_{R}(\phi^{*}; \theta_{i};\overline{C};(x,y))] \tag{12}\]
Figure 3: The local loss
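A PyTorch sketch of Eqs. (10)-(11) with cosine similarity as the distance function (as used in our experiments) is shown below; variable names such as `contrastive_loss` and `lam` are illustrative and not taken from the paper.

```python
# Sketch of Eqs. (10)-(11): an NT-Xent-style loss pulls the projection z(x)
# towards the global embedding vector of its own class and pushes it away from
# the other classes; `tau` is the temperature and cosine similarity plays the
# role of dis(.,.).  Names are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def contrastive_loss(z, y, global_vectors, tau=1.0):
    # z: (B, dim) projections, y: (B,) labels, global_vectors: (C, dim)
    sims = F.cosine_similarity(z.unsqueeze(1), global_vectors.unsqueeze(0), dim=2) / tau
    return F.cross_entropy(sims, y)          # equals Eq. (10) averaged over the batch

def local_loss(logits, z, y, global_vectors, lam=1.0, tau=1.0):
    # Eq. (11): supervised cross-entropy plus the weighted contrastive term
    return F.cross_entropy(logits, y) + lam * contrastive_loss(z, y, global_vectors, tau)
```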
Algorithm 1 outlines our proposed Federated Learning approach. During local training, clients utilize stochastic gradient descent to update their personalized local model and local embedding vectors using private data, with the objective function defined in Eq.(12). At each round, the server sends the global embedding vectors to clients and updates them via a weighted average.
```
Input: number of communication rounds \(L\), number of clients \(m\), number of local epochs \(E\), global embedding vectors \(\overline{C}\), local embedding vectors \(C\), the minimum number of honest clients \(t\), the maximum number of colluders \(\overline{t}\), and a set \(P\) of \(\overline{t}\) clients randomly selected from the \(m\) clients
Output: the final global embedding vectors \(\overline{C}^{L}\)

Server executes:
  Initialize the global embedding vectors \(\overline{C}^{1}\)
  for \(l=1,2,...,L\) do
    for \(i=1,2,...,m\) do
      \(r_{i}\longleftarrow\) LocalTraining\((i,\overline{C}^{l})\)
    end for
    Aggregate local embedding vectors by \(r=r_{1}\circ r_{2}\circ...\circ r_{m}\)
    for \(i\in P\) do
      \(r=Dec_{sk_{i}}(r)\)
    end for
    Update global embedding vectors by \(\overline{C}^{l+1}=\frac{r}{m}\)
  end for

LocalTraining\((i,\overline{C}^{l})\):
  for epoch \(e=1,2,...,E\) do
    for each batch \((x,y)\in D_{i}\) do
      Compute local embedding vectors by Eq. (7)
      Compute the loss by Eq. (11) using the local embedding vectors
      Update local model parameters according to the loss
    end for
  end for
  return \(Enc_{pk}(C_{i}+N(0,S_{f}^{2}\frac{\sigma^{2}}{t-1}))\)
```
**Algorithm 1** FedPH
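To make the communication round of Algorithm 1 concrete, the sketch below walks through the privacy-preserving return value and the server-side aggregation. For readability the prototypes are scalars and the plain `phe` Paillier scheme stands in for the threshold variant, so this is an illustration rather than the actual protocol.

```python
# Toy walk-through of one round of Algorithm 1: each client adds reduced noise
# N(0, S_f^2 sigma^2/(t-1)) to its local embedding value, encrypts it, and the
# server sums the ciphertexts homomorphically before a single decryption and
# averaging.  Scalar prototypes and plain Paillier are simplifications.
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()
rng = np.random.default_rng(0)

def client_message(c_local, s_f, sigma, t):
    noise = rng.normal(0.0, s_f * sigma / np.sqrt(t - 1))   # std of N(0, S_f^2 sigma^2/(t-1))
    return public_key.encrypt(float(c_local + noise))

m, t, s_f, sigma = 5, 3, 1.0, 0.5
local_values = [0.9, 1.1, 1.0, 0.95, 1.05]                  # toy per-client prototypes
ciphertexts = [client_message(c, s_f, sigma, t) for c in local_values]
aggregate = sum(ciphertexts[1:], ciphertexts[0])            # homomorphic addition, r = r_1 o ... o r_m
global_prototype = private_key.decrypt(aggregate) / m       # C_bar = r / m
```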
## 5 Experiments
### Experimental Setup
We compare FedPH to three other Federated Learning methods: FedAvg [2], FedProx [9], and FedProto [16]. We also establish a baseline method, SOLO, in which clients are trained on private data without using Federated Learning.
Our experiments were performed on a custom vehicle dataset consisting of 5,000 images that depicted six different types of vehicles and five different weather conditions, as illustrated in Figure 4. As the weather conditions varied, there were feature shifts observed in the data. We generated label shifts among clients by using the Dirichlet distribution. While there were many Non-IID classes in our dataset, both feature and label shifts are common occurrences in real-world scenarios, as depicted in Figure 5.
We employ a fully connected layer as the projection head, another fully connected layer as the decision component, and use the pre-trained ResNet-18 [26] as the encoder. It is worth noting that all baselines also adopt the network architecture of FedPH.
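A sketch of this local architecture is given below; the 64-dimensional projection is an assumption inferred from the 384 shared parameters over six classes in Table 2 and is not stated explicitly in the paper, and the class name `LocalModel` is illustrative.

```python
# Sketch of the local model used in the experiments: a frozen pre-trained
# ResNet-18 encoder, a fully connected projection head, and a fully connected
# decision layer.  The projection size is illustrative.
import torch.nn as nn
from torchvision.models import resnet18

class LocalModel(nn.Module):
    def __init__(self, num_classes=6, proj_dim=64):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()            # expose the 512-d feature vector
        for p in backbone.parameters():        # the encoder r(.; phi*) stays fixed
            p.requires_grad = False
        self.backbone = backbone
        self.head = nn.Linear(512, proj_dim)   # projection h(.; theta_i)
        self.decision = nn.Linear(proj_dim, num_classes)

    def forward(self, x):
        z = self.head(self.backbone(x))
        return self.decision(z), z             # class logits and projection z(x)
```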
We use PyTorch to implement FedPH and the other baseline methods. For all approaches, we adopt the SGD optimizer with a learning rate of 0.001, SGD momentum of 0.5, and SGD weight decay of 0.0001. The batch size is set to 32, and a pre-trained network serves as the backbone for all methods. For the contrastive loss of FedPH, we measure the distance between the local and global embedding vectors using cosine distance and set the temperature parameter to 1.

Figure 4: The horizontal axis represents different weather classes, with each client corresponding to a specific class. The vertical axis represents vehicle classes, which correspond to private data classes.
### Accuracy
In the vehicle dataset with a Non-IID setting, Federated Learning methods have shown better accuracy than SOLO, as demonstrated in Figure 6. Among the compared methods, FedPH has demonstrated the best performance, outperforming FedAvg by an average of 2.5% on supervised learning tasks. Although the precision of FedProto is comparable to that of FedPH, the introduction of contrastive loss results in our suggested FedPH surpassing FedProto by an average of 1%, as presented in Table 1. This suggests that FedPH is effective in mitigating the negative effects of Non-IID.
In FedPH, the embedding vector is a shared parameter between the server and clients that effectively captures feature representations of high-dimensional data, removing irrelevant information. By incorporating contrastive loss as a regular term in the local loss function, the embedding vectors of similar data are drawn closer together, resulting in significant performance gains in the decision layer of the local model. As a result, FedPH achieves superior results.

| Method | 5 clients |
| --- | --- |
| SOLO | 88.4% \(\pm\) 1.61% |
| FedAvg | 89.6% \(\pm\) 1.78% |
| FedProx | 90.4% \(\pm\) 1.52% |
| FedProto | 91.1% \(\pm\) 0.37% |
| FedPH | **92.1% \(\pm\) 0.24%** |

Table 1: Comparison of top-1 accuracy

Figure 5: Label and feature distributions of the private data vary among clients.
### Communication Efficiency
Due to the limitations of the current communication infrastructure, Federated Learning encounters significant challenges related to communication costs. Therefore, we monitored the size of the parameters for each round of communication.
Table 2 shows that FedPH has significantly fewer parameters than other methods and is much more efficient in terms of communication. This suggests that when there is high model heterogeneity, sharing more parameters does not necessarily lead to better outcomes. Thus, it is important to determine which components should be shared in order to optimize the current system.

| Method | Params |
| --- | --- |
| FedAvg | 33200 |
| FedProx | 33200 |
| FedProto | **384** |
| FedPH | **384** |

Table 2: Comparison of parameter size

Figure 6:
### Model Heterogeneity
In the configuration with model heterogeneity, small variations in model structure between clients are considered, with some having 2 or 3 fully connected layers. Due to differing model parameters, it becomes challenging to average the parameters.
Figure 7 illustrates how FedPH can achieve consistency among different clients. Unlike traditional Federated Learning methods that rely on model averaging, FedPH utilizes a more personalized approach to better fit private data in terms of both value and shape of model parameters. By abandoning model averaging, FedPH avoids potential issues related to model heterogeneity and achieves greater robustness.
### Privacy-preserving
To track changes in Federated Learning's performance, we integrate it with the privacy-preserving method. More specifically, we perturb the local embedding vectors by adding Gaussian noise. We make sure that \((\epsilon,\delta)\)-differential privacy is satisfied by the aggregated embedding vectors.

Figure 7: FedPH-mh is a variation of FedPH that specifically addresses the challenge of local model heterogeneity.
In this experiment, the threshold was set to 3 and \(\delta\) was set to \(10^{-5}\). Relaxing the privacy guarantees (increasing \(\epsilon\)) reduces the associated loss, as shown in the left half of Figure 8. Applying the threshold homomorphic encryption approach reduces the impact of the noise required to satisfy differential privacy on the model, as shown in the right half of Figure 8.
According to Table 3, we find that selecting embedding vectors as aggregate parameters in privacy preservation is faster than selecting model parameters. It is important to note that this table only records the encryption process for one communication round. However, this advantage will be further amplified in multiple communication rounds.
| parameter | time |
| --- | --- |
| model parameters | 3.372 \(\pm\) 0.0159 s |
| embedding vectors | **0.039 \(\pm\) 0.004 s** |

Table 3: Comparison of time consumption
Figure 8: Comparing the right half of figure with \(\epsilon=5\)
In conclusion, FedPH integrates a privacy-preserving method that effectively protects privacy without visibly impacting performance and conserves computing resources.
## 6 Conclusion
In this study, we propose a novel Federated Learning method that combines differential privacy and threshold homomorphic encryption to protect local data privacy while minimizing the impact on local model accuracy, ensuring both privacy and security. Our approach achieves excellent privacy protection and accurate prediction results in heterogeneous contexts. Unlike traditional approaches that share information based on the gradient space, our approach shares embedding vectors between the server and clients. We conduct experiments to demonstrate the effectiveness of our method.
|
2307.04938
|
Periodicity staircase in a Fe/Gd magnetic thin film
|
Presence of multiple competing periodicities may result in a system to go
through states with modulated periodicities, an example of which is the
self-similar staircase-like structure called the Devil's staircase. Herein we
report on a novel staircase structure of domain periodicity in an amorphous and
achiral Fe/Gd magnetic thin film wherein the reciprocal space wavevector
\textbf{Q} due to the ordered stripe domains does not evolve continuously,
rather exhibits a staircase structure. Resonant X-ray scattering experiments
show jumps in the periodicity of the stripe domains as a function of an
external magnetic field. When resolved in components, the step change along
Q$_x$ was found to be an integral multiple of a minimum step height of 7 nm,
which resembles closely to the exchange length of the system. Modeling the
magnetic texture in the Fe/Gd thin film as an achiral spin arrangement, we have
been able to reproduce the steps in the magnetization using a Landau-Lifshitz
spin dynamics calculation. Our results indicate that anisotropy and not the
dipolar interaction is the dominant cause for the staircase pattern, thereby
revealing the effect of achiral magnetism.
|
Arnab Singh, Junli Li, Sergio A. Montoya, Sophie Morley, Peter Fischer, Steve D. Kevan, Eric E. Fullerton, Dao-Xin Yao, Trinanjan Datta, Sujoy Roy
|
2023-07-10T23:28:09Z
|
http://arxiv.org/abs/2307.04938v3
|
# Periodicity staircase in a Fe/Gd magnetic thin film
###### Abstract
Presence of multiple competing periodicities may result in a system to go through states with modulated periodicities, an example of which is the self-similar staircase-like structure called the Devil's staircase. Herein we report on a novel staircase structure of domain periodicity in an amorphous and achiral Fe/Gd magnetic thin film wherein the reciprocal space wavevector \(\mathbf{Q}\) due to the ordered stripe domains does not evolve continuously, rather exhibits a staircase structure. Resonant X-ray scattering experiments show jumps in the periodicity of the stripe domains as a function of an external magnetic field. When resolved in components, the step change along \(Q_{x}\) was found to be an integral multiple of a minimum step height of 7 nm, which resembles closely to the exchange length of the system. Modeling the magnetic texture in the Fe/Gd thin film as an achiral spin arrangement, we have been able to reproduce the steps in the magnetization using a Landau-Lifshitz spin dynamics calculation. Our results indicate that anisotropy and not the dipolar interaction is the dominant cause for the staircase pattern, thereby revealing the effect of achiral magnetism.
**Introduction.**
Appearance of staircase-like structure is a fascinating phenomenon that is observed in a variety of condensed matter systems. In 2D electron gas, quantized conductance is manifested as a step feature in Hall effect measurements [1]. In quantum materials, interplay of competing interactions with multiple periodicities in a system can give rise to a ground state whose length scales are defined by the modulation of the original periodicities. Examples of such modulated periodicities include commensurate and incommensurate phases, such as density waves in solids [2], stripes and charge density waves in cuprate superconductors [3; 4; 5], the charge ordered state in manganites [6], and helical spin structures in magnetic systems [7]. A well known staircase structure is the Devil's staircase which appears when a system goes through numerous phase-locked modulated periodicities [8; 9; 10]. The Devil's staircase has been observed in magnetic systems [8; 11; 12; 13; 14], liquid crystals [15] and in ferroelectrics [16]. Apart from fundamental science, staircase structures have potential applications in metrology, sensing devices, etc. [17].
Interesting staircase structures in domain size and in magnetoresistance have been observed in Dzyaloshinskii-Moriya interaction (DMI) based solitonic systems [18; 19]. Competition between the symmetric exchange interaction and the antisymmetric DMI can give rise to interesting magnetic phases such as helices, stripes and skyrmions [20; 11; 21; 22]. DMI based chiral magnetic order in a helimagnet is called a Dzyaloshinskii-type helimagnetic structure, while a helical magnetic order due to competition between ferromagnetic and antiferromagnetic exchange interactions is known as a helimagnetic structure of the Yoshimori type [23]. The chiral magnetic structures in a helimagnet exhibit solitons that can be manipulated by an external magnetic field [24]. More specifically, the soliton periodicity changes in a step wise manner which is attributed to the discrete changes in the soliton number because of confinement at the grain boundaries [24; 25]. Field evolution of confined helicoids has also been shown to occur via discrete steps in the helical magnet MnSi [26]. The thin film structure of MnSi accommodates a finite number of turns and the jumps are explained due to annihilation of individual turns of the helicoid.
In this article we report on the appearance of a staircase structure of the scattering wave vector \(\mathbf{Q}\) due to the ordered stripe domains in an amorphous Fe/Gd thin film. In contrast to the single crystals with DMI described in the previous paragraph, the amorphous Fe/Gd thin film is a perpendicular magnetic anisotropy (PMA) system with dominant dipolar interactions and negligible DMI [27]. The presence of dipolar interactions can support an achiral phase, which in the stripe phase results in the spins reversing their direction of orientation twice within a period, resulting in no net chirality. We performed resonant coherent soft X-ray scattering to study variations of the aligned stripe-domain periodicity. Depending on the applied field condition it is possible to obtain a skyrmion lattice phase that consists of equal numbers of skyrmions with opposite chiralities (+1 and -1) [28]. We observed that the scattering wave vector \(\mathbf{Q}\) changes in steps with no well-defined step height and width.
However, when **Q** is resolved into components Q\({}_{x}\) and Q\({}_{y}\), the steps along Q\({}_{x}\) were found to be in integer multiples of 7 nm, which is close to the exchange length of the system. At higher temperatures the steps were smeared due to thermal fluctuations.
Our X-ray scattering studies have been complemented by spin dynamics calculations that take into account the achiral nature of the system. We have simulated an experimentally observed (non-equilibrium) process where global versus local domain dynamics delicately balance each other. On one hand we have the divergence in the total periodicity of the stripes with increasing magnetic field. On the other hand this is being counterbalanced by the local minority stripe width, which cannot fall below a certain size. Thus, instead of a bulk macroscopic motion of domain walls over the entire sample, these competing tendencies cause the local forces and energetics experienced by the minority domains to locally annihilate some of these half-periods. This in turn leads to (as observed experimentally and verified theoretically) a local readjustment of the domain sizes. By defining two length scales related to global and local achirality, we have been able to theoretically generate steps as a function of applied magnetic field and show the important role that anisotropy plays in generating the steps in these systems. We have developed a theoretical model for the appearance of steps using exchange, dipole and anisotropy terms. Our calculations indicate that the origin of the steps lies in the anisotropy term. Even if exchange and dipole interactions are present, the absence of anisotropy does not produce steps. Thus, although the appearance of steps does look similar in single crystal DMI materials and the amorphous Fe/Gd sample, the physical origin of the steps in the two systems is different.
**Results.**
**Resonant Scattering due to Stripes**
The scattering geometry for the experimental set-up is shown in Fig.1(a). An X-ray beam whose energy is tuned to the Fe L\({}_{3}\) edge is incident normally on the sample. A pinhole was placed in the beam path 5 mm upstream of the sample to establish transverse coherence of the beam. In this geometry the X-ray photons are sensitive to the spins that are aligned along the beam direction. The scattering pattern was collected on a charge coupled device (CCD) camera placed about 0.5 m downstream. Resonant X-ray scattering measurements are sensitive to the static magnetic structure (\(S(\textbf{q})\)) and the spatial correlation length (\(\xi_{s}\)). From the position and intensity of the Bragg peaks it is possible to extract information about the periodicity and strength of the magnetic order. Fig.1(b) shows full field X-ray microscope images (top panel) and X-ray resonant scattering patterns (bottom panel) of the sample. We observed three distinct magnetic phases, namely, disordered stripes, ordered stripes and skyrmions, which can be obtained by either varying the temperature or the applied magnetic field. The X-ray real space images were obtained by varying the applied magnetic field at 300 K, while the resonant X-ray scattering data was measured from \(LN_{2}\) temperatures to room temperature as a function of the applied magnetic field. In the ordered stripe phase (T = 239K) the domain periodicity (2\(\pi\)/Q) at remanence is (119 \(\pm\) 5) nm. The stripe pattern persists as the field is increased from zero till around 170 mT, when new peaks in a distorted hexagonal pattern start to appear indicating a transition to the skyrmion phase. These observations are consistent with previous reports [27; 28; 29].
Fig.1(c) shows the behavior of the integrated intensity of the 1\({}^{st}\) and 2\({}^{nd}\) order diffraction peaks from the ordered stripe domains. At zero field the 1\({}^{st}\) order peak is at its maximum and the 2\({}^{nd}\) order at its minimum because of the equal width of the up and down domains. An applied out-of-plane magnetic field breaks this symmetry, causing even order diffraction peaks to appear. Around 170 mT, the intensity of both peaks starts to diminish and eventually new peaks in the form of a hexagonal pattern appear (see Fig.1(b), bottom panel). It is interesting to note that in the hexagonal phase we observe two relatively strong intensity spots along the same direction as the stripes which would indicate that somehow the original direction of the stripes is retained even in the hexagonal phase.

Figure 1: **Experimental set-up and magnetic phases.** (a) Schematic of the coherent magnetic X-ray scattering geometry. (b) Real space (top panel) and reciprocal space (lower panel) of the different magnetic phases present in a Fe/Gd thin film. The scattering images were taken at H = 0mT; T = 85K (disorder stripes), H = 0mT; T = 225K (order stripes) and H = 190mT; T = 239K (skyrmions). (c) Variation of the 1st and 2nd order magnetic diffraction peak with field at 239K. Inset image shows the appearance of both 1st and 2nd order diffraction peaks.
**Staircase Structure of Q-vector**
The evolution of the stripe-diffraction spot in Q-space is shown in Fig.2(a) as a function of the applied field at T = 230 K. At the start of the field cycle, the momentum transfer vector is Q\({}_{1}\) ( = 0.052 nm\({}^{-1}\)). As the field increases, the magnetization increases and the size of the favorable domains (along the field directions) also increases leading to an increased domain periodicity resulting in a smaller Q-value. Interestingly, we observed that the Q-value corresponding to the magnetic Bragg peak decreases in discrete steps as a function of applied magnetic field giving rise to a staircase-like structure.
The evolution of domain periodicity happens in several steps that involve sudden jumps and appearance of a modulated periodicities. We find that along with the main magnetic Bragg peak, a much weaker satellite peak develop at a smaller Q-value, and both the peaks evolve in an interesting way as the field is changed. The increase in field leads to first the appearance of an initially weaker intensity satellite peak at Q\({}_{2}\) (at a smaller value than zero field Bragg peak at Q\({}_{1}\)). With further increase in the field, the main Bragg peak (Q\({}_{1}\)) suddenly merges with Q\({}_{2}\) giving rise to a step-like feature. Since the position and intensity of the Bragg peak gives the periodicity of the stripe domains and strength of domain scattering, we can conclude that the number of domains with periodicity P\({}_{1}\)= 2\(\pi\)/Q\({}_{1}\) decreases with increase in field while the number of domains of periodicity P\({}_{2}\) = 2\(\pi\)/Q\({}_{2}\) starts to increase and finally all the domains suddenly transform to the periodicity P\({}_{2}\). This sequence of events, changing Q from Q\({}_{1}\) to Q\({}_{7}\) with a similar mechanism of peak shifts (Q\({}_{1}\)\(\rightarrow\) Q\({}_{2}\); Q\({}_{3}\)\(\rightarrow\)Q\({}_{4}\); Q\({}_{5}\)\(\rightarrow\)Q\({}_{6}\)) was observed throughout the stripe phase (see Fig. 2(a)). In some cases, a direct change in Q-values corresponding to the Bragg peaks without any satellite (Q\({}_{2}\)\(\rightarrow\)Q\({}_{3}\); Q\({}_{4}\)\(\rightarrow\)Q\({}_{5}\)) was also observed.
In Fig.2(b) we convert the wavevector into real space periodicity (2\(\pi\)/\(Q\)) and plot it as a function of applied field at different temperatures. At higher temperatures the total number of steps increase which results in the appearance of the first step at much lower fields for higher temperatures than the lower ones. The plot of the correlation values of the stripe-diffraction spot at different fields with respect to the one at remanence for increasing magnetic fields is shown in Fig.2(c). Any subtle changes in the speckle pattern between two frames taken at 0 mT and H mT will result in a value of the correlation coefficient (CC) which is defined by
\[\mathbf{CC}=\frac{\sum_{m}\sum_{n}(A_{mn}-\bar{A})(B_{mn}-\bar{B})}{\sqrt{\sum_{m}\sum_{n}(A_{mn}-\bar{A})^{2}}\sqrt{\sum_{m}\sum_{n}(B_{mn}-\bar{B})^{2}}}, \tag{1}\]
where \(A\) and \(B\) correspond to the two images taken at two different field values. \(A_{mn}\) denotes the intensity value of the pixel at the \(m^{th}\) row and \(n^{th}\) column of the 2D scattering image, and \(\bar{A}\) is the mean value of the 2D image. If CC = 1 then the two images are perfectly correlated, CC = 0 means completely de-correlated, and CC values lying between 0 and 1 mean partially correlated. Thus the variation of the correlation coefficient can be attributed in real space to a change in magnetization, density, or periodicity of the stripes, or any combination of these factors, with applied field. In Fig.2(c) the CC is calculated between the scattering image taken at remanence (zero field) and another image taken at a higher field. In this way the CC plot in Fig.2(c) is generated with increasing field with respect to zero field. The correlation coefficient also exhibits a step-like behaviour (blue color line). As a control, we also calculated correlation coefficients for the Airy pattern, which remain fairly close to unity at all fields.
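For reference, the correlation coefficient of Eq. (1) between two speckle frames can be evaluated directly, as in the short sketch below (illustrative, not the analysis code used here).

```python
# Sketch of the correlation coefficient in Eq. (1) between a reference frame A
# (taken at 0 mT) and a frame B taken at field H; both are 2D intensity arrays.
import numpy as np

def correlation_coefficient(A, B):
    a = A - A.mean()
    b = B - B.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

# CC = 1 for identical speckle patterns and 0 for fully de-correlated ones.
```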
**Resolving staircase along Q\({}_{x}\) and Q\({}_{y}\) direction**
A typical diffraction pattern containing only the centro-symmetric first order peaks is represented in Fig.3(a) along with their in-plane Q-vectors. The enlarged image of the diffraction spot in Fig.3(b) exhibits modulation with speckles, indicative of heterogeneity in the magnetic phase. The diffraction spots appear at about 45\({}^{\circ}\) to the beam propagation direction (see Fig.3(a)), meaning the domains are oriented 45\({}^{\circ}\) to the X-ray propagation direction. This is due to the presence of a small in-plane field. We resolved the resultant **Q**-vector along the Q\({}_{x}\) and Q\({}_{y}\) components, to get information about the change in the stripe periodicity along the real space X and Y directions, thereby obtaining the real space values L\({}_{x}\) and L\({}_{y}\) as shown in the schematic representation in Fig.3(c). We find that the steps along L\({}_{x}\) are significantly distinct compared to those in the L\({}_{y}\) direction (see Fig.3(d and e)).
Interestingly, we found that the steps along L\({}_{x}\) change in multiples of 7 nm. That is, the minimum change in periodicity along L\({}_{x}\) is 7 nm. No such relationship was found in the L\({}_{y}\) direction. The magnitude of the change in L\({}_{y}\) is small and random compared to L\({}_{x}\) (Fig.3(d,e)). A schematic of a possible stripe domain arrangement is shown in Fig. 3(c). The blue (red) domains are majority (minority) domains. The stripes are slanted with respect to the applied field direction along z (there is a small in-plane field along the x-direction). We know from the experimental result that as the applied field is increased, the **Q**-vector of the magnetic Bragg peak moves to a lower value, but maintains its orientation of 45\({}^{\circ}\) with respect to the beam direction. This indicates that the minority domain shrinks but overall the stripe domain maintains the slant. Step changes in multiples of 7 nm along L\({}_{x}\) then mean that periodicity changes perpendicular to the beam direction, but along the small in-plane field direction, take place in multiples of 7 nm. Interestingly, this value matches the exchange length (\(L_{ex}\)) of the Fe/Gd thin film. One way to think about this behavior is that as the minority domains shrink, there is a minimum distance between domain walls below which there cannot be a smooth deformation of the spin texture. In the theory section we will show that indeed, by defining a term that signifies the ratio of the spin kink to the spin chain, it is possible to predict the jumps. At higher temperatures we observed an increase in the number of steps in the average-periodicity curves (see Fig.3(f and g)) as a function of applied OOP field.
This is due to the fact that thermal fluctuations aid in a faster transition from one step to the next; as a result we obtain a larger number of steps at 236 K than at 85 K, even though the field range over which such steps occur is much larger at the lower temperatures.
Existence of steps in solitonic systems with DMI has been observed experimentally and explained theoretically [19; 30]. The presence of DMI introduces a topologically protected kink in the spin texture. The topological protection of the kink means that there is an energy cost to kink annihilation. Different topological sectors have different energies, which is the reason for the step-like features. In contrast, in Fe/Gd thin films the dominant interactions are exchange, dipole, and anisotropy. This supports an achiral magnetic structure. So far there have been no theoretical studies of the step-like behaviour in dipole-interaction dominated achiral spin structures in an amorphous system. In the theoretical model presented in the next section, we have mimicked the experimental conditions by investigating a one-dimensional dipolar-mediated spin chain which is achiral in nature. We have numerically solved the Landau-Lifshitz (LL) equation of motion to understand the magnetization dynamics observed in the Fe/Gd thin film experiment. Based on our calculations we show that the origin of the step-like behaviour under the application of an external OOP magnetic field can be explained by the spin dynamics of an achiral spin chain.
**Model and theory**.
The spin kinks caused by long range dipolar interaction in the Fe/Gd thin film can be classified by a number \(n\). In Fig. 4 we show the arrangement of spins in a finite-size chain under zero applied magnetic field with fixed boundary condition on both ends. Both the local achiral structure and global achiral structure (describing the Fe/Gd thin film) are shown for comparison and context.
We consider a \(N\)-site 1D chain where spins interact with exchange interaction, dipolar interaction, PMA, and the in- and out- of plane magnetic field. The spin on each site is parameterized as
\[\mathbf{S}_{i}=(\sin\theta_{i}\cos\varphi_{i},\sin\theta_{i}\sin\varphi_{i},\cos \theta_{i}), \tag{2}\]
where the site spin angle \(\varphi_{i}=2\pi ni/\mathcal{N}\) and \(\theta_{i}=\frac{\pi}{2}\). Here \(i=0,1,2,...,\mathcal{N}\) where \(N=\mathcal{N}+1\). The kink sectors are classified by \(n\) which indicate the number of domains existing in the chain. The Hamiltonian for our Fe/Gd thin film is
\[H=H_{J}+H_{D}+H_{K}+H_{h}, \tag{3}\]
where the meaning and expression of each term is given by
\[H_{J}=-J\sum_{i\in\mathcal{N}}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}\ \ (\text{exchange}), \tag{4a}\]
\[H_{D}=D\sum_{i,j\in\text{sc}}\mathbf{S}_{i}\cdot\mathbf{S}_{j}\Pi_{ij}\ \ (\text{dipolar interaction}), \tag{4b}\]
\[H_{K}=-K_{U}\sum_{i}(\mathbf{S}_{i}\cdot\mathbf{x})^{2}\ \ (\text{anisotropy}), \tag{4c}\]
\[H_{h}=-g\mu_{B}H_{x}\sum_{i}S_{i}^{x}-g\mu_{B}H_{y}\sum_{i}S_{i}^{y}\ \ (\text{magnetic field}). \tag{4d}\]
Figure 2: **Evolution of stripe diffraction peak, periodicity and correlation with field.** (a) Plot of the q-vector of the satellite peaks as a function of the applied out-of-plane (OOP) magnetic field at 230 K as the system transitions from magnetic stripe phase to skyrmion phase. (b) Evolution of the stripe-periodicity with field at various temperatures showing discrete steps like feature. (c) Correlation coefficient values with respect to the remanent state (0 mT) for increasing magnetic field at 85K.
In the above \(i\) either denotes the lattice site in the 1D chain or the location of a spin site inside a supercell (sc). The exchange interaction strength is given by \(J\), the dipolar interaction coupling by \(D\), the anisotropy by \(K_{U}\), and the in- and out-of-plane magnetic field by \(H_{x}\) and \(H_{y}\), respectively. The symbol \(g\) denotes the gyromagnetic ratio and the \(\mu_{B}\) is the Bohr magneton. The \(\Pi_{ij}\) in the dipolar interaction term is the Ewald coefficient which captures the long-range nature of the dipolar interaction. Using the angular representation of the spin \(\mathbf{S}_{i}\) we can write the total energy as
\[\begin{split}\frac{H}{JS^{2}}&=-\sum_{i\in N}\cos(\varphi_{i+1}-\varphi_{i})+J_{d}\sum_{i,j\,\in\,\text{sc}}\Pi_{ij}\cos(\varphi_{i}-\varphi_{j})\\ &-K\sum_{i}\cos^{2}\varphi_{i}-h_{x}\sum_{i}\cos\varphi_{i}-h_{y}\sum_{i}\sin\varphi_{i},\end{split} \tag{5}\]
where we have now introduced the scaled variables \(J_{d}=\frac{D}{J}\), \(K=\frac{K_{U}}{J}\), \(h_{x}=\frac{\mu_{B}H_{x}}{J}\) and \(h_{y}=\frac{\mu_{B}H_{y}}{J}\). In all our figures we will report the scaled fields in milli-units, that is, \(h_{x}=1\) stands for
Figure 3: **Stripe orientation and staircase-like behaviour.** (a) A typical scattering pattern of the stripe lattice along with the projection of the in-plane Q-vectors. (b) Enlarged image of the stripe-diffraction spot in Q-space. (c) Schematic real-space view of the stripe-lattice orientation according to the scattering image of Fig. 2(a), where blue circles with a dot represent spins along the field direction, red circles with a cross represent spins opposite to the field direction, and \(L\) (with L\({}^{2}\) = L\({}_{x}^{2}\) + L\({}_{y}^{2}\)) corresponds to the periodicity of the stripe lattice. Plot of the evolution of (d) L\({}_{y}\) (= 2\(\pi\)/Q\({}_{y}\)) at 85 K and (e-g) L\({}_{x}\) (= 2\(\pi\)/Q\({}_{x}\)) as a function of the applied magnetic field at T = 85 K, 183 K and 236 K.
\(10^{-3}\) scaled field units.
We implement the local achiral spin structure shown in Fig. 4(a) to perform the LL simulation. To mimic the finite size of the experimental sample and to allow the domains to grow and collapse as observed experimentally, we utilized an embedding trick to simulate the LL equations of motion (EOM). To capture experimentally realistic sample conditions from a computational perspective, we introduced the concept of a local coefficient \(T\). From a physical perspective, \(T\) represents the ratio of the length of the achiral structure (which solely contains the twist sectors) to the length of the 1D chain. Thus, the achiral structure is embedded within a global ferromagnetic background spin texture. As we discuss later, the jumps occur due to the rearrangement of domain walls. The computational embedding trick allows us to capture the spontaneous rearrangement of the twist sectors in the chain configuration, thereby simulating the growth and collapse of the achiral domain walls. Our numerical simulations indicate that the eventual fate of the twist sectors and the subsequent realization of jumps (as observed experimentally) is governed by a subtle balance between \(J_{d}\), \(K\), and \(N\). We compute the minimum energy \(E_{min}\) using Eq. (5). The magnetization \(M\) is calculated using
\[M=\frac{1}{N}\sum_{i=0}^{N}\cos\varphi_{i}. \tag{6}\]
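To make the evaluation of Eqs. (5) and (6) concrete, the following minimal Python sketch (our illustration, not the authors' code) computes the scaled energy and magnetization of a twist-sector-\(n\) configuration \(\varphi_{i}=2\pi ni/\mathcal{N}\). It assumes the supercell dipolar sum is truncated to the five Ewald coefficients listed in Table 1, and the parameter values are placeholders taken from the caption of Fig. 5.

```python
import numpy as np

# Ewald coefficients Pi_m from Table 1 (truncated dipolar sum: an assumption of this sketch).
PI = {1: 2.00, 2: 1.72, 3: 1.32, 4: 0.85, 5: 0.38}

def chain_energy_and_magnetization(n, N=216, Jd=0.00962, K=0.2, hx=0.0, hy=1e-3):
    """Scaled energy H/(J S^2) of Eq. (5) and magnetization M of Eq. (6)."""
    i = np.arange(N + 1)
    phi = 2.0 * np.pi * n * i / N                 # twist-sector-n spin configuration
    energy = -np.sum(np.cos(np.diff(phi)))        # exchange term
    for m, Pi_m in PI.items():                    # truncated dipolar term
        energy += Jd * Pi_m * np.sum(np.cos(phi[:-m] - phi[m:]))
    energy += -K * np.sum(np.cos(phi) ** 2)       # anisotropy term
    energy += -hx * np.sum(np.cos(phi)) - hy * np.sum(np.sin(phi))  # Zeeman term
    magnetization = np.mean(np.cos(phi))          # Eq. (6)
    return energy, magnetization
```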
We present the energy and the corresponding magnetization response of the local achiral state in Fig. 5. When anisotropy is absent, we observe that the energy is degenerate for different twist sectors and no jumps are created by enhancing the dipolar interaction (see Fig. 5(a)-(c)). Moreover, larger dipolar parameters induce a downshift in energy with no visible effect on the magnetization behavior in the local achiral state. In the presence of anisotropy, we keep the dipolar interaction constant and increase the \(K\) parameter, as shown in Fig. 5(c)-(e). We compute the LL dynamics on a chain in the local achiral state with different anisotropy parameters. We find that, upon enhancing the anisotropy in the presence of a magnetic field, the energy degeneracy of the different twist sectors is broken with a simple upshift. With a relatively small \(T=\frac{1}{4}\) and a strong enough anisotropy \(K=0.2\), we observed jumps in both the energy and the magnetization in response to the magnetic field, as shown in Fig. 5(e).
In Fig. 5(f)-(j) we show our calculations of the energy and magnetization response as \(T\) is varied. With decreasing \(T\), jumps begin to appear in the energy response at higher twist sectors. When \(T>\frac{1}{2}\), jumps occur in the energy curves with \(n=6\), as shown in Fig. 5(f)-(h). For smaller \(T\), however, jumps occur in energy curves with smaller twist sectors \(n\) and at lower magnetic field intensity \(h_{x}\). In both Fig. 5(i) and Fig. 5(j), jumps occur when the twist sector is \(n\geqslant 4\), and the critical magnetic field intensity at which the first jump occurs decreases as \(T\) decreases. We find that the energy response more clearly reveals the disappearance of kinks, whereas jumps in the magnetization response might also be caused by position shifts of the kinks.
We also considered chains with a larger number of sites. When the number of sites is \(N=432\) and the dipolar parameter is \(J_{d}=0.00916\), jumps can be observed in the energy curve with twist sector \(n=4\) and local coefficient \(T=\frac{1}{4}\), whereas no jumps are observed with \(T=\frac{1}{3}\) or \(\frac{1}{2}\) (plots not shown). The same behavior is seen in a system with \(N=864\). The result that decreasing \(T\) contributes to the jumps is thus consistent with the \(N=216\) system. Moreover, when \(J_{d}\) increases, more kinks can be established and more jumps are observed. We therefore conclude that both a declining local coefficient \(T\) and a rising dipolar parameter \(J_{d}\) cause jumps to appear in the energy curve at smaller twist sector \(n\) and weaker magnetic field \(h_{x}\).
In the global achiral state (see Fig. 4(b)), there are no jumps in the energy and magnetization response caused by kinks disappearing. The spurious jumps and oscillations in energy and magnetization for the global achiral state are contributions from position shifts of the kinks. Hence, the 1D model in the local achiral state is better able to explain the origin of the jumps observed in the REXS experiment than the one in the global achiral state.
**Discussion**.
In this work we have shown experimentally that in an amorphous and achiral Fe/Gd magnetic thin film exhibiting aligned stripes, the domain periodicity changes in steps because of the abrupt disappearance of domains. This result is interesting in itself because, like DMI-based solitonic systems, the exchange-dipole-mediated Fe/Gd thin film shows step-like behavior even though global chirality is absent. Since the presence of DMI can be ruled out in the Fe/Gd thin film [27; 28], there is no inherent topological protection for the stabilized magnetic structure [23]. Thus, the achiral nature of the system prevents it from generating topologically stable spin twist sectors. In this sense, the spin twists
Figure 4: **Local and global achiral spin structure.** Finite-size chains with local and global achiral magnetic order generated by the competition between exchange and dipolar interactions are shown. (a) refers to a local achiral magnetic order, where only part of the chain exhibits the achiral texture, while (b) refers to a global achiral magnetic order, where the entire chain exhibits the achiral rotation. This conceptual difference forms the basis for introducing our local coefficient \(T\). The cartesian coordinate system shows the definition of the angle \(\varphi_{i}\) and the angle \(\theta\), which represent the angles in the \(x\)-\(y\) and the \(y\)-\(z\) plane, respectively.
that are formed due to the competition between exchange and dipolar interaction should be smoothly transformed to the ferromagnetic ground state by any finite deformation.
The existence of steps, as observed in the REXS experiments, indicates that along with global achirality there must be local structures with spin-twist characteristics. Intuitively, due to the achiral spin texture of the domains, as the magnetic field is increased, the minority domain starts to shrink, causing two "like-domains" to come closer. The minimum distance between the two domains is set by the two spin kinks on either side, which should be equivalent to the length of two domain walls. Using the well known formula \(l_{w}=\sqrt{J/K}\), where \(l_{w}\) is the domain wall width, the domain wall width for Fe/Gd comes out to \(\approx 3.2\) nm, twice which is \(6.4\) nm, in close agreement with the experimental value of \(7\) nm. Thus the minimum distance between two like-domains comes out to be equivalent to the exchange length (\(L_{\text{ex}}\)) of the system found in our experimental study. The above explanation also points to the existence of "global" and "local" length scales in the system, which give rise to two energy scales. It is these competing energy scales that give rise to steps. Our system is reminiscent of the case of modulated periodicities.
We take the achiral nature of the stripe spin structure as an important point in our theoretical development and show that the magnetization steps can indeed be observed in an achiral magnet. The variations and interplay of the length scales are captured in the parameter \(T\). Analysis of the energy expression with different values of \(T\) suggests that in an achiral spin arrangement a staircase structure can be observed only for certain specific ratios of the 1D spin-chain to spin-twist length scales. Although simplistic, our LL calculations using the local achiral spin structure shown in Fig. 4(a) are able to capture the essential feature that the system exhibits jumps in response to an external magnetic field. The jumps happen only when anisotropy is present. In the absence of anisotropy the energy response is degenerate for different twist sectors, meaning there are no jumps in the system. Our study provides evidence and further impetus to study achiral magnetic textures, both from an experimental and a theoretical viewpoint.
**Acknowledgements**. J. L. and D. X. Yao are supported by NKRDPC-2022YFA1402802, NKRDPC-2018YFA0306001, NSFC-92165204, NSFC-11974432, and Shenzhen International Quantum Academy. This work in the USA was primarily supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract no. DE-AC02-05-CH11231 within the Nonequilibrium Magnetic Materials Program (MSMAG). Work at the ALS, LBNL was supported by
Figure 5: **The energy and magnetization response for different \(T\) values**. In the upper panel, the dipolar parameter is \(J_{d}=0.00934\) and the anisotropy parameter is \(K=0.2\) in (c). \(J_{d}=0.00962,0.01004\) in (b) and (a), while the anisotropy parameter \(K\) remains the same as in (c). The anisotropy parameter is \(K=0.1,0.2\) in (d) and (e), while the dipolar parameter remains the same as in (c). In the lower panel, \(J_{d}=0.00962\) and \(K=0.2\). The local coefficients from left to right are \(T=\frac{1}{3},\frac{2}{9},\frac{1}{2},\frac{1}{2},\frac{1}{3}\), respectively. Red arrows in (h)-(j) represent the first jump of the energy curve with \(n=5\); the corresponding magnetic fields are \(h_{x}=10.3,9.2,6.9\). All results are calculated with the number of sites \(N=216\).
the Director, Office of Science, Office of Basic Energy Sciences, of the US Department of Energy (Contract no. DE AC02-05CH11231). The research at UCSD was supported by the National Science Foundation, Division of Materials Research (Award #: 2105400). T. D. acknowledges hospitality of KITP. A part of this research was completed at KITP and was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. T.D. acknowledges helpful and insightful discussions on domain dynamics with Ulrich Rossler.
**Author contributions.** S.R. and A.S. conceived the experiment. A.S. and S.R. performed the X-ray experiments. A.S., S.M., S.R., S.D.K., and P.F. analyzed the data and discussed the experimental interpretation. S.A.M. and E.E.F. synthesized the samples and performed the magnetic characterization. The theory was conceived by T.D., J.L., and D.X.Y. J.L. performed the calculations. T.D. and D.X.Y. checked the calculations. All authors contributed to the discussion and writing of the manuscript.
**Methods.**
**Experimental details.** The coherent X-ray magnetic scattering measurements were performed at beamline 12.0.2.2 of the Advanced Light Source, LBNL. The incident beam was tuned to the Fe L\({}_{3}\) edge (707 eV). Transverse coherence of the X-ray beam was established by inserting a 10 \(\upmu\)m pinhole in the beampath before the sample. The scattering experiment was done in the transmission geometry at temperatures ranging from 40 K to 300 K as a function of the OOP magnetic field from 0 mT to 500 mT (Fig. 1(a)). The sample was subjected to the following initial magnetic field protocol: first the field was raised to 500 mT, then lowered to -500 mT, and finally brought to zero before taking the measurements. The field ramp rate for the first two legs was 13 mT/sec, while the final drop of the field from -500 mT to 0 mT took place at a rate of 380 mT/sec. We start our measurement at this zero-field condition and proceed to measure the diffraction signal as a function of applied magnetic field at a constant rate of 1.575 mT/s. A Charge Coupled Device (CCD) camera placed about 0.5 m downstream of the sample was used to record the scattered intensity patterns.
**Theoretical method.**
_Ewald method._ In the Hamiltonian calculation, Ewald summation is applied, which is given by
\[\begin{split}\Pi_{ij}&=\sqrt{\frac{2}{\pi}}\frac{1}{3\sigma^{3}}\sum_{n}e^{-\frac{(r_{ij}-nL)^{2}}{2\sigma^{2}}}+\frac{4\pi}{\Omega}\sum_{k\neq 0}e^{-\frac{\sigma^{2}k^{2}}{2}}\cos(kr_{ij})\\ &-\sqrt{\frac{2}{\pi}}\frac{1}{3\sigma^{3}}\delta_{ij},\end{split} \tag{7}\]
where \(r_{ij}\) represents the distance between two spin sites, \(L\) is the size of the supercell, \(n\) is the supercell label, \(\sigma\) is the real-space cut-off, \(k\) is the momentum space label, and \(\Omega\) is the volume of the supercell, which in our case is equal to \(L\). The Ewald parameter is redefined as \(\Pi_{ij}=\Pi_{|i-j|}\equiv\Pi_{m}\), where the symbol \(m\) tracks the number of Ewald parameters for the specific supercell size choice. The values of \(\Pi_{m}\) are shown in Table 1.
_Landau-Lifshitz (LL) equation of motion._ We perform an LL EOM spin dynamics calculation on Eq. 3. We obtained an iterative equation which can be used to calculate the angle \(\varphi_{i}\) of each spin on the chain. Based on our computations, we are able to generate a stabilized spin order along the chain. Next, we computed the twist angle \(\Delta\varphi\) of the ground state in the absence of an external magnetic field using the energy minimization condition for the supercell, given by the expression
\[\frac{E}{JS^{2}}=-\cos\Delta\varphi+J_{d}\sum_{m=1}^{L-1}(L-m)\cos(m\Delta \varphi)\Pi_{m}. \tag{8}\]
The angle \(\varphi_{i}\) was analyzed to obtain the relationship between the range of \(J_{d}\) and the number of kinks for a given lattice size of \(N\) sites. The relationship between the number of sites \(N\), the dipolar parameter \(J_{d}\) and the maximal sector \(n_{max}\) is shown in Table 2. Note that to perform the calculation, one needs to choose a supercell size that stabilizes the ground state and ensures that there will be minimal to no numerical oscillations in the computed result due to convergence issues. We found that \(L=6\) is the optimal supercell size, yielding numerically stable results for our LL analysis. Using the numerically stable data, we computed the minimum energy \(E_{min}\) (scaled relative to \(JS^{2}\)) and magnetization \(M\) (scaled relative to \(S\)). To compare our numerical results with the experimental setup of the multilayer Fe-Gd system, we need to mimic the experimental conditions. Therefore, all the results are calculated by applying a tiny in-plane field \(h_{y}\).
The two angular variable EOMs are given by (for \(\hbar=1\))
\[S\sin\theta_{i}\partial_{t}\theta_{i}=\frac{\partial H}{\partial\varphi_{i}}, \hskip 8.535827ptS\sin\theta_{i}\partial_{t}\varphi_{i}=-\frac{\partial H}{ \partial\theta_{i}}, \tag{9}\]
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Number of sites** & **Dipolar \(J_{d}\)** & **Sector \(n_{max}\)** \\ \hline
216 & 0.00934 & 4 \\
216 & 0.00962 & 6 \\
216 & 0.01004 & 8 \\ \hline
432 & 0.00916 & 4 \\
432 & 0.00934 & 8 \\
432 & 0.00962 & 12 \\ \hline
864 & 0.00916 & 8 \\
864 & 0.00934 & 17 \\
864 & 0.00962 & 25 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Maximum allowed number of kink sectors \(n_{max}\) under different scaled dipolar parameter \(J_{d}\) for a given number of lattice sites \(N\). We introduce a sector \(n\) denoting the number of twisted magnetic structures in an achiral lattice. The system relaxes to the ground state with a maximal sector value \(n_{max}=\left[\frac{N\Delta\varphi}{2\pi}\right]\), where the square bracket implies the flooring function.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Ewald parameter symbol** & **Ewald parameter value** \\ \hline
\(\Pi_{1}\) & 2.00 \\
\(\Pi_{2}\) & 1.72 \\
\(\Pi_{3}\) & 1.32 \\
\(\Pi_{4}\) & 0.85 \\
\(\Pi_{5}\) & 0.38 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Ewald parameter symbols and the corresponding Ewald parameter values given by Eq. 7.
where only the first equation is required because the angle \(\theta\) is held constant. Using Eq. (9) we can obtain the following expressions
\[0=\frac{1}{2}\left[\sin(\varphi_{i}-\varphi_{i-1})-\sin(\varphi_{i +1}-\varphi_{i})\right]\] \[+J_{d}\sum\limits_{j\in\,\text{sc}}\left[\sin(\varphi_{i+|i-j|}- \varphi_{i})-\sin(\varphi_{i}-\varphi_{i-|i-j|})\right]\Pi_{ij} \tag{10}\] \[+2K\sin\varphi_{i}\cos\varphi_{i}+h_{x}\sin\varphi_{i}-h_{y}\cos \varphi_{i}.\]
The above can be split further into a form convenient for a numerical iterative self-consistent approach to solve for the angle \(\varphi_{i}\). Hence, we write
\[A_{i} =\tfrac{1}{2}(\sin\varphi_{i+1}+\sin\varphi_{i-1})-J_{d}\sum \limits_{j\in\,\text{sc}}(\sin\varphi_{i+|i-j|}\] \[\qquad+\sin\varphi_{i-|i-j|})\Pi_{ij}-K\sin\varphi_{i}+h_{y} \tag{11a}\] \[B_{i} =\tfrac{1}{2}(\cos\varphi_{i+1}+\cos\varphi_{i-1})-J_{d}\sum \limits_{j\in\,\text{sc}}(\cos\varphi_{i+|i-j|}\] \[\qquad+\cos\varphi_{i-|i-j|})\Pi_{ij}+K\cos\varphi_{i}+h_{x}, \tag{11b}\]
with the site angle \(\varphi_{i}\) defined as
\[\sin\varphi_{i}=\frac{A_{i}}{\sqrt{A_{i}^{2}+B_{i}^{2}}},\quad\cos\varphi_{i} =\frac{B_{i}}{\sqrt{A_{i}^{2}+B_{i}^{2}}}. \tag{12}\]
The LL equation is solved with the boundary condition \(\varphi_{0}=0\) and \(\varphi_{N}=2\pi n\).
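As an illustration of how Eqs. (11)-(12) can be iterated to a self-consistent solution with this boundary condition, the following Python sketch (ours, not the authors' code) relaxes a twist-sector-\(n\) chain. It assumes the supercell dipolar sums are truncated to the five Ewald coefficients of Table 1 and clamps sites beyond the chain ends to the boundary values, which simplifies the supercell treatment described above.

```python
import numpy as np

PI = [2.00, 1.72, 1.32, 0.85, 0.38]               # Pi_1 .. Pi_5 from Table 1 (assumed truncation)

def relax_chain(n, N=216, Jd=0.00962, K=0.2, hx=0.0, hy=1e-3, n_iter=5000):
    """Self-consistent iteration of Eqs. (11)-(12) with phi_0 = 0, phi_N = 2 pi n."""
    idx = np.arange(N + 1)
    phi = 2.0 * np.pi * n * idx / N               # twist-sector-n initial state
    for _ in range(n_iter):
        nn_up = phi[np.clip(idx + 1, 0, N)]       # nearest neighbours (clamped at the ends)
        nn_dn = phi[np.clip(idx - 1, 0, N)]
        A = 0.5 * (np.sin(nn_up) + np.sin(nn_dn)) - K * np.sin(phi) + hy
        B = 0.5 * (np.cos(nn_up) + np.cos(nn_dn)) + K * np.cos(phi) + hx
        for m, Pi_m in enumerate(PI, start=1):    # truncated dipolar contribution, Eq. (11)
            up, dn = phi[np.clip(idx + m, 0, N)], phi[np.clip(idx - m, 0, N)]
            A -= Jd * Pi_m * (np.sin(up) + np.sin(dn))
            B -= Jd * Pi_m * (np.cos(up) + np.cos(dn))
        phi = np.arctan2(A, B)                    # Eq. (12): fixes phi_i from the sin/cos ratio
        phi[0], phi[-1] = 0.0, 2.0 * np.pi * n    # enforce the fixed boundary condition
    return phi
```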
|
2302.07437
|
Bridging the Usability Gap: Theoretical and Methodological Advances for
Spectral Learning of Hidden Markov Models
|
The Baum-Welch (B-W) algorithm is the most widely accepted method for
inferring hidden Markov models (HMM). However, it is prone to getting stuck in
local optima, and can be too slow for many real-time applications. Spectral
learning of HMMs (SHMM), based on the method of moments (MOM) has been proposed
in the literature to overcome these obstacles. Despite its promises, asymptotic
theory for SHMM has been elusive, and the long-run performance of SHMM can
degrade due to unchecked propagation of error. In this paper, we (1) provide an
asymptotic distribution for the approximate error of the likelihood estimated
by SHMM, (2) propose a novel algorithm called projected SHMM (PSHMM) that
mitigates the problem of error propagation, and (3) develop online learning
variants of both SHMM and PSHMM that accommodate potential nonstationarity. We
compare the performance of SHMM with PSHMM and estimation through the B-W
algorithm on both simulated data and data from real world applications, and
find that PSHMM not only retains the computational advantages of SHMM, but also
provides more robust estimation and forecasting.
|
Xiaoyuan Ma, Jordan Rodu
|
2023-02-15T02:58:09Z
|
http://arxiv.org/abs/2302.07437v3
|
Bridging the Usability Gap: Theoretical and Methodological Advances for Spectral Learning of Hidden Markov Models
###### Abstract
The Baum-Welch (B-W) algorithm is the most widely accepted method for inferring hidden Markov models (HMM). However, it is prone to getting stuck in local optima, and can be too slow for many real-time applications. Spectral learning of HMMs (SHMM), based on the method of moments (MOM) has been proposed in the literature to overcome these obstacles. Despite its promises, asymptotic theory for SHMM has been elusive, and the long-run performance of SHMM can degrade due to unchecked propagation of error. In this paper, we (1) provide an asymptotic distribution for the approximate error of the likelihood estimated by SHMM, (2) propose a novel algorithm called projected SHMM (PSHMM) that mitigates the problem of error propagation, and (3) develop online learning variants of both SHMM and PSHMM that accommodate potential nonstationarity. We compare the performance of SHMM with PSHMM and estimation through the B-W algorithm on both simulated data and data from real world applications, and find that PSHMM not only retains the computational advantages of SHMM, but also provides more robust estimation and forecasting.
_Keywords:_ hidden Markov models (HMM), spectral estimation, projection-onto-simplex, online learning, time series forecasting
## 1 Introduction
The hidden Markov model (HMM) (Baum and Petrie, 1966) is a widespread model with applications in many areas, such as finance, natural language processing and biology. An HMM is a
stochastic probabilistic model for sequential or time series data that assumes that the underlying dynamics of the data are governed by a Markov chain (Knoll et al., 2016).
The Baum-Welch algorithm (Baum et al., 1970), which is a special case of the expectation-maximization (E-M) algorithm (Dempster et al., 1977) based on maximum likelihood estimation (MLE), is a popular approach for inferring the parameters of the HMM. However, the E-M algorithm can require a large number of iterations until the parameter estimates converge, which incurs a large computational cost, especially for large-scale time series data, and it can easily get trapped in local optima. In order to overcome these issues, especially for large, high-dimensional time series, Hsu et al. (2012) proposed a spectral learning algorithm for HMMs (SHMM), based on the method of moments (MOM), with attractive theoretical properties. However, the asymptotic error distribution of the algorithm was not well characterized. Later, Rodu (2014) improved the spectral estimation algorithm and extended it to HMMs with high-dimensional and continuously-distributed output, but again did not address the asymptotic error distribution. In this manuscript, we provide a theoretical discussion of the asymptotic error behavior of SHMM algorithms.
In addition to investigating the asymptotic error distribution, we provide a novel improvement to the SHMM family of algorithms. Our improvement is motivated by an extensive simulation study of the methods proposed in Hsu et al. (2012) and Rodu (2014). We found that spectral estimation does not provide stable results in low signal-to-noise ratio settings. We propose a new spectral estimation method, the projected SHMM (PSHMM), that leverages a novel regularization technique that we call 'projection-onto-simplex' regularization. The PSHMM largely retains the computational advantages of SHMM methods without sacrificing accuracy.
Finally, we provide a novel extension of spectral estimation (including all SHMM and PSHMM approaches) to allow for online learning. We propose two approaches: the first reduces the computational time required for learning a model in large-data settings, and the second incorporates "forgetfulness", which allows for adapting to changing dynamics of the data. This speed and flexibility are crucial, for instance, in high-frequency trading, and we show the effectiveness of the PSHMM on real data in this setting.
The structure of this paper is as follows: In the rest of this section, we will introduce existing models. In Section 2, we provide theorems for the asymptotic properties of SHMMs. Section 3
introduces our new method, PSHMM. In Section 4, we extend spectral estimation to online learning for both SHMM and PSHMM. Section 5 then presents the simulation results, and Section 6 shows the application to high-frequency trading. We provide extensive discussion in Section 7.
### The hidden Markov model
The standard HMM (Baum and Petrie, 1966) is defined by a set of \(S\) hidden categorical states \(1,2,\cdots,S\) that evolve according to a Markov chain. We denote the hidden state at time \(t\) as \(h_{t}\). The Markov chain is characterized by an initial probability \(\pi_{0}=[\pi_{0}^{(1)},\cdots,\pi_{0}^{(S)}]\) where \(h_{1}\sim Multinomial(\pi_{0}^{(1)},\cdots,\pi_{0}^{(S)})\), and a transition matrix \(\mathbf{T}=[\mathbf{T}_{ij}]_{i=1,\cdots,S}^{j=1,\cdots,S}\) where \(\mathbf{T}_{ij}=\mathrm{P}(h_{t+1}=j|h_{t}=i)\) for \(\forall t\). The emitted observation \(X_{t}\) is distributed conditional on the value of the hidden state at time \(t\), \(X_{t}|h_{t}=s\sim\mathcal{F}_{s}\) where \(\mathcal{F}_{s}\) is the emission distribution conditioned on the hidden state \(h_{t}=s\). The standard HMM's parameters are \(\big{(}\pi_{0},\mathbf{T},\{\mathcal{F}_{s}\}_{s=1}^{S}\big{)}\). Figure 1 is a graphical representation of the standard HMM. Typically, if the emission follows a Gaussian distribution, then we call it Gaussian HMM (GHMM).
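For intuition, a minimal sketch of simulating such a model is given below; the parameters pi0, T_mat, means and covs are illustrative placeholders in the notation above, not quantities from the paper.

```python
import numpy as np

def simulate_ghmm(pi0, T_mat, means, covs, length, seed=0):
    """Sample a state path and observations from a Gaussian HMM."""
    rng = np.random.default_rng(seed)
    S = len(pi0)
    h = rng.choice(S, p=pi0)                                     # h_1 ~ Multinomial(pi_0)
    states, obs = [], []
    for _ in range(length):
        obs.append(rng.multivariate_normal(means[h], covs[h]))   # X_t | h_t = s ~ F_s (Gaussian)
        states.append(h)
        h = rng.choice(S, p=T_mat[h])                            # h_{t+1} | h_t from row h_t of T
    return np.array(states), np.array(obs)
```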
### Spectral learning of HMM
The model proposed by Rodu (2014) is shown in Figure 2. Again, \(h_{t}\) denotes the hidden state at time \(t\), and \(X_{t}\) the emitted observation. For estimation of the model, these observations are further projected onto a lower dimensional space of dimensionality \(d\) and the lower dimensional
Figure 1: Model structure of standard HMM. \(\{h_{t}\}\) is a latent Markov chain that evolves according to transition matrix \(\mathbf{T}\). For each time stamp \(t\), the observed \(X_{t}\) is generated according to the emission distribution associated with \(h_{t}\).
observations are denoted \(y_{t}=U^{\top}x_{t}\). We discuss the choice of dimension \(d\) and the projection of the observations in Section 3.3.
Using the spectral estimation framework, the likelihood can be written as:
\[Pr(x_{1:t}) = c_{\infty}^{\top}C(y_{t})C(y_{t-1})\cdots C(y_{1})c_{1}, \tag{1}\]
where
\[c_{1}=\mu,c_{\infty}^{\top}=\mu^{\top}\Sigma^{-1},C(y)=K(y)\Sigma^{-1},\]
\[\mu=E(y_{1})=U^{\top}M\pi,\]
\[\Sigma=E(y_{2}y_{1}^{\top})=U^{\top}MTdiag(\pi)M^{\top}U,\]
\[K(a)=\mathrm{E}(y_{3}y_{1}^{\top}y_{2}^{\top})a=U^{\top}MTdiag(M^{\top}Ua)Tdiag (\pi)(M^{\top}U),\]
\[M=[M_{1},\cdots,M_{S}]\mbox{ where }M_{i}=\mathrm{E}(X|i).\]
These quantities can be empirically estimated as:
\[\widehat{Pr}(x_{1:t}) = \hat{c}_{\infty}^{\top}\hat{C}(y_{t})\hat{C}(y_{t-1})\cdots\hat{ C}(y_{1})\hat{c}_{1}, \tag{2}\]
Figure 2: Spectral estimation model by Rodu (2014). In addition to containing the latent state series \(\{h_{t}\}_{t}\) and observed series \(\{X_{t}\}_{t}\), Rodu (2014) introduced a reduced-dimensional series \(\{Y_{t}=U^{\top}X_{t}\}\) which is a projection of \(X_{t}\) on a lower-dimensional subspace whose dimensionality is equal to the number of hidden states. Spectral estimation proceeds based on \(\{Y_{t}\}_{t}\).
where
\[\hat{c}_{1}=\hat{\mu},\hat{c}_{\infty}^{\top}=\hat{\mu}^{\top}\hat{ \Sigma}^{-1},\hat{C}(y)=\hat{K}(y)\hat{\Sigma}^{-1},\] \[\hat{\mu}=\frac{1}{N}\sum_{i=1}^{N}Y_{i},\] \[\hat{\Sigma}=\frac{1}{N}\sum_{i=1}^{N-1}Y_{i+1}Y_{i}^{\top},\] \[\hat{K}(y)=\frac{1}{N}\sum_{i=1}^{N-2}Y_{i+2}Y_{i}^{\top}\cdot Y_ {i+1}^{\top}y.\]
Prediction of \(y_{t}\) is computed recursively by:
\[\hat{y}_{t} = \frac{C(y_{t-1})\hat{y}_{t-1}}{c_{\infty}^{\top}C(y_{t-1})\hat{y} _{t-1}}; \tag{3}\]
Observation \(x_{t}\) can be recovered as:
\[\hat{x}_{t}|x_{1},x_{2},\cdots,x_{t-1}=U\hat{y}_{t}. \tag{4}\]
In the above exposition of spectral likelihood estimation, moment estimation, and recursive forecasting we assume a discrete output HMM. For a continuous output HMM, the spectral estimation of likelihood is slightly different. We need some kernel function \(G(x)\) to calculate \(K\), so \(K(a)=U^{\top}MTdiag(M^{\top}UG(a))Tdiag(\pi)(M^{\top}U)\) (for more on \(G(\cdot)\) see Rodu, 2014). In this paper, we will use a linear kernel \(G(a)=a\) for simplicity. In this case, the moment estimation and recursive forecasting for the continuous case are identical to the discrete case. See Rodu et al. (2013) for detailed derivations and mathematical proofs of these results.
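To make the estimators and the recursion concrete, the sketch below (an illustration, not the authors' implementation) computes empirical moments in the spirit of Eq. (2) from the reduced series \(y_{t}=U^{\top}x_{t}\) and runs the recursive forecast of Eqs. (3)-(4), assuming the linear kernel \(G(a)=a\) used in this paper.

```python
import numpy as np

def fit_shmm_moments(Y):
    """Empirical first, second and third moments of the reduced series Y (shape T x d)."""
    mu = Y.mean(axis=0)
    Sigma = (Y[1:].T @ Y[:-1]) / (len(Y) - 1)                               # E[y_{t+1} y_t^T]
    K = np.einsum('ti,tj,tk->ijk', Y[2:], Y[:-2], Y[1:-1]) / (len(Y) - 2)   # E[y_3 (x) y_1 (x) y_2]
    return mu, Sigma, K

def recursive_forecast(Y, mu, Sigma, K):
    """One-step-ahead prediction of y_{T+1} via the recursion of Eq. (3)."""
    Sinv = np.linalg.inv(Sigma)
    c_inf = Sinv.T @ mu                                          # c_inf^T = mu^T Sigma^{-1}
    y_hat = mu                                                   # y_hat_1 = c_1 = mu
    for y in Y:                                                  # feed the observed y_1, ..., y_T
        C = (K @ y) @ Sinv                                       # C(y) = K(y) Sigma^{-1}
        y_hat = C @ y_hat / (c_inf @ C @ y_hat)
    return y_hat                                                 # prediction of y_{T+1}; x_hat = U y_hat
```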
## 2 Theoretical Properties of SHMM
In general, SHMM estimates the likelihood through the MOM. Although MOM gives fast approximation, the theoretical properties of SHMM estimation are less well studied. Hsu et al. (2012) and Rodu et al. (2013) give the conditions where the spectral estimator converges to the true likelihood almost surely:
\[\widehat{Pr}(x_{1:T}) = \hat{c}_{\infty}^{\top}\hat{C}(y_{t})\hat{C}(y_{t-1})\cdots\hat{C }(y_{1})\hat{c}_{1}\xrightarrow{a.s.}Pr(x_{1:T}), \tag{5}\]
In this manuscript, we study the asymptotic distribution of \(\hat{Pr}(x_{1:T})-Pr(x_{1:T})\). Theorem 1 gives a CLT-type characterization of the approximation error.
First, however, we identify the sources of error for the SHMM in Lemma 1.
Lemma 1 makes use of what we refer to as the '\(\Delta\)' terms, defined through the following equations: \(\widehat{\mu}=\mu+\widehat{\Delta\mu}\), \(\widehat{\Sigma}=\Sigma+\widehat{\Delta\Sigma}\), \(\widehat{K}=K+\widehat{\Delta K}\).
**Lemma 1**.: \[\widehat{Pr}(x_{1:T}) = Pr(x_{1:T})+(v+\tilde{v})^{\top}\widehat{\Delta\mu}+\sum_{t=1}^ {T}a_{t}^{\top}\widehat{\Delta K}(y_{t})\tilde{a_{t}}-\sum_{t=0}^{T}b_{t}^{ \top}\widehat{\Delta\Sigma}\tilde{b_{t}}+{\cal O}_{p}(N^{-1}),\]
_where_
\[v=\left(\mu^{\top}\Sigma^{-1}K(y_{T})\cdots K(y_{1})\Sigma^{-1 }\right)^{\top};\qquad\tilde{v}=\Sigma^{-1}K(y_{T})\cdots K(y_{1})\Sigma^{-1 }\mu;\] \[a_{t}=\left(\mu^{\top}\Sigma^{-1}K(y_{T})\Sigma^{-1}\cdots K(y_ {t+1})\Sigma^{-1}\right)^{\top};\qquad\tilde{a_{t}}=\Sigma^{-1}K(y_{t-1}) \cdots K(y_{1})\Sigma^{-1}\mu;\] \[b_{t}=\left(\mu^{\top}\Sigma^{-1}K(y_{T})\Sigma^{-1}\cdots \Sigma^{-1}K(y_{t+1})\Sigma^{-1}\right)^{\top};\qquad\tilde{b_{t}}=\Sigma^{-1 }K(y_{t})\Sigma^{-1}\cdots K(y_{1})\Sigma^{-1}\mu.\]
We provide a detailed proof of this lemma in the supplementary material. The basic strategy is to fully expand \(\widehat{Pr}(x_{1:T})-Pr(x_{1:T})\) after rewriting the estimated quantities as a sum of the true quantity plus an error term. We then categorize each summand based on how many '\(\Delta\)' terms it has. There are three categories: terms with zero '\(\Delta\)' terms (i.e. the true likelihood \(Pr(x_{1:T})\)), terms with only one '\(\Delta\)' (i.e. \((v+\tilde{v})^{\top}\widehat{\Delta\mu}+\sum_{t=1}^{T}a_{t}^{\top}\widehat{ \Delta K}(y_{t})\tilde{a_{t}}-\sum_{t=0}^{T}b_{t}^{\top}\widehat{\Delta\Sigma }\tilde{b_{t}}\)), and all remaining terms, which involve at least two '\(\Delta\)' quantities, and can be relegated to \({\cal O}_{p}(N^{-1})\).
Lemma 1 shows how the estimation error propagates to the likelihood approximation. We can leverage the fact that our moment estimators have a central limit theorem (CLT) property to obtain the desired results in Theorem 1.
We denote the outer product as \(\otimes\), and define a "flattening" operator \({\cal F}(\cdot)\) for both matrices and 3-way tensors. For matrix \(A_{d\times d}\),
\[{\cal F}(A)=[A^{(1,1)},A^{(1,2)},\cdots,A^{(d,d)}]^{\top};\]
For tensor \(B_{d\times d\times d}\),
\[{\cal F}(B)=[B^{(1,1,1)},B^{(1,1,2)},\cdots,B^{(1,1,d)},B^{(1,2,1)},B^{(1,2,2 )},\cdots,B^{(1,2,d)},\cdots,B^{(1,d,d)},\cdots,B^{(d,d,d)}]^{\top}\]
We now state and prove our main theorem.
**Theorem 1**.: \[\sqrt{N}(\widehat{P}r(x_{1:T})-Pr(x_{1:T})) \xrightarrow{d} N\left(0,\beta^{\top}Cov\left(\begin{bmatrix}Y_{1}\\ \mathcal{F}(Y_{2}\otimes Y_{1})\\ \mathcal{F}(Y_{3}\otimes Y_{1}\otimes Y_{2})\end{bmatrix}\right)\beta\right),\]
_where_
\[\beta=\left[(v+\tilde{v})^{\top};-\left(\sum_{t=0}^{T}\mathcal{F}(b_{t}\otimes \tilde{b}_{t})\right)^{\top};\left(\sum_{t=1}^{T}\mathcal{F}(a_{t}\otimes \tilde{a}_{t}\otimes y_{t})\right)^{\top}\right]^{\top}\]
_and \(v,\tilde{v},a_{t},\tilde{a}_{t},b_{t},\tilde{b}_{t}\) are defined as in Lemma 1._
**Proof** (Theorem 1).: _We flatten \(\widehat{\Delta\Sigma}\) and \(\widehat{\Delta K}\) as_
\[\mathcal{F}(\widehat{\Delta\Sigma}) = [\widehat{\Delta\Sigma}^{(1,1)},\widehat{\Delta\Sigma}^{(1,2)}, \cdots,\widehat{\Delta\Sigma}^{(d,d)}]^{\top},\] \[\mathcal{F}(\widehat{\Delta K}) = [\widehat{\Delta K}^{(1,1,1)},\widehat{\Delta K}^{(1,1,2)}, \cdots,\widehat{\Delta K}^{(d,d,d)}]^{\top}.\]
_Rewriting \(a_{t}^{\top}\widehat{\Delta K}(y_{t})\tilde{a_{t}}\) and \(b_{t}^{\top}\widehat{\Delta\Sigma}\tilde{b_{t}}\) in Eq 6 as_
\[a_{t}^{\top}\widehat{\Delta K}(y_{t})\tilde{a_{t}} = \mathcal{F}(a_{t}\otimes\tilde{a}_{t}\otimes y_{t})^{\top}\cdot \mathcal{F}(\widehat{\Delta K}),\] \[b_{t}^{\top}\widehat{\Delta\Sigma}\tilde{b_{t}} = \mathcal{F}(b_{t}\otimes\tilde{b}_{t})^{\top}\cdot\mathcal{F}( \widehat{\Delta\Sigma}),\]
_we have_
\[\widehat{P}r(x_{1:T})-Pr(x_{1:T})-O_{p}(N^{-1})\] \[= \left[(v+\tilde{v})^{\top};-\left(\sum_{t=0}^{T}\mathcal{F}(b_{ t}\otimes\tilde{b}_{t})\right)^{\top};\left(\sum_{t=1}^{T}\mathcal{F}(a_{t} \otimes\tilde{a}_{t}\otimes y_{t})\right)^{\top}\right]\cdot\begin{bmatrix} \widehat{\Delta\mu}\\ \mathcal{F}(\widehat{\Delta\Sigma})\\ \mathcal{F}(\widehat{\Delta K})\end{bmatrix}\] \[= \beta^{\top}\cdot\widehat{\Delta\theta}.\]
_Since the CLT applies seperately to \(\widehat{\Delta\mu},\widehat{\Delta\Sigma},\widehat{\Delta K}\), then_
\[\sqrt{N}\widehat{\Delta\theta} = \sqrt{N}\begin{bmatrix}\frac{1}{N}\sum_{i=1}^{N}Y_{i,1}-\mu\\ \mathcal{F}(\frac{1}{N}\sum_{i=1}^{N}Y_{i,2}\otimes Y_{i,1}-\Sigma)\\ \mathcal{F}(\frac{1}{N}\sum_{i=1}^{N}Y_{i,3}\otimes Y_{i,1}\otimes Y_{i,2}-K) \end{bmatrix}\xrightarrow{d} MVN\left(0,Cov\left(\begin{bmatrix}Y_{1}\\ \mathcal{F}(Y_{2}\otimes Y_{1})\\ \mathcal{F}(Y_{3}\otimes Y_{1}\otimes Y_{2})\end{bmatrix}\right)\right).\]
_Therefore,_
\[\sqrt{N}(\widehat{P}r(x_{1:T})-Pr(x_{1:T})) \xrightarrow{d} N\left(0,\beta^{\top}Cov\left(\begin{bmatrix}Y_{1}\\ \mathcal{F}(Y_{2}\otimes Y_{1})\\ \mathcal{F}(Y_{3}\otimes Y_{1}\otimes Y_{2})\end{bmatrix}\right)\beta\right).\]
A brief aside: experimental validation of Theorem 1. We performed a series of experiments to validate the conclusions of Theorem 1. We generated a target series \(x_{1:T}\) with \(x_{i}\in\mathbb{R}^{3}\) and \(T=30,100\) using a 3-state GHMM, where the initial probabilities and sticky transition probabilities are as described in Section 5.1.1, and with discrete emission probability matrix \([[0.8,0.1,0.1]^{\top},[0.1,0.8,0.1]^{\top},[0.1,0.1,0.8]^{\top}]^{\top}\). Parameters \(\widehat{\mu},\widehat{\Sigma}\), and \(\widehat{K}\) were estimated using training samples generated under the same model. Specifically, \(N\) i.i.d. samples \(Y_{1}^{(\mu)}\) were used to estimate \(\widehat{\mu}\), \(N\) i.i.d. samples of \((Y_{1}^{(\Sigma)},Y_{2}^{(\Sigma)})\) to estimate \(\widehat{\Sigma}\), and \(N\) i.i.d. samples of \((Y_{1}^{(K)},Y_{2}^{(K)},Y_{3}^{(K)})\) to estimate \(\widehat{K}\). We chose training sets of size \(N=5000,10000,50000,100000\) and for each \(N\) we replicated the experiment 1000 times. For each replication, we estimated
Figure 3: Empirical histograms of \(\hat{P}r(x_{1:T})-Pr(x_{1:T})\) estimated under different training size \(N\) and length \(T\) and theoretical density calculated based on Theorem 1. Each subfigure is associated with a different \(N\). As \(N\) increases, the distribution converges to the theoretical normal distribution. Also, when \(T\) is smaller, the estimation error converges more quickly to the asymptotic distribution.
\(\hat{Pr}(x_{1:T})^{(N,r)}\), where \(r\) indexes the replication. For each \(N\), we construct the histogram of the estimation errors, i.e. \(\{\hat{Pr}(x_{1:T})^{(N,r)}\}_{r}\), against the theoretical probability density function (pdf), which by Theorem 1 should converge to a normal distribution as \(N\) grows larger. In Figure 3 we indeed see
Figure 4: Histogram of the first-order error from the first, second and third moment estimation error (i.e. \((v+\tilde{v})^{\top}\widehat{\Delta\mu}\), \(\sum_{t=0}^{T}b_{t}^{\top}\widehat{\Delta\Sigma}\tilde{b_{t}}\), and \(\sum_{t=1}^{T}a_{t}^{\top}\widehat{\Delta K}(y_{t})\tilde{a_{t}}\)) under different training sizes \(N\) with fixed length \(T=30\) vs. the theoretical pdf calculated based on Theorem 1 (red line). Each subfigure is associated with a different \(N\). As \(N\) increases, the distribution converges to the theoretical normal distribution.
the desired effect. As \(N\) grows larger we see that the distribution of the estimated likelihood converges to the normal distribution, and with a shorter length \(T\), the error converges faster. We also separately analyzed the asymptotic behaviour of the first-order estimation error from first moment error, \((v+\tilde{v})^{\top}\widehat{\Delta\mu}\), second moment error, \(\sum_{t=0}^{T}b_{t}^{\top}\widehat{\Delta\Sigma}\tilde{b}_{t}\), and third moment error, \(\sum_{t=1}^{T}a_{t}^{\top}\widehat{\Delta K}(y_{t})\tilde{a}_{t}\).
Figure 5: Histogram of the Frobenius norm of the first, second and third moment estimation error (i.e. \(\mu\), \(\Sigma\) and \(K\)) under different training size \(N\) vs. the theoretical pdf (red line). Here the red line is the theoretical Chi-squared distribution. Each subfigure is associated with a different \(N\). As \(N\) increases, the distribution converges to the theoretical distribution.
As shown in Figure 4, we found that the third moment estimation error dominates the error terms and has the largest contribution, as shown in Table 1. We further analyzed the asymptotic distribution of the Frobenius norm of the first, second and third moment estimation errors (i.e. for \(\mu\), \(\Sigma\) and \(K\)). Figure 5 shows the empirical histogram and its corresponding theoretical pdf.
We note that there are two facets to the error when estimating the likelihood. The first stems from the typical CLT-type error in estimating the parameters of the model (i.e. for smaller \(N\) the \(\mathcal{O}_{p}(N^{-1})\) term is not small enough). The second is that any error introduced into the system from estimation can propagate under forward recursion. To achieve a stable normal distribution given the second issue, \(N\) must be much greater than \(T\). In our simulation (\(T=100\)), at \(N=10000\) we see reasonable evidence of asymptotic normality, while at \(N=5000\) we do not. When \(N=T\) or \(N\) is only slightly larger than \(T\), the distribution is heavy-tailed; that is, there can be outliers. This also suggests adding some form of regularization.
For simplicity of presentation, we derived Theorem 1 under the assumption that the output distribution is discrete. For continuous output, the CLT still holds with a proper kernel function \(G(\cdot)\) as mentioned in section 1.2.
\begin{table}
\end{table}
Table 1: Theoretical variance for first, second, third moment estimation errors based on simulated data with different \(T\).
## 3 Projected SHMM
### Motivation for Adding Projection
In the Baum-Welch algorithm (Baum et al., 1970), when we make a prediction, we are effectively predicting the belief probabilities, or weights, for each underlying hidden state. Denote the predicted weight vector at time \(t\) as \(\hat{w}_{t}\). Then the prediction can be expressed as a weighted combination of cluster means \(\hat{y}_{t}=M\hat{w}_{t}\) where \(||\hat{w}_{t}||_{1}=1\). The weights are explicitly guaranteed to be non-negative and sum to 1 during forward propagation in the Baum-Welch algorithm, which is consistent with their physical meaning. However, these two constraints are not explicitly implemented in spectral estimation since SHMM doesn't estimate the weights directly. Therefore, SHMM can sometimes give predictions which are far away from the polyhedron spanned by the cluster means. In order to solve this problem, we propose the projected SHMM, where projection serves to regularize the predictions to be within a reasonable range.
This is particularly important when \(N\) is not sufficiently large, and extreme deviations of the estimated likelihood from the true likelihood can occur when error is propagated over time. Regularization can help stabilize the performance of estimation of the likelihood by limiting this propagation of error.
### Projection-onto-polyhedron and Projection-onto-Simplex SHMM
There are two ways to achieve projection for our problem: projection-onto-polyhedron and projection-onto-simplex. Projection-onto-polyhedron is derived directly from the motivation for using projections in SHMM, but suffers from high computational cost. To obtain a better computational performance, we propose projection-onto-simplex as an alternative.
#### 3.2.1 Projection-onto-Polyhedron
Projection-onto-polyhedron SHMM first generates prediction \(\hat{y}_{t}^{(SHMM)}\) through the standard SHMM and then projects it onto the polyhedron with vertices \(\widehat{M}\). In other words, we find the point on the polyhedron spanned by \(\widehat{M}\) that is nearest to the predicted \(\hat{y}_{t}^{(SHMM)}\). We can use any distance to define "nearest point" but in our exposition we use Euclidean distance. Mathematically, we
substitute the recursive forecasting in Eq 3 with
\[\hat{y}_{t}^{(SHMM)} = \frac{C(y_{t-1})\hat{y}_{t-1}}{c_{\infty}^{\top}C(y_{t-1})\hat{y}_{t -1}};\] \[\hat{y}_{t} = \mathop{\arg\min}_{y\in Poly(\widehat{M})}d(y,\hat{y}_{t}^{(SHMM)}), \tag{6}\]
where \(d(\cdot,\cdot)\) is the distance function (such as Euclidean distance), and
\[Poly(\widehat{M})=\{y=\widehat{M}w|w\ is\ on\ the\ simplex\}\]
is the polyhedron with vertices \(\widehat{M}\). This results in a convex optimization problem if the distance is convex, which is true for common distance functions such as the Euclidean distance. We can solve this using standard convex optimization methods such as the Newton-Raphson algorithm (Boyd et al., 2004), with variants allowing linear constraints such as the log-barrier methods (Frisch, 1955). To the best of our knowledge, there is no dedicated algorithm for solving projection-onto-polyhedron, and unfortunately finding a fast solution seems to be challenging. The approach we take is to write the loss function of the constrained problem as the loss function plus an indicator function, and to use the log-barrier method to approximate the linear constraints through log-barrier functions. We then use the Newton-Raphson algorithm to optimize this approximated loss function, iteratively relaxing the approximation and solving it until convergence. Note that this optimization must be done at every time step. This implies a trade-off between the accuracy of the approximation and the cost of the optimization. Recall that we turn to SHMM because it is faster than the Baum-Welch algorithm, so any modification should not slow down the computation too much; otherwise it would undermine one of its main advantages.
#### 3.2.2 Projection-onto-Simplex
To obtain higher efficiency in computation, we propose a second projection regularization method: projection-onto-simplex. It leverages an algorithm that allows us to calculate the projection with time complexity \(\mathcal{O}(d\log(d))\)(Wang and Carreira-Perpinan, 2013). To avoid projection onto a polyhedron, we leverage the fact that \(\hat{y}_{t}=\widehat{M}\hat{w}_{t}\) and optimize over \(\hat{w}_{t}\) which lies on the simplex. Mathematically, the optimization problem becomes
\[\hat{w}_{t} = \mathop{\arg\min}_{w\in Simplex}||w-\widehat{M}^{-1}\hat{y}_{t}^{ (SHMM)}||_{2}^{2}. \tag{7}\]
This solution is not equivalent to the solution from projection-onto-polyhedron, because \(d(a,b)\neq d(Aa,Ab)\) in general. However, the feasible set is the same, i.e. the predictions are in both cases guaranteed to be constrained to the polyhedron. The solution of Eq 7 can be obtained in closed form as given in Algorithm 1, which avoids iterative convex optimization, yielding fast estimation. Figure 6 gives a graphical demonstration of the projection-onto-polyhedron and projection-onto-simplex methods.
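For reference, Algorithm 1 (the closed-form projection) can be sketched as the standard sort-based routine of Wang and Carreira-Perpinan (2013); the version below is our illustrative implementation, not the authors' code.

```python
import numpy as np

def project_onto_simplex(v):
    """Euclidean projection of a vector v onto the probability simplex, O(d log d)."""
    d = v.shape[0]
    u = np.sort(v)[::-1]                          # sort entries in descending order
    cssv = np.cumsum(u)                           # cumulative sums of the sorted entries
    j = np.arange(1, d + 1)
    rho = np.max(np.nonzero(u + (1.0 - cssv) / j > 0)[0]) + 1   # number of active components
    lam = (1.0 - cssv[rho - 1]) / rho             # shift that makes the kept entries sum to one
    return np.maximum(v + lam, 0.0)
```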
The full projected SHMM algorithm is shown in Algorithm 2. In Algorithm 2, Steps 1-3 are identical to the standard SHMM. Steps 4-5 estimate \(\hat{M}\) by Gaussian Mixture Models (GMM) (McLachlan and Basford, 1988), calculate the weight processes \(\{w_{t}\}\), and apply SHMM on the weight process. Step 6 applies projection-onto-simplex on the recursive forecasting. Step 7 projects the data back into the original space.
Figure 6: The left figure shows the projection-onto-polyhedron step, and the right shows projection-onto-simplex. In both methods, we project the predicted values (blue points) into the constrained regions (defined with a red boundary), a polyhedron (left) or simplex (right).
```
Input: \(\{x_{t}\}\), where \(t=1,\cdots,T\)
Output: \(\hat{x}_{T+1}\)
Step 1: Compute \(\hat{E}[x_{t+1}\otimes x_{t}]=\frac{1}{T-2}\sum_{i=1}^{T-2}x_{t+1}x_{t}^{\top}\);
Step 2: Obtain \(\hat{U}\) by extracting the first \(k\) left eigenvectors of \(\hat{E}[x_{t+1}\otimes x_{t}]\);
Step 3: Reduce dimensionality \(y_{t}=\hat{U}^{\top}x_{t}\);
Step 4: Estimate cluster means by GMM, and obtain \(\hat{M}\), where each column is the mean vector of one cluster. Then the weight vector is \(w_{t}=\hat{M}^{-1}y_{t}\) for \(t=1,\cdots,T\);
Step 5: Calculate \(\hat{\mu}=\frac{1}{T}\sum_{t=1}^{T}w_{t}\), \(\hat{\Sigma}=\frac{1}{T-1}\sum_{t=1}^{T-1}w_{t+1}w_{t}^{\top}\), and \(\hat{K}=\frac{1}{T-2}\sum_{t=1}^{T-2}w_{t+2}\otimes w_{t}\otimes w_{t+1}\). Set \(\hat{c}_{1}=\hat{\mu}\), \(\hat{c}_{\infty}^{\top}=c_{1}^{\top}\hat{\Sigma}^{-1}\), and \(\hat{C}(w_{t})=\hat{K}(w_{t})\hat{\Sigma}^{-1}\);
Step 6: Recursive prediction with projection-onto-simplex \(\hat{w}_{t}=Proj\left(\frac{\hat{C}(w_{t-1})\hat{w}_{t-1}}{\hat{c}_{\infty}^{\top}\hat{C}(w_{t-1})\hat{w}_{t-1}}\right)\) for \(t=2,\cdots,T+1\), where \(Proj(a)=\arg\min_{w\in Simplex}||w-a||_{2}^{2}\) can be solved by Algorithm 1, and set \(\hat{y}_{1}=\hat{c}_{1}\);
Step 7: \(\hat{x}_{T+1}=\hat{U}\hat{y}_{T+1}=\hat{U}\hat{M}\hat{w}_{T+1}\);
```
**Algorithm 2** Projection-onto-simplex SHMM.
#### 3.2.3 Bias-variance tradeoff
In PSHMM, we leverage the GMM to provide projection boundaries, which introduces bias, since the hidden-state means estimated by the GMM are biased when time-dependency information is ignored. In addition, either projection method, projection-onto-polyhedron or projection-onto-simplex, can introduce bias, since neither is necessarily an orthogonal projection due to the optimization constraints. However, adding such a projection largely reduces the variance. That is, there is a bias-variance tradeoff.
### Discussion of the proposed methodology
In this section, we discuss considerations for the proposed methodology. In addition to the choice of dimensionality of the projection space, we discuss computation of the projection matrix \(U\). Finally, we show how to adapt the method when the dimensionality of the projection space \(d\) is believed to be higher than that of the observation space. In this case, we provide a mechanism based on initially estimating a GMM on the observations directly to obtain a suitable mapping from the observation space to the projection space.
#### 3.3.1 The choice of hyperparameter \(d\)
\(d\) is the dimensionality of the projection space. In theory, \(d\) should equal the number of states in the HMM. Our simulations show that when \(d\) is chosen to be equal to the underlying true number of states, the estimation and prediction will perform better than at other values of \(d\). However, the number of hidden states is usually unknown. In practice, we can either choose \(d\) using prior knowledge or tune it if we do not have a strong prior belief.
#### 3.3.2 Calculation of \(U\) matrix under extremely high-dimensional data: unigram or bigram randomized SVD
The projection matrix \(U\) is constructed by the first \(d\) left singular vectors from the singular value decomposition (SVD) (Eckart and Young, 1936) of the bigram covariance matrix \(\widehat{\Sigma}=\widehat{\mathbb{E}}[X_{2}\otimes X_{1}]\). This encodes the transition information and will eliminate the in-cluster covariance structure. However, this is not the only acceptable projection. For instance, we could also estimate \(U\) through an SVD of \(\widehat{\mathbb{E}}[X_{1}\otimes X_{1}]\). This will encode covariance structure along with information about the cluster means. In most cases we suggest using the bigram matrix.
A point worth mentioning is that for extremely high-dimensional cases, we can leverage a fast approximation algorithm for computing \(\hat{U}\). The algorithm is based on the randomized SVD (Halko et al., 2011). When computing the SVD of the bigram matrix, we need to avoid computing the covariance matrix \(\hat{E}[x_{t+1}\otimes x_{t}]\). The standard algorithm for the SVD requires time complexity \(\mathcal{O}(Tp^{2}+p^{3})\), where \(T\) and \(p\) are the sample size and dimensionality of the dataset. For the high-dimensional cases where \(p\gg d\), the randomized SVD has time complexity \(\mathcal{O}(pT\log(d)+(p+T)d^{2})\). In this case, the bottleneck is the computation of \(\hat{E}[x_{t+1}\otimes x_{t}]\), whose time complexity is \(\mathcal{O}(Tp^{2})\).
Note that \(\hat{E}[x_{t+1}\otimes x_{t}]=\frac{1}{T-2}\sum_{i=1}^{T-2}x_{t+1}x_{t}^{\top} =\frac{1}{T-2}X_{2}^{\top}X_{1}\), where \(X_{2}=[x_{2},\cdots,x_{T}]^{\top}\) and \(X_{1}=[x_{1},\cdots,x_{T-1}]^{\top}\). We can take the randomized SVD of \(X_{1}\) and \(X_{2}\) separately to obtain two rank-\(\tilde{d}\) decompositions with \(d\leq\tilde{d}\ll p\): \(X_{1}\approx U_{1}\Sigma_{1}V_{1}^{\top}\), \(X_{2}\approx U_{2}\Sigma_{2}V_{2}^{\top}\). Then \(X_{2}^{\top}X_{1}=V_{2}(\Sigma_{2}U_{2}^{\top}U_{1}\Sigma_{1})V_{1}^{\top}\). The matrix \((\Sigma_{2}U_{2}^{\top}U_{1}\Sigma_{1})\) is of dimension \(\tilde{d}\times\tilde{d}\), and computing it is much faster than computing \(\hat{E}[x_{t+1}\otimes x_{t}]\). We then perform an SVD on this matrix to get \(\Sigma_{2}U_{2}^{\top}U_{1}\Sigma_{1}=\tilde{U}\tilde{\Sigma}\tilde{V}^{\top}\). Then \(\hat{E}[x_{t+1}\otimes x_{t}]\approx(V_{2}\tilde{U})(\frac{1}{T-2}\tilde{ \Sigma})(V_{1}\tilde{V})^{\top}\). Note that \(V_{2}\tilde{U}\) and \(V_{1}\tilde{V}\) are orthonormal matrices and \(\frac{1}{T-2}\tilde{\Sigma}\) is a diagonal matrix, so this is the rank-\(\tilde{d}\) SVD of \(\hat{E}[x_{t+1}\otimes x_{t}]\). The first \(d\) vectors of \(V_{2}\tilde{U}\) are an estimate of the \(\hat{U}\) matrix we are to compute in Step 1 and 2 in Algorithm 2.
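The sketch below illustrates this two-step trick (our illustration, not the authors' code); it uses scikit-learn's randomized_svd, and the names d and d_tilde follow the text.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

def bigram_left_singular_vectors(X, d, d_tilde):
    """First d left singular vectors of E[x_{t+1} x_t^T] without forming the p x p matrix."""
    X1, X2 = X[:-1], X[1:]                                  # lagged copies of the (T, p) data
    U1, S1, V1t = randomized_svd(X1, n_components=d_tilde)
    U2, S2, V2t = randomized_svd(X2, n_components=d_tilde)
    core = (S2[:, None] * (U2.T @ U1)) * S1[None, :]        # Sigma2 U2^T U1 Sigma1, (d_tilde x d_tilde)
    U_tilde, _, _ = np.linalg.svd(core)
    U_hat = V2t.T @ U_tilde                                 # left singular vectors of X2^T X1
    return U_hat[:, :d]
```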
#### 3.3.3 Projecting \(\{w_{t}\}\) onto the probability space
The standard SHMM and the proposed PSHMM perform well with high-dimensional observations. They fail, however, when the dimensionality of the projection space \(d\) is believed to be higher than that of the observation space. In this setting, we must increase the dimensionality of the observations in order to avoid low-rank issues in the moment estimation of \(\Sigma\), which needs to be inverted. One way to increase the dimensionality is to project observations onto a Hilbert space using kernel methods (Song et al., 2010). However, this method suffers from a lack of interpretability.
We propose to project the observations directly onto the assumed probability space to increase the dimensionality. Taking cues from the advantages of PSHMM, we fit a GMM on our observations \(x_{t}\) to obtain observation representations \(w_{t}^{(i)}=\mathrm{P}(h_{t}=i|x_{t})\). This probability vector has similar interpretation to the emission probability in the Gaussian HMM but is computed by GMM. Specifically, we modify steps 2 and 4 in Algorithm 2 as follows:
* Step 2: Set \(\hat{U}\) to be the identity matrix so that \(y_{t}=x_{t}\) for all \(t\);
* Step 4: Estimate GMM to obtain \(\hat{M}\) and \(w_{t}=[w_{t}^{(1)},w_{t}^{(2)},\cdots,w_{t}^{(d)}]^{\top}\) for each \(t\), where each \(w_{t}^{(i)}=\mathrm{P}(h_{t}=i|y_{t})\) is from GMM.
The above modifications are equivalent to directly computing probabilities from the GMM in \(x\)-space, skipping steps 1 to 4 altogether. However, in order to draw a direct comparison to our proposed method in the high-dimensional setting, we chose to modify particular steps of algorithm 2 to achieve our goal.
Compared to Song et al. (2010), the advantage of this method lies in its interpretation. In this model, \(w_{t}\) has a straightforward probabilistic meaning, i.e. the weight on different components for a given observation. The SHMM directly predicts \(w_{t}\), or the weight process. Because we still leverage GMM and project predictions onto the probability space, the bias-variance tradeoff (see Section 3.2.3) still exists.
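As an illustration of the modified Steps 2 and 4, the sketch below (with placeholder data; not the authors' code) obtains the weight vectors \(w_{t}\) as GMM posterior probabilities using scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

n_states = 3                                   # assumed number of hidden states (placeholder)
X_train = np.random.randn(1000, 2)             # stand-in for the observed series {x_t}

gmm = GaussianMixture(n_components=n_states, random_state=0).fit(X_train)
W = gmm.predict_proba(X_train)                 # rows are w_t with w_t^(i) = P(h_t = i | x_t)
M_hat = gmm.means_.T                           # columns are cluster means, playing the role of M-hat
```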
## 4 Online Learning
Because computational speed is an important consideration when leveraging SHMMs, it is natural to consider online learning for SHMMs to further accelerate the computational speed in some
settings. Traditionally, SHMMs are estimated using batch or offline learning. Batch learning is applicable under the following assumptions: (1) the entire training data set is available in the training phase, (2) we can endure a relatively long computational time, and (3) the underlying data generation mechanism is consistent in both the training and forecasting phases (Fontenla-Romero et al., 2013). However, there are many application scenarios that do not meet those assumptions. For example, in quantitative trading, markets often exhibit a regime-switching phenomenon. Thus it is likely that the observed returns for some financial products are not stationary. In high-frequency trading, especially in second-level or minute-level trading, the delay from frequent offline re-training of the statistical model could impact the strategy and trading speed (Lahmiri and Bekiros, 2021).
### Online Learning of SHMM and PSHMM
SHMM and PSHMM can be useful in online settings, and the ability to adapt the parameters is essential. Let \(\hat{\mu},\hat{\Sigma}\), and \(\hat{K}\), be the estimated moments based on \(T\) data points.
When we obtain new data \(Y_{T+1}\), we update our moments recursively as follows:
\[\hat{\mu} \leftarrow \frac{T\cdot\hat{\mu}+Y_{T+1}}{T+1};\] \[\hat{\Sigma} \leftarrow \frac{(T-1)\cdot\hat{\Sigma}+Y_{T+1}\otimes Y_{T}}{T};\] \[\hat{K} \leftarrow \frac{(T-2)\cdot\hat{K}+Y_{T+1}\otimes Y_{T-1}\otimes Y_{T}}{T-1};\] \[T \leftarrow T+1. \tag{8}\]
The above works for both SHMM and PSHMM. The pseudo code for the online learning of PSHMM is shown in Algorithm 3.
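A minimal sketch of the recursive updates in Eq. (8) is shown below (our illustration; Algorithm 3 itself is not reproduced here).

```python
import numpy as np

def online_update(mu, Sigma, K, T, y_prev2, y_prev1, y_new):
    """Update (mu, Sigma, K, T) with y_new = Y_{T+1}, where y_prev1 = Y_T and y_prev2 = Y_{T-1}."""
    mu = (T * mu + y_new) / (T + 1)
    Sigma = ((T - 1) * Sigma + np.outer(y_new, y_prev1)) / T
    K = ((T - 2) * K + np.einsum('i,j,k->ijk', y_new, y_prev2, y_prev1)) / (T - 1)
    return mu, Sigma, K, T + 1
```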
Updating the GMM used in PSHMM. To update the first, second and third order moments of \(w_{t}\), we replace \(Y\) with \(w\) in the above formulas. The question is whether we should also update the parameters of the GMM. There are several ways to do this. If desired, we recommend updating the GMM without changing cluster membership. For example, we can simply classify a new input into a particular cluster and update that cluster's mean and covariance. In particular, we do not suggest re-estimating the GMM to allow for adding or removing clusters. In theory, the number of clusters
should be pre-specified to equal the number of states in the HMM, which in turn specifies the dimensionality of the space for \(y\). The addition or deletion of a cluster in the GMM fundamentally changes the relationship between the dimensionality of \(y\) and the number of hidden states. In practice, however, we find that PSHMM works well without any updates to the GMM.
### Online Learning of SHMM Class with Forgetfulness
When dealing with nonstationary data, adding a forgetting mechanism to the parameter estimation can be beneficial. We must specify a decay factor \(\gamma\); past information is then down-weighted by a factor of \(1-\gamma\) at each step. The updating rule is then
\[\hat{\mu} \leftarrow \frac{(1-\gamma)\tilde{T}\hat{\mu}+Y_{T+1}}{(1-\gamma)\tilde{T}+1};\] \[\hat{\Sigma} \leftarrow \frac{(1-\gamma)\tilde{T}\hat{\Sigma}+Y_{T+1}\otimes Y_{T}}{(1- \gamma)\tilde{T}+1};\] \[\hat{K} \leftarrow \frac{(1-\gamma)\tilde{T}\hat{K}+Y_{T+1}\otimes Y_{T-1}\otimes Y _{T}}{(1-\gamma)\tilde{T}+1};\] \[\tilde{T} \leftarrow \tilde{T}\cdot(1-\gamma)+1. \tag{9}\]
Here \(\tilde{T}=\sum_{i=1}^{T}(1-\gamma)^{i-1}\) serves as an effective sample size. This strategy is equivalent to calculating the exponentially weighted moving average for each parameter:
\[\hat{\mu}=\frac{\sum_{t=1}^{T}(1-\gamma)^{T-t}Y_{t}}{\sum_{t=1}^{T}(1-\gamma)^ {T-t}};\hat{\Sigma}=\frac{\sum_{t=2}^{T}(1-\gamma)^{T-t}Y_{t}\otimes Y_{t-1}}{ \sum_{t=2}^{T}(1-\gamma)^{T-t}};\hat{K}=\frac{\sum_{t=3}^{T}(1-\gamma)^{T-t}Y _{t}\otimes Y_{t-2}\otimes Y_{t-1}}{\sum_{t=3}^{T}(1-\gamma)^{T-t}}. \tag{10}\]
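A sketch of the decayed update in Equation (9) (equivalently, the exponentially weighted averages in Equation (10)), mirroring the structure of the earlier online-update sketch; again, the function signature is illustrative only.

```python
import numpy as np

def forgetful_moment_update(mu, Sigma, K, T_eff, y_new, y_prev, y_prev2, gamma):
    """One step of the exponentially weighted update in Equation (9);
    T_eff is the effective sample size and gamma the decay factor."""
    w = (1.0 - gamma) * T_eff                      # down-weight the old information
    mu = (w * mu + y_new) / (w + 1.0)
    Sigma = (w * Sigma + np.outer(y_new, y_prev)) / (w + 1.0)
    K = (w * K + np.einsum('i,j,k->ijk', y_new, y_prev2, y_prev)) / (w + 1.0)
    return mu, Sigma, K, w + 1.0
```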
## 5 Simulations
### Testing Robustness under Different Signal-to-Noise Ratios, Mis-specified Models and Heavy-Tailed Data
#### 5.1.1 Experiment setting
We generated 100-dimensional data of length 10000 for training and 100 for testing under different settings, and used SHMM and PSHMM with recursive prediction for time series forecasting. We repeated each simulation setup 100 times, calculated \(R^{2}\) for each repeat, and computed an average \(R^{2}\) over all repeats. The results below show this averaged \(R^{2}\). We tested 3 variants of the PSHMM: projection onto polyhedron, projection onto simplex, and projection onto simplex with online learning. The online learning variant of PSHMM used 1000 training samples for the initial estimate (warm-up), and incorporated the remaining 9000 training samples using online updates. In our simulations, online training for PSHMM differs from offline training for two reasons. First, the estimation of \(\hat{U}\) and \(\hat{M}\) is based only on the warm-up set for online learning (as is the case for the online version of SHMM), and on the entire training set for offline learning. Second, during the training period, for PSHMM, the updated moments are based on the recursive predictions of \(\hat{w}_{t}\), which are themselves based on the weights \(\{w_{s}\}_{s=1}^{t-1}\). For offline learning of PSHMM, in contrast, the weights used to calculate the moments are based on the GMM estimates from the entire training dataset. For each state, we assumed the emission distribution has a one-hot mean vector and a diagonal covariance matrix, i.e., the mean vector of the \(i\)-th state is \([1\{i=j\}]_{j=1}^{p}\), where \(1\{\cdot\}\) is the indicator function. We tested those methods and the E-M algorithm under two types of transition matrix, several signal-to-noise ratios and different emission distributions, as listed below:
* Transition matrix:
* Sticky transition matrix: diagonal elements are 0.6, off-diagonal elements are \(\frac{0.4}{S-1}\), where \(S\) is the number of states used to generate the data;
* Non-sticky transition matrix: diagonal elements are 0.4, off-diagonal elements are \(\frac{0.6}{S-1}\).
* Signal-to-noise ratio: the covariance matrix of each cluster is \(\sigma^{2}I_{p}\), where \(p\) is the dimension of the space and \(\sigma=0.01,0.05,0.1,0.5,1.0\).
* Data generated from:
* Gaussian distribution: generate according to mean vector and covariance matrix;
* \(t\) distribution: first generate a random vector with i.i.d. standard \(t_{5}\), \(t_{10}\), \(t_{15}\) or \(t_{20}\) entries, and then multiply by the covariance matrix and shift by the mean vector.
For each setting, we also showed the oracle \(R^{2}\). To calculate the oracle, we assume we know all parameters for the HMM but do not know the specific hidden state at each time point.
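To make this setup concrete, the sketch below generates data from a sticky (or non-sticky) Gaussian HMM with one-hot means and covariance \(\sigma^{2}I_{p}\); the function name and random-number interface are our own, and the heavy-tailed variant would simply replace the Gaussian draw with scaled \(t\)-distributed noise.

```python
import numpy as np

def simulate_ghmm(T=10000, p=100, S=5, sigma=0.05, sticky=True, seed=0):
    """Generate observations from a Gaussian HMM whose i-th state has the
    one-hot mean e_i and covariance sigma^2 * I_p."""
    rng = np.random.default_rng(seed)
    diag = 0.6 if sticky else 0.4
    off = (1.0 - diag) / (S - 1)
    trans = np.full((S, S), off)
    np.fill_diagonal(trans, diag)

    states = np.empty(T, dtype=int)
    states[0] = rng.integers(S)
    for t in range(1, T):
        states[t] = rng.choice(S, p=trans[states[t - 1]])

    means = np.zeros((S, p))
    means[np.arange(S), np.arange(S)] = 1.0        # one-hot means e_1, ..., e_S
    Y = means[states] + sigma * rng.standard_normal((T, p))
    return Y, states
```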
#### 5.1.2 Simulation Results
Figure 7 shows that PSHMM performs comparably to E-M. In most cases they achieve similar \(R^{2}\), with PSHMM outperforming the E-M algorithm under large noise or heavy tails. Note that we allow multiple initializations under the E-M algorithm in order to avoid being trapped in local optima. However, this substantially increases the computational time. Finally, we note that the dimension of the simulated observed data was set to \(dim=100\). In our simulations, we found that this approached the useful limit of the E-M algorithm. When we tried the simulated data
Figure 7: These subfigures show the simulation results for experimental settings in Section 5.1.1. The left column shows the results from sticky transitions, and the right column from nonsticky transitions. The first row examines the effect of \(\sigma\) (data generated using a 5-state GHMM), the second row shows the effect of misspecification of the dimensionality \(d\) (data generated from a 5-state GHMM with \(\sigma=0.05\)), and the last row shows the results under non-gaussian emissions (data generated using a 5-state HMM with t-distributed emissions and \(\sigma=0.05\), under varying degrees of freedom). In each subfigure, the y-axis is average \(R^{2}\). For \(R^{2}<0\), we threshold at 0 for plotting purposes. See supplementary for detailed results.
with \(dim=1000,10000\), the E-M algorithm failed, while SHMM still performed well. Here for comparison, we show the results with data of \(dim=100\).
The last point is an important one. In Figure 7, in order to include the E-M algorithm in our simulations, we restrict our investigation to settings where E-M is designed to succeed, and show comparable performance with our methodology. Many real-world scenarios exist in a space where E-M is a non-starter.
In Figure 7, we can also see that adding projections to SHMM greatly improved \(R^{2}\), achieving near oracle-level performance in some settings. The top row shows that under both high and low signal-to-noise scenarios, PSHMM works well. The middle row shows that PSHMM is robust and outperforms SHMM when the model is mis-specified, for example, when the underlying data contains 5 states but we choose to reduce the dimensions to 3 or 4. The last row shows that PSHMM is more robust and has a better \(R^{2}\) than SHMM with heavy-tailed data such as data generated by a \(t\) distribution. In all figures, when \(R^{2}<0\), we threshold to 0 for plotting purposes. See supplementary tables for detailed results. Negative \(R^{2}\) occurs only for the standard SHMM, and implies that it is not stable. Since we simulated 100 trials for each setting, in addition to computing the mean \(R^{2}\) we could also calculate the variance of the \(R^{2}\) metric. We provide the variance of the \(R^{2}\) in the appendix, but note here that the variance of \(R^{2}\) under PSHMM was smaller than that of the SHMM. Overall, while SHMM often performs well, it is not robust, and PSHMM provides a suitable solution.
Finally, in all simulation settings, we see that the standard SHMM tends to give poor predictions except in non-sticky and high signal-to-noise ratio settings. PSHMM is more robust against noise and mis-specified models. Among the PSHMM variants, projection-onto-simplex outperforms projection-onto-polyhedron. The reason is that the projection-onto-simplex has a dedicated optimization algorithm that guarantees the optimal solution in the projection step. In contrast, projection-onto-polyhedron uses the log-barrier method, which is a general purpose optimization algorithm and does not guarantee the optimality of the solution. Since projection-onto-polyhedron also has a higher computational time, we recommend using projection-onto-simplex. Additionally, for projection-onto-simplex, we see that online and offline estimation perform similarly in most settings, suggesting that online learning does not lose too much power compared with offline learning.
### Testing the effectiveness of forgetfulness
**Experimental setting.** Similarly to Section 5.1.1, we simulated 100-dimensional, 5-state GHMM data with \(\sigma=\{0.01,0.05,0.1,0.5,1.0\}\) and of length 2000, where the first 1000 steps are for training and the last 1000 steps are for testing. The transition matrix is no longer time-constant but differs between the training and testing periods as follows:
* training period (diagonal-0.8): \(\mathbf{T}^{(train)}=[\mathbf{T}^{(train)}_{ij}]_{i=1,\cdots,5}^{j=1,\cdots,5}=[0.75 \times\mathrm{I}\{i=j\}+0.05]_{i=1,\cdots,5}^{j=1,\cdots,5}\).
* testing period (antidiagonal-0.8): \(\mathbf{T}^{(test)}=[\mathbf{T}^{(test)}_{ij}]_{i=1,\cdots,5}^{j=1,\cdots,5}=[0.75 \times\mathrm{I}\{i+j=5\}+0.05]_{i=1,\cdots,5}^{j=1,\cdots,5}\).
We tested different methods on the last 100 time steps, including the standard SHMM, projection-onto-simplex SHMM, online learning projection-onto-simplex SHMM, online learning projection-onto-simplex SHMM with decay factor \(\gamma=0.05\), E-M, and the oracle as defined above. For online learning methods, we used the first 100 steps in the training set for warm-up and incorporated the remaining 900 samples using online updates.
Figure 8: Simulation results for online learning variants. The simulations are generated using a 5-state GHMM across a variety of \(\sigma\) settings and time-varying transition distributions. The y-axis is the average \(R^{2}\). As before, for \(R^{2}<0\), we threshold at 0 for plotting purposes. See supplementary for detailed results.
**Simulation results.** Figure 8 shows the simulation results. Since the underlying data generation process is no longer stationary, most methods failed, including the E-M algorithm, with the exception of the online learning projection-onto-simplex SHMM with decay factor \(=0.05\). Adding the decay factor was critical for accommodating non-stationarity: PSHMM with a decay factor outperformed the other methods since adding forgetfulness helps accommodate the changing patterns. As \(\sigma\) increases, even PSHMM performs poorly; the oracle shows that this degradation in performance is hard to overcome.
### Testing Computational Time
**Experimental setting.** We used an experimental setting similar to that of Section 5.1.1. We simulated 100-dimensional, 3-state GHMM data with \(\sigma=0.05\) and length 2000. We used the first half for warm-up and measured computational time on the last 1000 time steps. We tested both the E-M algorithm and SHMM (all variants). For the SHMM family of algorithms, we tested under both online and offline learning regimes. We computed the total running time in seconds. The implementation is done in Python with the packages NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020) and scikit-learn (Pedregosa et al., 2011), without multithreading. Note that, in contrast to our previous simulations, the entire process is repeated only 30 times. The computational time is the average over the 30 runs.
**Simulation results.** Table 2 shows the computational times for each method. First, online learning substantially reduced the computational cost. For the offline learning methods, projection-onto-simplex SHMM performed similarly to SHMM, and projection-onto-polyhedron was much slower. In fact, the offline version of projection-onto-polyhedron was slow even compared to the E-M algorithm. However, the online learning variant of projection-onto-polyhedron was much faster than the Baum-Welch algorithm. Taking both the computational time and prediction accuracy into consideration, we conclude that online and offline projection-onto-simplex SHMM are the best choices among these methods.
## 6 Application: Backtesting on High Frequency Crypto-Currency Data
### Data Description & Experiment Setting
To show the performance of our algorithm on real data, we used a crypto-currency trading records dataset published by Binance ([https://data.binance.vision](https://data.binance.vision)), one of the largest Bitcoin exchanges in the world. We used minute-level data and took the log return of each minute as the input for the models. We set aside a test set from 2022-07-01 to 2022-12-31. For each day in the test set, we used the previous 30-day rolling period to train the models, and made consecutive-minute recursive predictions over the testing day without updating the model parameters. For currencies, we chose Bitcoin, Ethereum, XRP, Cardano and Polygon. For prediction, we used the HMM with E-M inference (HMM-EM), SHMM, and PSHMM (simplex) and compared their performance. For HMM-EM, SHMM and PSHMM, we chose to use 4 latent states. This was motivated by the fact that there are 4 dominant types of log returns: combinations of large/small gain/loss.
Ultimately, we evaluated models based on the performance of a trading strategy. Translating predictions into a simulated trading strategy is straightforward, and proceeds as follows. If we forecast a positive return in the next minute, we buy the currency, and if we forecast a negative return, we short-sell the currency. We buy a fixed dollar amount of crypto-currency for each of
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Method & Offline/online & Computational time (sec) \\ \hline E-M (Baum-Welch) & - & 2134 \\ SHMM & offline & 304 \\ SHMM & online & 0.5 \\ PSHMM (simplex) & offline & 521 \\ PSHMM (simplex) & online & 0.7 \\ PSHMM (polyhedron) & offline & 10178 \\ PSHMM (polyhedron) & online & 14 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Simulation results for comparing computational time among different methods.
the 5 currencies, hold it for one minute, and then sell it. We repeat this for every minute of the day, and calculate the return of that day as
\[R_{m}=\frac{1}{5}\sum_{i=1}^{5}\sum_{t}sign(\widehat{Y}_{i,t}^{(m)})Y_{i,t}^{(m)}\]
where \(Y_{i,t}^{(m)}\) is the return for minute \(t\) of day \(m\) for currency \(i\), \(\widehat{Y}_{i,t}^{(m)}\) is its prediction, and \(sign(a)\) is 1 if \(a\) is positive, \(-1\) if \(a\) is negative, and 0 if \(a=0\). Over a period of \(M\) days, we obtain \(R_{1},\cdots,R_{M}\) and calculate the annualized return,
\[Annualized\ return=365\times\overline{R},\]
the Sharpe ratio (Sharpe, 1966)
\[Sharpe\ ratio=\frac{\sqrt{365}\times\overline{R}}{\widehat{std}(R)},\]
where \(\overline{R}\) and \(\widehat{std}(R)\) are the sample mean and standard deviation of the daily returns, and the maximum drawdown (Grossman and Zhou, 1993)
\[Maximum\ drawdown=\max_{m_{2}}\max_{m_{1}<m_{2}}\left[\frac{\sum_{m=m_{1}}^{m_{2}} (-R_{m})}{1+\sum_{m=1}^{m_{1}}R_{m}}\right].\]
These three metrics are standard mechanisms for evaluating the success of a trading strategy in finance. The annualized return shows the ability of a strategy to generate revenue and is the most straightforward metric. Sharpe ratio is the risk-adjusted return, or the return earned per unit of risk, where the standard deviation of the return is viewed as the risk. In general, we can increase both the return and risk by borrowing money or adding leverage, so Sharpe ratio is a better metric than annualized return because it is not affected by the leverage effect. Maximum drawdown is the maximum percentage of decline from the peak. Since the financial data is leptokurtic, the maximum drawdown shows the outlier effect better than the Sharpe ratio which is purely based on the first and second order moments. A smaller maximum drawdown indicates that the method is less risky.
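For illustration, the three metrics can be computed from a series of daily returns as in the sketch below, which follows the formulas above literally (including a maximum drawdown that may exceed 100%); the function names and array layout are our own.

```python
import numpy as np

def daily_return(pred, actual):
    """R_m = (1/5) * sum_i sum_t sign(Yhat_{i,t}) * Y_{i,t}, where `pred` and
    `actual` are (n_currencies, n_minutes) arrays for one trading day."""
    return np.sum(np.sign(pred) * actual) / pred.shape[0]

def evaluate_strategy(R):
    """Annualized return, Sharpe ratio and maximum drawdown from the daily
    returns R_1, ..., R_M, following the formulas in Section 6.1."""
    R = np.asarray(R, dtype=float)
    annualized = 365.0 * R.mean()
    sharpe = np.sqrt(365.0) * R.mean() / R.std(ddof=1)
    cum = np.concatenate([[0.0], np.cumsum(R)])    # cum[m] = R_1 + ... + R_m
    mdd = max(
        (cum[m1 - 1] - cum[m2]) / (1.0 + cum[m1])  # sum_{m1..m2}(-R_m) / (1 + sum_{1..m1} R_m)
        for m2 in range(2, len(R) + 1)
        for m1 in range(1, m2)
    )
    return annualized, sharpe, mdd
```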
### Results
From Table 3, we see that PSHMM outperforms all other benchmarks with the highest Sharpe ratio and annualized return, and the lowest maximum drawdown. PSHMM outperforms SHMM
and SHMM outperforms HMM-EM. SHMM outperforms HMM-EM because spectral learning does not suffer from the local-optima problem of the E-M algorithm. PSHMM outperforms SHMM because the projection-onto-simplex provides regularization.
The accumulated daily return is shown in Figure 9. PSHMM substantially outperformed the other methods. The maximum drawdown of PSHMM is 49%. Considering the high volatility of the crypto-currency market during the second half of 2022, this maximum drawdown is acceptable.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Method & Sharpe Ratio & Annualized Return & Maximum drawdown \\ \hline PSHMM & 2.88 & 1012\% & 49\% \\ SHMM & 1.07 & 345\% & 90\% \\ HMM-EM & 0.89 & 197\% & 53\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Real-world application results: PSHMM, SHMM and HMM-EM on crypto-currency trading.
Figure 9: The average accumulated return of crypto currencies.
For computational purposes, the drawdown is allowed to be larger than 100% because we are always using a fixed amount of money to buy or sell, so effectively we are assuming an infinite pool of cash. Between PSHMM and SHMM, the only difference is from the projection-onto-simplex. We see that the maximum drawdown of PSHMM is only about half that of SHMM, showing that PSHMM takes a relatively small risk, especially given that PSHMM has a much higher return than SHMM. Combining the higher return and lower risk, PSHMM performs substantially better than SHMM.
## 7 Discussion
**Spectral estimation avoids being trapped in local optima.** E-M (i.e. the B-W algorithm) optimizes the likelihood function and is prone to local optima since the likelihood function is highly non-convex, while spectral estimation uses the MOM directly on the observations. Although multiple initializations can mitigate the local-optima problem with E-M, there is no guarantee that it will converge to the true global optimum. Spectral estimation provides a framework for estimation which not only avoids non-convex optimization, but also has nice theoretical properties. The approximation error bound tells us that when the number of observations goes to infinity, the approximation error will go to zero. In this manuscript, we also provide the asymptotic distribution for this error.
**Projection-onto-simplex serves as regularization.** The standard SHMM can give poor predictions due to the accumulation and propagation of errors. Projection-onto-simplex pulls the prediction back to a reasonable range. This regularization is our primary methodological innovation, and importantly makes the SHMM well-suited for practical use. Although the simplex, estimated by the means from a GMM, can be biased, it provides the most natural and reasonable choice for a projection space. We can think of this as a bias-variance trade-off: when the data size is small, this regularization sacrifices bias to reduce variance.
**Online learning can adapt to dynamic patterns and provide faster learning.** Finally, we provide an online learning strategy that allows the estimated moments to adapt over time, which is critical in several applications that can exhibit nonstationarity. Our online learning
framework can be applied to both the standard SHMM and PSHMM. Importantly, online learning substantially reduces the computational costs compared to re-training the entire model prior to each new prediction.
**SUPPLEMENTAL MATERIALS**
**Python package for SHMM and PSHMM:** The Python package \(PSHMM\) contains code to perform the SHMM and PSHMM described in the article. The package also contains an illustration of package usage. (.zip file)
**Proofs:** Detailed proof of Lemma 1 in Section 2. (.pdf file)
**Simulation details:** Detailed simulation results for Figure 7. (.pdf file)
|
2301.06279
|
Numerical Investigation of Localization in Two-Dimensional Quasiperiodic
Mosaic Lattice
|
A one-dimensional lattice model with mosaic quasiperiodic potential is found
to exhibit interesting localization properties, e.g., clear mobility edges [Y.
Wang et al., Phys. Rev. Lett. \textbf{125}, 196604 (2020)]. We generalize this
mosaic quasiperiodic model to a two-dimensional version, and numerically
investigate its localization properties: the phase diagram from the fractal
dimension of the wavefunction, the statistical and scaling properties of the
conductance. Compared with disordered systems, our model shares many common
features but also exhibits some different characteristics in the same
dimensionality and the same universality class. For example, the sharp peak at
$g\sim 0$ of the critical distribution and the large $g$ limit of the universal
scaling function $\beta$ resemble those behaviors of three-dimensional
disordered systems.
|
Hui-Hui Wang, Si-Si Wang, Yan Yu, Biao Zhang, Yi-Ming Dai, Hao-Can Chen, Yi-Cai Zhang, Yan-Yang Zhang
|
2023-01-16T06:36:24Z
|
http://arxiv.org/abs/2301.06279v1
|
# Numerical Investigation of Localization in Two-Dimensional Quasiperiodic Mosaic Lattice
###### Abstract
A one-dimensional lattice model with mosaic quasiperiodic potential is found to exhibit interesting localization properties, e.g., clear mobility edges [Y. Wang et al., Phys. Rev. Lett. **125**, 196604 (2020)]. We generalize this mosaic quasiperiodic model to a two-dimensional version, and numerically investigate its localization properties: the phase diagram from the fractal dimension of the wavefunction, the statistical and scaling properties of the conductance. Compared with disordered systems, our model shares many common features but also exhibits some different characteristics in the same dimensionality and the same universality class. For example, the sharp peak at \(g\sim 0\) of the critical distribution and the large \(g\) limit of the universal scaling function \(\beta\) resemble those behaviors of three-dimensional disordered systems.
## 1. Introduction
Anderson localization of wavefunctions in disordered systems is a subtle consequence of quantum interference[1; 2; 3; 4; 5]. It is known that electrons in lower dimensions are more prone to localization in the presence of even weak disorder[2; 3; 5; 6]. Theoretical progress[7; 8; 9; 10; 11; 12] and experimental realizations[13; 14; 15; 16; 17] of the delocalization-localization transition (or the metal-insulator transition, MIT) in disordered materials remain exciting topics today. On the other hand, quasiperiodicity is a delicate structure between order and disorder[18; 19]. The most typical case is the Aubry-Andre-Harper (AAH) model, a one-dimensional (1D) lattice with nearest hopping and with a quasiperiodic potential incommensurate with the underlying lattice structure[20; 21]. For this 1D model, there exist delocalized states at finite potential magnitude, which is impossible for a completely disordered counterpart[6; 22]. When the quasiperiodic potential magnitude is strong enough, all states will be localized.
Recently the study of quasiperiodic systems has attracted a lot of attention due to inspiring progress in experimental realizations[18; 19; 23; 24; 25; 26; 27], and its relation with many-body localization[28]. New analytical methods based on Green's functions have been developed, offering distinct insights into their physical grounds[29; 30]. In the presence of a pairing interaction, surprisingly, it was found that the quasiperiodic potential can enhance superconductivity remarkably[31; 32], suggesting more unexpected phenomena of quasiperiodicity on quantum states. Even in the single-particle picture, the quasiperiodic potential results in rich phenomena in different models. Several nontrivial variations of the AAH model have been investigated recently, for instance, generalizations to a dimerized chain[33] or two coupled chains[34], with an unbounded quasiperiodic potential[35] or with a relative phase in the quasiperiodic potential[36], and to higher dimensions[37]. Novel transport phases and topological phases are found when hoppings are long ranged[38; 39; 40] or quasiperiodic as well[41]. Besides these, two-dimensional (2D) quasiperiodic systems provide richer phenomena of localization[42; 43], topology[44], flat bands[45], and many-body effects[23; 46].
The original 1D AAH model possesses a self duality for the transformation between the real and momentum spaces. This leads to the absence of mobility edges, which means that all eigenstates are either localized or delocalized, depending only on the strength of the potential. Among attempts to break the self duality, a nontrivial modification is the 1D quasiperiodic mosaic lattice, where the quasiperiodic potential only appears at sites with equally spaced distance[47]. This is an analytically solvable model that exhibits mobility edges separating localized and delocalized states for a fixed potential strength.
In this manuscript, we generalize this quasiperiodic mosaic lattice to a two-dimensional (2D) version on the square lattice and study its localization properties. Different from its 1D counterpart, the phase boundary between localization and delocalization is highly fractal. By varying the initial phase of the quasiperiodic potential, statistical properties of the conductance are studied. In the end, we summarize all numerical results of conductance in a universal scaling function \(\beta(g)\), which shows some different features from those in disordered 2D systems. Interestingly, for this 2D quasiperiodic model, some of these properties are similar to those in three-dimensional (3D) disordered systems.
## 2. Model and Methods
Our 2D model is defined on a square lattice with the nearest hopping Hamiltonian
\[\begin{split}\mathcal{H}&=\sum_{i_{x},i_{y}}t(c^{ \dagger}_{i_{x},i_{y}}c_{i_{x}+1,i_{y}}+c^{\dagger}_{i_{x},i_{y}}c_{i_{x},i_{y} +1})+\text{H.c.}\\ &+2\sum_{i_{\mathbf{x}},i_{\mathbf{y}}}V(i_{x},i_{y})c^{\dagger}_{i_{x}, i_{y}}c_{i_{x},i_{y}},\end{split} \tag{1}\]
where \(c^{\dagger}_{i_{x},i_{y}}\) (\(c_{i_{x},i_{y}}\)) creates (annihilates) an electron at the site with integer coordinates \((i_{x},i_{y})\). In the following, the hopping integral \(t=1\) will be used as the energy unit, and the lattice constant \(a=1\) will be used as the length unit.
The quasiperiodic mosaic potential \(V(i_{x},i_{y})\) is a generalization of the 1D version as[47; 48]
\[V(i_{x},i_{y}) =\begin{cases}\lambda F(i_{x},i_{y}),&i_{x}=m\kappa\text{ and }i_{y}=n\kappa,\\ \phantom{-}0\phantom{-},&\text{otherwise}\end{cases} \tag{2a}\] \[F(i_{x},i_{y}) =\cos\big{[}2\pi(\omega i_{x}+\theta_{x})]+\cos[2\pi(\omega i_{y} +\theta_{y})\big{]}, \tag{2b}\]
where \(\lambda\geqslant 0\) is the amplitude of the potential. The irrational number \(\omega\) defines the quasiperiodicity, and the integer
\(\kappa\) defines the mosaic period as illustrated in Figure 1. When \(\kappa=1\) it reduces to the original 2D AAH model without mosaic, and only models with \(\kappa>1\) can be called mosaic. Recently, endoepitaxial growth of mosaic heterostructures has been experimentally realized on monolayer 2D atomic crystals[49]. In this manuscript, except in Figure 2 (a), we always adopt \(\omega=(\sqrt{5}-1)/2\) and \(\kappa=2\)[47]. The real numbers \(0\leqslant\theta_{x},\theta_{y}<1\) are phase offsets of the potential profile. With the other model parameters (\(\lambda\), \(\omega\) and \(\kappa\)) fixed, different pairs of \((\theta_{x},\theta_{y})\) correspond to different realizations of an ensemble, similar to different realizations of a disordered ensemble[48]. This will be useful when one needs statistical properties of this model, for example, mean values and statistical fluctuations.
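The mosaic potential of Equation (2) is simple to construct numerically; the following NumPy sketch is purely illustrative (our actual calculations, described below, are implemented in FORTRAN).

```python
import numpy as np

def mosaic_potential(L, lam, omega=(np.sqrt(5.0) - 1.0) / 2.0,
                     kappa=2, theta_x=0.0, theta_y=0.0):
    """On-site potential V(i_x, i_y) of Equation (2) on an L x L lattice:
    nonzero only on sites whose coordinates are both multiples of kappa."""
    ix = np.arange(L)
    iy = np.arange(L)
    F = (np.cos(2.0 * np.pi * (omega * ix + theta_x))[:, None]
         + np.cos(2.0 * np.pi * (omega * iy + theta_y))[None, :])
    mask = (ix % kappa == 0)[:, None] & (iy % kappa == 0)[None, :]
    return lam * F * mask
```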
Since analytical treatments for a 2D model are more difficult than those in 1D, we will rely on numerical methods, which will be briefly introduced in the following. All of our following calculations are performed in FORTRAN codes, along with Intel Math Kernel Library.
For a square-shaped finite sample with \(L\times L\) lattice sites, the inverse participation ratio (IPR) of the \(m\)-th normalized eigenstate \(|\Psi_{m}\rangle\), with amplitude \(\Psi_{m,j}\) on site \(j\), is defined as[5]
\[\text{IPR}(L,m)=\sum_{j=1}^{L\times L}|\Psi_{m,j}|^{4}. \tag{3}\]
Then the localization of the state can be characterized by the fractal dimension of the wavefunction as
\[\Gamma(L,m)=-\frac{\ln\left[\text{IPR}(L,m)\right]}{\ln L}. \tag{4}\]
In the thermodynamic limit \(L\rightarrow\infty\), the state is extended (localized) if \(\lim\limits_{L\rightarrow\infty}\Gamma(L)\to 2\) (\(\lim\limits_{L\rightarrow\infty}\Gamma(L)\to 0\)).
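As an illustration of Equations (3) and (4), the fractal dimensions of all eigenstates of a dense Hamiltonian matrix can be computed as in the following sketch; this is only a schematic NumPy version of the definitions, not our production code.

```python
import numpy as np

def fractal_dimensions(H, L):
    """Diagonalize a dense L^2 x L^2 Hamiltonian H and return the eigenvalues
    together with Gamma(L, m) of Equation (4), built from the IPR of
    Equation (3)."""
    energies, states = np.linalg.eigh(H)           # columns are normalized eigenstates
    ipr = np.sum(np.abs(states) ** 4, axis=0)      # IPR(L, m) for each eigenstate m
    gamma = -np.log(ipr) / np.log(L)
    return energies, gamma
```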
The localization properties can also be studied from quantum transports, by attaching two leads to the sample with quasiperiodic potential. At zero temperature, the two-terminal conductance \(G_{T}\) at Fermi energy \(E\) is proportional to the transmission (Landauer formula)[50], and can be expressed as[51; 52]
\[G_{T}(E)=2\frac{e^{2}}{h}g_{T}=2\frac{e^{2}}{h}\text{Tr}\left[\Gamma_{L}(E)G^ {r}(E)\Gamma_{R}(E)G^{a}(E)\right], \tag{5}\]
where the prefactor 2 accounts for the spin degeneracy. Here \(G^{r/a}(E)\equiv\left(E\pm i0^{+}-H-\Sigma_{L}^{r/a}-\Sigma_{R}^{r/a}\right)^{-1}\) is the dressed retarded/advanced Green's function of the central sample, and \(\Gamma_{L(R)}=i(\Sigma_{L(R)}^{r}-\Sigma_{L(R)}^{a})\) with \(\Sigma_{L(R)}^{r/a}(E)\) being the retarded/advanced self-energies due to the left (right) lead, respectively. In order to diminish the interface scattering, we take both leads to be lattices identical to the sample with vanishing potential \(V\).
Based on the dimensionless conductance \(g_{T}\) defined in Equation (5), the appropriate variable for size scaling is the
Figure 1: Illustration of our 2D quasiperiodic mosaic lattice model with \(\kappa=2\). Only solid red sites have nonzero quasiperiodic potential. Each site is represented by its coordinate \((i_{x},i_{y})\) and the nearest hopping is \(t\).
intrinsic conductance \(g\) expressed as [53; 54]
\[\frac{1}{g}=\frac{1}{g_{T}}-\frac{1}{N_{c}}, \tag{6}\]
with \(N_{c}\) the number of active channels at the Fermi energy when the potential is absent. The second term \(\frac{1}{N_{c}}\) is used to deduct the effect of contact resistance so that the intrinsic transport property of the sample can be manifested. This intrinsic conductance of square shaped (\(L\times L\)) samples can be used to evaluate the scaling function \(\beta=\frac{d\langle\ln g\rangle}{d\ln L}\) of the metal-insulator transition, where \(\langle\cdots\rangle\) stands for averaging over an appropriate ensemble [54; 55; 56]. An increase (decrease) of \(\ln g\) with increasing \(\ln L\) indicates a metal (insulator) phase.
## 3. Results
### 3.1. Phase Diagram from Eigenstates
First let us have a global view of the localization property of this model. In Figure 2, we present the fractal dimension \(\Gamma\) of eigenstates as functions of potential magnitude \(\lambda\) and corresponding eigenenergy \(E\), for an isolated \(200\times 200\) sample with a definite potential configuration \(\theta_{x}=\theta_{y}=0\). For comparison, we present the cases of the \(\kappa=1\) (non-mosaic) and \(\kappa=2\) (mosaic) lattices in panels (a) and (b) respectively. With increasing \(\lambda\), the band is broadened outwards and localized states (\(\Gamma\sim 0\), blue color) appear. However, there is a remarkable difference between these two cases. For the non-mosaic case [Figure 2 (a)], all states on the energy spectrum transition into localization simultaneously around \(\lambda\sim 1\). In other words, there is no mobility edge for this case. For the mosaic case, on the other hand, there are both localized (blue color) and delocalized (orange color) states after \(\lambda\gtrsim 0.6\). This distinction between mosaic and non-mosaic models is similar to that in 1D [47]. Also similar to the mosaic lattice in 1D[47], extended states (\(\Gamma\sim 2\), yellow color) mostly distribute around the band center. Although their fraction among all states decreases with increasing \(\lambda\), they survive even at large \(\lambda\). In the following, we will focus on the mosaic lattice only.
To see more details of this phase diagram in Figure 2 (b) for the 2D mosaic lattice, we plot the profile of \(\Gamma(E)\) along the red dashed line at \(\lambda=2.5\) in Figure 3 (a). For comparison, a typical counterpart of the 1D model is plotted in Figure 3 (b). Compared to the 1D case, the first obvious feature of the 2D model is strong fluctuations even between adjacent eigenenergies. For example, in the delocalization region \(E\lesssim 3\), \(\Gamma\) for most eigenstates fluctuates rapidly within \((1.35,1.8)\), and some of them even drop to around \(\Gamma=1\). Furthermore, the transition between localization and delocalization tends to be a fractal region. In 1D, on the contrary, the profile consists of smooth curves with sudden jumps at transitions. Similar to that shown in Figure 3 (a), a fractal transitional region of transport is also predicted for the topological transition of a 2D incommensurate bilayer, which is a quasiperiodic structure as well[57].
Figure 2: Fractal dimension \(\Gamma\) of eigenstates as functions of corresponding eigenvalues \(E\) and quasiperiodic potential strength \(\lambda\), for a fixed sample size \(L=200\). (a) Non-mosaic with \(\kappa=1\). (b) Mosaic with \(\kappa=2\).
To characterize the difference between Figure 3 (a) and (b) quantitatively, we calculate the Hausdorff dimension \(d_{H}\) of the point set \(\left\{\left(\Gamma_{i},E_{i}\right)\right\}\) (\(i\) is the index of eigenstates) shown in these 2D panels, by using the standard box-counting (BC) method [58; 59]. This algorithm counts the number \(N\) of squares of size \(\delta\) which are necessary to continuously cover the graph of points \(\left(\Gamma_{i},E_{i}\right)\) rescaled to a unit square. In the intermediate region of \(\delta\) ("the scaling region") where the scaling relation \(N\sim\delta^{-d_{H}}\) holds, the slope of the log-log plot of \(N\) versus \(\delta\) gives the estimate of the Hausdorff dimension \(d_{H}\)[60; 61]. The results corresponding to Figure 3 (a) and (b) are shown as red and blue plots in panel (c) respectively. As expected, \(d_{H}\) for the 1D model is close to 1, reflecting the fact that \(\Gamma(E)\) consists of simple smooth curves. For the 2D model however, \(d_{H}\sim 5/3\) suggests that \(\Gamma(E)\) is indeed highly fractal. We have checked (though not shown here) that for a very small potential magnitude, say, \(\lambda=0.3\), the relative fluctuation of \(\Gamma(E)\) of the 2D model can be smaller, but its Hausdorff dimension (\(d_{H}=1.68\)) is still distinctly larger than that of the 1D model (\(d_{H}=0.87\)).
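A schematic version of the box-counting estimate used here is sketched below; the specific choice of box sizes and the least-squares fit over the scaling region are illustrative defaults rather than the exact settings used for Figure 3 (c).

```python
import numpy as np

def box_counting_dimension(x, y, n_scales=12):
    """Estimate the Hausdorff dimension d_H from the scaling N ~ delta^(-d_H)
    for the point set (x_i, y_i) rescaled to the unit square."""
    pts = np.column_stack([x, y]).astype(float)
    pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)
    deltas = np.logspace(-0.5, -2.5, n_scales)     # box sizes spanning the scaling region
    counts = []
    for d in deltas:
        boxes = np.floor(pts / d).astype(int)
        counts.append(len(np.unique(boxes, axis=0)))   # number of occupied boxes
    slope, _ = np.polyfit(np.log(deltas), np.log(counts), 1)
    return -slope
```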
### 3.2. Transport
The above results were from eigenstates of an isolated sample. Now let us scrutinize transport properties obtained by attaching conducting leads to the sample.
The localization (delocalization) can be characterized by the decrease (increase) of \(g\) [Equation (6)] with growing sample size \(L\)[3; 5]. Similar to the case of disordered systems[54], the conductance should be averaged over an ensemble before
Figure 4: The typical value of the intrinsic conductance \(\exp\left(\langle\ln g\rangle\right)\) as a function of sample size \(L\), for different model parameters \(\lambda\) and \(E\). The sample is square shaped and the average \(\left\langle\cdots\right\rangle\) is over 150,000 realizations of \(\left(\theta_{x},\theta_{y}\right)\).
Figure 3: (a) \(\Gamma\) as a function of \(E\) for \(\lambda=2.5\), which is just the cross section shown as the red dotted line in Figure 2 (b). (b) \(\Gamma\) as a function of \(E\) for a 1D mosaic lattice, with \(\lambda=2.5\), \(\kappa=2\) and \(N_{x}=3000\). (c) The \(\log-\log\) plots of box counting parameters \(\delta\) versus \(N\) for panel (a) (red) and panel (b) (blue), whose slope gives the Hausdorff dimension.
size scaling, to diminish the effect of sharp coherent fluctuations. To this end, we choose \(\theta_{x}\) and \(\theta_{y}\) to be random variables uniformly distributed within \((0,1)\), and define \(\langle\cdots\rangle\) to be the arithmetic average over different realizations of \((\theta_{x},\theta_{y})\). In the following calculations, all averages are over 150,000 realizations. Since \(g\) is not a self-averaging quantity and \(\ln g\) is "better distributed" than \(g\) itself (especially in the case of localization, as will be seen in the following), it is numerically more preferable to extract information from \(\langle\ln g\rangle\) (or equivalently the typical value \(g^{\rm typ}\equiv\exp\left(\langle\ln g\rangle\right)\)) rather than the mean value \(\langle g\rangle\)[4; 5; 54]. In Figure 4, we present the typical values of conductance \(g^{\rm typ}\) as functions of the sample size \(L\), for different model parameters. There are typical delocalization and localization states with increasing or decreasing \(g(L)\) dependence respectively, and also critical states (\(\lambda=3.1\) and \(E=2.5\)) between them.
Besides the averaged quantity, the distribution of the conductance also provides insightful information on localization[4; 5]. In Figs. 5 and 6, we present distributions of \(g\) and \(\ln g\) respectively, for four typical regimes. In Figure 5 (a) with an extremely weak potential, the conductance has a very small relative deviation \(\Delta g/g\). The distribution profile is rather irregular with multiple peaks. This is not surprising because the quasiperiodic potential is not "random-like" enough. Moreover, the energy scale associated with the quasiperiodic potential has not yet dominated over the level separation of the finite sample, so that the transmission can be very sensitive to the competition between these two factors, resulting in multiple peaks of the distribution. With a stronger potential but still in the delocalized state as shown in Figure 5 (b), the distribution is smoothed into a perfect Gaussian profile. This is identical to what happens in a delocalized state with disorder[4]. From Figure 6 (a) and (b) we can see that in the delocalized regime, the distribution profile of \(\ln g\) is almost identical to that of \(g\).
At the critical state between localization and delocalization presented in Figure 5 (c), the distribution profile is largely deformed. There is an obvious nonanalytic point at \(g=1\), which is also a common feature at the MIT in disordered systems[4; 62]. Another noticeable feature is the sharp peak near \(g=0\). From the distribution of \(\ln g\) shown in Figure 6 (c) it can be confirmed that this is a peak _close to \(g=0\)_, instead of one _at \(g=0\)_[62]. Interestingly, such a peak close to \(g=0\) is also found at the MIT of a three-dimensional (3D) orthogonal system[4], but is absent at that of a 2D symplectic system (the only example to exhibit a bulk MIT in 2D)[62; 63] and at the plateau-plateau transition of a 2D unitary system (quantum Hall effect)[5; 64]. In short, for this 2D quasiperiodic model, the statistical distribution at the MIT exhibits similar behavior to that of a 3D disordered system.
In the localized phase shown in Figure 5 (d), the conductance displays a single-peak distribution highly concentrated around \(g=0\), which is also similar to that in the localized phase of disordered systems[65; 66]. In disordered systems with localization, it was found that the quantity \(\ln g\) is "better distributed" as a partial Gaussian distribution terminated around \(\ln g=0\)[65; 66]. The distribution of \(\ln g\) for our model at strong localization is presented in Figure 6 (d). One can see that, although it is not of a typical Gaussian shape, a clear termination around \(\ln g=0\) still persists.
### 3.3. Scaling Function
For a certain set of model parameters \(\{\lambda,\omega\}\), and after a polynomial fitting of data points \((\langle\ln g\rangle,\ln L)\) shown in Figure 4, one can obtain a numerical evaluation of the scaling function \(\beta=\frac{d\langle\ln g\rangle}{d\ln L}\). Results of \(\beta\) as a function of \(\langle\ln g\rangle\)
Figure 5: Statistical distribution of \(g\) with sample size \(L=150\). (a) and (b): the delocalized states with \(\beta>0\). (c) The critical state with \(\beta=0\). (d) The localized state with \(\beta<0\). The statistics is over 150,000 realizations of \((\theta_{x},\theta_{y})\).
for a wide range of model parameters are shown in Figure 7. Within our best capability of calculation, i.e., 150,000 configurations at size \(300\times 300\), there are still strong fluctuations, especially around the transition region, \(\beta\sim 0\), which is consistent with what we have seen from Figure 3 (a) and Figure 5 (c). Nevertheless, some useful information can still be drawn. Firstly, all data points, similar to the case of disordered systems[54], tend to collapse around a universal curve. We manage to fit these data with an appropriate nonlinear function. Numerically, among common functions we find that the Boltzmann function provides the best smooth curve that fits these data with the expression
\[\beta(\ln g)=A_{2}+\frac{A_{1}-A_{2}}{1+\exp\left[(\ln g-x_{0})/\Delta\right]}, \tag{7}\] \[A_{1}=-9.387,\qquad A_{2}=0.755,\qquad x_{0}=-5.038,\qquad\Delta=1.550, \tag{8}\]
which is plotted as the blue dashed curve. This is a monotonic function with a single zero point at \(\ln g_{c}=-1.130\) where the MIT occurs. We stress again that this Boltzmann function is just an empirical choice instead of a principle-based analytical result, and we merely use it to extract some useful quantities in a numerically convenient way. For example, the slope of \(\beta(\ln g)\) at its zero gives the inverse of \(\nu\), the critical exponent characterizing the divergence of the localization (correlation) length near the critical point[3, 54]. From Equation (8) we have the estimate \(\nu\sim 2.22\). This value is larger than those of MITs in 3D, and that of the MIT in 2D symplectic systems[67, 54], but is close to that of the plateau-plateau transition of 2D quantum Hall effects (unitary system)[68, 69, 70, 71]. Another important feature is the saturation of \(\beta\sim 1\) in the large \(\ln g\) limit. For disordered systems, it has been known that the \(\epsilon\) expansion in the large conductance limit gives \(\beta\to D-2\) for all three universality classes[72, 73, 74, 2]. In this sense, the scaling behavior of our 2D quasiperiodic model in the weak potential (therefore large conductance) limit seems to resemble that of the 3D disordered model.
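For illustration, the fit of Equation (7) and the resulting estimates of \(\ln g_{c}\) and \(\nu\) could be obtained along the following lines with SciPy; the arrays `lng_data` and `beta_data` stand for the \((\langle\ln g\rangle,\beta)\) points extracted from calculations such as those behind Figure 4, and the initial guess and root-bracketing interval are illustrative defaults.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def boltzmann(lng, A1, A2, x0, delta):
    """Equation (7): beta = A2 + (A1 - A2) / (1 + exp[(ln g - x0)/Delta])."""
    return A2 + (A1 - A2) / (1.0 + np.exp((lng - x0) / delta))

def fit_scaling_function(lng_data, beta_data):
    """Fit the empirical Boltzmann form and extract ln g_c and nu."""
    p0 = [-10.0, 1.0, -5.0, 1.5]                             # rough initial guess
    params, _ = curve_fit(boltzmann, lng_data, beta_data, p0=p0)
    lng_c = brentq(lambda x: boltzmann(x, *params), -15.0, 5.0)   # zero of beta
    slope = (boltzmann(lng_c + 1e-4, *params)
             - boltzmann(lng_c - 1e-4, *params)) / 2e-4
    return params, lng_c, 1.0 / slope                        # nu = 1 / beta'(ln g_c)
```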
## 4. Summary and Discussion
In this paper, we numerically investigate the localization properties of the 2D quasiperiodic mosaic lattice model. We find some properties similar to, and also some different from those of 2D disordered systems.
Similar to the 1D counterpart, there exists a localization-delocalization transition when varying the energy and the strength of the quasiperiodic potential. However, the transition region is fractal-like, in contrast to the clear phase boundaries in 1D. This model shares many statistical features of the conductance \(g\) with disordered systems, for example, a Gaussian-like distribution in the delocalized phase, a high peak near zero in the localized phase, and non-analytic behavior of the distribution function at the critical point between the two phases. However, this critical distribution in the small \(g\) limit shows a sharp peak around zero, which is a particular feature of 3D disordered systems. The scaling function \(\beta\) is also a universal function of \(g\), but its large \(g\) limit approaches 1, which is also a feature of 3D disordered systems. This may suggest a novel role of the spatial dimensionality of quasiperiodic systems, which will be studied in the future [18, 19].
Figure 6: Similar to Figure 5 but for the quantity \(\ln g\).
## Acknowledgements
This work was supported by National Natural Science Foundation of China under Grant Nos. 12104108, 11774336 and 11874127, the Joint Fund with Guangzhou Municipality under Nos. 202201020198 and 202201020137, and the Starting Research Fund from Guangzhou University under Grant Nos. RQ2020082, RQ 2020083 and 62104360.
|
2303.10262
|
Estimation of Unknown Payoff Parameters in Large Network Games
|
We consider network games where a large number of agents interact according
to a network sampled from a random network model, represented by a graphon. By
exploiting previous results on convergence of such large network games to
graphon games, we examine a procedure for estimating unknown payoff parameters,
from observations of equilibrium actions, without the need for exact network
information. We prove smoothness and local convexity of the optimization
problem involved in computing the proposed estimator. Additionally, under a
notion of graphon parameter identifiability, we show that the optimal estimator
is globally unique. We present several examples of identifiable homogeneous and
heterogeneous parameters in different classes of linear quadratic network games
with numerical simulations to validate the proposed estimator.
|
Feras Al Taha, Francesca Parise
|
2023-03-17T22:06:30Z
|
http://arxiv.org/abs/2303.10262v1
|
# Estimation of Unknown Payoff Parameters in Large Network Games
###### Abstract
We consider network games where a large number of agents interact according to a network sampled from a random network model, represented by a graphon. By exploiting previous results on convergence of such large network games to graphon games, we examine a procedure for estimating unknown payoff parameters, from observations of equilibrium actions, without the need for exact network information. We prove smoothness and local convexity of the optimization problem involved in computing the proposed estimator. Additionally, under a notion of graphon parameter identifiability, we show that the optimal estimator is globally unique. We present several examples of identifiable homogeneous and heterogeneous parameters in different classes of linear quadratic network games with numerical simulations to validate the proposed estimator.
## I Introduction
Systems involving very large numbers of autonomous agents making strategic decisions and influencing each other over a network structure are becoming ubiquitous. For example, they appear in applications involving power and traffic networks in engineering settings as well as product adoption, targeted marketing, and opinion dynamics in socio-economic settings. Studying how agents make decisions in these complex environments is a fundamental prerequisite for the successful design of interventions and control laws aimed at improving welfare or system performance. To this end, game theoretical principles are typically used to model agents' decisions via payoff maximization, resulting in the concept of Nash equilibrium (i.e., a set of actions in which no agent has interest in unilateral deviations) as a solution outcome. When translating these results to practice, however, a main issue emerges: while the parametric form of agents' payoff functions might be known, in most applications the parameters themselves are not. For example, in games capturing agents' decisions under peer pressure, the strength of neighbors' peer effect on an individual's marginal return might not be known [1] and, in fact, may vary for different network instances (e.g. different schools, neighborhoods, etc.). In these settings, it is then of paramount importance to understand whether a central planner can estimate the unknown parameters from observations of agents' actions at equilibrium. This ability would indeed enable the central planner to design interventions steering agents towards equilibria with improved welfare or system efficiency [2, 3, 4].
Starting from the seminal work of Bramoulle et al. [5], a large literature studied the above question under the assumption that the planner knows the network over which agents interact. When we turn our attention to applications involving a large number of agents, however, collecting data about the exact network of interactions can become very expensive or not at all possible because of privacy concerns. Consequently, recent works started investigating parameter estimation under partial or statistical network information [6, 7, 8, 9]. Importantly, all the works cited above focus on the specific problem of estimating the peer effect parameter in linear in means models [10]. The key objective of this paper is to develop a general parameter estimation procedure that: i) relies only on statistical instead of exact information about network interactions and ii) can be applied for parameter estimation in generic network games.
To obtain such a result, we build on the framework of graphon games recently proposed in [2]. Graphon games are games played over a continuum of agents that interact heterogeneously according to a graphon. Building on an interpretation of graphons as random network models (which generalizes for example Erdos-Renyi and stochastic block models (SBM) [11]), [2] shows that equilibria of network games in which the network of interactions is sampled from the graphon (termed _sampled network games_) converge, in the limit of large populations, to the equilibrium of the corresponding graphon game. Equilibria of graphon games can thus be seen as an approximation of strategic behavior in large network games, computed by using only information about the random network model. Based on this result, [2] suggests a novel procedure for payoff parameter estimation in sampled network games without the need for exact network data. Specifically, given an observation of the equilibrium of a network game with unknown parameters, the proposed approach consists of selecting as estimator the parameters for which the equilibrium of the corresponding graphon game is closest to the observed equilibrium. It is shown in [2] that this estimator is asymptotically consistent if the parameter satisfies an identifiability assumption capturing games in which equilibria that are close are generated by parameters that are also close.
In this work, we address two main open problems related to the estimation procedure detailed above. First, finding the parameter for which the graphon game equilibrium is the closest to the observed equilibrium requires the solution of an optimization problem. We here show that the objective function of such an optimization problem is smooth and locally strictly convex around the true parameter. Moreover, under the identifiability assumption above, we show that the optimization problem admits a unique global optimizer, thus guaranteeing that the proposed estimator is unique. Second, we prove that the required identifiability assumption holds
for several examples of linear quadratic (LQ) network games with both homogeneous and heterogeneous parameters. We validate the convergence of the proposed estimator on these games with numerical simulations.
This paper is part of a growing literature that studies strategic behavior in large network games (see e.g. [3, 12, 13, 14, 15, 16, 17]), in particular using graphons [18, 19, 20, 21]. However, none of the works cited above focuses on parameter estimation.
The rest of the paper is organized as follows. Section II presents the network game setup and its connection to graphon games. Section III introduces the parameter estimation problem. Section IV provides our main result on properties of the corresponding optimization problem. Section V provides examples of identifiable parameters in different linear quadratic network games and Section VI demonstrates the convergence of the estimator with numerical simulations. Omitted proofs are given in the Appendix.
_Notation:_ We denote by \(L^{2}([0,1])\) the space of square integrable functions defined on \([0,1]\) and by \(L^{2}([0,1];\mathbb{R}^{n})\) the space of square integrable vector-valued functions defined on \([0,1]\). The norms on these spaces are \(\|v\|:=\sqrt{\sum_{i=1}^{n}v_{i}^{2}}\), \(\|f\|_{L^{2}}:=\sqrt{\int_{0}^{1}f(x)^{2}\,dx}\) and \(\|g\|_{L^{2},\mathbb{R}^{n}}:=\sqrt{\int_{0}^{1}\|g(x)\|^{2}\,dx}\) where \(v\in\mathbb{R}^{n}\), \(f\in L^{2}([0,1])\) and \(g\in L^{2}([0,1];\mathbb{R}^{n})\). Additionally, we denote by \(\|\cdot\|_{\infty}\) the uniform norm (or sup norm) of an operator. We denote by \([v]_{j}\) the \(j\)th component of a vector \(v\in\mathbb{R}^{n}\) and by \([A]_{ij}\) the \(ij\)th entry of a matrix \(A\in\mathbb{R}^{m\times n}\). The symbol \(\mathds{1}\) denotes the vector of all ones (with appropriate dimension) and \(\mathds{1}(x)\) the function constantly equal to one on the unit interval. The symbol \(\mathbb{I}\) denotes the identity operator.
## II Recap on Finite and Infinite Network Games
### _Finite network games_
Network games can be used to model settings where a _finite number of agents_ interact strategically over a network. In the following, we represent the network of interactions with its adjacency matrix \(P\in\mathbb{R}^{N\times N}\) with diagonal entries equal to zero (i.e., without self-loops) and assume that each agent \(i\in\{1,...,N\}\) aims at selecting a scalar strategy1\(s^{i}\in\mathbb{R}\) in a feasible set \(\mathcal{S}\subseteq\mathbb{R}\) to maximize a payoff function
Footnote 1: We assume scalar strategies for simplicity of exposition. Similar arguments can be made for vector strategies.
\[U(s^{i},z^{i}(s),\theta^{i}) \tag{1}\]
where \(s:=[s^{i}]_{i=1}^{N}\in\mathbb{R}^{N}\) is the strategy profile, \(z^{i}(s):=\frac{1}{N}\sum_{j=1}^{N}P_{ij}s^{j}\) denotes the local aggregate computed according to the network and \(\theta^{i}\in\Theta\subseteq\mathbb{R}^{m}\) models heterogeneity in the payoff functions of different agents. We remark that the local aggregate \(z^{i}(s)\) of an agent does not include its own strategy \(s^{i}\) (since \(P_{ii}=0\)). The model is said to be homogeneous across agents when \(\theta^{i}=\theta\) for all \(i\).
**Definition 1** (Nash equilibrium).: _A strategy \(\bar{\bar{s}}\in\mathbb{R}^{N}\) with associated local aggregate \(\bar{\bar{z}}:=[\bar{\bar{z}}^{i}]_{i=1}^{N}\) where \(\bar{\bar{z}}^{i}:=z^{i}(\bar{\bar{s}})\) is a Nash equilibrium if for all \(i\in\{1,\dots,N\}\), we have \(\bar{\bar{s}}^{i}\in\mathcal{S}\) and_
\[U(\bar{\bar{s}}^{i},\bar{\bar{z}}^{i},\theta^{i})\geq U(\tilde{s},\bar{\bar{z}}^{i},\theta^{i})\qquad\text{for all }\tilde{s}\in\mathcal{S}.\]
### _Graphon games_
A graphon game is defined in terms of a _continuum of agents_, indexed by \(x\in[0,1]\), that interact heterogeneously according to a symmetric and measurable graphon \(W:[0,1]^{2}\mapsto[0,1]\). Intuitively, \(W(x,y)\) measures the level of interaction between infinitesimal agents \(x\) and \(y\). As in network games, the goal of each agent in a graphon game is to select a strategy \(s(x)\in\mathcal{S}\) to maximize their payoff
\[U(s(x),z(x|s),\theta(x)) \tag{2}\]
where \(z(x|s):=\int_{0}^{1}W(x,y)s(y)dy\) is the local aggregate experienced by agent \(x\) and \(\theta:[0,1]\rightarrow\Theta\subseteq\mathbb{R}^{m}\) is a function modelling payoff heterogeneity across agents. Note that the payoff function \(U\) is the same payoff function as in (1); the only difference is how the network aggregate is computed.
**Definition 2** (Graphon Nash equilibrium).: _A function \(\bar{s}\in L^{2}([0,1])\) with associated local aggregate \(\bar{z}(x):=z(x|\bar{s})=\int_{0}^{1}W(x,y)\bar{s}(y)dy\) is a Nash equilibrium for the graphon game if for all \(x\in[0,1]\), we have \(\bar{s}(x)\in\mathcal{S}\) and_
\[U(\bar{s}(x),\bar{z}(x),\theta(x))\geq U(\tilde{s},\bar{z}(x),\theta(x))\quad \text{for all }\tilde{s}\in\mathcal{S}.\]
Conditions for existence and uniqueness of the graphon Nash equilibrium are derived in [2] in terms of properties of the graphon operator \(\mathbb{W}:L^{2}([0,1])\mapsto L^{2}([0,1])\) given by
\[f(x)\mapsto(\mathbb{W}f)(x)=\int_{0}^{1}W(x,y)f(y)dy,\]
which intuitively plays the same role as the adjacency matrix for finite networks. These conditions are summarized in the following assumption.
**Assumption 1.a** (Existence and uniqueness).:
1. _The function_ \(U(s,z,\theta)\) _in (_2_) is continuously differentiable and strongly concave in_ \(s\) _with uniform constant_ \(\alpha_{U}\) _for all_ \(z\) _and_ \(\theta\)_. Moreover,_ \(\nabla_{s}U(s,z,\theta)\) _is uniformly Lipschitz in_ \(z\) _and_ \(\theta\) _with constants_ \(\ell_{U},\ell_{\theta}\) _for all_ \(s\)_._
2. _The set_ \(\mathcal{S}\) _is convex and compact, so that_ \(s_{max}:=\max_{s\in\mathcal{S}}\|s\|<\infty\)_._
3. _The largest eigenvalue_ \(\lambda_{\max}(\mathbb{W})\) _of the graphon operator_ \(\mathbb{W}\) _satisfies the bound_ \(\lambda_{\max}(\mathbb{W})<\frac{\alpha_{U}}{\ell_{U}}\)_._
**Remark**.: _Under conditions (i) and (ii), existence of a Nash equilibrium follows from standard fixed point argument. Condition (iii) guarantees that the best response mapping is a contraction from which uniqueness follows. See Theorems 1 and 2 in [2]._
### _Sampled network games_
Besides being of interest as models of heterogeneous interactions in infinite populations, graphons can be used as random network models [11]. Specifically, given any graphon \(W\), a _sampled network_ can be obtained by uniformly
and independently sampling \(N\) points2\(\{t_{i}\}_{i=1}^{N}\) from \([0,1]\) and by defining a 0-1 adjacency matrix \(P^{[N]}\) corresponding to a graph with \(N\) nodes, no self-loops (i.e., \(P^{[N]}_{ii}=0\) for all \(i\)) and random links \((i,j)\) sampled with Bernoulli probability \(W(t_{i},t_{j})\). Graphons can therefore be used to encode statistical information about the likelihood of agents' interactions, with the understanding that the network observed in reality (i.e., \(P^{[N]}\)) is one possible realization of such random network model. Building on this statistical interpretation of graphons, [2] shows that, for \(N\) large enough, graphon Nash equilibria (as defined in Section II-B) are a good approximation of strategic behavior in any finite network game (as defined in Section II-A) where agents interact over a network sampled from the graphon (which we term a _sampled network game3_).
Footnote 2: Without loss of generality, the points \(\{t_{i}\}_{i=1}^{N}\) are assumed to be ordered such that \(t_{i}\leq t_{i+1}\), \(i=1,\ldots,N-1\), since the nodes can be relabeled.
Footnote 3: Since the maximum eigenvalue of the sampled network \(P^{[N]}\) converges almost surely to the maximum eigenvalue of the graphon \(W\), Assumption 1.a guarantees existence and uniqueness of equilibria in sampled network games for \(N\) large enough with high probability.
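The sampling procedure described above is straightforward to implement. Below is a short, illustrative numpy sketch (ours, not the authors' code) that draws the latent labels uniformly, sorts them as in footnote 2, and fills the 0-1 adjacency matrix \(P^{[N]}\) with independent Bernoulli\((W(t_{i},t_{j}))\) links and no self-loops.

```python
import numpy as np

def sample_network(W, N, rng=None):
    """Sample an N-node undirected graph P^{[N]} from a graphon W.

    Latent types t_i ~ Uniform[0, 1] (sorted for convenience); each edge (i, j),
    i < j, is drawn as Bernoulli(W(t_i, t_j)); the diagonal stays zero.
    """
    rng = np.random.default_rng(rng)
    t = np.sort(rng.uniform(size=N))
    probs = W(t[:, None], t[None, :])
    upper = rng.random((N, N)) < probs
    P = np.triu(upper, k=1)                 # keep the strict upper triangle only
    P = (P | P.T).astype(float)             # symmetrize; diagonal remains zero
    return t, P
```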
**Remark**.: _Equilibria in finite network games are vectors instead of functions. To obtain comparable objects, we define a piecewise constant interpolation4 of the network game equilibrium \(\bar{\bar{s}}^{[N]}\) as a function \(\bar{\bar{s}}^{[N]}(x)=\left[\bar{\bar{s}}^{[N]}\right]_{i}\) for all \(x\in\left[\frac{i-1}{N},\frac{i}{N}\right].\) In the following, we use the notation \(\bar{\bar{s}}^{[N]}\in\mathbb{R}^{N}\) for a vector-valued equilibrium and \(\bar{s}^{[N]}\in L^{2}([0,1])\) for its interpolation._
Footnote 4: Rather than interpolating the equilibria about the points \(\{t_{i}\}_{i=1}^{N}\), we interpolate them about a regular grid \(\{i/N\}_{i=1}^{N}\) so that the players’ strategies are assigned equal weight.
**Proposition 1** ([2, Theorem 2]).: _Consider a graphon game satisfying Assumption 1.a with unique Nash equilibrium \(\bar{s}\in L^{2}([0,1])\). Let \(\bar{s}^{[N]}\in L^{2}([0,1])\) be a piecewise constant interpolation of the equilibrium of a network game sampled from this graphon game. Then, \(\|\bar{s}^{[N]}-\bar{s}\|_{L^{2}}\overset{\text{\tiny{a.s.}}}{\to}0\) as \(N\to\infty\)._
The key importance of this result is that the graphon equilibrium can be computed by relying only on information about the random network model, without the need for information about exact agent interactions (\(P^{[N]}\)). We next show how this key observation can be used for parameter estimation in settings in which the central planner does not have full network knowledge.
## III The parameter estimation problem
In many applications of interest, agents' payoffs may depend on parameters that are unknown to the central planner. In the following, we capture this aspect by assuming that the heterogeneity vector \(\theta\), which characterizes agent-specific behavior, may depend on some unknown parameter \(\eta\in\Xi\subseteq\mathbb{R}^{n}\) and we stress this dependence with the notation \(\theta_{\eta}\). This paper addresses the task of identifying \(\eta\) from the observation of a sampled equilibrium \(\bar{s}^{[N]}\in\mathbb{R}^{N}\) and the labels \(\{t_{i}\}_{i=1}^{N}\), which are assumed to be known since they can represent an observable trait of the players (e.g., their community or geographical location). While most of the literature focused on settings in which \(P^{[N]}\) is known, we here assume that the central planner cannot observe the sampled network, but instead has information about the random network model (i.e. the graphon). With this information, the central planner can compute the graphon Nash equilibrium corresponding to any possible choice of parameter \(\eta\). Building on Proposition 1, the central planner can then estimate the true parameter \(\eta\) by choosing as estimator the parameter \(\hat{\eta}\) which yields the closest graphon equilibrium to the observed equilibrium. Mathematically, we define the estimator
\[\hat{\eta}:=\arg\min_{\eta\in\Xi}\quad\left\|\bar{s}^{[N]}-\bar{s}_{\eta} \right\|_{L^{2}}^{2}, \tag{3}\]
where \(\bar{s}^{[N]}\) is the observed equilibrium sampled from a graphon game with true parameter \(\bar{\eta}\), \(\bar{s}_{\eta}\in L^{2}([0,1])\) is the equilibrium of the graphon game with parameter \(\eta\) and \(\Xi\subseteq\mathbb{R}^{n}\) is the set of admissible parameter values. To guarantee uniqueness of \(\bar{s}_{\eta}\), we make the following assumption.
**Assumption 1.b** (Existence and uniqueness for all \(\eta\in\Xi\)).: _For all \(\eta\in\Xi\), the graphon game with parameter \(\eta\) satisfies Assumption 1.a._
We study the performance of the estimator suggested in (3), under the following assumption.
**Assumption 2** (Identifiability).: _Suppose that Assumption 1.b holds. The true parameter \(\bar{\eta}\in\Xi\) is identifiable, that is, there exists \(L_{\bar{\eta}}>0\) such that_
\[\|\bar{\eta}-\eta\|\leq L_{\bar{\eta}}\|\bar{s}_{\bar{\eta}}-\bar{s}_{\eta}\|_ {L^{2}}\quad\forall\eta\in\Xi. \tag{4}\]
Intuitively, the identifiability assumption is needed because if two arbitrarily close graphon equilibria could be generated by two significantly different parameters, then it would be impossible to identify \(\bar{\eta}\) from a single observation of a sampled equilibrium. It follows immediately from Proposition 1 that, under Assumption 2, the estimator defined in (3) is asymptotically consistent in the limit of infinite population.
**Proposition 2** ([2, Corollary 1]).: _Suppose that Assumptions 1.b and 2 hold. Then,_
\[\|\hat{\eta}-\bar{\eta}\|\ \overset{\text{\tiny{a.s.}}}{\to}\ 0\ \text{as}\ N\to\infty.\]
Overall, the results detailed so far provide a procedure for estimating unknown parameters of sampled network games under two key assumptions. First, one needs to be able to solve the optimization problem in (3). Second, one needs to be able to verify parameter identifiability as defined in Assumption 2. In the rest of the paper, we investigate these two points. Specifically, in Section IV, we study properties of the optimization problem in (3), guaranteeing for example uniqueness of the solution. In Section V, we instead investigate parameter identifiability for common LQ network games.
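As a concrete illustration of the first point, the sketch below (ours; the function and variable names are hypothetical) evaluates the objective in (3) for a candidate \(\eta\): it computes the graphon equilibrium on a uniform grid by iterating the best-response map, which is a contraction under Assumption 1.a, and then measures the discretized \(L^{2}\) distance to the observed interpolated equilibrium. A derivative-free solver can then be wrapped around this routine.

```python
import numpy as np

def objective_J(eta, s_obs, K_grid, best_response, tol=1e-8, max_iter=10_000):
    """Evaluate J(eta) = || s_obs - s_eta ||_{L2}^2 on a uniform grid.

    s_obs        : observed equilibrium, piecewise-constant interpolated on the grid
    K_grid       : discretized graphon kernel (n x n), row i ~ W(x_i, .)
    best_response: callable (z, eta) -> strategy, the agents' best response,
                   e.g. for an LQ payoff: lambda z, eta: np.clip(eta[0] + eta[1] * z, 0, s_max)
    Under Assumption 1.a the best-response map is a contraction, so fixed-point
    iteration converges to the unique graphon equilibrium.
    """
    n = K_grid.shape[0]
    s = np.zeros(n)
    for _ in range(max_iter):
        z = K_grid @ s / n                      # local aggregate z(x | s)
        s_new = best_response(z, eta)
        if np.max(np.abs(s_new - s)) < tol:
            s = s_new
            break
        s = s_new
    return np.mean((s_obs - s) ** 2)            # Riemann-sum approximation of the L2 distance
```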
## IV Parameter estimation properties
In this section, we present results on the smoothness and local convexity of problem (3). To this end, we make the following additional assumptions, which guarantee local convexity and smoothness of the graphon equilibrium with respect to parameter variations.
**Assumption 3** (Convex parameter set).: _The parameter set \(\Xi\subseteq\mathbb{R}^{n}\) is a convex set and contains the true parameter \(\bar{\eta}\) in its interior._
**Assumption 4** (Smoothness of equilibrium).: _Under Assumption 1.b, the equilibrium \(\bar{s}_{\eta}(x)\) is twice Lipschitz continuously differentiable in \(\eta\), uniformly in \(x\)._
While this assumption may seem restrictive, we demonstrate in Section V that it holds for various classes of LQ games. Under the above assumptions, we next derive our main theorem on regularity properties of the objective function in (3), which we denote by
\[J(\eta):=\|\bar{s}^{[N]}-\bar{s}_{\eta}\|_{L^{2}}^{2}. \tag{5}\]
**Theorem 1**.: _Suppose that Assumptions 1.b, 2, 3 and 4 hold. Then,_
1. \(J(\eta)\) _is_ \(L\)_-smooth._
_Moreover, for all \(\delta>0\), there exists \(\bar{N}>0\) such that with probability \(1-\delta\), for all \(N>\bar{N}\),_
1. _the function_ \(J(\eta)\) _in (_5_) is locally strictly convex around the true parameter_ \(\bar{\eta}\) _and_
2. _the optimization problem (_3_) has a globally unique solution._
Proof.: 1) To prove that \(J(\eta)\) is \(L\)-smooth, note that
\[J(\eta) =\|\bar{s}^{[N]}-\bar{s}_{\eta}\|_{L^{2}}^{2}=\int_{0}^{1}\left( \bar{s}^{[N]}(x)-\bar{s}_{\eta}(x)\right)^{2}dx,\] \[\nabla_{\eta}J(\eta) =-\int_{0}^{1}2(\bar{s}^{[N]}(x)-\bar{s}_{\eta}(x))\nabla_{\eta} \bar{s}_{\eta}(x)dx.\]
By Assumption 4, there exists \(L_{1},L_{2}>0\) such that \(\|\bar{s}_{\eta}-\bar{s}_{\bar{\eta}}\|_{L^{2}}\leq L_{1}\|\eta-\tilde{\eta}\|\) and \(\|\nabla_{\eta}\bar{s}_{\eta}-\nabla_{\eta}\bar{s}_{\tilde{\eta}}\|_{L^{2}; \mathbb{R}^{n}}\leq L_{2}\|\eta-\tilde{\eta}\|\). Hence,
\[\begin{split}\frac{1}{2}\|\nabla_{\eta}J(\eta)-\nabla_{\eta}J(\tilde{\eta})\|&=\left\|\int_{0}^{1}\left[(\bar{s}^{[N]}(x)-\bar{s}_{\eta}(x))\nabla_{\eta}\bar{s}_{\eta}(x)-(\bar{s}^{[N]}(x)-\bar{s}_{\tilde{\eta}}(x))\nabla_{\eta}\bar{s}_{\tilde{\eta}}(x)\right]dx\right\|\\ &\leq\left\|\int_{0}^{1}(\bar{s}^{[N]}(x)-\bar{s}_{\eta}(x))(\nabla_{\eta}\bar{s}_{\eta}(x)-\nabla_{\eta}\bar{s}_{\tilde{\eta}}(x))\,dx\right\|+\left\|\int_{0}^{1}(\bar{s}_{\tilde{\eta}}(x)-\bar{s}_{\eta}(x))\nabla_{\eta}\bar{s}_{\tilde{\eta}}(x)\,dx\right\|\\ &\leq\|\bar{s}^{[N]}-\bar{s}_{\eta}\|_{L^{2}}\left\|\nabla_{\eta}\bar{s}_{\eta}-\nabla_{\eta}\bar{s}_{\tilde{\eta}}\right\|_{L^{2};\mathbb{R}^{n}}+\|\bar{s}_{\tilde{\eta}}-\bar{s}_{\eta}\|_{L^{2}}\left\|\nabla_{\eta}\bar{s}_{\tilde{\eta}}\right\|_{L^{2};\mathbb{R}^{n}}\qquad\text{(by Cauchy--Schwarz)}\\ &\leq 2s_{\max}L_{2}\|\eta-\tilde{\eta}\|+L_{1}\|\eta-\tilde{\eta}\|\cdot L_{1}\qquad\text{(by Lemma 2 in Appendix A)}\\ &=\left(2L_{2}s_{\max}+L_{1}^{2}\right)\|\eta-\tilde{\eta}\|=:L_{J}\|\eta-\tilde{\eta}\|.\end{split}\]
2) To investigate the local convexity of \(J(\eta)\) around \(\bar{\eta}\), we compute the Hessian \(H(\eta):=\nabla_{\eta}^{2}J(\eta)\) and examine under which conditions it is positive definite
\[H(\eta):=\nabla_{\eta}^{2}J(\eta) =2\underbrace{\int_{0}^{1}\nabla_{\eta}\bar{s}_{\eta}(x)\nabla_{ \eta}\bar{s}_{\eta}(x)^{T}dx}_{=:T_{1}(\eta)} \tag{6}\] \[-2\underbrace{\int_{0}^{1}(\bar{s}^{[N]}(x)-\bar{s}_{\eta}(x)) \nabla_{\eta}^{2}\bar{s}_{\eta}(x)dx}_{=:T_{2}^{N}(\eta)}.\]
Lemma 6 in Appendix B shows that \(T_{1}(\bar{\eta})\) is positive definite, while Lemma 7 shows that for all \(\delta>0\), \(\epsilon>0\), there exists \(\bar{N}\) such that for all \(N>\bar{N}\), \(\|T_{2}^{N}(\bar{\eta})\|<\epsilon\) with probability \(1-\delta\). It follows that for all \(\delta>0\), there exists \(\bar{N}\) such that for all \(N>\bar{N}\), the Hessian \(H(\bar{\eta})\) is positive definite with probability \(1-\delta\). We next show that \(H(\eta)>0\) locally around \(\bar{\eta}\). To this end, note that
\[\frac{1}{2}H(\eta)=T_{1}(\bar{\eta}) +[T_{1}(\eta)-T_{1}(\bar{\eta})]\] \[+T_{2}^{N}(\bar{\eta})+[T_{2}^{N}(\eta)-T_{2}^{N}(\bar{\eta})].\]
From Lemma 8 in Appendix B, the difference terms can be made arbitrary small for \(\eta\) close to \(\bar{\eta}\), independently of \(N\). Hence, for all \(\delta>0\), it follows from Lemmas 6, 7, 8 that there exists \(\mu\) and \(\bar{N}\) such that with probability \(1-\delta\), the Hessian \(H(\eta)\) is positive definite for all \(\eta\) satisfying \(\|\eta-\bar{\eta}\|<\mu\) and for all \(N>\bar{N}\). This implies that locally around \(\bar{\eta}\), \(J(\eta)\) is strictly convex and there is a unique solution to
\[\underset{\eta\in\Xi,\|\eta-\bar{\eta}\|\leq\mu}{\text{min}}\quad\|\bar{s}^{[ N]}-\bar{s}_{\eta}\|_{L^{2}}^{2}. \tag{7}\]
3) Finally, to prove that (3) has a unique solution over the entire domain \(\Xi\), let \(\tilde{\eta}\) be any generic solution to (3). By identifiability, we have
\[\|\bar{\eta}-\tilde{\eta}\| \leq L_{\bar{\eta}}\|\bar{s}_{\bar{\eta}}-\bar{s}_{\tilde{\eta}}\|_ {L^{2}}\] \[\leq L_{\bar{\eta}}(\|\bar{s}_{\bar{\eta}}-\bar{s}^{[N]}\|_{L^{2}}+ \|\bar{s}^{[N]}-\bar{s}_{\tilde{\eta}}\|_{L^{2}})\] \[\leq 2L_{\bar{\eta}}\|\bar{s}_{\bar{\eta}}-\bar{s}^{[N]}\|_{L^{2}}.\] (By (3) )
Therefore, by Proposition 1, for a given \(\mu\) and \(\delta\), there exists \(\bar{N}^{\prime}\geq\bar{N}\) such that for \(N>\bar{N}^{\prime}\), with probability \(1-\delta\), any solution \(\tilde{\eta}\) to (3) satisfies \(\|\bar{\eta}-\tilde{\eta}\|\leq\mu\). In this case, (3) is equivalent to (7) and thus has a unique solution.
The first part of Theorem 1 is useful because smoothness of the objective function in (3) is a sufficient condition for convergence to a stationary point of many derivative-free optimization algorithms, such as trust region [22] and finite difference methods [23]. Additionally, the second and third parts of Theorem 1 guarantee that if these algorithms start close enough to the optimal solution, they converge to it for large enough \(N\) with high probability. Global convergence and strong convexity remain open problems.
## V Identifiability and Smoothness
The results derived above rely on parameter identifiability (Assumption 2) and smoothness of the equilibrium (Assumption 4). We next verify that these assumptions hold in games involving both homogeneous and heterogeneous parameters.
To this end, we focus on linear quadratic (LQ) games in which the payoff \(U\) is linear in the network aggregate \(z(x)\) and quadratic in the strategy \(s(x)\):
\[U(s(x),z(x),\theta_{\eta}(x))=-\frac{1}{2}s(x)^{2}+([\theta_{\eta}(x)]_{1}+[\theta_{\eta}(x)]_{2}z(x))s(x) \tag{8}\]
where the components of the heterogeneity parameter \(\theta_{\eta}(x)=[[\theta_{\eta}(x)]_{1},[\theta_{\eta}(x)]_{2}]^{\top}\in \Theta\subseteq\mathbb{R}_{+}^{2}\) denote the standalone marginal return (\([\theta_{\eta}(x)]_{1}\)) and the local aggregate effect on marginal return (\([\theta_{\eta}(x)]_{2}\)), respectively.
If the game has homogeneous parameter \(\theta_{\eta}(x)\equiv[\eta_{1},\eta_{2}]^{\top}\in\mathbb{R}_{+}^{2}\) for all \(x\in[0,1]\), then under Assumption 1.b, the graphon Nash equilibrium \(\bar{s}_{\eta}\) can be explicitly written as a fixed point of the best-response mapping
\[\bar{s}_{\eta}(x)=\Pi_{\mathcal{S}}\left[\eta_{1}\mathds{1}(x)+\eta_{2} \mathbb{W}\bar{s}_{\eta}(x)\right] \tag{9}\]
where \(\Pi_{\mathcal{S}}[\cdot]\) is the operator for the projection onto the strategy set \(\mathcal{S}\). Since under this projection operation, different parameters \(\eta\) could yield the same equilibrium, we introduce an additional assumption to guarantee identifiability.
**Assumption 5** (Internal equilibrium).: _Under Assumption 1.b, for all \(\eta\in\Xi\), the equilibrium \(\bar{s}_{\eta}(x)\) is interior (i.e., \(\bar{s}_{\eta}(x)\in\text{int}(\mathcal{S})\)) for all \(x\)._
With this additional assumption, the graphon Nash equilibrium in (9) simplifies to
\[\bar{s}_{\eta}(x)=(\mathbb{I}-\eta_{2}\mathbb{W})^{-1}\eta_{1}\mathds{1}(x), \tag{10}\]
which corresponds to the Bonacich centrality of agent \(x\) in the graphon \(W\)[21].
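Numerically, the interior equilibrium (10) reduces to a linear solve once the graphon is discretized. A minimal sketch (ours, with an illustrative grid size) is given below; it assumes \(\eta_{2}\lambda_{\max}(\mathbb{W})<1\) and an interior equilibrium, as in Assumption 5.

```python
import numpy as np

def lq_graphon_equilibrium(K_grid, eta1, eta2):
    """Interior equilibrium (10) of the homogeneous LQ graphon game on a uniform grid.

    Solves (I - eta2 * W) s = eta1 * 1, where the operator W is approximated by
    K_grid / n (a midpoint-rule discretization of the graphon kernel).
    Valid when eta2 * lambda_max(W) < 1 and the resulting equilibrium is interior.
    """
    n = K_grid.shape[0]
    A = np.eye(n) - eta2 * K_grid / n
    return np.linalg.solve(A, eta1 * np.ones(n))
```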
### _LQ games with unknown homogeneous parameters_
We start our analysis by proving identifiability and smoothness in homogeneous LQ games when both the parameter \(\eta_{2}\) representing the effect of the local aggregate and the standalone marginal return parameter \(\eta_{1}\) are unknown.
**Proposition 3** (Identifiability).: _Consider a LQ game with \(\Xi\subset\mathbb{R}_{+}^{2}\) and payoff function_
\[U(s(x),z(x),\theta_{\eta}(x))=-\frac{1}{2}s^{2}(x)+(\eta_{1}+\eta_{2}z(x))s(x), \tag{11}\]
_so that \(\theta_{\eta}(x)=\eta=[\eta_{1},\eta_{2}]^{\top}\) for all \(x\in[0,1]\). Suppose that Assumptions 1.b and 5 hold. The parameter vector \(\eta\in\Xi\) is identifiable if and only if agents have heterogeneous effects on the local aggregate at equilibrium (i.e. \(\bar{z}_{\eta}\neq\gamma\mathds{1}\) for any \(\gamma\in\mathbb{R}\)). Otherwise, only the sum \(\eta_{1}+\gamma\eta_{2}\) is identifiable._
**Proposition 4** (Smoothness).: _Consider a homogeneous LQ game satisfying all the assumptions in Proposition 3 and further assume that \(\Xi\) is compact and \(\eta_{2}\|\mathbb{W}\|_{\infty}<1\) for all \(\eta\in\Xi\). The corresponding graphon equilibrium satisfies Assumption 4._
To prove smoothness of the equilibrium in Proposition 4, we use the assumption that \(\eta_{2}\|\mathbb{W}\|_{\infty}<1\). This is a stronger assumption than what is needed for existence and uniqueness of the equilibrium since \(\lambda_{\text{max}}(\mathbb{W})<\|\mathbb{W}\|_{\infty}\) for symmetric graphons. The assumption on \(\|\mathbb{W}\|_{\infty}\) (which has an interpretation in terms of max degree of the agents in the graphon) is required to guarantee Lipschitz continuity of the equilibrium with respect to the parameter \(\eta\), point-wise in \(x\).
### _LQ games with heterogeneous parameters_
The next example generalizes the setting of Section V-A by considering parameters which can be heterogeneous across communities of agents. Specifically, we consider a setting in which agents are partitioned into \(K\) communities (with probability \(\pi_{k}\) such that \(\sum_{k=1}^{K}\pi_{k}=1\)) and connect with a probability depending on the community they belong to. This random network model (which is essentially a stochastic block model (SBM)) can be captured with a graphon by partitioning \([0,1]\) into \(K\) disjoint intervals \(\{\mathcal{C}_{k}\}_{k=1}^{K}\), each of length \(\pi_{k}\), and by defining a piecewise constant graphon \(W_{\mathrm{SBM}}\) as
\[W_{\mathrm{SBM}}(x,y)=Q_{ij}\qquad\text{for all }x\in\mathcal{C}_{i},\ y\in \mathcal{C}_{j} \tag{12}\]
where \(Q_{ij}\) is the probability that an agent from community \(i\) interacts with an agent from community \(j\) and the corresponding matrix \(Q\in\mathbb{R}^{K\times K}\) is symmetric.
We assume that the local aggregate effect is the same for each agent belonging to the same community but is a priori unknown. In other words, the payoff for each agent \(x\in\mathcal{C}_{k}\) in community \(k\) is
\[U(s(x),z(x),\theta_{\eta}(x))=-\frac{1}{2}s^{2}(x)+(\theta_{1}+\eta_{k}z(x))s(x) \tag{13}\]
where \(\theta_{\eta}(x)=\eta_{k}\) for each \(x\in\mathcal{C}_{k}\). For simplicity, we assume that the standalone marginal return \(\theta_{1}>0\) is known and homogeneous across agents. The parameter to identify is then \(\eta=[\eta_{1},\ldots,\eta_{K}]^{\top}\in\mathbb{R}_{+}^{K}\).
**Proposition 5** (Identifiability).: _Consider a LQ game with SBM graphon as defined in (12) and payoff as defined in (13) with \(\theta_{1}>0\). Let \(\Xi\subset\mathbb{R}_{+}^{K}\) and suppose Assumptions 1.b and 5 hold. If \(\mathcal{S}\subseteq\mathbb{R}_{+}\), then the parameter \(\eta\in\Xi\) is identifiable._
**Proposition 6** (Smoothness).: _Consider a heterogeneous LQ game satisfying all the assumptions in Proposition 5 and further assume that \(\Xi\) is compact. Then, the corresponding graphon equilibrium satisfies Assumption 4._
## VI Simulations
In this section, we provide numerical simulations demonstrating the convergence of the estimator proposed in (3) to the true parameter for the LQ game defined in Proposition 5. The simulation considers the LQ game with \(\bar{\eta}=[0.8,0.6,1,0.8]^{\top}\), a SBM graphon with
\[Q=\begin{bmatrix}0.9&0.05&0&0\\ 0.05&0.2&0.05&0\\ 0&0.05&0.2&0.05\\ 0&0&0.05&0.8\end{bmatrix}\]
and equally sized communities (\(\pi_{k}=0.25\)). Figure 1 illustrates convergence of the estimator \(\hat{\eta}\) to \(\bar{\eta}\) for large enough \(N\) for this example. The estimation problem (3) was solved using MATLAB's fmincon solver (which uses an interior-point method algorithm).
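For reference, the sketch below (ours, written in Python rather than MATLAB) reproduces the main steps of this experiment: it samples a network from the SBM graphon, computes the interior equilibrium of the sampled network game, and minimizes the discretized objective (3) using the block formula of Lemma 5. The value \(\theta_{1}=1\), the \(1/N\) scaling of the sampled-network aggregate, and the approximation of the \(L^{2}\) distance by matching each node with the community of its sorted label are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed experiment parameters (Section VI); theta1 and the 1/N aggregate scaling are assumptions.
Q = np.array([[0.9, 0.05, 0.0, 0.0],
              [0.05, 0.2, 0.05, 0.0],
              [0.0, 0.05, 0.2, 0.05],
              [0.0, 0.0, 0.05, 0.8]])
pi = np.full(4, 0.25)
theta1, eta_true = 1.0, np.array([0.8, 0.6, 1.0, 0.8])

def sbm_graphon_equilibrium(eta):
    # Lemma 5: block-wise equilibrium theta1 * (I - D_eta Q D_pi)^{-1} 1
    A = np.eye(4) - np.diag(eta) @ Q @ np.diag(pi)
    return np.linalg.solve(A, theta1 * np.ones(4))

def sampled_game_equilibrium(N, rng):
    t = np.sort(rng.uniform(size=N))
    comm = np.minimum((t / 0.25).astype(int), 3)          # community of each sampled node
    P = (rng.random((N, N)) < Q[comm][:, comm]).astype(float)
    P = np.triu(P, 1); P = P + P.T                        # symmetric adjacency, no self-loops
    A = np.eye(N) - np.diag(eta_true[comm]) @ P / N       # interior LQ equilibrium of the sampled game
    return comm, np.linalg.solve(A, theta1 * np.ones(N))

rng = np.random.default_rng(0)
comm, s_N = sampled_game_equilibrium(N=1000, rng=rng)

def J(eta):
    # Objective (3), approximating the L2 distance node-by-node via community membership.
    diff = s_N - sbm_graphon_equilibrium(eta)[comm]
    return np.mean(diff ** 2)

eta_hat = minimize(J, x0=np.ones(4), bounds=[(1e-3, 2.0)] * 4).x
```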
## VII Conclusions
In this work, we introduced a method for estimation of payoff parameters in large network games by leveraging the framework of graphon games. We discussed properties of the corresponding optimization problem and proved parameter identifiability for linear quadratic games with both homogeneous and heterogeneous parameters. Identifiability of parameters for games with nonlinear dependence on the network aggregate such as the _quadratic quadratic game_ presented in [2] is a future research direction.
## Appendix
### _Auxiliary lemmas_
Lemmas 2 and 3 are standard results.
**Lemma 2**.: _Assumption 4 holds if and only if there exists \(L_{1},L_{2}>0\) such that_
\[\left|\frac{\partial\bar{s}_{\eta}(x)}{\partial\eta_{i}}\right| \leq\|\nabla_{\eta}\bar{s}_{\eta}(x)\|\leq L_{1}\quad\text{for all $x\in[0,1]$ and}\] \[\left|\frac{\partial^{2}\bar{s}_{\eta}(x)}{\partial\eta_{i} \partial\eta_{j}}\right| \leq\|\nabla_{\eta}^{2}\bar{s}_{\eta}(x)\|\leq L_{2}\quad\text{for all $x\in[0,1]$}.\]
**Lemma 3**.: _Take \(f:\mathbb{R}^{K}\to\mathbb{R}\) continuous and differentiable. If \(\|\nabla_{\eta}f(\eta)\|\leq L\) for all \(\eta\), then \(f\) is \(L\)-Lipschitz continuous._
**Lemma 4**.: _Consider a series of the form_
\[f_{h}(\eta,x):=\sum_{k=h}^{\infty}k(k-1)\ldots(k-h+1)\eta^{k-h}\alpha_{k}(x)\]
_for \(\alpha_{k}\in L^{2}([0,1])\), \(h\in\mathbb{N}\), \(\eta\in\Xi\subset\mathbb{R}\) and \(x\in[0,1]\). Suppose that: i) \(\exists\eta_{\max}\) such that \(|\eta|\leq\eta_{\max}\) for all \(\eta\in\Xi\), ii) \(\exists\,\beta>0\) such that \(|\alpha_{k}(x)|\leq\beta^{k},\forall x\in[0,1],\forall k\in\mathbb{N}\) and iii) \(\eta_{\max}\beta<1\). Then the series \(f_{h}(\eta,x)\) converges and is \(L_{h}\)-Lipschitz continuous in \(\eta\) uniformly in \(x\)._
Proof.: First, we show that \(f_{h}(\eta,x)\) converges by comparison test with a convergent geometric series. Note that
\[\sum_{k=h}^{\infty}\left|k(k-1)\ldots(k-h+1)\eta^{k-h}\alpha_{k}(x)\right|\] \[\leq\beta^{h}\sum_{k=h}^{\infty}k(k-1)\ldots(k-h+1)(|\eta|\beta)^ {k-h}\] \[\stackrel{{(*)}}{{=}}\beta^{h}\frac{d^{h}}{dz^{h}} \left(\frac{1}{1-z}\right)\Big{|}_{z=|\eta|\beta}=\beta^{h}\frac{h!}{(1-|\eta| \beta)^{h+1}}\] \[\leq\beta^{h}\frac{h!}{(1-\eta_{\max}\beta)^{h+1}}=:B_{h}\]
where \((*)\) follows from [24] since \(|\eta|\beta<1\). Hence, by the comparison test for series, \(f_{h}(\eta,x)\) converges and \(|f_{h}(\eta,x)|\leq B_{h}\) for all \(\eta\in\Xi\) and \(x\in[0,1]\). Moreover, for all \(h\), \(f_{h}(\eta,x)\) is Lipschitz continuous in \(\eta\) uniformly in \(x\) since, if we denote \(a_{h,k}:=k(k-1)\ldots(k-h+1)\),
\[|f_{h}(\eta,x)-f_{h}(\tilde{\eta},x)|\leq\left|\sum_{k=h}^{\infty}a _{h,k}(\eta^{k-h}-\tilde{\eta}^{k-h})\alpha_{k}(x)\right|\] \[\stackrel{{(*)}}{{\leq}}\sum_{k=h+1}^{\infty}a_{h,k} |\eta-\tilde{\eta}|(k-h)(\eta_{\max})^{k-h-1}|\alpha_{k}(x)|\] \[\leq B_{h+1}|\eta-\tilde{\eta}|=:L_{h}|\eta-\tilde{\eta}|\quad \forall x\in[0,1]\]
where \((*)\) holds by the identity [25, p.83]\(x^{p+1}-y^{p+1}=(x-y)(x^{p}+x^{p-1}y+\cdots+xy^{p-1}+y^{p})\) and \(|\eta|,|\tilde{\eta}|\leq\eta_{\max}\).
**Lemma 5**.: _Consider the Nash equilibrium \(\bar{s}_{\eta}\) of a LQ graphon game with graphon \(W_{\mathrm{SBM}}\) defined as in (12), payoff defined as in (13), with \(\eta\in\mathbb{R}_{+}^{K}\) and satisfying Assumption 5. Then, \(\bar{s}_{\eta}(x)=\bar{\bar{s}}_{\eta}^{k}\) for all \(x\in\mathcal{C}_{k}\), where \(\bar{\bar{s}}_{\eta}\in\mathbb{R}^{K}\) can be characterized as_
\[\bar{\bar{s}}_{\eta}=\theta_{1}(I-\Delta_{\eta}Q\Delta_{\pi})^{-1}\mathds{1}\]
_where \(\Delta_{\eta}:=\text{diag}(\eta_{1},...,\eta_{K})\) and \(\Delta_{\pi}:=\text{diag}(\pi_{1},...,\pi_{K})\). Additionally,_
\[\|\bar{s}_{\eta}-\bar{s}_{\tilde{\eta}}\|_{L^{2}}\geq\sqrt{\min_{k}(\pi_{k})}\,\|\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\tilde{\eta}}\|.\]
Proof.: The first part of the statement follows from a generalization of [21, Proposition 1]. Then, \(\|\bar{s}_{\eta}-\bar{s}_{\tilde{\eta}}\|_{L^{2}}^{2}=\sum_{k=1}^{K}\int_{\mathcal{C}_{k}}(\bar{\bar{s}}_{\eta}^{k}-\bar{\bar{s}}_{\tilde{\eta}}^{k})^{2}dx=\sum_{k=1}^{K}\pi_{k}(\bar{\bar{s}}_{\eta}^{k}-\bar{\bar{s}}_{\tilde{\eta}}^{k})^{2}=(\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\tilde{\eta}})^{T}\Delta_{\pi}(\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\tilde{\eta}})\geq\min_{k}(\pi_{k})\|\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\tilde{\eta}}\|^{2}\).
### _Auxiliary lemmas for the proof of Theorem 1_
**Lemma 6**.: _Suppose that Assumptions 1.b, 2, 3 and 4 hold. Then, \(T_{1}(\bar{\eta})\) as defined in (6) is positive definite._
Proof.: Showing that \(T_{1}(\bar{\eta})\) is positive definite requires that for any nonzero \(v\in\mathbb{R}^{n}\),
\[v^{T}T_{1}(\bar{\eta})v>0\Leftrightarrow\int_{0}^{1}(v^{T}\nabla_{\eta}\bar{s} _{\bar{\eta}}(x))^{2}dx>0. \tag{14}\]
Suppose that there exists \(v\neq 0\) such that \(v^{T}\nabla_{\eta}\bar{s}_{\bar{\eta}}(x)=0\)_almost everywhere in \(x\)_ (so that the integral in (14) would be zero). Let \(\tilde{\eta}=\bar{\eta}+v\) be a perturbation of the true parameter \(\bar{\eta}\), where, without loss of generality, we assume that the perturbation is small enough to ensure that \(\bar{\eta}+v\in\text{int}(\Xi)\) and that \(\|v\|<\frac{2}{L_{\bar{\eta}}L_{2}}\), where \(L_{\bar{\eta}}\) is as defined in Assumption 2 and \(L_{2}\) is as defined in Lemma 2 (by Assumption 4). Such a perturbation \(v\) must exist since \(\bar{\eta}\) is in the interior of \(\Xi\) (by Assumption 3). This perturbed parameter \(\tilde{\eta}\) results in a new equilibrium \(\bar{s}_{\tilde{\eta}}=\bar{s}_{\bar{\eta}+v}\). By Taylor's expansion for multivariate functions [26, Theorem 12.14], for almost every \(x\), we have
\[\bar{s}_{\tilde{\eta}}(x)=\bar{s}_{\bar{\eta}}(x)+\overbrace{(\tilde{\eta}-\bar{\eta})^{T}\nabla_{\eta}\bar{s}_{\bar{\eta}}(x)}^{=v^{T}\nabla_{\eta}\bar{s}_{\bar{\eta}}(x)=0}\]
Fig. 1: Convergence of estimator. The red dashed line is the true parameter value and quantiles of the estimator values for 100 experiments per each \(N\) are shown in blue.
\[\begin{split}\|(\eta_{1}-\bar{\eta}_{1})\mathds{1}+(\eta_{2}-\bar{\eta}_{2})(\gamma\mathds{1}+\bar{z}_{\bar{\eta}}^{\perp})\|_{L^{2}}^{2}&=\|((\eta_{1}-\bar{\eta}_{1})+\gamma(\eta_{2}-\bar{\eta}_{2}))\mathds{1}+(\eta_{2}-\bar{\eta}_{2})\bar{z}_{\bar{\eta}}^{\perp}\|_{L^{2}}^{2}\\ &\geq\big(((\eta_{1}-\bar{\eta}_{1})+\gamma(\eta_{2}-\bar{\eta}_{2}))^{2}+(\eta_{2}-\bar{\eta}_{2})^{2}\big)\bar{\nu}\\ &\geq\lambda_{\min}\begin{pmatrix}1&\gamma\\ \gamma&\gamma^{2}+1\end{pmatrix}\bar{\nu}\left\|\begin{bmatrix}\eta_{1}-\bar{\eta}_{1}\\ \eta_{2}-\bar{\eta}_{2}\end{bmatrix}\right\|^{2}\end{split} \tag{16}\]
where \(\bar{z}_{\bar{\eta}}^{\perp}\) denotes the component of \(\bar{z}_{\bar{\eta}}\) orthogonal to \(\mathds{1}\) and \(\bar{\nu}:=\min(1,\|\bar{z}_{\bar{\eta}}^{\perp}\|_{L^{2}}^{2})\). We distinguish two cases.
**Case 1:** If \(\bar{z}_{\bar{\eta}}\neq\gamma\mathds{1}\), it must be that \(\bar{z}_{\bar{\eta}}^{\perp}\neq 0\). The minimal eigenvalue \(\lambda_{m}=((\gamma^{2}+2)-\sqrt{(\gamma^{2}+2)^{2}-4})/2\) in (16) is positive for any \(\gamma\), hence, combining (15) and (16) yields parameter identifiability
\[\|\eta-\bar{\eta}\|<2(\lambda_{m}\bar{\nu})^{-\frac{1}{2}}\|\bar{s}_{\eta}- \bar{s}_{\bar{\eta}}\|_{L^{2}}=:L_{\bar{\eta}}\|\bar{s}_{\eta}-\bar{s}_{\bar{ \eta}}\|_{L^{2}}.\]
**Case 2:** If \(\bar{z}_{\bar{\eta}}=\gamma\mathds{1}\) (i.e., when \(\bar{z}_{\bar{\eta}}^{\perp}=0\)), then the left hand side of (15) becomes \(\|(\eta_{1}-\bar{\eta}_{1})\mathds{1}+(\eta_{2}-\bar{\eta}_{2})\mathbb{W}\bar{s}_{\bar{\eta}}\|_{L^{2}}=\|(\eta_{1}-\bar{\eta}_{1})\mathds{1}+(\eta_{2}-\bar{\eta}_{2})\gamma\mathds{1}\|_{L^{2}}=|(\eta_{1}+\gamma\eta_{2})-(\bar{\eta}_{1}+\gamma\bar{\eta}_{2})|\). From (15), we then obtain
\[|(\eta_{1}+\gamma\eta_{2}) -(\bar{\eta}_{1}+\gamma\bar{\eta}_{2})|\leq 2\|\bar{s}_{\eta}- \bar{s}_{\bar{\eta}}\|_{L^{2}}\] \[=:L_{\bar{\eta}}\|\bar{s}_{\eta}-\bar{s}_{\bar{\eta}}\|_{L^{2}}.\]
Hence \(\eta_{1}+\gamma\eta_{2}\) is identifiable. Finally, to show that when \(\bar{z}_{\bar{\eta}}=\gamma\mathds{1}\), \(\eta\) is not identifiable, we provide a counterexample. Consider a simple LQ graphon game satisfying Assumptions 1.b and 5 with constant graphon \(W(x,y)=c\) for some \(c\in\mathbb{R}\). By (10), for any \(\eta\), the unique Nash equilibrium of this LQ game and its corresponding local aggregate are \(\bar{s}=\frac{\eta_{1}}{1-\eta_{2}c}\mathds{1}\) and \(\bar{z}=c\bar{s}=c\frac{\eta_{1}}{1-\eta_{2}c}\mathds{1}\). It is then clear that for any pair of parameters \((\eta_{1},\eta_{2})\) such that \(\frac{\eta_{1}}{1-\eta_{2}c}=\frac{\bar{\eta}_{1}}{1-\bar{\eta}_{2}c}\), the identifiability condition will be violated since \(\begin{bmatrix}\eta_{1}-\bar{\eta}_{1}\\ \eta_{2}-\bar{\eta}_{2}\end{bmatrix}\neq 0\) but \(\|\bar{s}_{\eta}-\bar{s}_{\bar{\eta}}\|_{L^{2}}=0\).
**Proof of Proposition 4**
By Assumption 1.b, since \(\eta_{2}\|\mathbb{W}\|_{\infty}<1\) for all \(\eta\in\Xi\), the homogeneous LQ game equilibrium in (10) can be rewritten by using the Neumann series \(\bar{s}_{\eta}=(\mathbb{I}-\eta_{2}\mathbb{W})^{-1}\eta_{1}\mathds{1}=\eta_{1}\sum_{k=0}^{\infty}\eta_{2}^{k}\mathbb{W}^{k}\mathds{1}\). Hence, for each \(x\),
\[\bar{s}_{\eta}(x)=\eta_{1}\sum_{k=0}^{\infty}\eta_{2}^{k}(\mathbb{W}^{k} \mathds{1})(x)=\eta_{1}f_{0}(\eta_{2},x)\]
where \(f_{h}(\eta_{2},x)\) is as defined in Lemma 4 with \(\alpha_{k}(x):=(\mathbb{W}^{k}\mathds{1})(x)\). We next compute the partial derivatives of \(\bar{s}_{\eta}\) and express them in terms of \(f_{h}(\eta_{2},x)\)
\[\frac{\partial\bar{s}_{\eta}(x)}{\partial\eta_{1}} =f_{0}(\eta_{2},x),\quad\frac{\partial^{2}\bar{s}_{\eta}(x)}{ \partial\eta_{1}^{2}}=0,\] \[\frac{\partial\bar{s}_{\eta}(x)}{\partial\eta_{2}} =\eta_{1}\sum_{k=1}^{\infty}k\eta_{2}^{k-1}\mathbb{W}^{k} \mathds{1}=\eta_{1}f_{1}(\eta_{2},x),\] \[\frac{\partial^{2}\bar{s}_{\eta}(x)}{\partial\eta_{2}^{2}} =\eta_{1}\sum_{k=2}^{\infty}k(k-1)\eta_{2}^{k-2}\mathbb{W}^{k} \mathds{1}=\eta_{1}f_{2}(\eta_{2},x),\] \[\frac{\partial^{2}\bar{s}_{\eta}(x)}{\partial\eta_{1}\partial\eta_ {2}} =\frac{\partial^{2}\bar{s}_{\eta}(x)}{\partial\eta_{2}\partial\eta_{1}}=\sum_{k =1}^{\infty}k\eta_{2}^{k-1}\mathbb{W}^{k}\mathds{1}=f_{1}(\eta_{2},x).\]
Since \(\eta_{2}^{\max}<\infty\) exists by compactness of \(\Xi\), \(|\alpha_{k}(x)|\leq\|\mathbb{W}\|_{\infty}^{k}\) and \(\eta_{2}^{\max}\|\mathbb{W}\|_{\infty}<1\) by assumption, Lemma 4 with \(\beta=\|\mathbb{W}\|_{\infty}\) ensures that \(f_{h}(\eta_{2},x)\) is well defined, uniformly bounded by a positive constant \(B_{h}\) and \(L_{h}\)-Lipschitz continuous in \(\eta_{2}\) uniformly in \(x\) for any \(h\in\mathbb{N}\). Hence, the equilibrium and its partial derivatives are well defined. To prove Lipschitz continuity, it suffices to show that \(\eta_{1}f_{h}(\eta_{2},x)\) is Lipschitz continuous in \(\eta\) for all \(x\in[0,1]\) and for \(h=0,1,2\). This holds since
\[|\eta_{1}f_{h} (\eta_{2},x)-\tilde{\eta}_{1}f_{h}(\tilde{\eta}_{2},x)|\] \[=|(\eta_{1}-\tilde{\eta}_{1})f_{h}(\eta_{2},x)-\tilde{\eta}_{1}(f _{h}(\tilde{\eta}_{2},x)-f_{h}(\eta_{2},x))|\] \[\leq|\eta_{1}-\tilde{\eta}_{1}||f_{h}(\eta_{2},x)|+|\tilde{\eta}_ {1}||f_{h}(\tilde{\eta}_{2},x)-f_{h}(\eta_{2},x)|\] \[\leq|\eta_{1}-\tilde{\eta}_{1}|B_{h}+\eta_{1}^{\max}L_{h}|\eta_{2} -\tilde{\eta}_{2}|\] \[\leq\max(B_{h},\eta_{1}^{\max}L_{h})((\eta_{1}-\tilde{\eta}_{1})^{ 2}+(\eta_{2}-\tilde{\eta}_{2})^{2})\] \[=:K_{h}\|\eta-\tilde{\eta}\|\quad\forall x\in[0,1]\]
for \(K_{h}:=\max(B_{h},\eta_{1}^{\max}L_{h})\) and \(\eta_{1}^{\max}:=\max_{\eta\in\Xi}|\eta_{1}|<\infty\) since \(\Xi\) is compact.
**Proof of Proposition 5**
By Lemma 5, we can relate the graphon game equilibrium to a vector \(\bar{s}_{\eta}\in\mathbb{R}^{K}\) satisfying
\[\bar{\bar{s}}_{\eta}-\Delta_{\eta}Q\Delta_{\pi}\bar{\bar{s}}_{\eta}=\theta_{1} \mathds{1}.\]
Subtracting to \(\bar{\bar{s}}_{\bar{\eta}}\) the expression for the generic equilibrium \(\bar{\bar{s}}_{\eta}\in\mathbb{R}^{K}\), we get
\[(\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\bar{\eta}})-\Delta_{\eta}Q \Delta_{\pi}\bar{\bar{s}}_{\eta}+\Delta_{\bar{\eta}}Q\Delta_{\pi}\bar{\bar{s}}_{ \bar{\eta}} =0\] \[(\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\bar{\eta}})-\Delta_{\eta}Q \Delta_{\pi}(\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\bar{\eta}})+(\Delta_{\bar{ \eta}}-\Delta_{\eta})Q\Delta_{\pi}\bar{\bar{s}}_{\bar{\eta}} =0\] \[(\mathbb{I}-\Delta_{\eta}Q\Delta_{\pi})(\bar{\bar{s}}_{\eta}-\bar{ \bar{s}}_{\bar{\eta}})=(\Delta_{\eta}-\Delta_{\bar{\eta}})Q\Delta_{\pi}\bar{ \bar{s}}_{\bar{\eta}}.\]
Then, taking the norm of both sides, we have
\[\|(\Delta_{\eta}-\Delta_{\bar{\eta}})Q\Delta_{\pi}\bar{s}_{\bar{ \eta}}\|=\|(I-\Delta_{\eta}Q\Delta_{\pi})(\bar{s}_{\eta}-\bar{\bar{s}}_{ \bar{\eta}})\| \tag{17}\] \[\leq(\|I\|+\|\Delta_{\eta}\|\cdot\|Q\Delta_{\pi}\|)\|\bar{\bar{s}} _{\eta}-\bar{\bar{s}}_{\bar{\eta}}\|\] \[\stackrel{{(a)}}{{\leq}}(1+\max_{k}(\eta_{k})\lambda_{ \max}(Q\Delta_{\pi}))\|\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{\bar{\eta}}\| \stackrel{{(b)}}{{\leq}}2\|\bar{\bar{s}}_{\eta}-\bar{\bar{s}}_{ \bar{\eta}}\|\]
where (a) follows from [2, Lemma 10] since \(Q\) is symmetric, and, since \(\lambda_{\max}(Q\Delta_{\pi})=\lambda_{\max}(\mathbb{W})\), (b) follows by Assumption 1.b as \(\max_{k}(\eta_{k})\lambda_{\max}(\mathbb{W})<1\).
By using this fact, it can be easily shown that under the given assumptions, the partial derivatives of \(\bar{\bar{s}}_{\eta}\) exist5 and can be bounded uniformly in \(\eta\) as follows
Footnote 5: The partial derivatives of \(\bar{\bar{s}}_{\eta}=\theta_{1}\nabla_{\eta}^{-1}\mathds{1}\), where \(\nabla_{\eta}:=\mathbb{I}-\Delta_{\eta}Q\Delta_{\pi}\), with respect to \(\eta\) can be computed by using the identity \(\frac{\partial\nabla_{\eta}^{-1}}{\partial\eta}=-\nabla_{\eta}^{-1}\frac{\partial\nabla_{\eta}}{\partial\eta}\nabla_{\eta}^{-1}\), [28, Eq. (59)].
\[\left\|\frac{\partial\bar{\bar{s}}_{\eta}}{\partial\eta_{i}}\right\|\leq\theta_{1}\bar{V}^{2}\lambda_{\max}(\mathbb{W})\sqrt{K}=:M_{1},\qquad\left\|\frac{\partial^{2}\bar{\bar{s}}_{\eta}}{\partial\eta_{j}\partial\eta_{i}}\right\|\leq 2\theta_{1}\bar{V}^{3}\lambda_{\max}(\mathbb{W})^{2}\sqrt{K}=:M_{2},\qquad\left\|\frac{\partial^{3}\bar{\bar{s}}_{\eta}}{\partial\eta_{l}\partial\eta_{j}\partial\eta_{i}}\right\|\leq 6\theta_{1}\bar{V}^{4}\lambda_{\max}(\mathbb{W})^{3}\sqrt{K}=:M_{3}.\]
The conclusion then follows from Lemmas 3 and 5. We illustrate this result for the equilibrium. First, note that the gradient of \(\bar{\bar{s}}_{\eta}\) can be bounded uniformly in \(\eta\) as follows
\[\|\nabla_{\eta}\bar{\bar{s}}_{\eta}\|_{2}\leq\|\nabla_{\eta}\bar{\bar{s}}_{ \eta}\|_{F}=\sqrt{\sum_{i=1}^{K}\left\|\frac{\partial\bar{\bar{s}}_{\eta}}{ \partial\eta_{i}}\right\|_{2}^{2}}\leq\sqrt{K}M_{1}=:L_{1}\]
where \(\|\cdot\|_{F}\) denotes the Frobenius norm. For any \(k\), this implies that \(\|\nabla_{\eta}[\bar{\bar{s}}_{\eta}]_{k}\|\leq\|\nabla_{\eta}\bar{\bar{s}}_{ \eta}\|_{2}\leq L_{1}\). Hence, by Lemma 3, \([\bar{\bar{s}}_{\eta}]_{k}\) is Lipschitz continuous with constant \(L_{1}\). It follows that the graphon equilibrium \(\bar{s}_{\eta}\) is uniformly Lipschitz in \(\eta\) since for any \(x\in[0,1]\), there exists \(k\) such that
\[|\bar{s}_{\eta}(x)-\bar{s}_{\bar{\eta}}(x)|=|[\bar{\bar{s}}_{\eta}]_{k}-[ \bar{\bar{s}}_{\bar{\eta}}]_{k}|\leq L_{1}\|\eta-\tilde{\eta}\|.\]
Similar arguments apply to the first and second order derivatives of \(\bar{s}_{\eta}\).
|
2310.18706
|
ALERTA-Net: A Temporal Distance-Aware Recurrent Networks for Stock
Movement and Volatility Prediction
|
For both investors and policymakers, forecasting the stock market is
essential as it serves as an indicator of economic well-being. To this end, we
harness the power of social media data, a rich source of public sentiment, to
enhance the accuracy of stock market predictions. Diverging from conventional
methods, we pioneer an approach that integrates sentiment analysis,
macroeconomic indicators, search engine data, and historical prices within a
multi-attention deep learning model, masterfully decoding the complex patterns
inherent in the data. We showcase the state-of-the-art performance of our
proposed model using a dataset, specifically curated by us, for predicting
stock market movements and volatility.
|
Shengkun Wang, YangXiao Bai, Kaiqun Fu, Linhan Wang, Chang-Tien Lu, Taoran Ji
|
2023-10-28T13:31:39Z
|
http://arxiv.org/abs/2310.18706v1
|
ALERTA-Net: A Temporal Distance-Aware Recurrent Networks for Stock Movement and Volatility Prediction
###### Abstract
For both investors and policymakers, forecasting the stock market is essential as it serves as an indicator of economic well-being. To this end, we harness the power of social media data, a rich source of public sentiment, to enhance the accuracy of stock market predictions. Diverging from conventional methods, we pioneer an approach that integrates sentiment analysis, macroeconomic indicators, search engine data, and historical prices within a multi-attention deep learning model, masterfully decoding the complex patterns inherent in the data. We showcase the state-of-the-art performance of our proposed model using a dataset, specifically curated by us, for predicting stock market movements and volatility.
stock market prediction, twitter, google trends, sentiment analysis, macroeconomic data
## I Introduction
Significantly influencing other business sectors [1], the stock market serves as a vital mechanism and is crucial for companies to raise capital. With U.S. stock holdings expected to hit $40 trillion in 2023, equating to 1.5 times the nation's GDP, it stands as a major portion of the entire economy, highlighting the stock market's pivotal position as a benchmark for the U.S. economic landscape. Our research centers on blue-chip stocks1, which mirror the broader dynamics of the stock market.
Footnote 1: Blue chip stocks are shares issued by financially robust, well-established companies with stellar reputations.
We've selected 41 blue-chip stocks from 10 Global Industry Classification Standard (GICS)2 Sectors for our financial market study. Each of these stocks is considered investment-worthy3 by both Moody's and S&P. Recognizing the intrinsic challenge in accurately predicting stock prices as highlighted by Nguyen et al. [2], we use blue-chip stocks in our research to anticipate upcoming stock price movements and volatility trends, as indicated by Feng et al. [3] and Xu et al. [4].
Footnote 2: GICS classifies companies into specific economic sectors and industry groups that most accurately represent their business operations.
Footnote 3: Companies rated Baa or higher by Moody’s and Standard & Poor’s (S&P) are considered to be of high quality and deemed investment-worthy.
In the domain of stock market research, two primary methodologies prevail: technical analysis and fundamental analysis. Technical analysis utilizes past stock prices to predict future trends [5]. However, its heavy dependence on historical data can sometimes overlook sudden market changes due to unexpected events. Assuming a uniformly rational market behavior, this methodology can inadvertently create an echo chamber. This effect can cause trading signals to amplify themselves, eventually becoming disconnected from the actual economic context. Conversely, fundamental analysis integrates both price features and external information, including data from social media [6] and search engines [7]. Mao et al. [8] demonstrated an enhanced accuracy in forecasting the S&P 500 closing price when integrating Twitter4 data into their model. While these data sources frequently reflect not only the financial market but also vital economic indicators, the prevailing research in fundamental analysis tends to emphasize the financial market, neglecting the symbiotic relationship between the broader economy and the stock market. Moreover, while existing models mainly center on forecasting trend shifts [9], they often neglect the importance of the scale of these changes. In the realm of stock behavior, the magnitude of these shifts holds significant weight.
Footnote 4: Despite the recent rebranding of Twitter to “X”, this article retains the use of its original name, “Twitter”.
In this paper, we propose ALERTA-Net: Attentional TemporaL DistancE AwaRe RecurrenT NeurAl Networks. To the best of our knowledge, this is the first paper to use the combination of social media, macroeconomic data, and search engine information to predict both stock price movement and volatility. Our contributions can be summarized as follows:
* **Proposing a framework enabling the fusion of social media, macroeconomic factors and search engine data for stock movements and volatility.** By integrating the above information with our method, we can not only predict stock price movements but also efficiently extract information from stock market volatility. This allows us to provide advance warning of any unusual fluctuations in the stock market in the future.
* **Formulating temporal distance-aware, multi-attention mechanisms on multi-view market data.** The proposed ALERTA-Net shines in recognizing the dynamic, temporal distance-based relationships inherent within different hidden states. Capitalizing on same-day stock price movements, the model greatly amplifies its precision in predicting stock market volatility.
* **Validating the effectiveness and efficiency of the proposed model via experiments and comparisons**. We conduct experiments on one real-world dataset5. Both conventional methods and deep learning based methods for stock market movements and volatility are selected for comparisons. Evaluations of various metrics are presented, illustrating the effectiveness of our proposed model.
Footnote 5: Our dataset is available at [https://github.com/hao1zhao/ALERTA-Net](https://github.com/hao1zhao/ALERTA-Net)
## II Related Work
Predictive methods for stock movement can be broadly categorized into two main types: technical analysis and fundamental analysis. While technical analysis relies exclusively on past price data to anticipate future trends, fundamental analysis adopts a more comprehensive approach, considering not only historical prices but also information from textual sources, economic indicators, financial metrics, and a myriad of both qualitative and quantitative aspects.
**Social Media**: Numerous studies have explored stock market predictions using social media data. Contemporary research combines sentiment analysis with historical price data, extracting insights from platforms such as Yahoo's message board [10], blogs [11], Twitter [4], and Reddit [12]. This integration has revealed correlations with stock market trends.
**News and Search Engine**: Exploring the influence of public news and user browsing habits, research has probed into how traders respond to news events. For instance, Xiong et al. [13] regard Google Trends and market data as catalysts for daily S&P 500 variations. Other works, such as Bordino et al. [14], draw connections between search query volume and stock activity. More sophisticated techniques, like the hierarchical attention mechanisms introduced by Hu et al. [15], extract news sequences directly from textual content to predict stock trends.
**Macroeconomic Indicators**: Numerous studies pinpoint various economic elements influencing stock returns. Notably, Ferson et al. [16] highlight the central role of interest rates in dictating stock returns. In addition, indicators like the relative T-Bill rate and the consumption-wealth ratio are underscored as crucial predictors by Jank et al. [17]. Beyond these, economic markers like unemployment rates, inflation, and commodity prices also exert significant influence on stock returns, as affirmed by sources like [18] and [19].
## III Problem Setup
Let \(p=\{p^{1},\ldots,p^{t}\}\) denote the stock's daily adjusted closing price. We formulate the actual labels movement \(y_{m}\) and volatility \(y_{v}\) as follows:
\[y_{m}^{t}=\mathbb{1}(p^{t}-p^{t-1}), \tag{1}\]
\[y_{v}^{t}=\mathbb{1}(p^{t}-p^{t-1})/p^{t-1}. \tag{2}\]
The actual labels can be represented as \(y_{m}=\{y_{m}^{1},\ldots,y_{m}^{t}\}\in\mathbb{R}^{T}\) and \(y_{v}=\{y_{v}^{1},\ldots,y_{v}^{t}\}\in\mathbb{R}^{T}\). Let \(X^{T}=\{x^{1},\ldots,x^{t}\}\in\mathbb{R}^{D\times T}\) represent the sequential input features (e.g., sentiment scores, stock adjusted prices) from the previous \(T\) time-steps, where \(D\) signifies the dimension of the features. Our goal is to utilize the sequence of features \(X^{T}\) to predict the next time-step movements \(\hat{y}_{m}\) and volatility \(\hat{y}_{v}\) of blue-chip stocks; accordingly, we define the prediction functions \(\hat{y}_{m}^{t}=f(X^{T};\Theta_{1}),\hat{y}_{v}^{t}=g(X^{T},\hat{y}_{m}^{t};\Theta_{2})\), where \(f\), with parameters \(\Theta_{1}\), aims to predict the movement of a stock at the next time-step from the sequential features \(X^{T}\), and \(g\) aims to predict the unusual fluctuation from both \(X^{T}\) and \(\hat{y}_{m}^{t}\).
In practical settings, by varying the time lag in extensive historical stock data, we can often produce numerous training examples. However, for clarity in presenting our proposed technique, we zero in on a distinct time lag for predicting both movement and volatility. Additionally, our predictive models have assimilated the concept of adjusted stock prices [20]. This ensures that our models discern authentic stock value changes driven by market factors, rather than getting influenced by artificial shifts arising from corporate decisions.
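A minimal sketch of the label construction implied by Eqs. (1)-(2) and the thresholds of Section V-A is given below (ours; the exact handling of the +/-0.5% dead zone in the original pipeline is an assumption).

```python
import numpy as np

def make_labels(adj_close, vol_threshold=0.05, dead_zone=0.005):
    """Build movement and volatility labels from daily adjusted closing prices.

    Movement: 1 if the day-over-day change is positive, 0 otherwise; days whose
    absolute return falls inside the +/-0.5% dead zone are marked for removal.
    Volatility: 1 if the absolute daily return is at least 5%, else 0.
    """
    p = np.asarray(adj_close, dtype=float)
    ret = (p[1:] - p[:-1]) / p[:-1]
    y_m = (ret > 0).astype(int)
    y_v = (np.abs(ret) >= vol_threshold).astype(int)
    keep = np.abs(ret) >= dead_zone              # drop negligible moves (Section V-A)
    return y_m, y_v, keep
```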
Figure 1: The architecture of ALERTA-Net is designed to predict the movement \(y_{m}^{t}\), and volatility \(y_{v}^{t}\) on day \(t\). In the data input and preprocessing phase, we extract textual information from Twitter and convert it into sentiment scores; Then, ALERTA-Net utilizes these scores, along with other features, to make predictions, taking temporal distance into account.
## IV Framework Components
In this paper, we introduce a new framework, ALERTA-Net, which incorporates temporal distance-aware, multi-attention mechanisms for processing multi-view stock data. The overall architecture is illustrated in Figure 1. The data input & preprocessing layer transforms both temporal and textual information into dense vectors. Then, the temporal distance-aware layer uses a recurrent representation to identify hidden dependencies within the current stock data based on past information, and the distance-matrix context integrates these historical dependencies across the sequence of features \(X^{T}\). Lastly, the prediction layer generates time-aware forecasts of the stock movements and volatility at the next time-step, thereby providing a complete and cohesive system for stock prediction.
**Data input & preprocessing**. We designate the textual data extracted from Twitter as \(\alpha\). To quantify the embedded sentiment within the Twitter text, we utilize the roBERTa-base sentiment model in combination with TweetNLP [21], facilitating the generation of tweet sentiment scores denoted as \(\hat{\alpha}\). Then, we concatenate \(\hat{\alpha}\) with other relevant historical data, yielding \(E^{T}=\{e^{1},\ldots,e^{t}\}\in\mathbb{R}^{D\times T}\), in which the feature dimension is 17. Since normalization provides a uniform scale to all features, thereby precluding any specific feature from dominating, we apply log normalization to \(E^{T}\). To avoid numerical instability, we add a small constant \(\epsilon=10^{-8}\); the formula is as follows:
\[X^{T}=\log\left(E^{T}+\epsilon\right). \tag{3}\]
**Temporal distance aware (TDA)**. In this layer, we propose a temporal-distance aware mechanism to enhance the modulation of conditional dependencies by allowing the model to access and directly attend to previous hidden states. First, given their proficiency in handling long-term dependencies, recurrent neural networks are extensively employed for sequential data processing [22]. The general idea of a recurrent unit is to recurrently project the input sequence into a sequence of hidden representations. At each time-step, the recurrent unit learns the hidden representation \(h_{t}\) by jointly considering the input \(x_{t}\) and the previous hidden representation \(h_{t-1}\) to capture sequential dependency. To capture the sequential dependencies and temporal patterns in the historical stock features, a GRU [23] recurrent unit is applied to map \(\{x^{1},\ldots,x^{t}\}\) into hidden representations \(\{h^{1},\ldots,h^{t}\}\in\mathbb{R}^{U\times T}\), with dimension \(U\).
Instead of just using the immediate previous hidden state \(h^{t}\) to update the current hidden state \(h^{t+1}\), we now consider a weighted sum of all previous hidden states. This not only enables the model to place greater emphasis on the impact of recent events on the stock market but also allows it to consider a longer history. We utilize a temporal distance to represent the interval between two time steps.
\[w^{i}=\frac{1}{t-i+1}, \tag{4}\]
where the weight \(w^{i}\) for a hidden state at time \(i\) based on its temporal distance from the current time \(t\). By adding 1 to the denominator, we ensure that even the most recent hidden state gets a weight, preventing division by zero.
Then, we apply weights to the hidden states of the GRU:
\[c^{t}=\text{GRU}\left(x^{t},\sum_{i=1}^{t}w^{i}\cdot h^{i}\right), \tag{5}\]
each hidden state \(h^{i}\) is multiplied by its respective weight \(w^{i}\), and the results are summed up. The result is a context state \(c^{t}\) that captures the weighted influence of all previous states.
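The weighting scheme of Eqs. (4)-(5) is easy to isolate from the rest of the network. The numpy sketch below (ours; function names are illustrative) computes the temporal-distance weights and the weighted sum of past hidden states; in the full model this weighted state is fed to the GRU cell together with \(x^{t}\).

```python
import numpy as np

def tda_weights(t):
    """Temporal-distance weights w^i = 1 / (t - i + 1) for i = 1..t (Eq. 4)."""
    i = np.arange(1, t + 1)
    return 1.0 / (t - i + 1)

def tda_context(hidden_states):
    """Weighted sum of all previous hidden states, the recurrence input in Eq. 5."""
    H = np.stack(hidden_states)                 # shape (t, U)
    w = tda_weights(H.shape[0])
    return w @ H                                # shape (U,)

# Toy usage: U = 8 hidden units, 5 past GRU states; the result would be passed
# to the GRU cell together with the current feature vector x^t.
rng = np.random.default_rng(0)
hs = [rng.normal(size=8) for _ in range(5)]
context_input = tda_context(hs)
```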
**Prediction Layer**. Instead of directly predicting stock movement, denoted as \(y_{m}^{t}\), and volatility \(y_{v}^{t}\), we first concatenate the context \(c^{t}\) with the previous hidden state \(h^{t}\) in the fusion layer. We then use the cross-entropy function as the optimizer for stock price movement prediction. Following this, we concatenate the last hidden state \(h^{t}\), context \(c^{t}\), and the output \(y_{m}^{t}\) to predict volatility \(y_{v}^{t}\). For this prediction, we utilize the binary cross-entropy with logits as the loss function.
## V Experiment
### _Dataset Description_
Our dataset coalesces three main components: historical price data, Twitter data, and macroeconomic data.
**Yahoo Finance**. We sourced historical data from Yahoo Finance, which monitored the trajectory of 41 blue-chip stocks between June 1, 2020, and June 1, 2023. To hone our prediction objectives, we designated a threshold range spanning from -0.5% to 0.5% to filter out negligible shifts. While Baker et al. [24] suggest daily stock price alterations exceeding 2.5% are deemed notable, our model's aim is to forecast atypical volatility. Aligning with Ding et al.'s insights [25] on distinct stock fluctuation parameters, we've established a loftier standard, recognizing a 5% swing as an outlier. As a result, we categorize samples with variances below 5% as 0 and those at or above 5% as 1.
**Twitter**. During the same date range, we included approximately 7.8 million tweets, gathered via Twitter's official API at a sampling rate of 10%. We were particular in our selection of tweets: they needed to contain at least one cashtag and had to be posted within standard U.S. trading hours, from 9 am to 4:30 pm. We recognized the significant influence that Twitter volume has on stock trading, a fact underscored by Cazzoli et al. [26]. Thus, we ensured that our model's input parameters included the daily count of processed Twitter posts.
**Google Trends & Federal Reserve Economic Data**. We engaged in targeted searches on Google and Federal Reserve Economic Data, using carefully curated keywords originating from the "Outline of Economics" Wikipedia page. In light of the disparate update intervals across various indicators, we broke down each data pull into smaller segments, ensuring normalization of the data across these windows.
### _Baseline Methods and Evaluation Metrics_
We evaluate our model's effectiveness by comparing it with DP-LSTM [27], a renowned stock movement prediction network by using financial data.
Other benchmarks employed in our study include Extreme Gradient Boosting [28], Attention-based LSTM [29], and GRU [23]. Following similar procedures in (Xu et al. [22]; Zhang et al. [9]), we report our results in terms of Accuracy (Acc.) and Matthews Correlation Coefficient (MCC). Given that data points involving stock price changes greater than 5% constitute only a minor portion of our dataset, we also utilize the Area Under the ROC Curve (AUC) as a performance metric in order to achieve a more robust and realistic evaluation.
### _Results_
The performances of our proposed models and the established baselines are detailed in TABLE I. AT-LSTM is observed to be the superior baseline model in terms of accuracy and MCC for movement prediction, while GRU shows an outstanding performance in the AUC (Area Under the Curve) and MCC for volatility prediction. ALERTA-Net surpasses both of these models by significant margins. In terms of accuracy, ALERTA-Net achieves a score of 0.5238 and 0.6136, outperforming GRU and AT-LSTM by 1.4% and 4.2% respectively. Additionally, for MCC, ALERTA-Net outperforms GRU and AT-LSTM by 22.8% and 6.2% respectively, and outshines both in AUC by a margin of 3.4%. Overall, these results reinforce the effectiveness of our proposed model ALERTA-Net.
### _Ablation Study_
To conduct an in-depth analysis of the core components of ALERTA-Net, we have constructed three variations alongside the fully-loaded model. Each variant is specifically tailored to handle certain types of input data: ALERTA-Net(P) solely relies on closed price data, ALERTA-Net(S) exclusively processes Twitter-derived sentiment data, while ALERTA-Net(W/O M) incorporates both price and sentiment data, but omits macroeconomic information. As shown in TABLE II, our ablation study revealed that incorporating macroeconomic data significantly enhances the predictive capabilities of the model for stock movement and volatility to varying degrees.
## VI Conclusion
We introduced ALERTA-Net, a deep generative neural network architecture, to showcase the efficacy of combining search engine data, macroeconomic data, and social media data when predicting stock movements and volatility. We tested our model on a new comprehensive dataset and showed that it performs better than strong baselines. In future studies, we plan to enhance accuracy by integrating a variety of text and audio sources, including earnings calls and financial reports.
|
2303.16607
|
Spectral gap of the symmetric inclusion process
|
We consider the symmetric inclusion process on a general finite graph. Our
main result establishes universal upper and lower bounds for the spectral gap
of this interacting particle system in terms of the spectral gap of the random
walk on the same graph. In the regime in which the gamma-like reversible
measures of the particle systems are log-concave, our bounds match, yielding a
version for the symmetric inclusion process of the celebrated Aldous' spectral
gap conjecture originally formulated for the interchange process. Finally, by
means of duality techniques, we draw analogous conclusions for an interacting
diffusion-like unbounded conservative spin system known as Brownian energy
process.
|
Seonwoo Kim, Federico Sau
|
2023-03-29T11:40:36Z
|
http://arxiv.org/abs/2303.16607v2
|
# Spectral gap of the symmetric inclusion process
###### Abstract.
We consider the symmetric inclusion process on a general finite graph. Our main result establishes universal upper and lower bounds for the spectral gap of this interacting particle system in terms of the spectral gap of the random walk on the same graph. In the regime in which the gamma-like reversible measures of the particle systems are log-concave, our bounds match, yielding a version for the symmetric inclusion process of the celebrated Aldous' spectral gap conjecture originally formulated for the interchange process. Finally, by means of duality techniques, we draw analogous conclusions for an interacting diffusion-like unbounded conservative spin system known as Brownian energy process.
Key words and phrases:Interacting particle systems; unbounded conservative spin systems; spectral gap; symmetric inclusion process; Brownian energy process.
and with infinitesimal generator \(L_{k}\) given, for all functions \(f\in\mathbb{R}^{\Xi_{k}}\), by
\[L_{k}f(\eta)=\sum_{x\in V}\eta_{x}\sum_{y\in V}c_{xy}\left(\alpha_{y}+\eta_{y} \right)\left(f(\eta-\delta_{x}+\delta_{y})-f(\eta)\right)\,\qquad\eta\in\Xi_{k}. \tag{1.1}\]
In this formula, \(\eta-\delta_{x}+\delta_{y}\) denotes the configuration in \(\Xi_{k}\) obtained from \(\eta\) (with \(\eta_{x}\geq 1\)) by moving a particle from site \(x\) to \(y\). We observe that, if \((\alpha_{y}+\eta_{y})\) were to be replaced by \((1-\eta_{y})\), \(L_{k}\) in (1.1) would describe a SEP-dynamics (e.g., [10]); while, if \(c_{xy}\equiv 1/n\), we would obtain the Moran model with parent-independent mutation (e.g., [11]).
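For intuition, the dynamics generated by (1.1) can be simulated exactly with a standard Gillespie scheme; the sketch below (ours, not from the paper) draws exponential waiting times from the total jump rate and moves one particle per event.

```python
import numpy as np

def simulate_sip(C, alpha, eta0, t_max, rng=None):
    """Gillespie simulation of SIP_k(G, alpha) with the jump rates of (1.1).

    C     : symmetric rate matrix (c_xy), zero diagonal
    alpha : site weights (alpha_x > 0)
    eta0  : initial configuration (particles per site), summing to k
    """
    rng = np.random.default_rng(rng)
    eta = np.array(eta0, dtype=int)
    t, n = 0.0, len(eta)
    while t < t_max:
        # rate of moving a particle from x to y: eta_x * c_xy * (alpha_y + eta_y)
        rates = eta[:, None] * C * (alpha[None, :] + eta[None, :])
        np.fill_diagonal(rates, 0.0)
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        flat = rng.choice(n * n, p=(rates / total).ravel())
        x, y = divmod(flat, n)
        eta[x] -= 1
        eta[y] += 1
    return eta
```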
For each \(k\in\mathbb{N}\), due to the connectedness of the graph \(G\), the generator \(L_{k}\) describes an irreducible Markov chain with a unique invariant measure, which we call \(\mu_{\alpha,k}\). As a simple detailed balance computation shows, the particle system is actually reversible with respect to \(\mu_{\alpha,k}\), which reads as follows:
\[\mu_{\alpha,k}(\eta)=\frac{1}{Z_{\alpha,k}}\prod_{x\in V}\frac{\Gamma(\alpha_ {x}+\eta_{x})}{\Gamma(\alpha_{x})\,\eta_{x}!}\,\qquad\eta\in\Xi_{k}\, \tag{1.2}\]
with
\[Z_{\alpha,k}:=\frac{\Gamma(|\alpha|+k)}{\Gamma(|\alpha|)\,k!}\,\qquad| \alpha|:=\sum_{x\in V}\alpha_{x}\,\]
being the normalization constant. Here, \(\Gamma(\beta)\) is the usual gamma function, for which we recall \(\Gamma(\beta+\ell)/\Gamma(\beta)=\beta\,(\beta+1)\cdots(\beta+\ell-1)\) for \(\beta>0\) and \(\ell\in\mathbb{N}\).
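The measure (1.2) is straightforward to evaluate numerically; the short sketch below (ours) computes it in log-space with gammaln and checks that it sums to one over \(\Xi_{k}\) for a small example.

```python
import numpy as np
from scipy.special import gammaln
from itertools import combinations_with_replacement
from collections import Counter

def sip_reversible_measure(alpha, k):
    """Reversible measure mu_{alpha,k} in (1.2), computed in log-space for stability."""
    alpha = np.asarray(alpha, dtype=float)
    n = len(alpha)
    configs = [tuple(Counter(c).get(x, 0) for x in range(n))
               for c in combinations_with_replacement(range(n), k)]
    logZ = gammaln(alpha.sum() + k) - gammaln(alpha.sum()) - gammaln(k + 1)
    probs = {}
    for eta in configs:
        e = np.array(eta)
        logw = np.sum(gammaln(alpha + e) - gammaln(alpha) - gammaln(e + 1))
        probs[eta] = np.exp(logw - logZ)
    return probs

mu = sip_reversible_measure([0.7, 1.3, 1.0], k=3)
assert abs(sum(mu.values()) - 1.0) < 1e-10
```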
By the ergodic theorem for finite-state Markov chains, the law of the \(k\)-particle system converges in the long-run to the invariant measure \(\mu_{\alpha,k}\). Moreover, since the process is reversible, the generator \(L_{k}\) has real and non-positive eigenvalues
\[-\lambda_{k,n-1}\leq\ldots\leq-\lambda_{k,1}<-\lambda_{k,0}=0\,\]
and all admit a variational characterization. Among these, the spectral gap -- namely, the second smallest eigenvalue of \(-L_{k}\) -- measures the exponential rate of such a convergence.
In what follows, we let
\[\operatorname{gap}_{k}(G,\alpha):=\lambda_{k,1}\]
denote the spectral gap of \(\operatorname{SIP}_{k}(G,\alpha)\). Our main goal is to estimate \(\operatorname{gap}_{k}(G,\alpha)\) in terms of the spectral gap of the corresponding random walk on the same graph, i.e., the Markov process, referred to as \(\operatorname{RW}(G,\alpha)\), on \(V\) with generator \(A_{\alpha}\) acting on functions \(f\in\mathbb{R}^{V}\) as
\[A_{\alpha}f(x)=\sum_{y\in V}c_{xy}\,\alpha_{y}\left(f(y)-f(x)\right)\,\qquad x \in V\.\]
Since the rate to jump from site \(x\) to \(y\) equals \(c_{xy}\,\alpha_{y}\), detailed balance shows that the reversible measure for \(\operatorname{RW}(G,\alpha)\) is proportional to \(\alpha=(\alpha_{x})_{x\in V}\).
We observe that \(\operatorname{SIP}_{k}(G,\alpha)\) with just one particle (\(k=1\)) corresponds to a single random walk (thus, non-interacting) evolving like \(\operatorname{RW}(G,\alpha)\) on the sites of \(G\); the non-trivial inclusion interaction between particles occurs as soon as \(k\geq 2\). Let
\[\operatorname{gap}_{\operatorname{SIP}}(G,\alpha):=\inf_{k\geq 2}\operatorname{ gap}_{k}(G,\alpha)\qquad\text{and}\qquad\operatorname{gap}_{ \operatorname{RW}}(G,\alpha):=\operatorname{gap}_{1}(G,\alpha)\]
denote the spectral gaps of the interacting particle system and of the random walk, respectively. We are now ready to state our main result.
**Theorem 1.1**.: _For every \(G=(V,(c_{xy})_{x,y\in V})\) and \(\alpha=(\alpha_{x})_{x\in V}\),_
\[(1\wedge\alpha_{\min})\operatorname{gap}_{\operatorname{RW}}(G,\alpha)\leq \operatorname{gap}_{\operatorname{SIP}}(G,\alpha)\leq\operatorname{gap}_{ \operatorname{RW}}(G,\alpha)\,\]
_where \(\alpha_{\min}:=\min_{x\in V}\alpha_{x}\)._
These bounds can be used to efficiently estimate the spectral gap of SIP in concrete examples. Remarkably, we observe that the inequalities in Theorem 1.1 saturate to identities as soon as \(\alpha_{\min}\geq 1\) (which is equivalent to the log-concavity of \(\mu_{\alpha,k}\)), yielding the following spectral gaps' identity:
**Corollary 1.2**.: _For every \(G=(V,(c_{xy})_{x,y\in V})\) and \(\alpha=(\alpha_{x})_{x\in V}\) such that \(\alpha_{\min}\geq 1\),_
\[\operatorname{gap}_{\operatorname{SIP}}(G,\alpha)=\operatorname{gap}_{ \operatorname{RW}}(G,\alpha). \tag{1.3}\]
The result in Corollary 1.2 may be interpreted as a SIP's version of the celebrated _Aldous' spectral gap conjecture_, originally formulated for the interchange process and SEP, and recently solved by Caputo _et al._ in [10]. An identity of the type (1.3) may be viewed as an exact tensorization of the Poincare inequality over the \(k\) components, a property which trivially holds when considering \(k\) independent particles. Such an identity is, in general, not expected to hold for truly interacting systems. This property -- apart from situations in which the spectrum is fully explicit (e.g., [14, 15]) -- has been established on _any graph_ only for a few examples other than the interchange process and SEP:
* SEP in contact with equilibrium reservoirs in [13];
* the binomial splitting process in [12] (see also [1]).
Corollary 1.2 proves that SIP with \(\alpha_{\min}\geq 1\) also satisfies this exact tensorization property. See also Section 1.3.2 for a more detailed discussion on Aldous' spectral gap conjecture and related work.
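As a quick numerical illustration of Theorem 1.1 and Corollary 1.2 (an informal sketch reusing `sip_generator` from the snippet after (1.1); the graph and the two choices of \(\alpha\) are arbitrary), one can compare the spectral gaps of \(\operatorname{SIP}_{k}\) for small \(k\) with the random-walk gap.

```python
# Numerical check of the bounds in Theorem 1.1 and of the identity (1.3) on a
# 3-site path graph, reusing sip_generator from the earlier sketch.
import numpy as np

def spectral_gap(L):
    """Second-smallest eigenvalue of -L (the spectrum is real by reversibility)."""
    eigs = np.sort(np.real(np.linalg.eigvals(-np.asarray(L))))
    return eigs[1]

def rw_generator(c, alpha):
    """Generator A_alpha of RW(G, alpha): the rate to jump from x to y is c_xy * alpha_y."""
    n = len(alpha)
    A = np.zeros((n, n))
    for x in range(n):
        for y in range(n):
            if y != x:
                A[x, y] = c[x][y] * alpha[y]
                A[x, x] -= c[x][y] * alpha[y]
    return A

c = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
for alpha in ([0.7, 1.5, 2.0], [1.3, 1.5, 2.0]):     # alpha_min < 1 and alpha_min >= 1
    gap_rw = spectral_gap(rw_generator(c, alpha))
    gaps = [spectral_gap(sip_generator(c, alpha, k)[0]) for k in range(2, 5)]
    lower = min(1.0, min(alpha)) * gap_rw
    print("alpha_min =", min(alpha),
          "| bounds hold:", all(lower - 1e-10 <= g <= gap_rw + 1e-10 for g in gaps),
          "| gap_k == gap_RW:", bool(np.allclose(gaps, gap_rw)))
```

For the second choice of \(\alpha\) the last flag is expected to be true, in line with (1.3); for the first choice, with \(\alpha_{\min}<1\), only the two-sided bounds are guaranteed.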
### BEP and its spectral gap
The Brownian energy process (BEP) is an interacting system of continuous spins placed on the sites of a graph (e.g., [1]). This process falls into the larger class of _unbounded conservative spin systems_, and is intimately related to SIP. The spins (or, energies) evolve as diffusions. Moreover, the dynamics preserves the total amount of energy of the system, and is reversible with respect to measures associated to gamma distributions. Just like SIP and the Moran model are related, BEP on the complete graph corresponds to the multi-type Wright-Fisher diffusion with mutation.
Let us now describe the model more formally. Given a graph \(G\) and site-weights \(\alpha=(\alpha_{x})_{x\in V}\), \(\operatorname{BEP}(G,\alpha)\) is the diffusion process \((\zeta(t))_{t\geq 0}\) on \([0,\infty)^{V}\), and whose infinitesimal evolution is described by the following generator:
\[\mathcal{L}=\frac{1}{2}\sum_{x,y\in V}c_{xy}\left\{-\left(\alpha_{y}\,\zeta_{ x}-\alpha_{x}\,\zeta_{y}\right)\left(\partial_{\zeta_{x}}-\partial_{\zeta_{y}} \right)+\zeta_{x}\,\zeta_{y}\left(\partial_{\zeta_{x}}-\partial_{\zeta_{y}} \right)^{2}\right\}. \tag{1.4}\]
The diffusion \((\zeta(t))_{t\geq 0}\) admits \(\nu_{\theta}:=\otimes_{x\in V}\operatorname{Gamma}(\alpha_{x},\theta)\), \(\theta>0\), as a one-parameter family of reversible product measures, fully supported on \([0,\infty)^{V}\). However, all features of the system are well-captured by the dynamics which only considers configurations with unit total energy: on the one side, applying the generator \(\mathcal{L}\) to the function \(\zeta\mapsto|\zeta|:=\sum_{x\in V}\zeta_{x}\) shows that the dynamics conserves the total energy of the system; on the other side, a simple scaling argument demonstrates that the action of \(\mathcal{L}\) does not depend on \(|\zeta|\). Therefore, all throughout, it suffices to consider \(\zeta(t)\) as evolving on \(\Delta_{V}\), the simplex of probability measures on \(V\), for which \(\pi:=\nu_{\theta}(\,\cdot\mid\zeta\in\Delta_{V})\), \(\theta>0\), is reversible, and given by
\[\pi(\mathrm{d}\zeta)=\left(\frac{1}{B(\alpha)}\prod_{x\in V}\zeta_{x}^{\alpha _{x}-1}\right)\mathrm{d}\zeta\,\qquad\text{with }\zeta\in\Delta_{V}\,\ B(\alpha):=\frac{ \prod_{x\in V}\Gamma(\alpha_{x})}{\Gamma(|\alpha|)}\,\]
where \(\mathrm{d}\zeta\) denotes the uniform measure on \(\Delta_{V}\). Note that \(\pi\) is independent of \(\theta>0\).
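For intuition, the generator (1.4) can be read as the diffusion in which, along every edge \(\{x,y\}\), the energy \(\zeta_{x}\) receives drift \(c_{xy}(\alpha_{x}\zeta_{y}-\alpha_{y}\zeta_{x})\) and a noise term of local variance \(2c_{xy}\zeta_{x}\zeta_{y}\), with the opposite increments applied to \(\zeta_{y}\). The Euler-Maruyama sketch below is illustrative only (the graph, parameters, step size and seed are arbitrary, and the naive discretisation is not meant as a faithful simulator); it shows in particular that every update moves energy between the two endpoints of an edge, so the total energy is conserved.

```python
# Illustrative Euler-Maruyama discretisation of BEP(G, alpha) on a 3-site path graph.
import numpy as np

rng = np.random.default_rng(1)
c = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
alpha = np.array([0.7, 1.5, 2.0])
zeta = np.array([0.2, 0.3, 0.5])                    # a point of the simplex Delta_V
dt, n_steps = 1e-4, 5000

for _ in range(n_steps):
    upd = np.zeros_like(zeta)
    for x in range(3):
        for y in range(x + 1, 3):
            if c[x, y] == 0.0:
                continue
            drift = c[x, y] * (alpha[x] * zeta[y] - alpha[y] * zeta[x])
            diff = np.sqrt(max(2.0 * c[x, y] * zeta[x] * zeta[y], 0.0) * dt)
            move = drift * dt + diff * rng.standard_normal()
            upd[x] += move                          # energy gained by x ...
            upd[y] -= move                          # ... is exactly the energy lost by y
    zeta = zeta + upd

print(zeta, zeta.sum())                             # the total energy stays 1 up to rounding
```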
Quantifying the exponential rate of convergence to equilibrium goes through a spectral analysis of the generator \(\mathcal{L}\) on \(L^{2}(\Delta_{V},\pi)\). Since \(\mathcal{L}\) is self-adjoint on \(L^{2}(\Delta_{V},\pi)\), its spectrum is real. Moreover, as we will show in Section 4, \(-\mathcal{L}\) is non-negative and has a pure-point spectrum, with only one zero eigenvalue corresponding to the constant eigenfunction.
In our next result, we provide an analogue of Theorem 1.1 for \(\operatorname{gap}_{\mathrm{BEP}}(G,\alpha)>0\), the smallest non-zero eigenvalue of \(-\mathcal{L}\) on \(L^{2}(\Delta_{V},\pi)\).
**Theorem 1.3**.: _For every \(G=(V,(c_{xy})_{x,y\in V})\) and \(\alpha=(\alpha_{x})_{x\in V}\),_
\[(1\wedge\alpha_{\min})\operatorname{gap}_{\mathrm{RW}}(G,\alpha)\leq \operatorname{gap}_{\mathrm{BEP}}(G,\alpha)\leq\operatorname{gap}_{\mathrm{RW }}(G,\alpha)\.\]
_Hence, provided \(\alpha_{\min}\geq 1\),_
\[\operatorname{gap}_{\mathrm{BEP}}(G,\alpha)=\operatorname{gap}_{\mathrm{RW }}(G,\alpha). \tag{1.5}\]
### Related work, proof strategy, and open problems
#### 1.3.1. Functional inequalities and comparison techniques
Functional inequalities play a major role in PDE and probability theory, and several approaches have been developed for this purpose. Comparison techniques (e.g., [10, 11]) are among the most robust and well-established ones, and proved to be especially effective when estimating spectral gaps (or Poincare constants, their inverses), log-Sobolev constants, and Nash inequalities.
In the more specific context of interacting particle systems and unbounded spin systems subjected to conservation laws, comparing Dirichlet forms is key in the so-called martingale method and its variants (e.g., [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] and references therein). In a nutshell, this strategy compares the system's Dirichlet form on the geometry of interest (e.g., SEP on \(\mathbb{Z}^{d}\)-boxes of size \(\ell\)), to that on more tractable geometries (e.g., the complete graph), and finally transfers the gained information back through a path counting argument. In most examples, this method captures the correct dependence on the size \(\ell\) of the system, but the universal prefactor is typically not optimal (e.g., it deteriorates with \(d\), the dimension of the box).
#### 1.3.2. Aldous' spectral gap conjecture and related examples
Sharper identities like the one expressed in Aldous' spectral gap conjecture [10] holding true on general graphs have been verified only for a handful of models (as already discussed below Corollary 1.2), each of these examples requiring _ad hoc_ proof arguments.
The proof in [10] for the interchange process and SEP combines in a non-trivial way a nonlinear network reduction and a hard correlation inequality (which became known as _Octopus inequality_). The first ingredient allows an induction argument on \(n\), the size of the graph, well-compatible with the particle-hole symmetry of SEP. Such a symmetry (or, alternatively, the fact that SEP may be obtained as a projection of the interchange process) is a property that seems to be lacking for SIP.
Negative dependence -- a form of negativity of correlations of all orders -- is nicely exploited in [11] for the non-conservative reversible SEP. We remark that SIP is positive (rather than negative) dependent, see, e.g., [12, 13].
The arguments in [13] for the binomial splitting process build on a \(L^{2}\)-contraction inequality established in [1] for the "dual" averaging process. BEP is one of the continuum duals of SIP (e.g., [11]); nevertheless, the analogue of such a contraction estimate is not known for BEP.
#### 1.3.3. Discussion on proof strategy
Our approach combines in an elementary way two main ingredients:
* self-duality of the interacting particle system;
* comparison inequalities.
Self-duality/consistency of SIP (cf. (2.2) and (3.3) below) ensures a certain rigidity and structure of eigenvalues and eigenfunctions. This immediately yields the upper bound in Theorem 1.1 (Section 2), and this is what allows to effectively set off an induction argument on \(k\), the total number of particles (rather than \(n\), the size of the graph, as in [10]), for the lower bound in Section 3. In closing the proof of the induction step from \(k-1\) to \(k\), we employ comparison inequalities. Inspired by the recent work [14] on zero-range dynamics, in Lemma 3.2 we rearrange SIP's Dirichlet form so to reduce our task to an estimate of the spectral gap not of the whole system, but rather of the _\(k\)th particle only_, _uniformly over the positions of the remaining \(k-1\) particles_. Finally, the min-max theorem for eigenvalues (e.g., [13, Theorem 1.2.10]) yields the desired single-particle spectral gap inequality (Lemma 3.3). For this step, we compare both Dirichlet forms and \(L^{2}\)-norms of such a particle to those of a non-interacting walk. We emphasize that, although the estimate that we get in Lemma 3.3 deteriorates by a factor \(k\), the reduction applied in Lemma 3.2 returns back exactly the same factor \(k\), thus, allows us to conclude the proof for SIP's spectral gap estimates.
The analogous result for BEP is derived by remapping SIP into BEP via an intertwining relation, translating all spectral information from the particle to the diffusion system.
#### 1.3.4. Spectral gap identity and Gibbs samplers
Focusing on interacting systems on more specific graphs, a spectral gap identity in the spirit of that in (1.3) has been proved on 1D geometries also for two models of continuous spins with Gibbs sampler dynamics in [10, 10]. Especially the first one of these models studied in [10] is relevant for the present work. Indeed, when considered on a general graph \(G=(V,(c_{xy})_{x,y\in V})\) and with general site-weights \(\alpha=(\alpha_{x})_{x\in V}\), it corresponds to the Gibbs sampler of \(\operatorname{BEP}(G,\alpha)\): instead of letting energies diffuse as in BEP, the energies are instantaneously set to their local beta-like equilibrium among randomly chosen edges.
While our Theorem 1.3 shows that, as soon as \(\alpha_{\min}\geq 1\), (1.5) for BEP holds on _any graph_, rather surprisingly an analogous identity for its Gibbs sampler version fails on specific geometries. Indeed, the proof in [10] crucially relies on the one-dimensional structure of the model, which grants a monotonicity property of the spectral gap's eigenfunction (similar to that for birth-and-death chains). However, as discussed in [10, Remark 4], the mean-field version of the same model provides a counterexample to such an identity.
It is well-known (e.g., [11, Section 6.3]) that, at least for symmetric systems, this procedure of "instantaneous thermalization" among edges does not alter qualitative properties of the system as, e.g., the form of the reversible measures and the richness of the duality relations. Nonetheless, it does affect dramatically the eigenstructure of the processes and other quantitative features determining convergence to equilibrium, as the comparison between the model in [10] and our result on BEP illustrates. We emphasize that this example also shows that (self-)duality does not guarantee _per se_ the validity of a spectral gap identity.
#### 1.3.5. Open problems
Besides the problem of quantifying the sensitivity with respect to Gibbs-sampler perturbations of the model (as discussed in, e.g., [10, Section 1.2]; see also the previous paragraph), settling the role of the threshold \(\alpha_{\min}=1\) remains open.
More specifically, our results provide only partial answers in the regime
\[\alpha_{\min}\in(0,1)\.\]
This regime corresponds, roughly speaking, to the case in which particle/energy interaction becomes predominant over the mechanism of independent diffusion. Here, our results state that a spectral gap comparison is robust over the underlying geometry \(G=(V,(c_{xy})_{x,y\in V})\), but do not say anything about the sharpness of the first-order dependence on the parameter \(\alpha_{\min}\in(0,1)\). We emphasize that such a threshold appears also in other related works, e.g., [10, pp. 2453-4], as well as [1, 2], and also there sharp results are not available.
When \(\alpha_{\min}\in(0,1)\), we observe that our proof techniques fail to give a lower bound for \(\operatorname{gap}_{\operatorname{SIP}}\) matching the upper bound. This can be checked already on simple geometries with \(n=3\) or \(n=4\) sites, in which the lower bound in Lemma 3.3 is essentially sharp, thus, cannot be further improved. However, even in those examples in which such estimates fail, a direct inspection with small values of \(k\in\mathbb{N}\) confirms the validity of (1.3).
Finally, we recall that on the complete graph \((c_{xy}\equiv 1/n)\) the spectrum of \(-L_{k}\) is fully explicit and given (without counting multiplicities) by
\[\frac{\ell}{n}\left(|\alpha|+\ell-1\right)\,\qquad\ell=0,1,\ldots,k\.\]
Hence, the spectral gap identity in (1.3) holds true for all positive site-weights \(\alpha=(\alpha_{x})_{x\in V}\) in this mean-field setting. We conclude by remarking that a non-trivial metastable picture of SIP emerges from the asymptotic regime \(\alpha_{x}\equiv\alpha\to 0\) only on the timescale \(\alpha^{-1}\) and for \(k\)-particle systems with \(\alpha\log k=o(1)\) ([1, 1]). Since \(\operatorname{gap}_{\operatorname{RW}}(\alpha)\approx\alpha\) as \(\alpha\to 0\), this suggests -- at least at the heuristic level -- that \(\operatorname{gap}_{\operatorname{SIP}}(\alpha)\approx\operatorname{gap}_{\operatorname{RW}}(\alpha)\). If this were true, the lower bound in Theorem 1.1 would not capture this.
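A direct numerical confirmation of this explicit mean-field spectrum (again an informal sketch, reusing `sip_generator` from the snippet after (1.1); \(n\), \(k\) and \(\alpha\) are arbitrary):

```python
# On the complete graph with c_xy = 1/n, the distinct eigenvalues of -L_k should
# be  l*(|alpha| + l - 1)/n  for l = 0, ..., k.
import numpy as np

n, k = 3, 3
alpha = [0.7, 1.5, 2.0]
c = [[0.0 if x == y else 1.0 / n for y in range(n)] for x in range(n)]

Lk, _ = sip_generator(c, alpha, k)                  # from the earlier sketch
eigs = np.unique(np.round(np.real(np.linalg.eigvals(-np.asarray(Lk))), 6))
predicted = np.round([l * (sum(alpha) + l - 1) / n for l in range(k + 1)], 6)
print(sorted(eigs), sorted(predicted))              # the two lists should coincide
```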
### Structure of the paper
The rest of the paper is organized as follows. The upper bound in Theorem 1.1 is proved in Section 2. The proof of the lower bound in Theorem 1.1 occupies the whole Section 3, which is further divided into four subsections. In Section 4, we present the proof of Theorem 1.3.
## 2. Proof of upper bound in Theorem 1.1
In the remainder of the article, since the graph \(G\) is fixed, we may drop the dependence on \(G\) for the objects. For example, we express
\[\operatorname{gap}_{\operatorname{SIP}}(G,\alpha)=\operatorname{gap}_{ \operatorname{SIP}}(\alpha)\qquad\text{and}\qquad\operatorname{gap}_{ \operatorname{RW}}(G,\alpha)=\operatorname{gap}_{\operatorname{RW}}(\alpha)\.\]
We start with establishing the upper bound in Theorem 1.1. The simple idea underlying its proof is that, as in the case of SEP and other systems enjoying a suitable form of consistency/self-duality, observables of a few particles may be "lifted" to observables of many particles, yet yielding coherent statistics. In particular, eigenfunctions for \(\operatorname{RW}(\alpha)\) "lift" to eigenfunctions for \(\operatorname{SIP}_{k}(\alpha)\); this is rigorously demonstrated in Lemma 2.3 below.
For each \(k\in\mathbb{N}\), the _annihilation operator_\(\mathfrak{a}_{k}:\mathbb{R}^{\Xi_{k-1}}\to\mathbb{R}^{\Xi_{k}}\) is defined, for \(g\in\mathbb{R}^{\Xi_{k-1}}\) and \(\eta\in\Xi_{k}\), as
\[\mathfrak{a}_{k}g(\eta):=\sum_{x\in V}\eta_{x}\,g(\eta-\delta_{x})\,\]
where \(\mathbb{R}^{\emptyset}\) is conventionally understood as the space of constants. Intuitively, \(\mathfrak{a}_{k}g\) evaluates the value at \(\eta\in\Xi_{k}\) by summing up all the values of \(g\) evaluated at a \((k-1)\)-particle configuration chosen inside \(\eta\) uniformly at random. This also motivates saying that \(\mathfrak{a}_{k}\)
corresponds to the operation of "removing a particle uniformly at random". Moreover, it holds, for every \(k>\ell\in\mathbb{N}\), \(g\in\mathbb{R}^{\Xi_{\ell}}\), and \(\eta\in\Xi_{k}\), that
\[(\mathfrak{a}_{k}\circ\cdots\circ\mathfrak{a}_{\ell+1})g(\eta)=\sum_{\zeta\in \Xi_{\ell}}\left(\prod_{x\in V}\binom{\eta_{x}}{\zeta_{x}}\right)g(\zeta). \tag{2.1}\]
Indeed, the left-hand side of (2.1) can be calculated by summing up the values of \(g\) evaluated at an \(\ell\)-particle configuration chosen from \(\eta\) uniformly, which is exactly the right-hand side of (2.1).
In this section, we use two important properties of \(\mathfrak{a}_{k}\), \(k\in\mathbb{N}\), which can be easily verified:
* the operator \(\mathfrak{a}_{k}:\mathbb{R}^{\Xi_{k-1}}\to\mathbb{R}^{\Xi_{k}}\) is injective;
* it holds that \[\mathfrak{a}_{k}L_{k-1}=L_{k}\mathfrak{a}_{k}\.\] (2.2)
Especially, (2.2) implies that removing a particle at random first and then running the system (right-hand side) is equivalent, in distribution, to running the system first and then removing a particle at random (left-hand side).
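The relation (2.2) is also easy to test numerically; the informal sketch below (reusing `sip_generator` from the snippet after (1.1); graph and weights arbitrary) assembles the matrix of \(\mathfrak{a}_{k}\) and checks the intertwining.

```python
# Numerical check of the intertwining relation (2.2): a_k L_{k-1} = L_k a_k.
import numpy as np

c = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
alpha = [0.7, 1.5, 2.0]
k = 3

L_km1, states_km1 = sip_generator(c, alpha, k - 1)
L_k, states_k = sip_generator(c, alpha, k)
col = {xi: j for j, xi in enumerate(states_km1)}

# Matrix of the annihilation operator: (a_k g)(eta) = sum_x eta_x g(eta - delta_x).
A = np.zeros((len(states_k), len(states_km1)))
for i, eta in enumerate(states_k):
    for x, ex in enumerate(eta):
        if ex >= 1:
            xi = list(eta); xi[x] -= 1
            A[i, col[tuple(xi)]] += ex

print(np.allclose(A @ L_km1, L_k @ A))   # True: the two compositions agree
```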
**Remark 2.1**.: _The notion encoded in \(\mathfrak{a}_{k}:\mathbb{R}^{\Xi_{k-1}}\to\mathbb{R}^{\Xi_{k}}\), namely removing a particle uniformly at random, is essential in \(\mathrm{SIP}\). This is not the case in other related models such as the interchange process or the binomial splitting process. In these examples, the dynamics restricted to a subset of labeled particles is still Markovian and of the same type as the larger system. Thus, therein, one can fix a specific particle and then lift the remaining particle configuration. For \(\mathrm{SIP}\), such a property holds not for all subsets of particles, but only for subsets chosen uniformly at random._
The identity (2.2) has the following consequence. Suppose that \(-L_{k-1}g=\lambda g\) holds for some \(\lambda\in\mathbb{R}\) and non-zero \(g\in\mathbb{R}^{\Xi_{k-1}}\). Then,
\[-L_{k}(\mathfrak{a}_{k}g)=\mathfrak{a}_{k}(-L_{k-1}g)=\mathfrak{a}_{k}( \lambda g)=\lambda\mathfrak{a}_{k}g\,\]
so that the new function \(\mathfrak{a}_{k}g\in\mathbb{R}^{\Xi_{k}}\), which is non-zero since \(\mathfrak{a}_{k}\) is injective, is an eigenfunction of \(-L_{k}\) subjected to the same eigenvalue \(\lambda\). Thus, the operator \(\mathfrak{a}_{k}\) lifts the eigenspace of \(-L_{k-1}\) to the eigenspace of \(-L_{k}\). Then, inductively, the composition \(\mathfrak{a}_{k}\circ\cdots\circ\mathfrak{a}_{\ell+1}:\mathbb{R}^{\Xi_{\ell}} \to\mathbb{R}^{\Xi_{k}}\) lifts the eigenspace of the operator \(-L_{\ell}\) to the eigenspace of \(-L_{k}\) for all \(\ell<k\in\mathbb{N}\). Since all the eigenvalues of \(-L_{\ell}\) are also eigenvalues of \(-L_{k}\), it holds in particular that
\[\mathrm{gap}_{k}(\alpha)\leq\mathrm{gap}_{\ell}(\alpha)\.\]
Considering the special case \(\ell=1\) and taking the infimum over all \(k\geq 2\) in the left-hand side, we have verified the upper bound in Theorem 1.1:
**Theorem 2.2** (Upper bound).: _For every \(\alpha=(\alpha_{x})_{x\in V}\), \(\mathrm{gap}_{\mathrm{SIP}}(\alpha)\leq\mathrm{gap}_{\mathrm{RW}}(\alpha)\)._
Before concluding this section, we record a lemma which exhibits the eigenfunction of \(-L_{k}\) lifted from an eigenfunction of \(-A_{\alpha}\) and associated to the same eigenvalue.
**Lemma 2.3**.: _Let \(\psi:V\to\mathbb{R}\) be an eigenfunction for \(-A_{\alpha}\) with eigenvalue \(\lambda\geq 0\). Then, for every \(k\in\mathbb{N}\), the function \(f_{\psi,k}\in\mathbb{R}^{\Xi_{k}}\) given by_
\[f_{\psi,k}(\eta):=\sum_{x\in V}\psi(x)\,\eta_{x}\,\qquad\eta\in\Xi_{k}\,\]
_is an eigenfunction for \(-L_{k}\) with the same eigenvalue \(\lambda\geq 0\)._
Proof.: Define \(g\in\mathbb{R}^{\Xi_{1}}\) as \(g(\delta_{x}):=\psi(x)\). It is clear that \(g\) becomes an eigenfunction for \(-L_{1}\) with eigenvalue \(\lambda\geq 0\). Substituting \(\ell=1\) in (2.1), we obtain that
\[(\mathfrak{a}_{k}\circ\cdots\circ\mathfrak{a}_{2})g(\eta)=\sum_{x\in V}\eta_{x} \,g(\delta_{x})=\sum_{x\in V}\eta_{x}\,\psi(x)\,\]
so that we have
\[f_{\psi,k}=(\mathfrak{a}_{k}\circ\cdots\circ\mathfrak{a}_{2})g. \tag{2.3}\]
Since \(g\neq 0\) (which follows from the fact that \(\psi\) is an eigenfunction) and the operators \(\mathfrak{a}_{2}\) through \(\mathfrak{a}_{k}\) are all injective, \(f_{\psi,k}\) is a non-zero function.
It remains to verify that \(-L_{k}f_{\psi,k}=\lambda f_{\psi,k}\) holds. This is an easy consequence of (2.3) and the intertwining relation (2.2):
\[-L_{k}f_{\psi,k}=-L_{k}(\mathfrak{a}_{k}\circ\cdots\circ\mathfrak{a}_{2})g=( \mathfrak{a}_{k}\circ\cdots\circ\mathfrak{a}_{2})(-L_{1}g)=(\mathfrak{a}_{k} \circ\cdots\circ\mathfrak{a}_{2})(\lambda g)=\lambda f_{\psi,k}\.\]
Thus, we conclude the proof.
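Lemma 2.3 can also be checked numerically on the toy example used in the earlier sketches (informal, reusing `sip_generator`; graph and weights arbitrary): an eigenvector of \(-A_{\alpha}\) is lifted to \(f_{\psi,k}\) and verified to be an eigenfunction of \(-L_{k}\) with the same eigenvalue.

```python
# Lift an eigenfunction psi of -A_alpha to f(eta) = sum_x psi(x) eta_x and check
# that it is an eigenfunction of -L_k with the same eigenvalue (Lemma 2.3).
import numpy as np

c = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
alpha = [0.7, 1.5, 2.0]
n, k = 3, 3

A = np.zeros((n, n))
for x in range(n):
    for y in range(n):
        if y != x:
            A[x, y] = c[x][y] * alpha[y]
            A[x, x] -= c[x][y] * alpha[y]

lam, vecs = np.linalg.eig(-A)
idx = np.argsort(np.real(lam))[1]                 # smallest non-zero eigenvalue of -A_alpha
lam1, psi = np.real(lam[idx]), np.real(vecs[:, idx])

Lk, states = sip_generator(c, alpha, k)           # from the earlier sketch
f = np.array([sum(psi[x] * eta[x] for x in range(n)) for eta in states])
print(np.allclose(-Lk @ f, lam1 * f))             # f_{psi,k} has the same eigenvalue
```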
## 3. Proof of lower bound in Theorem 1.1
In this section, we tackle the lower bound in Theorem 1.1; namely, we prove
\[(1\wedge\alpha_{\min})\,\mathrm{gap}_{\mathrm{RW}}(\alpha)\leq\mathrm{gap}_{ \mathrm{SIP}}(\alpha). \tag{3.1}\]
### Preliminaries
In Section 2, we demonstrated that the eigenfunctions of the operator \(-L_{k-1}\) are lifted via \(\mathfrak{a}_{k}\) to eigenfunctions of the operator \(-L_{k}\), thereby showing that the eigenvalues of \(-L_{k-1}\) are also eigenvalues of \(-L_{k}\). Hence, to find the remaining eigenvalues of \(-L_{k}\) that do not come from the lifting property of \(\mathfrak{a}_{k}\), since \(-L_{k}\) is self-adjoint, it suffices to investigate functions \(f\in L^{2}(\mu_{\alpha,k})\) (cf. (3.2)) that belong to the orthogonal complement of the image of the operator \(\mathfrak{a}_{k}\). Since \(\mathfrak{a}_{k}\) injectively takes all the functions in \(\mathbb{R}^{\Xi_{k-1}}\) into \(\mathbb{R}^{\Xi_{k}}\), such an \(f\) should satisfy the following mean-zero condition with respect to each \((k-1)\)-particle configuration:
\[\sum_{x\in V}\left(\xi_{x}+\alpha_{x}\right)f(\xi+\delta_{x})=0\,\qquad\xi \in\Xi_{k-1}\.\]
In this sense, it is natural to define, for \(k\in\mathbb{N}\), the _creation operator_\(\,\mathfrak{a}_{k-1}^{\dagger}:\mathbb{R}^{\Xi_{k}}\to\mathbb{R}^{\Xi_{k-1}}\) as follows: for all \(f\in\mathbb{R}^{\Xi_{k}}\) and \(\xi\in\Xi_{k-1}\),
\[\mathfrak{a}_{k-1}^{\dagger}f(\xi):=\sum_{x\in V}\left(\xi_{x}+\alpha_{x} \right)f(\xi+\delta_{x})\,\]
where, again, \(\mathbb{R}^{\emptyset}\) is considered as the space of constants.
It turns out that the two operators \(\mathfrak{a}_{k}\) and \(\mathfrak{a}_{k-1}^{\dagger}\) are indeed closely related to each other, as the following proposition shows. For two functions \(f,g\in\mathbb{R}^{\Xi_{k}}\), we define the inner product \(\langle f\,|\,g\rangle_{\alpha,k}\) as
\[\langle f\,|\,g\rangle_{\alpha,k}:=\sum_{\eta\in\Xi_{k}}\mu_{\alpha,k}(\eta)\, f(\eta)\,g(\eta)\, \tag{3.2}\]
and let \(L^{2}(\mu_{\alpha,k})\) denote the corresponding \(L^{2}\)-space of functions on \(\Xi_{k}\).
**Proposition 3.1**.: _The following two properties are valid for all \(k\in\mathbb{N}\):_
* _(_adjoint property_) for all_ \(f\in\mathbb{R}^{\Xi_{k}}\) _and_ \(g\in\mathbb{R}^{\Xi_{k-1}}\)_,_ \[\langle\mathfrak{a}_{k}g\,|\,f\rangle_{\alpha,k}=\frac{k}{|\alpha|+k-1} \langle g\,|\,\mathfrak{a}_{k-1}^{\dagger}f\rangle_{\alpha,k-1}\ ;\]
* _(_orthogonal decomposition_)_ \(L^{2}(\mu_{\alpha,k})=\mathrm{Im}\,\mathfrak{a}_{k}\oplus_{\perp}\mathrm{Ker} \,\mathfrak{a}_{k-1}^{\dagger}\)
Proof.: We fix \(f\in\mathbb{R}^{\Xi_{k}}\) and \(g\in\mathbb{R}^{\Xi_{k-1}}\). Then,
\[\langle\mathfrak{a}_{k}g\,|\,f\rangle_{\alpha,k}=\sum_{\eta\in\Xi_{k}}\mu_{\alpha,k}(\eta)\,\mathfrak{a}_{k}g(\eta)\,f(\eta)=\sum_{x\in V}\sum_{\eta\in\Xi_{k}:\,\eta_{x}\geq 1}\mu_{\alpha,k}(\eta)\,\eta_{x}\,f(\eta)\,g(\eta-\delta_{x})\.\]
Rearranging by substituting \(\xi:=\eta-\delta_{x}\), this becomes
\[\sum_{x\in V}\sum_{\xi\in\Xi_{k-1}}\mu_{\alpha,k}(\xi+\delta_{x} )\,(\xi_{x}+1)\,f(\xi+\delta_{x})\,g(\xi)\] \[\qquad=\sum_{x\in V}\sum_{\xi\in\Xi_{k-1}}\frac{k\,(\alpha_{x}+ \xi_{x})}{|\alpha|+k-1}\,\mu_{\alpha,k-1}(\xi)\,f(\xi+\delta_{x})\,g(\xi)\,\]
where in the equality we used (1.2). Thus, applying the definitions of \(\mathfrak{a}_{k-1}^{\dagger}\) and \(\langle\cdot\,|\,\cdot\rangle_{\alpha,k-1}\), the right-hand side equals
\[\sum_{\xi\in\Xi_{k-1}}\frac{k}{|\alpha|+k-1}\,\mu_{\alpha,k-1}(\xi)\, \mathfrak{a}_{k-1}^{\dagger}f(\xi)\,g(\xi)=\frac{k}{|\alpha|+k-1}\,\langle g \,|\,\mathfrak{a}_{k-1}^{\dagger}f\rangle_{\alpha,k-1}\,\]
which concludes the proof of part (a).
(b) Suppose that \(f\in\operatorname{Im}\mathfrak{a}_{k}\) and \(g\in\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}\). Then, by part (a), since \(f=\mathfrak{a}_{k}h\) for some \(h\in\mathbb{R}^{\Xi_{k-1}}\),
\[\langle f\,|\,g\rangle_{\alpha,k}=\langle\mathfrak{a}_{k}h\,|\,g\rangle_{ \alpha,k}=\frac{k}{|\alpha|+k-1}\langle h\,|\,\mathfrak{a}_{k-1}^{\dagger}g \rangle_{\alpha,k-1}=0\,\]
where the last equality holds since \(g\in\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}\). This proves that \(\operatorname{Im}\mathfrak{a}_{k}\) and \(\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}\) are orthogonal. Moreover, \(\dim\operatorname{Im}\mathfrak{a}_{k}=|\Xi_{k-1}|\) since \(\mathfrak{a}_{k}\) is injective, and \(\dim\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}\geq|\Xi_{k}|-|\Xi_{k-1}|\) by the dimension theorem. Thus, by orthogonality, we conclude that \(\dim\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}=|\Xi_{k}|-|\Xi_{k-1}|\), and that \(L^{2}(\mu_{\alpha,k})=\operatorname{Im}\mathfrak{a}_{k}\oplus_{\perp} \operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}\).
A simple consequence of the previous proposition and (2.2) is that the following identity holds: for \(k\in\mathbb{N}\),
\[\mathfrak{a}_{k-1}^{\dagger}L_{k}=L_{k-1}\mathfrak{a}_{k-1}^{\dagger}. \tag{3.3}\]
Indeed, for all \(f\in\mathbb{R}^{\Xi_{k}}\) and \(g\in\mathbb{R}^{\Xi_{k-1}}\), we calculate using part (a) of Proposition 3.1 as
\[\langle g\,|\,\mathfrak{a}_{k-1}^{\dagger}L_{k}f\rangle_{\alpha,k-1}=\frac{| \alpha|+k-1}{k}\,\langle\mathfrak{a}_{k}g\,|\,L_{k}f\rangle_{\alpha,k}=\frac{| \alpha|+k-1}{k}\,\langle L_{k}\mathfrak{a}_{k}g\,|\,f\rangle_{\alpha,k}\,\]
where the second identity holds since \(L_{k}\) is self-adjoint on \(L^{2}(\mu_{\alpha,k})\). Then, by (2.2) and again by part (a) of Proposition 3.1, this equals
\[\frac{|\alpha|+k-1}{k}\,\langle\mathfrak{a}_{k}L_{k-1}g\,|\,f\rangle_{\alpha,k }=\langle L_{k-1}g\,|\,\mathfrak{a}_{k-1}^{\dagger}f\rangle_{\alpha,k-1}= \langle g\,|\,L_{k-1}\mathfrak{a}_{k-1}^{\dagger}f\rangle_{\alpha,k-1}\,\]
where the last equality follows from the fact that \(L_{k-1}\) is self-adjoint on \(L^{2}(\mu_{\alpha,k-1})\). Thus, we have proved that
\[\langle g\,|\,\mathfrak{a}_{k-1}^{\dagger}L_{k}f\rangle_{\alpha,k-1}=\langle g\, |\,L_{k-1}\mathfrak{a}_{k-1}^{\dagger}f\rangle_{\alpha,k-1}\]
holds for all \(g\in\mathbb{R}^{\Xi_{k-1}}\) and \(f\in\mathbb{R}^{\Xi_{k}}\), which indeed implies (3.3).
According to part (b) of Proposition 3.1, we easily obtain the following fact:
\[-L_{k}f=\lambda f\qquad\text{if and only if}\qquad f\in\operatorname{Im} \mathfrak{a}_{k}\text{ or }f\in\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}. \tag{3.4}\]
### Decomposition of Dirichlet forms
For \(k\in\mathbb{N}\), we define the _Dirichlet form_\(\mathcal{E}_{\alpha,k}(f)\) evaluated at \(f\in\mathbb{R}^{\Xi_{k}}\) as
\[\mathcal{E}_{\alpha,k}(f):=\langle f\,|\,-L_{k}f\rangle_{\alpha,k}\.\]
Moreover, we let \(\mathcal{D}_{\alpha}(\phi)\) denote the Dirichlet form at \(\phi\in\mathbb{R}^{V}\) subjected to RW(\(\alpha\)):
\[\mathcal{D}_{\alpha}(\phi):=\sum_{x\in V}\frac{\alpha_{x}}{|\alpha|}\,\phi(x) \left(-A_{\alpha}\phi\right)(x)=\frac{1}{|\alpha|}\sum_{x,y\in V}\alpha_{x} \alpha_{y}\,c_{xy}\,\phi(x)\left(\phi(x)-\phi(y)\right). \tag{3.5}\]
Then, we have the following variational representation of the spectral gap (e.g., [10]):
\[\mathrm{gap}_{k}(\alpha)=\inf_{f\in\mathbb{R}^{\Xi_{k}}:\,f\neq\mathrm{const. }}\frac{\mathcal{E}_{\alpha,k}(f)}{\mathrm{Var}_{\alpha,k}(f)}\, \tag{3.6}\]
where \(\mathrm{Var}_{\alpha,k}(f):=\langle f\,|\,f\rangle_{\alpha,k}-\langle f \rangle_{\alpha,k}^{2}\), with \(\langle f\rangle_{\alpha,k}:=\langle f\,|\,1\rangle_{\alpha,k}\). Similarly, it holds that
\[\mathrm{gap}_{\mathrm{RW}}(\alpha)=\inf_{\phi\in\mathbb{R}^{V}:\,\phi\neq \mathrm{const.}}\frac{\mathcal{D}_{\alpha}(\phi)}{\mathrm{var}_{\alpha}(\phi )}\, \tag{3.7}\]
where \(\mathrm{var}_{\alpha}(\phi):=\langle\phi\,|\,\phi\rangle_{L^{2}(\alpha)}- \langle\phi\rangle_{L^{2}(\alpha)}^{2}\), with \(\langle\phi\rangle_{L^{2}(\alpha)}:=\langle\phi\,|\,1\rangle_{L^{2}(\alpha)}\). Here, \(L^{2}(\alpha)\) denotes the \(L^{2}\) function space on \(V\) with respect to the probability measure \((\alpha_{x}/|\alpha|)_{x\in V}\).
Suppose that \(f\in\mathrm{Ker}\,\mathfrak{a}_{k-1}^{\dagger}\) for \(k\in\mathbb{N}\). Then, since \(\mathfrak{a}_{k}1(\eta)=\sum_{x\in V}\eta_{x}=k\), we calculate
\[\langle f\,|\,1\rangle_{\alpha,k}=\frac{1}{k}\,\langle f\,|\,\mathfrak{a}_{k} 1\rangle_{\alpha,k}=\frac{1}{|\alpha|+k-1}\,\langle\mathfrak{a}_{k-1}^{\dagger }f\,|\,1\rangle_{\alpha,k-1}=0\,\]
where the second equality holds by part (a) of Proposition 3.1. This implies that the expectation of \(f\) with respect to \(\mu_{\alpha,k}\) is zero, and thus
\[\mathrm{Var}_{\alpha,k}(f)=\sum_{\eta\in\Xi_{k}}\mu_{\alpha,k}(\eta)\,f(\eta)^ {2}. \tag{3.8}\]
In this subsection, we prove the following lemma, which is partially motivated by [11]. The idea of the proof is to decompose, on \(\mathrm{Ker}\,\mathfrak{a}_{k-1}^{\dagger}\), the \(k\)-particle Dirichlet form \(\mathcal{E}_{\alpha,k}(\cdot)\) into lower-order Dirichlet forms \(\mathcal{D}_{\beta}(\cdot)\) for some suitably chosen \(\beta=\beta(\alpha)\).
**Lemma 3.2**.: _Suppose that \(f\in\mathrm{Ker}\,\mathfrak{a}_{k-1}^{\dagger}\). Then, it holds that_
\[\mathcal{E}_{\alpha,k}(f)\geq k\left(\inf_{\xi\in\Xi_{k-1}}\mathrm{gap}_{ \mathrm{RW}}(\alpha+\xi)\right)\mathrm{Var}_{\alpha,k}(f)\.\]
Proof.: For \(k\in\mathbb{N}\) and \(f\in\mathrm{Ker}\,\mathfrak{a}_{k-1}^{\dagger}\), we calculate \(\mathcal{E}_{\alpha,k}(f)=\langle f\,|\,-L_{k}f\rangle_{\alpha,k}\) as
\[\sum_{\eta\in\Xi_{k}}\sum_{x,y\in V}\mu_{\alpha,k}(\eta)\,f(\eta)\,\eta_{x}\,c _{xy}\left(\alpha_{y}+\eta_{y}\right)\left(f(\eta)-f(\eta-\delta_{x}+\delta_{y})\right)\]
Writing \(\eta-\delta_{x}=:\xi\in\Xi_{k-1}\) for each fixed \(x\in V\), the right-hand side can be rewritten as
\[\sum_{x\in V}\sum_{\xi\in\Xi_{k-1}}\mu_{\alpha,k}(\xi+\delta_{x})\left(\xi_{x} +1\right)f(\xi+\delta_{x})\sum_{y\in V}c_{xy}\left(\alpha_{y}+\xi_{y}\right) \left(f(\xi+\delta_{x})-f(\xi+\delta_{y})\right)\.\]
By (1.2), it holds that
\[\mu_{\alpha,k}(\xi+\delta_{x})\left(\xi_{x}+1\right)=\frac{Z_{\alpha,k-1}}{Z_ {\alpha,k}}\,\mu_{\alpha,k-1}(\xi)\left(\alpha_{x}+\xi_{x}\right)\,\qquad x\in V\,\ \xi\in\Xi_{k-1}. \tag{3.9}\]
Thus, by using the shortcut \(f_{\xi}(x):=f(\xi+\delta_{x})\) for \(x\in V\) and \(\xi\in\Xi_{k-1}\), we get
\[\mathcal{E}_{\alpha,k}(f)=\frac{Z_{\alpha,k-1}}{Z_{\alpha,k}}\sum_{x\in V}\sum_{ \xi\in\Xi_{k-1}}\mu_{\alpha,k-1}(\xi)\,f_{\xi}(x)\sum_{y\in V}c_{xy}\left( \alpha_{x}+\xi_{x}\right)\left(\alpha_{y}+\xi_{y}\right)\left(f_{\xi}(x)-f_{ \xi}(y)\right)\.\]
Renormalizing and rewriting, this is equal to (recall (3.5))
\[\left(|\alpha|+k-1\right)\frac{Z_{\alpha,k-1}}{Z_{\alpha,k}}\sum_{ \xi\in\Xi_{k-1}}\mu_{\alpha,k-1}(\xi)\sum_{x,y\in V}f_{\xi}(x)\,\frac{c_{xy} \left(\alpha_{x}+\xi_{x}\right)\left(\alpha_{y}+\xi_{y}\right)}{|\alpha|+k-1} \left(f_{\xi}(x)-f_{\xi}(y)\right)\] \[=\left(|\alpha|+k-1\right)\frac{Z_{\alpha,k-1}}{Z_{\alpha,k}}\sum _{\xi\in\Xi_{k-1}}\mu_{\alpha,k-1}(\xi)\,\mathcal{D}_{\alpha+\xi}(f_{\xi})\.\]
By (3.7) with \(\alpha+\xi\) in place of \(\alpha\), the Dirichlet form \(\mathcal{D}_{\alpha+\xi}(f_{\xi})\) in the right-hand side is bounded from below by
\[\operatorname{gap}_{\mathrm{RW}}(\alpha+\xi)\operatorname{var}_{\alpha+\xi}( f_{\xi})\.\]
Thus, we have verified that
\[\mathcal{E}_{\alpha,k}(f)\geq\left(|\alpha|+k-1\right)\frac{Z_{\alpha,k-1}}{Z_ {\alpha,k}}\sum_{\xi\in\Xi_{k-1}}\mu_{\alpha,k-1}(\xi)\,\operatorname{gap}_{ \mathrm{RW}}(\alpha+\xi)\operatorname{var}_{\alpha+\xi}(f_{\xi}). \tag{3.10}\]
Observe that since \(f\in\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}\), we have
\[\operatorname{var}_{\alpha+\xi}(f_{\xi}) =\sum_{x\in V}\frac{\alpha_{x}+\xi_{x}}{|\alpha|+k-1}\,f_{\xi}(x) ^{2}-\left(\sum_{x\in V}\frac{\alpha_{x}+\xi_{x}}{|\alpha|+k-1}\,f(\xi+\delta_ {x})\right)^{2}\] \[=\sum_{x\in V}\frac{\alpha_{x}+\xi_{x}}{|\alpha|+k-1}\,f_{\xi}(x) ^{2}\.\]
Thus, plugging this identity into (3.10) yields
\[\mathcal{E}_{\alpha,k}(f) \geq\frac{Z_{\alpha,k-1}}{Z_{\alpha,k}}\,\sum_{\xi\in\Xi_{k-1}} \sum_{x\in V}\mu_{\alpha,k-1}(\xi)\,\operatorname{gap}_{\mathrm{RW}}(\alpha+ \xi)\left(\alpha_{x}+\xi_{x}\right)f_{\xi}(x)^{2}\] \[\geq\left(\inf_{\xi\in\Xi_{k-1}}\operatorname{gap}_{\mathrm{RW}} (\alpha+\xi)\right)\frac{Z_{\alpha,k-1}}{Z_{\alpha,k}}\sum_{x\in V}\sum_{\xi \in\Xi_{k-1}}\mu_{\alpha,k-1}(\xi)\left(\alpha_{x}+\xi_{x}\right)f(\xi+\delta _{x})^{2}\.\]
The last expression outside the parentheses can be rewritten as
\[\frac{Z_{\alpha,k-1}}{Z_{\alpha,k}}\sum_{x\in V}\sum_{\xi\in\Xi_{ k-1}}\mu_{\alpha,k-1}(\xi)\left(\alpha_{x}+\xi_{x}\right)f(\xi+\delta_{x})^{2}\] \[=\sum_{x\in V}\sum_{\xi\in\Xi_{k-1}}\mu_{\alpha,k}\left(\xi+\delta _{x}\right)\left(\xi_{x}+1\right)f(\xi+\delta_{x})^{2}\] \[=\sum_{x\in V}\sum_{\eta\in\Xi_{k}:\eta_{x}\geq 1}\mu_{\alpha,k}( \eta)\,\eta_{x}\,f(\eta)^{2}=k\sum_{\eta\in\Xi_{k}}\mu_{\alpha,k}(\eta)\,f( \eta)^{2}=k\operatorname{Var}_{\alpha,k}(f)\,\]
where the first identity holds by (3.9), the second one holds by substituting \(\eta:=\xi+\delta_{x}\), the third one holds by exchanging the order of the summations over \(x\in V\) and \(\eta\in\Xi_{k}\), and the fourth one holds by (3.8). Therefore, we conclude the proof of the lemma.
### Min-max theorem for eigenvalues
Here, we apply the well-known min-max theorem for eigenvalues (e.g., [13, Theorem 1.2.10]) to obtain a lower bound for the term \(\inf_{\xi\in\Xi_{k-1}}\operatorname{gap}_{\operatorname{RW}}(\alpha+\xi)\) that appears in Lemma 3.2.
**Lemma 3.3**.: _For every \(\alpha=(\alpha_{x})_{x\in V}\) and \(k\in\mathbb{N}\), it holds that_
\[\inf_{\xi\in\Xi_{k-1}}\operatorname{gap}_{\operatorname{RW}}(\alpha+\xi)\geq \frac{\alpha_{\min}}{\alpha_{\min}+k-1}\,\operatorname{gap}_{\operatorname{RW }}(\alpha)\.\]
Proof.: Let us compare the Dirichlet forms and the \(L^{2}\)-norms associated to \(\operatorname{RW}(\alpha)\) and \(\operatorname{RW}(\alpha+\xi)\). We claim that, for all \(\phi:V\to\mathbb{R}\) and \(\xi\in\Xi_{k-1}\),
\[\frac{|\alpha|}{|\alpha|+k-1}\,\mathcal{D}_{\alpha}(\phi)\leq\mathcal{D}_{ \alpha+\xi}(\phi)\,\qquad\frac{\alpha_{\min}(|\alpha|+k-1)}{|\alpha|(\alpha_{\min}+k-1)}\, \left\|\phi\right\|_{L^{2}(\alpha+\xi)}^{2}\leq\left\|\phi\right\|_{L^{2}( \alpha)}^{2}. \tag{3.11}\]
The first inequality of (3.11) is trivial, since
\[\frac{|\alpha|}{|\alpha|+k-1}\,\mathcal{D}_{\alpha}(\phi) =\frac{1}{2(|\alpha|+k-1)}\sum_{x,y\in V}c_{xy}\,\alpha_{x}\alpha _{y}\,(\phi(x)-\phi(y))^{2}\] \[\leq\frac{1}{2}\sum_{x,y\in V}c_{xy}\,\frac{\alpha_{x}+\xi_{x}}{ |\alpha|+k-1}\,(\alpha_{y}+\xi_{y})\,(\phi(x)-\phi(y))^{2}=\mathcal{D}_{\alpha +\xi}(\phi)\.\]
The second inequality of (3.11) is also immediate by observing that
\[\frac{\alpha_{\min}(|\alpha|+k-1)}{|\alpha|(\alpha_{\min}+k-1)} \sum_{x\in V}\frac{\alpha_{x}+\xi_{x}}{|\alpha|+k-1}\,\phi(x)^{2} \leq\frac{1}{|\alpha|}\sum_{x\in V}\alpha_{x}\frac{\alpha_{x}+\xi_ {x}}{\alpha_{x}+k-1}\,\phi(x)^{2}\] \[\leq\sum_{x\in V}\frac{\alpha_{x}}{|\alpha|}\,\phi(x)^{2}\,\]
where for the first and second inequalities we used, respectively,
\[\frac{\alpha_{\min}}{\alpha_{\min}+k-1}\leq\frac{\alpha_{x}}{\alpha_{x}+k-1} \qquad\text{and}\qquad\xi_{x}\leq k-1\,\qquad x\in V\.\]
By applying the min-max theorem for eigenvalues together with the comparison inequalities in (3.11), as done, e.g., in [13, Theorem 1.2.11], we get
\[\frac{\alpha_{\min}}{\alpha_{\min}+k-1}\,\lambda_{j}^{\alpha}\leq\lambda_{j} ^{\alpha+\xi}\,\qquad j=0,1,\dots,n-1\,\]
where \(0=\lambda_{0}^{\alpha}<\lambda_{1}^{\alpha}\leq\dots\leq\lambda_{n-1}^{\alpha}\) are the eigenvalues of the generator \(-A_{\alpha}\) and \(0=\lambda_{0}^{\alpha+\xi}<\lambda_{1}^{\alpha+\xi}\leq\dots\leq\lambda_{n-1} ^{\alpha+\xi}\) are the eigenvalues of the generator \(-A_{\alpha+\xi}\). In particular, for \(j=1\), we obtain the desired comparison inequality for the spectral gaps, which concludes the proof of the lemma.
### Proof of lower bound in Theorem 1.1
Finally, we present a formal proof of the lower bound in Theorem 1.1.
Proof of lower bound in Theorem 1.1.: Recall from (3.1) that we aim to prove that
\[(1\wedge\alpha_{\min})\operatorname{gap}_{\operatorname{RW}}(\alpha)\leq \operatorname{gap}_{k}(\alpha)\,\qquad k\in\mathbb{N}. \tag{3.12}\]
We proceed by an induction on \(k\in\mathbb{N}\). First, (3.12) is obvious for \(k=1\). Next, suppose that (3.12) holds for \(k-1\), and we prove (3.12) for \(k\geq 2\). By (3.6) and (3.4), we have
\[\operatorname{gap}_{k}(\alpha) =\inf_{f\in\mathbb{R}^{\Xi_{k}}:\,f\neq\text{const.}}\frac{\mathcal{ E}_{\alpha,k}(f)}{\operatorname{Var}_{\alpha,k}(f)} \tag{3.13}\] \[=\left(\inf_{f\in\operatorname{Im}_{\mathfrak{a}_{k}:\,f\neq\text {const.}}}\frac{\mathcal{E}_{\alpha,k}(f)}{\operatorname{Var}_{\alpha,k}(f)} \right)\wedge\left(\inf_{f\in\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}:\,f \neq\text{const.}}\frac{\mathcal{E}_{\alpha,k}(f)}{\operatorname{Var}_{\alpha,k}( f)}\right)\.\]
Since \(\mathfrak{a}_{k}:\mathbb{R}^{\Xi_{k-1}}\to\mathbb{R}^{\Xi_{k}}\) lifts all the eigenfunctions of \(-L_{k-1}\) to \(-L_{k}\), it is readily verified that
\[\inf_{f\in\operatorname{Im}\mathfrak{a}_{k}:f\neq\operatorname{const.}}\frac{ \mathcal{E}_{\alpha,k}(f)}{\operatorname{Var}_{\alpha,k}(f)}=\operatorname{ gap}_{k-1}(\alpha)\geq(1\wedge\alpha_{\min})\operatorname{gap}_{\operatorname{RW}}( \alpha)\, \tag{3.14}\]
where the inequality holds by the induction hypothesis. Moreover, by Lemmas 3.2 and 3.3, we have
\[\inf_{f\in\operatorname{Ker}\mathfrak{a}_{k-1}^{\dagger}:f\neq\operatorname{ const.}}\frac{\mathcal{E}_{\alpha,k}(f)}{\operatorname{Var}_{\alpha,k}(f)}\geq k \left(\inf_{\xi\in\Xi_{k-1}}\operatorname{gap}_{\operatorname{RW}}(\alpha+\xi )\right)\geq\frac{\alpha_{\min}\,k}{\alpha_{\min}+k-1}\,\operatorname{gap}_{ \operatorname{RW}}(\alpha). \tag{3.15}\]
It is straightforward to check that
\[\frac{\alpha_{\min}\,k}{\alpha_{\min}+k-1}\geq 1\wedge\alpha_{\min}\,\qquad \text{for all }k\geq 2. \tag{3.16}\]
Collecting (3.13), (3.14), (3.15), and (3.16), we conclude that
\[\operatorname{gap}_{k}(\alpha)\geq(1\wedge\alpha_{\min})\operatorname{gap}_{ \operatorname{RW}}(\alpha)\,\]
which proves (3.12) for \(k\). Therefore, by induction on \(k\), we conclude the proof of (3.12) and thus the proof of Theorem 1.1.
## 4. Proof of Theorem 1.3
BEP and SIP are related via an intertwining relation (e.g., [11, Proposition 5.1]): for all \(k\in\mathbb{N}\) and \(f\in\mathbb{R}^{\Xi_{k}}\), we have
\[\mathcal{L}\Lambda f=\Lambda L_{k}f\,\qquad\text{with }\Lambda f(\zeta):=\sum_{ \eta\in\Xi_{k}}\left(\prod_{x\in V}\frac{\zeta_{x}^{\eta_{x}}}{\eta_{x}!} \right)f(\eta). \tag{4.1}\]
Thanks to this connection and the assertion in Theorem 1.1, once we check that the spectrum of \(\mathcal{L}\) on \(L^{2}(\Delta_{V},\pi)\) is pure point, the proof of Theorem 1.3 boils down to verifying the validity of the following two claims:
* \(-\operatorname{gap}_{\operatorname{RW}}(\alpha)\) belongs to the spectrum of \(\mathcal{L}\) (Lemma 4.1);
* each eigenvalue of \(\mathcal{L}\) is also an eigenvalue of \(L_{k}\) given in (1.1), for some \(k\in\mathbb{N}\) (Lemma 4.2).
The fact that \(\mathcal{L}\) has a pure point spectrum may be shown as follows. The generator \(\mathcal{L}\) is self-adjoint on \(L^{2}(\Delta_{V},\pi)\), thus, its spectrum is real. Moreover, for every \(k\in\mathbb{N}\), the generator in (1.4) is easily seen to leave invariant the subspace of all polynomials of degree at most \(k\) in the variables \((\zeta_{x})_{x\in V}\). Each of these subspaces is finite-dimensional, ensuring a decomposition of \(\mathcal{L}\) in terms of finitely-many eigenvalue/eigenfunction pairs when restricted therein. By density of polynomials in \(L^{2}(\Delta_{V},\pi)\), this eigendecomposition, suitably orthonormalized, gives rise to an orthonormal basis of \(L^{2}(\Delta_{V},\pi)\) consisting of eigenfunctions of \(\mathcal{L}\). In conclusion, the generator \(\mathcal{L}\) on \(L^{2}(\Delta_{V},\pi)\) admits a pure point real spectrum.
We verify that \(-\operatorname{gap}_{\operatorname{RW}}(\alpha)\) belongs to the spectrum of \(\mathcal{L}\) in the following lemma, whose proof is analogous to that of Lemma 2.3.
**Lemma 4.1**.: _Let \(\psi:V\to\mathbb{R}\) be an eigenfunction for \(A_{\alpha}\) associated to \(-\operatorname{gap}_{\operatorname{RW}}(\alpha)\). Then, the first-order polynomial \(f_{\psi}:\Delta_{V}\to\mathbb{R}\) in the variables \((\zeta_{x})_{x\in V}\) given by_
\[f_{\psi}(\zeta)\coloneqq\sum_{x\in V}\psi(x)\,\zeta_{x}\,\qquad\zeta\in\Delta_{V}\, \tag{4.2}\]
_is an eigenfunction for \(\mathcal{L}\) associated to the same eigenvalue._
Proof.: Recalling (4.1) and writing \(g(\delta_{x}):=\psi(x)\), the function in (4.2) reads as
\[f_{\psi}=\varLambda\,g\.\]
In view of this representation for \(f_{\psi}\), the injectivity of \(\varLambda\), and the intertwining relation (4.1), the desired claim follows as in the proof of Lemma 2.3.
In the following lemma, we show that each eigenfunction \(f\) of \(\mathcal{L}\) corresponds to an eigenfunction of \(L_{k}\), for some \(k\in\mathbb{N}\), both associated to the same eigenvalue.
**Lemma 4.2**.: _Let \(f\in L^{2}(\varDelta_{V},\pi)\) be a non-constant eigenfunction of \(\mathcal{L}\) associated to the eigenvalue \(-\lambda<0\). Then, there exist \(k\in\mathbb{N}\) and \(g\in\mathbb{R}^{\Xi_{k}}\) such that \(g\) is an eigenfunction of \(L_{k}\) associated to the eigenvalue \(-\lambda<0\)._
Proof.: By the discussion at the beginning of this section, all eigenfunctions of \(\mathcal{L}\) are polynomials in the variables \((\zeta_{x})_{x\in V}\) of degree at most \(k\), for some \(k\in\mathbb{N}\). Hence, we may rewrite \(f\) as follows:
\[f(\zeta)=\sum_{\ell=0}^{k}\sum_{\eta\in\Xi_{\ell}}g_{\ell}(\eta)\,\prod_{x\in V }\frac{\zeta_{x}^{\eta_{x}}}{\eta_{x}!}=\sum_{\ell=0}^{k}\varLambda g_{\ell} (\zeta)\, \tag{4.3}\]
for some \(k\in\mathbb{N}\) and functions \(g_{\ell}\in\mathbb{R}^{\Xi_{\ell}}\), \(\ell=0,1,\ldots,k\). In the formula above, we included the factor \(\big(\prod_{x\in V}\eta_{x}!\big)^{-1}\) just for convenience, so as to recover an expression like that in (4.1). Now, by applying the generator \(\mathcal{L}\) to the function \(f\) in (4.3) and using (4.1), we get
\[\mathcal{L}f(\zeta)=\sum_{\ell=0}^{k}\sum_{\eta\in\Xi_{\ell}}L_{\ell}g_{\ell }(\eta)\,\prod_{x\in V}\frac{\zeta_{x}^{\eta_{x}}}{\eta_{x}!}=\sum_{\ell=0}^{ k}\varLambda L_{\ell}g_{\ell}(\zeta)\.\]
The assumption \(\mathcal{L}f+\lambda f=0\) yields
\[\sum_{\ell=0}^{k}\sum_{\eta\in\Xi_{\ell}}\left\{L_{\ell}g_{\ell}(\eta)+ \lambda g_{\ell}(\eta)\right\}\prod_{x\in V}\frac{\zeta_{x}^{\eta_{x}}}{\eta_ {x}!}=0\,\]
which holds for all \(\zeta\in\Delta_{V}\) if and only if each term in the bracket equals zero. Finally, since at least one among the functions \(g_{\ell}\), \(\ell>0\), is non-zero, this ensures that \(-\lambda<0\) is an eigenvalue for \(L_{\ell}\).
**Acknowledgement**.: _SK was supported by NRF-2019-Fostering Core Leaders of the Future Basic Science Program/Global Ph.D. Fellowship Program and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2017R1A5A1015626, 2022R1F1A106366811 and 2022R1A5A6000840). SK wishes to express his gratitude to Institute of Science and Technology Austria (where the project was initiated) and University of Trieste (where the project was completed) for the warm hospitality during his stays. FS thanks Pietro Caputo and Matteo Quattropani for fruitful discussions._
|
2302.01052
|
Site-specific Deep Learning Path Loss Models based on the Method of
Moments
|
This paper describes deep learning models based on convolutional neural
networks applied to the problem of predicting EM wave propagation over rural
terrain. A surface integral equation formulation, solved with the method of
moments and accelerated using the Fast Far Field approximation, is used to
generate synthetic training data which comprises path loss computed over
randomly generated 1D terrain profiles. These are used to train two networks,
one based on fractal profiles and one based on profiles generated using a
Gaussian process. The models show excellent agreement when applied to test
profiles generated using the same statistical process used to create the
training data and very good accuracy when applied to real life problems.
|
Conor Brennan, Kevin McGuinness
|
2023-02-02T12:29:38Z
|
http://arxiv.org/abs/2302.01052v1
|
# Site-specific Deep Learning Path Loss Models based on the Method of Moments
###### Abstract
This paper describes deep learning models based on convolutional neural networks applied to the problem of predicting EM wave propagation over rural terrain. A surface integral equation formulation, solved with the method of moments and accelerated using the Fast Far Field approximation, is used to generate synthetic training data which comprises path loss computed over randomly generated 1D terrain profiles. These are used to train two networks, one based on fractal profiles and one based on profiles generated using a Gaussian process. The models show excellent agreement when applied to test profiles generated using the same statistical process used to create the training data and very good accuracy when applied to real life problems.
Propagation, rural, method of moments, surface integral equation, FAFFA, machine learning, convolutional neural network.
## I Introduction
The modelling of EM wave propagation over terrain is a central problem of wireless network design. The physical scale of the problem has meant that radio planners have historically relied on empirical curve-fitting approaches [1] or knife edge models [2]. More accurate formulations, based on for example, surface integral equations (IE) exist [3], but are slow if un-accelerated. A variety of acceleration techniques exist, most notably the Fast Far Field Approximation (FAFFA) [4]. These computational efficiencies can be optimised using the Tabulated Interaction Method (TIM) [5, 6] which offers rapid, accurate simulation but is currently somewhat restricted in its formulation to relatively smooth surfaces. The purpose of this paper is to develop a model which, like the TIM, is both a) similar in accuracy to the FAFFA and b) an order of magnitude faster to implement. Crucially we seek to develop a model which is capable of being extended to more general problems in the future, an extension which is difficult for the TIM. Machine learning (ML) potentially offers a framework to achieve this goal. It has been widely applied to propagation problems in recent years, including propagation in indoor [7, 8] and urban [9, 10, 11, 12] scenarios. Rural deployments have also been considered such as in [13, 14, 15] some based on relatively simple propagation models and others on the parabolic equation method [16]. In this work we seek to develop an accurate _site-specific_ model, that is one that can take in specific terrain height information as input and generate predictions for path loss along that particular profile. A key issue facing all ML techniques is access to representative, accurate training data in sufficient quantity to produce reliable models. This is particularly an issue in propagation modelling, where measured data is expensive in terms of hardware and man-hours. A popular alternative is to use synthetic data based on simulation, as was done in several of the works cited above. This is the approach taken in this paper too, whereby propagation over tens of thousands of realistic profiles is efficiently analysed using the FAFFA. These data constitute a training set which can then be used to develop an accurate, yet computationally efficient, ML model. Synthetic data is justified in this instance on the reasonable basis that integral equation models have demonstrated very good agreement with measured data [3]. In any event, several comparisons to measured data are presented in section V so that the reader can gauge the performance. The paper is organised as follows. Section II briefly describes the surface electric field integral equation and its efficient solution via the method of moments and FAFFA algorithm. The process used to develop the data sets is presented in section III while the development of the ML models is described in IV. Results are presented in section V and we close with conclusions and an outline of potential future work in section VI.
## II EFIE Formulation
In order to generate training data the path loss over a set of artificially synthesised terrain profiles is computed. The Electric Field Integral Equation (EFIE), solved with the method of moments, is used to compute the path loss over each profile. Figure (1) depicts a 2D problem where \(TM^{z}\) incident fields emanate from a line source and impinge on a 1D surface. A time variation of \(e^{j\omega t}\) is assumed and suppressed. For ease of implementation the terrain is assumed to be perfectly reflecting in this paper, a reasonable assumption at grazing incidence, but this is not a fundamental restriction and will be relaxed in future work. Under these assumptions the total field, \(E_{z}^{t}\), at a general point \(\mathbf{r}\) above the terrain surface can be written as
\[E_{z}^{t}\left(\vec{r}\right)=E_{z}^{s}\left(\vec{r}\right)+E_{z}^{i}\left( \vec{r}\right), \tag{1}\]
where \(E_{z}^{i}\) is the known incident field, i.e. the field from the source that would exist in the absence of the scatterer, while \(E_{z}^{s}\) is the unknown scattered field caused by the presence of the terrain. The scattered field can be expressed in terms of an integral involving the surface electric current, \(J_{z}\), as
\[E_{z}^{s}\left(\vec{r}\right)=-\frac{k_{0}\eta_{0}}{4}\int_{C}J_{z}\left(\vec{r }^{\prime}\right)H_{0}^{\left(2\right)}\left(k_{0}\left|\vec{r}-\vec{r}^{ \prime}\right|\right)dl^{\prime}, \tag{2}\]
where the integral takes place over the boundary of the scatterer (in this case the terrain surface) and \(H_{0}^{\left(2\right)}\) is a zero-order Hankel function of the second kind. Applying the boundary condition of zero tangential fields for a point \(\vec{r}\) on the scatterer surface yields the following equation for \(J_{z}\)
\[E_{z}^{i}\left(\vec{r}\right)=\frac{k_{0}\eta_{0}}{4}\int_{C}J_{z}\left(\vec{ r}^{\prime}\right)H_{0}^{\left(2\right)}\left(k_{0}\left|\vec{r}-\vec{r}^{ \prime}\right|\right)\ dl^{\prime}. \tag{3}\]
The method of moments is used to discretise the EFIE. The surface current is expanded using \(N\) pulse basis functions, \(f_{n}\), as
\[J_{z}\left(\vec{r}\right)\simeq\sum_{n=1}^{N}j_{n}f_{n}\left(\vec{r}\right), \tag{4}\]
and point matching is applied at the basis domain centres to obtain a \(N\times N\) dense linear system
\[\mathbf{Zj}=\mathbf{v}. \tag{5}\]
Equation (5) can be solved in a variety of ways but it is computationally very expensive to do so when one considers that \(N\) can be of the order of hundreds of thousands for a typical profile at radio frequencies. Assuming forward scattering (approximating \(\mathbf{Z}\) as a lower-triangular matrix) allows (5) to be solved using a straightforward, but slow, process of back substitution given by
\[Z_{mm}j_{m}=V_{m}-\sum_{n<m}Z_{mn}j_{n}\text{ for }m=1\ldots N. \tag{6}\]
Forward scattering is a reasonable assumption for the gently undulating terrain profiles considered in this paper, but nonetheless is an approximation that will be relaxed in future work.
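A minimal sketch of the back-substitution sweep (6) is given below (illustrative only; the random lower-triangular `Z` and excitation `v` are stand-ins for the actual moment matrix and incident-field vector, which require the Hankel-function matrix fill described above).

```python
# Forward-scattering back substitution (6): with Z treated as lower triangular,
# the surface currents are recovered in a single O(N^2) sweep.
import numpy as np

def forward_backsub(Z, v):
    N = len(v)
    j = np.zeros(N, dtype=complex)
    for m in range(N):
        j[m] = (v[m] - Z[m, :m] @ j[:m]) / Z[m, m]
    return j

# Self-check on a random lower-triangular system standing in for the EFIE matrix.
rng = np.random.default_rng(0)
N = 200
Z = np.tril(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
Z += 5.0 * np.eye(N)                       # keep the diagonal well conditioned
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
print(np.allclose(Z @ forward_backsub(Z, v), v))
```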
### _FAFFA Acceleration_
Solution via (6) is effective but very slow and not suitable for the generation of training data which requires path-loss analysis for thousands of profiles. To expedite the process the Fast Far Field Approximation (FAFFA) was implemented. The FAFFA proceeds by assembling local collections of neighbouring pulse basis functions into \(M\) groups. With such a decomposition (6) can be equivalently written as
\[Z_{mm}j_{m} = V_{m}-\sum_{l^{\prime}<l}\sum_{n\in G_{l^{\prime}}}Z_{mn}j_{n}- \underset{n\in G_{l},n<m}{\sum}Z_{mn}j_{n}\] \[\text{ for }l=1\ldots M,m\in l.\]
The essence of the FAFFA is to replace the independent interactions between individual basis functions in separate groups with approximate interactions written in terms of a small number of computations that are extensively re-used. The approximation can be derived with reference to figure (2).
Pulse basis functions \(m\) and \(n\) are situated in two groups, \(G_{l}\) and \(G_{l^{\prime}}\) with group centres \(l\) and \(l^{\prime}\) respectively. A simple consideration of the geometry shows that
\[\left|\vec{r}_{mn}\right|\simeq\left|\vec{r}_{ln}\right|-\vec{r} _{ml}\cdot\hat{r}_{ln} \tag{8}\] \[\simeq\left|\vec{r}_{ln}\right|-\vec{r}_{ml}\cdot\hat{r}_{ll^{ \prime}}, \tag{9}\]
which for \(\left|\vec{r}_{mn}\right|\) large allows the associated matrix element to be approximated as
\[Z_{mn}\simeq Z_{ln}e^{jk_{0}\hat{r}_{ml}\cdot\hat{r}_{ll^{\prime}}}. \tag{10}\]
Inserting this into (7) yields
\[Z_{mm}j_{m} = V_{m}-\sum_{l^{\prime}<l}e^{jk_{0}\hat{r}_{ml}\cdot\hat{r}_{ll^{ \prime}}}\sum_{n\in G_{l^{\prime}}}Z_{ln}j_{n} \tag{11}\] \[-\sum_{n\in G_{l},n<m}Z_{mn}j_{n}.\]
for \(l=1\ldots M,m\in l\). The key computational advantage of the FAFFA is that for each pair of groups \(G_{l}\) and \(G_{l^{\prime}}\) once the summation \(\sum_{n\in G_{l^{\prime}}}Z_{ln}j_{n}\), representing the fields scattered from group \(G_{l^{\prime}}\) to the centre of group \(G_{l}\), has been computed it can then be stored and repeatedly used to efficiently approximate the fields scattered from group \(G_{l^{\prime}}\) to _every_ point \(m\) in group \(G_{l}\). In practice the groups are linear segments connecting sampled terrain heights. Consequently the final sum on the right hand side of (11), representing the interactions within a group, takes the form of a discrete convolution which can be efficiently computed using a Fast Fourier Transform, yielding an additional saving. Once the surface current has been computed using (11) total fields, and thus path-loss, at selected points above the terrain profile can be computed using (1) and (2). Finally, in order to use this 2D simulation to make real-world predictions, the total electric field at each point is multiplied by \(\frac{1}{\sqrt{R}}\), where \(R\) is the distance from the transmitter. This ensures that the power density decays as \(\frac{1}{R^{2}}\) in free space, and serves as a heuristic conversion from a 2D field to a 3D field so as to best compare to measured data.
Fig. 1: Geometry for the EFIE
Fig. 2: Geometry for the FAFFA
## III Machine Learning Data Set
Synthetic data was created and used to develop two distinct ML models. In both cases this involved randomly creating 8000 profiles and solving for the electric field at sampled locations above each profile. In order to focus on an examination of the ability of ML to model the physical scattering effects of the terrain we kept some parameters constant in each realisation. These parameters were frequency (assumed to be 970\(MHz\)), transmitter height and location (10.4\(m\) over the leftmost terrain point), and receiver height (2.4\(m\) over the terrain at sampled points \(50m\) apart). Each profile comprised 256 sampled \((x,y)\) values where \(x\) is range (in increments of 50\(m\)) and \(y\) is height (chosen randomly in one of two ways, outlined below). The FAFFA was applied to efficiently solve for the fields at the sampled receiver locations. Each of the 8000 elements of the data set thus comprised 256 \((x,y)\) points denoting the field point locations and a vector of 256 corresponding path loss values in dB. We developed two distinct ML models, one trained using synthetic profiles which were realisations of a Gaussian random process, while the second was trained using realisations created using a fractal-generating algorithm.
The first model, referred to as \(ML_{GP}\), was based on a data set of size 8000 created using profiles which were realisations of a Gaussian random process. The root mean square height was set to 20\(m\) while the correlation length was \(800m\). The second model, referred to as \(ML_{F}\), was based on a data set of size 8000 created using profiles which were random fractals created using the Diamond-Square algorithm [17] with variance set to 30 and fractal parameter \(H=1.2\). Some typical realisations (along with model validation results) for both profile types are shown in Fig. (4).
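As an illustration of how such training profiles can be generated, the short sketch below draws one Gaussian-process realisation with the stated rms height and correlation length. It is a minimal sketch under stated assumptions rather than the authors' generator; in particular, the Gaussian form of the correlation function is an assumption, and the fractal (Diamond-Square) profiles are not shown.

```python
import numpy as np

def gaussian_profile(n=256, dx=50.0, rms=20.0, corr_len=800.0, rng=None):
    """One terrain profile: n heights at dx spacing, zero-mean Gaussian process
    with rms height `rms` and (assumed Gaussian) correlation length `corr_len`."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.arange(n) * dx
    d = np.abs(x[:, None] - x[None, :])
    cov = rms**2 * np.exp(-(d / corr_len) ** 2)
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))   # jitter for numerical stability
    return x, L @ rng.standard_normal(n)

x, y = gaussian_profile()
print(y.std())   # sample rms; fluctuates around 20 m for a single realisation
```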
## IV ML model development
The goal of the machine learning model is to predict a \(D\)-dimensional path loss profile associated with a given \(D\)-dimensional terrain profile. In this case \(D\) refers to the number of \(50m\) linear segments used to describe the profile. Here, we propose to use a deep neural network, \(f_{\theta}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\), with parameters \(\theta\) and train it using stochastic gradient methods on the synthetic training data. There are various model architectures that could be used for this, including convolutional neural networks (CNN), recurrent neural networks (RNN), and Transformer-based architectures. In this work we opted for a CNN-based model, as, unlike RNNs, these models can produce the entire path loss profile in a single pass and are often simpler to train than Transformer-based approaches. The network architecture is shown in Fig. 3 and is based on U-Net [18], which is widely used in medical image and general semantic segmentation. The architecture is modified to handle 1D signals by replacing all 2D convolutional and batch normalization layers with their 1D counterparts. Three dropout layers (\(p=\frac{1}{2}\)) are also added around the central bottleneck (after the 3rd and 4th downsampling layers and after the first upsampling layer) to provide regularization and reduce overfitting. Upsampling is done by linear interpolation without transposed convolutions. The output layer of the network produces a single scalar value for each input and no activation function is used on this layer.
We reduce the number of parameters in the network by approximately 50% by halving the number of channels on the internal layers when compared to the original U-Net model. We also widen the kernel size on the initial two convolution layers from 3 to 11 to provide more spatial context.
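The sketch below (PyTorch) is one plausible rendering of the modified 1D U-Net just described, with the channel counts halved relative to the original U-Net, kernel size 11 on the initial convolutions, dropout around the bottleneck, linear-interpolation upsampling, and the output bias initialised to the target mean. It is an approximation for illustration, not the authors' implementation; the exact layer counts and dropout placement are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(c_in, c_out, k=3):
    p = k // 2
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, k, padding=p), nn.BatchNorm1d(c_out), nn.ReLU(),
        nn.Conv1d(c_out, c_out, k, padding=p), nn.BatchNorm1d(c_out), nn.ReLU())

class UNet1D(nn.Module):
    def __init__(self, chans=(32, 64, 128, 256, 512)):   # half of U-Net's 64..1024
        super().__init__()
        self.enc = nn.ModuleList()
        c_prev = 1
        for i, c in enumerate(chans):
            self.enc.append(block(c_prev, c, k=11 if i == 0 else 3))
            c_prev = c
        self.dec = nn.ModuleList(
            block(c + s, s) for s, c in zip(chans[-2::-1], chans[:0:-1]))
        self.drop = nn.Dropout(0.5)
        self.out = nn.Conv1d(chans[0], 1, 1)
        nn.init.constant_(self.out.bias, -134.0)          # calibrate to the target mean

    def forward(self, x):                                  # x: (batch, 1, 256)
        skips = []
        for i, enc in enumerate(self.enc[:-1]):
            x = enc(x)
            skips.append(x)
            x = F.max_pool1d(x, 2)
            if i >= 2:                                     # dropout after 3rd/4th downsampling
                x = self.drop(x)
        x = self.enc[-1](x)                                # bottleneck
        for i, (dec, skip) in enumerate(zip(self.dec, reversed(skips))):
            x = F.interpolate(x, size=skip.shape[-1], mode="linear",
                              align_corners=False)
            if i == 0:                                     # dropout after first upsampling
                x = self.drop(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.out(x).squeeze(1)                      # (batch, 256) path loss in dB
```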
### _Adam Optimiser, calibration and data augmentation_
The network was trained using a mean squared error loss function for 25 epochs over the training data using the Adam optimiser [19] with an initial learning rate of \(10^{-2}\) and a weight decay \(\lambda=10^{-5}\). The batch size was set to 128. The model is evaluated on the validation set after each epoch and the model with the lowest validation loss is retained for testing. The learning rate is stepped down by a factor of 10 when the validation loss has not reduced for 10 epochs. The initialization and batch normalization layers in neural networks are typically calibrated so that the output layer produces values that are approximately distributed according to a standard normal distribution before training. If left unchanged, this will cause the loss value to be very large in the early epochs, as the average value of the target is approximately -134 dB. To compensate, we set the bias value of the output layer to -134 dB, which makes the network have an error close to that of a regressor that always produces the mean value (around 630 dB\({}^{2}\)). This accelerates training since the network does not need to spend a long time moving its predictions towards the range and scale of the targets. Since the path loss is independent of the absolute height of the terrain profile and source/receiver points (in that it only depends on the relative distance between the totality of points), the network should produce a result that is invariant to changes in the absolute height of the input profile. To encourage this, we perform random data augmentation at training time to modify the absolute height of the input profile (keeping the relative vertical displacement of the source and receivers fixed at 10.4\(m\)
Fig. 3: Network Architecture
and \(2.4m\) respectively). Specifically, we add a random scalar \(\epsilon\sim\mathcal{N}(0,30)\) to each input before passing it through the network. Other possible approaches to making the network produce outputs that are invariant to profile height shifts are to either normalize the profile to always start at height 0, or to use the profile derivatives as inputs instead of the profile heights. Empirically, however, we found the data augmentation approach more effective, likely because this approach also provides a degree of additional regularization.
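A minimal sketch of this augmentation is shown below, assuming PyTorch tensors of shape `(batch, 1, 256)`; whether the 30 in N(0, 30) denotes a standard deviation or a variance is an assumption here, and it is taken as a standard deviation in metres.

```python
import torch

def augment_heights(profiles, sigma=30.0):
    """Shift each terrain profile by a random height offset eps ~ N(0, sigma^2);
    the antenna heights stay fixed relative to the terrain, so the target
    path loss is unchanged."""
    eps = torch.randn(profiles.shape[0], 1, 1, device=profiles.device) * sigma
    return profiles + eps
```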
### _Uncertainty prediction_
In addition to a point prediction of the path loss, in some applications it is useful to also provide an estimation of the uncertainty of the predictions. To facilitate this, we also trained a variation of the networks that also estimates the variance by modifying the network to produce two outputs:
\[\mu_{\theta}(\mathbf{x})=f_{\theta}(\mathbf{x})_{1},\quad\log\sigma_{\theta} ^{2}(\mathbf{x})=f_{\theta}(\mathbf{x})_{2}, \tag{12}\]
where now \(f_{\theta}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D\times 2}\) and the model predicts the log of the variance to ensure that the predicted variance is always positive. Assuming that the distribution of the target conditioned on the terrain profile is \(p(\mathbf{y}\mid\mathbf{x})\sim\mathcal{N}(\mu_{\theta}(\mathbf{x}),\operatorname{diag}(\sigma_{\theta}^{2}(\mathbf{x})))\) gives a negative log likelihood loss of:
\[-\log p(\mathbf{y}\mid\mathbf{x})=\frac{1}{2}\sum_{k=1}^{D}\left(\frac{(y_{k}- \mu_{\theta}(\mathbf{x})_{k})^{2}}{\sigma_{\theta}^{2}(\mathbf{x})_{k}}+\log \sigma_{\theta}^{2}(\mathbf{x})_{k}\right). \tag{13}\]
This variant of the model is again trained to minimize the expected value of the above loss using stochastic estimates on batches of size 128. It takes 3 times longer to train than the version that only produces point estimates (75 epochs).
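For concreteness, a direct transcription of (13) as a training loss might look as follows, assuming the two-output network returns a tensor of shape `(batch, 2, D)` holding the mean and log-variance; `torch.nn.GaussianNLLLoss` offers an equivalent built-in up to constant terms.

```python
import torch

def gaussian_nll(pred, target):
    """pred: (batch, 2, D) with channel 0 = mu, channel 1 = log sigma^2;
    target: (batch, D). Returns the mean per-profile negative log likelihood."""
    mu, log_var = pred[:, 0, :], pred[:, 1, :]
    nll = 0.5 * ((target - mu) ** 2 * torch.exp(-log_var) + log_var)
    return nll.sum(dim=-1).mean()
```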
### _Computation times and Validation_
Both model variants are relatively fast to train, requiring less than 5 minutes (NVIDIA GeForce RTX 3090). Inference takes approximately 2ms per profile on both GPU and CPU hardware (AMD Ryzen 9 5950x). Inference can also be done in batches, leading to approximately 10-20\(\times\) better throughput in our experiments (batch size 128). In each case the 8000 profiles were randomly split into 7500 for model development and 500 for validation.
Fig. (4) shows some randomly selected validation tests. Two examples with Gaussian profiles are shown on the left while the right shows two examples of fractal profiles. The results show the ability of the ML model to accurately solve problems involving profiles of the same type.
## V Results
The validation results of section IV-C confirm that the ML model can rapidly reproduce the predictions of a full-wave EM solver with good accuracy when applied to profiles of the same type (i.e. Gaussian or fractal). To be of use in radio planning the models must generalise and be applicable to more general real-world profiles. To do this we ran the trained models for several profiles for which measured data was available. The results are shown in Figs (5). The terrain profile is shown on top while the bottom plot shows the measured path gain and predictions using the FAFFA, the two ML models and a knife edge model based on Deygout's method. Good agreement for the two ML models is noted, with \(ML_{F}\), the model based on fractal data, outperforming \(ML_{GP}\), the model based on Gaussian profiles. This is perhaps unsurprising given the fractal-like appearance of real terrain profiles. Table I provides statistics about the performance of the models. \(\overline{\epsilon}_{F}\) and \(\sigma_{F}\) are the average error and standard deviation of ML predictions relative to the FAFFA predictions, while \(\overline{\epsilon}_{M}\) and \(\sigma_{M}\) are statistics (for all models) based on a comparison to the measured data. In the context of the ML models the most important data are arguably \(\overline{\epsilon}_{F}\) and \(\sigma_{F}\). This is because the ML models are trained on the FAFFA model and not the measurements. The FAFFA prediction thus serves as the upper bound on their performance and their ability to match the measured data is thus limited to the performance of the FAFFA model. Any enhanced agreement with measurements over that of the FAFFA, while superficially welcome, would not necessarily be reproduced for other profiles and could not be considered indicative of an enhanced predictive capability. Nonetheless it should be noted that, when compared to the measurements, both ML models have accuracy which is similar to the FAFFA, while having a greatly reduced computational burden which is similar to the knife edge model.
## VI Conclusions and future work
This paper has presented two ML models for predicting path loss over rural terrain. The ML models are based on deep learning convolutional networks trained with synthetic data. The training data was obtained by generating path loss over thousands of randomly generated profiles, which were generated using either a Gaussian process or a fractal-generating algorithm. The training path loss data was obtained using the FAFFA, an accelerated method of moments solver. In both cases the agreement between the ML models and the validation data was excellent. When applied to real life profiles the model trained on fractal profiles was more accurate with both ML models outperforming a knife edge model. Future work will concentrate on identifying a more representative data set and developing a more general model (which will be valid for multiple frequencies, arbitrary transmitter and receiver heights etc).
|
2308.14362
|
Exciton-exciton Interaction in Monolayer MoSe$_2$ from Mutual Screening
of Coulomb Binding
|
The potential for low-threshold optical nonlinearity has received significant
attention in the fields of photonics and conceptual optical neuron networks.
Excitons in two-dimensional (2D) semiconductors are particularly promising in
this regard as reduced screening and dimensional confinement foster their
pronounced many-body interactions towards nonlinearity. However, experimental
determination of the interactions remains ambiguous, as optical pumping in
general creates a mixture of excitons and unbound carriers, where the impacts
of band gap renormalization and carrier screening on exciton energy counteract
each other. Here by comparing the influences on exciton ground and excited
states energies in the photoluminescence spectroscopy of monolayer MoSe$_2$, we
are able to identify separately the screening of Coulomb binding by the neutral
excitons and by charge carriers. The energy difference between exciton ground
state (A-1s) and excited state (A-2s) red-shifts by 5.5 meV when the neutral
exciton density increases from 0 to $4\times 10^{11}$ cm$^{-2}$, in contrast to
the blue shifts with the increase of either electron or hole density. This
energy difference change is attributed to the mutual screening of Coulomb
binding of neutral excitons, from which we extract an exciton polarizability of
$\alpha_{2D}^{\rm exciton} = 2.55\times 10^{-17}$ eV(m/V)$^2$. Our finding
uncovers a new mechanism that dominates the repulsive part of many-body
interaction between neutral excitons.
|
Ke Xiao, Tengfei Yan, Chengxin Xiao, Feng-ren Fan, Ruihuan Duan, Zheng Liu, Kenji Watanabe, Takashi Taniguchi, Wang Yao, Xiaodong Cui
|
2023-08-28T07:19:22Z
|
http://arxiv.org/abs/2308.14362v1
|
# Exciton-exciton Interaction in Monolayer MoSe\({}_{2}\) from Mutual Screening of Coulomb Binding
###### Abstract
The potential for low-threshold optical nonlinearity has received significant attention in the fields of photonics and conceptual optical neuron networks. Excitons in two-dimensional (2D) semiconductors are particularly promising in this regard as reduced screening and dimensional confinement foster their pronounced many-body interactions towards nonlinearity. However, experimental determination of the interactions remains ambiguous, as optical pumping in general creates a mixture of excitons and unbound carriers, where the impacts of band gap renormalization and carrier screening on exciton energy counteract each other. Here by comparing the influences on exciton ground and excited states energies in the photoluminescence spectroscopy of monolayer MoSe\({}_{2}\), we are able to identify separately the screening of Coulomb binding by the neutral excitons and by charge carriers. The energy difference between exciton ground state (_A-1s_) and excited state (_A-2s_) red-shifts by 5.5 _meV_ when the neutral exciton density increases from 0 to \(4\times 10^{11}cm^{-2}\), in contrast to the blue shifts with the increase of either electron or hole density. This energy difference change is attributed to the mutual screening of Coulomb binding of neutral excitons, from which we extract an exciton polarizability of \(\alpha_{2D}^{exciton}=2.55\times 10^{-17}eV\left(\frac{m}{V}\right)^{2}\). Our finding uncovers a new mechanism that dominates the repulsive part of many-body interaction between neutral excitons.
## Introduction
Excitons are neutral quasiparticles that can be free of permanent electrical dipole and multipoles as well. Coulomb interactions of their electron and hole constituents, on the other hand, give rise to many-body interactions of these composite bosons which can take various forms.[1-3] These complicated
interactions dynamically modify the exciton resonance energy and potentially lead to optical nonlinearity. Excitons also interact with electrons or holes in doped systems, which can lead to renormalization of band gap and screening of Coulomb binding,[4, 5] and formation of Fermi polaron, [6-10] etc.
Monolayer transition metal dichalcogenides (TMDs) have provided a platform to explore exciton phenomena in the two-dimensional (2D) limit. Owing to quantum confinement and the reduced dielectric screening in 2D, excitons in monolayer TMDs exhibit giant binding energy with energetically well separated Rydberg states, [11-16] promising the exploration of excitonic many-body phenomena and optoelectronic applications in ambient conditions.
One outstanding question regarding 2D excitons is how the exciton resonance energy is affected by the various factors arising from the enhanced Coulomb interaction, which normally coexist in experimental systems.[4] The static screening effect arising from the environmental susceptibility has been recognized as a control to tune the exciton properties with various approaches of dielectric engineering.[17-24] Compared to dielectric environments, doping the monolayer with a bath of electrons/holes,[22] or even excitons, can have a greater impact on the Coulomb interaction and consequently the exciton binding energy. Intuitively, charge carriers and charged excitons (trions) can screen the Coulomb interaction, reducing the exciton binding energy, and tend to blueshift the exciton resonance. In the meantime, the quasiparticle band gap gets renormalized, which tends to redshift the exciton resonance.[25-31] Distinguishing the competing factors in a quantitative manner therefore remains an experimental challenge. Furthermore, even neutral excitons may play a role in Coulomb screening like polarizable atoms or molecules, and affect each other's binding energies. Such an effect, however, remains unexplored.
In this letter we report the experimental determination of the mutual screening of Coulomb binding of neutral excitons in high quality monolayer MoSe\({}_{2}\) with photoluminescence spectroscopy. We utilize the energy difference \(\Delta E=E_{1s}-E_{2s}\) between the 1s exciton ground state and the 2s excited Rydberg state to unambiguously monitor the change of exciton binding energy, where the contribution of band gap renormalization can be completely excluded. From the narrow spectral resonances, we observe a clear red shift of \(\Delta E\) by 5.5 _meV_ when the neutral exciton density increases from 0 to \(4\times 10^{11}cm^{-2}\), while the trion density remains negligibly low. In contrast, increasing electron or hole density leads to a blue shift in \(\Delta E\). These opposite trends unambiguously suggest that the electron-hole binding in an exciton can be appreciably screened by the surrounding excitons, through inducing electrical polarization in these neutral quasiparticles. The exciton polarizability extracted from our PL measurements, \(\sim 2.55\times 10^{-17}eV\left(\frac{m}{V}\right)^{2}\), is in excellent agreement with the nonlinear Stark shift measured in applied electric fields.[32-34] This realizes a repulsive many-body interaction of neutral excitons which has a dominating strength as compared to the exciton interactions from Coulomb exchange [1, 35], whereas its sensitive dependence on the exciton Rydberg orbitals further distinguishes it from the effective attractive part from bandgap renormalization.
## Results
Figure 1 summarizes the exciton-density dependent photoluminescence spectra under the excitation of 2.331eV. The prominent PL peaks around 1.646eV and 1.618eV are assigned to the band edge exciton _A-1s_ and its trion _AT-1s_ (charge-bound exciton). The two weak PL peaks (magnified X200 with respect to that of _A-1s_) at ~1.808eV and ~1.844eV are attributed to the first excited state of the \(A\) exciton, labelled as _A-2s_, and the \(B\) exciton, labelled as _B-1s_, respectively, according to ref [8, 9]. Here the \(A\) exciton and \(B\) exciton originate from the spin splitting of the conduction band and valence band at the _K_(_K'_) valley where the direct band gap is located.[36] The PL intensities of _A-2s_ and _B-1s_ are observed to be two orders of magnitude weaker than that of _A-1s_ since they are either an excited state (_A-2s_) or not the band-edge exciton (_B-1s_). The ground state exciton and its trion (_A-1s_, _AT-1s_) undergo an apparent quantum yield reduction with increasing excitation power (Fig.S2), which may be attributed to Auger recombination. [37] The excitation power dependent PL measurement is then utilized to estimate the exciton density. (Supplementary Note 2). More interestingly, all the excitons and the trion (_A-1s_, _AT-1s_, _B-1s_, _A-2s_) undergo an obvious redshift
Figure 1: The intensity-dependent PL spectral map of excitons (a) ground state of band-edge exciton (_A-1s_) and the corresponding charge bound exciton (_AT-1s_), (b) the first excited state of band edge exciton (_A-2s_) state and the ground-state spin-off exciton (_B-1s_). The PL intensities of _A-2s_ state and _B-1s_ are magnified by 200X for better comparison. The trends of exciton energy indicated by the dashed lines show redshift under the increased excitation intensity. (c) The peak energy shifts of _A-1s_, _AT-1s_, _B-1s_ and _A-2s_ states v.s. the exciton density (c.f. Table S1 in SI for the quantitative estimation). (d) The energy difference \(\Delta E\) between _A-1s_ and _A-2s_ states shrinks at the elevated exciton density. The inset shows the exciton wavefunction of _A-1s_ and _A-2s_ states in real space. The scale bar represents 4nm.
(Fig.1(a-b)) at the elevated exciton density. Specifically, the exciton energy shifts determined from the Lorentzian fitting of the PL spectra are summarized in Fig.1(c). The energy shifts of _A-1s_, _AT-1s_ and _B-1s_ show a similar dependence on the exciton density. In contrast, the peak energy of _A-2s_ redshifts at a noticeably steeper slope than that of the ground state (Table S1).
Usually, the bandgap renormalization induced by photo doping leads to a redshift of the quasiparticle bandgap, whereas the screening effect induced by photo doping results in the decrease of exciton binding energy and consequently leads to an energy blueshift. Therefore, from our experimental results in monolayer MoSe\({}_{2}\) (Fig.1c), the bandgap renormalization effect appears to be dominant as all exciton states observed (_A-1s_, _AT-1s_, _B-1s_, _A-2s_) experience a redshift at the elevated excitation intensity.
It is worth mentioning that the bandgap renormalization has the same influence on _A-1s_ and _A-2s_ excitons since the electrons and holes of _A-1s_ and _A-2s_ excitons come from the same band edges. Therefore, the energy difference between the _A-1s_ and _A-2s_ states is solely dependent on the exciton binding energy. Fig.1(d) shows that the energy difference between the _A-1s_ and _A-2s_ states gradually shrinks with the elevated excitation intensity. The different energy shift slopes (Fig.1(c)) and the shrinking energy difference may imply that the ground (1s) and the first excited (2s) states experience the screening effect to different extents due to their different Bohr radii (\(r_{B}^{1s}\sim 1nm\) vs. \(r_{B}^{2s}\sim 3nm\)).
In principle, the photo excitation can inject unbound photo carriers in addition to the neutral excitons. To distinguish their effects on the different energy shifts of _A-1s_ and _A-2s_, we conduct a series of PL experiments with electrostatic doping. Figure 2 summarizes our carrier-density dependent PL results. At the elevated carrier density, both _A-1s_ and _A-2s_ undergo a blue shift (Fig.2(a-b)), which is consistent with the previous reports.[23] We attribute the blue shift to the exciton-polaron effect based on many-body charge-exciton interaction.[6, 8, 38] On the other hand, \(\Delta_{2s-1s}\), the energy difference between the _A-1s_ and _A-2s_ states, increases at elevated carrier densities as Fig.2(d) shows. This contrasting dependence of \(\Delta_{2s-1s}\) on carrier density (positive slope) vs. the experimentally observed negative slope with exciton density unambiguously suggests that the latter is not caused by the unbound photo carriers. This is further corroborated by the weak trion emission (Fig.1(a)) in comparison with that of _A-1s_ throughout the entire range of excitation intensity, which implies a very low density of the unbound photocarriers.
The above analysis points to interaction between neutral excitons as the cause of \(\Delta_{2s-1s}\) decreasing with the excitation intensity. As we detail below, while these excitons do not carry charge and dipole at rest, the Coulomb interaction between the electron and hole constituents can polarize adjacent excitons, which in turn screens the Coulomb binding and leads to a reduction in the binding energy. The opposite trends of the charge- and exciton-density dependence of the _1s_-_2s_ energy difference suggest that our measured redshift under the increased exciton density provides a lower bound for this mutual screening effect among the neutral excitons.
To quantitatively examine the neutral-exciton mutual screening effect, we numerically solve the Schrödinger equation of exciton binding
\[\left(-\frac{\hbar^{2}}{2\mu}\nabla^{2}+V(r)\right)\varphi_{1}=E_{1}\varphi_{ 1}\]
with the well-established Rytova-Keldysh form[39, 40]\((V(r)=-\frac{e^{2}}{8\varepsilon_{0}r_{0}}\left[H_{0}\left(\frac{\kappa r}{r_{0}}\right)-Y_{0}\left(\frac{\kappa r}{r_{0}}\right)\right])\) which describes the screened Coulomb interaction in two-dimensional geometry with parameters of dielectric constant (\(\kappa\)) and screening length (\(r_{0}\)). The screening length \(r_{0}\) accounts for both the in-plane electric polarizability of the pristine monolayer TMD and that from the polarizability of the neutral exciton bath (Supplementary Note 6). Hence, we fix the effective reduced mass (\(\mu=0.27m_{e}\)) and dielectric constant (\(\kappa=4.5\)) according to ref[41] while setting the screening length as the sole variable to reflect the change of dielectric environment with exciton density.
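A minimal finite-difference sketch of this calculation is given below; it is an illustration under stated assumptions (uniform radial grid, s-wave sector, energies in eV and lengths in nm), not the authors' numerical method, and the constants follow the reduced mass and dielectric constant quoted above.

```python
import numpy as np
from scipy.special import struve, y0
from scipy.linalg import eigh_tridiagonal

HBAR2_2MU = 0.0381 / 0.27   # hbar^2 / (2 mu) in eV nm^2, with mu = 0.27 m_e
COUL = 1.44                 # e^2 / (4 pi eps0) in eV nm

def keldysh(r, r0, kappa=4.5):
    """Rytova-Keldysh potential -e^2/(8 eps0 r0) [H0(kr/r0) - Y0(kr/r0)] in eV."""
    x = kappa * r / r0
    return -(np.pi / 2) * (COUL / r0) * (struve(0, x) - y0(x))

def binding_energies(r0, n_states=2, rmax=30.0, n=4000):
    """Lowest s-wave eigenvalues (eV) of the 2D relative-motion problem.
    Uses chi(r) = sqrt(r) psi(r): -HBAR2_2MU (chi'' + chi/(4 r^2)) + V chi = E chi."""
    r = np.linspace(rmax / n, rmax, n)
    h = r[1] - r[0]
    diag = 2 * HBAR2_2MU / h**2 - HBAR2_2MU / (4 * r**2) + keldysh(r, r0)
    off = -HBAR2_2MU / h**2 * np.ones(n - 1)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select="i", select_range=(0, n_states - 1))

for r0 in (4.0, 4.25):      # nm, roughly the range discussed below
    E1s, E2s = binding_energies(r0)
    print(f"r0 = {r0} nm: E_1s = {E1s:.3f} eV, E_2s = {E2s:.3f} eV, split = {E2s - E1s:.3f} eV")
```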
The solution indicates that the binding energies of the ground state and the excited-state excitons are influenced by the screening length to different extents, as summarized in Fig.3(a). It is noted that the energy shift of the higher excited state is less sensitive to the variation of the screening length, namely \(\frac{\Delta E_{2s}}{\Delta r_{0}}<\frac{\Delta E_{1s}}{\Delta r_{0}}\). This can be understood from the fact that the wavefunction of the _A-1s_ state has a much smaller radius than that of the _A-2s_ state (c.f. inset in Fig.1(d)), and is consequently influenced more by the change of screening length in the Keldysh potential (Supplementary Note 8). By fitting our exciton-density
Figure 2: Charge-density dependent photoluminescence spectra of (a) _A-1s_ and _AT-1s_ and (b) _A-2s_ and _B-1s_ excitons at 10K. The PL intensities of _A-2s_ state and _B-1s_ are magnified by 200X for better comparison. (c) The peak energy of _A-1s_, _AT-1s_, _B-1s_, _A-2s_ states as a function of the electron and hole density. (d) The energy difference \(\Delta E\) between _A-1s_ and _A-2s_ excitons displays a clear blue-shift at the elevated electron/hole density, contrasting with the red-shift at the increased exciton density. The inset figure shows the schematic device structure in the carrier-density dependent PL measurement. The charge density is tuned by an electrostatic gating via the _h-BN_ cap layer (~20nm thick).
dependent PL results (Fig.1(d)) with the model, we can extract the screening length as a function of the exciton density as shown in Fig.3(b). (Supplementary Note 7) It implies that a change of the exciton density by \(\sim\)\(4.5\times 10^{11}cm^{-2}\) effectively tunes the screening length by \(\Delta r_{0}\sim 0.25\)nm at \(r_{0}\)\(\sim 4.2nm\).
The screening length is linearly proportional to the in-plane electric polarizability of the pristine monolayer TMD and that of the injected neutral exciton bath: \(r_{0}=\frac{1}{2\pi\kappa\varepsilon_{0}}(\alpha_{2D}^{MoSe_{2}}+n\alpha_{2D}^{exciton})\), from which we could extract the pristine 2D electric polarizability of monolayer MoSe\({}_{2}\): \(\alpha_{2D}^{MoSe_{2}}=3.28\times 10^{-19}\left(\frac{C}{V}\right)\), and the 2D exciton polarizability \(\alpha_{2D}^{exciton}=2.55\times 10^{-17}eV\left(\frac{m}{V}\right)^{2}\). This in-plane exciton polarizability agrees well with the previous calculations [32] and experimentally measured values from the nonlinear Stark effect in applied electric field [33, 34, 42], which further confirms that the exciton-density dependent screening effect originates from the exciton polarizability.
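Schematically, the extraction amounts to a straight-line fit of the fitted screening length against the exciton density; the sketch below illustrates the procedure with placeholder numbers (not the measured data), and the conversion of the intercept and slope into the two polarizabilities then follows from the prefactor convention adopted for \(r_{0}\).

```python
import numpy as np

# Placeholder (n, r0) pairs for illustration only -- not the measured values.
n_exc = np.array([0.5, 1.5, 2.5, 3.5, 4.5]) * 1e11   # exciton density, cm^-2
r0    = np.array([4.22, 4.28, 4.33, 4.39, 4.45])     # fitted screening length, nm

slope, intercept = np.polyfit(n_exc, r0, 1)
print(f"r0(n=0) = {intercept:.2f} nm  (pristine-monolayer term)")
print(f"dr0/dn  = {slope:.2e} nm cm^2 (per-exciton term)")
```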
Our results show that, in analogy to the carrier screening effect (Fig.3(c)-middle) which modifies the effective Coulomb potential, neutral excitons can also induce a screening effect with a relatively modest but clearly visible contribution (Fig.3(c)-right). Fig.3(b) plots the calculated binding energies of _A-1s_ and _A-2s_ excitons as functions of the exciton density. Together with the measured _A-1s_ and _A-2s_ exciton resonances, we can also extract the bandgap renormalization as a function of the exciton density, which is well consistent with the photoinduced bandgap renormalization measurements [25, 26] and calculations [43, 44].
Figure 3: (a) The screening length fitted from the experimental data as a function of the exciton density. Inset: the calculated binding energy variation of _A-1s_, _A-2s_ exciton states as functions of the screening length (\(r_{0}\)). (b) The extracted binding energy change of _A-1s_, _A-2s_ and bandgap renormalization as functions of exciton density. (c) Schematics of charge screening and exciton polarization. The dashed line
represents the charge or dipole screening effect. A more accurate description can be found in the supplementary information.
## Discussion
The mutual screening of neutral excitons effectively realizes a repulsive interaction between these composite bosons. From the linear dependence on exciton density (c.f. Fig. 1 and 3), we can write the exciton Hamiltonian as \(H_{X}=\sum_{l,k}E_{l,k}(n)\hat{\mathcal{A}}_{l,k}^{\dagger}\hat{\mathcal{A}}_{l,k},\ E_{l,k}(n)=E_{l,k}(0)+\eta_{l}n\), where \(E_{l,k}(0)\) is the exciton dispersion of Rydberg state \(l\), and \(n=(\sum_{l,k}\hat{\mathcal{A}}_{l,k}^{\dagger}\hat{\mathcal{A}}_{l,k})\) is the exciton density (predominantly the 1s exciton). \(\eta_{l}\) are the slopes of the dashed lines in Fig. 3(b), which correspond to the interaction strength of an exciton in Rydberg state \(l\) with the bath of 1s excitons. In a mean-field description, this repulsive interaction can be written as \(H_{int}=\sum_{l,k}\eta_{l,1s}\hat{\mathcal{A}}_{l,k}^{\dagger}\hat{\mathcal{A}}_{l,k}\left(\sum_{k}\hat{\mathcal{A}}_{1s,k}^{\dagger}\hat{\mathcal{A}}_{1s,k}\right)\). Our measurements determined the effective interaction strength of 1s excitons \(\eta_{1s,1s}=1.44\times 10^{-11}meV\cdot cm^{2}\), and between 1s and 2s excitons \(\eta_{2s,1s}=1.66\times 10^{-12}meV\cdot cm^{2}\). In the literature, Coulomb exchange is generally considered as the dominant cause for the repulsive part of the interaction between neutral excitons [1, 35]. We find that \(\eta_{1s,1s}\) from this mutual screening is one order of magnitude larger than that of the exchange exciton-exciton interaction (c.f. Supplementary Note 9 and Fig.S8). So in monolayer TMDs, this mutual screening mechanism becomes the dominant contribution to the repulsive part.
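As a quick order-of-magnitude check (our own arithmetic, not a calculation from the paper), the difference of the two interaction strengths times the exciton density should reproduce the shrinkage of the 1s-2s splitting reported in Fig. 1(d):

```python
eta_1s1s = 1.44e-11    # meV cm^2
eta_2s1s = 1.66e-12    # meV cm^2
n_exc    = 4.5e11      # cm^-2, highest exciton density studied

# Mean-field change of the 2s-1s splitting: (eta_2s - eta_1s) * n
print((eta_2s1s - eta_1s1s) * n_exc)   # about -5.7 meV, same order as the measured ~-5.5 meV
```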
In summary, we investigate the energy difference between excitonic Rydberg energies in monolayer MoSe\({}_{2}\) as a function of charge and exciton densities by photoluminescence spectroscopy. This energy difference reflects the exciton binding energy and is immune to the band gap renormalization. The contrasting dependence of the energy difference between the _A-1s_ and _A-2s_ states on the exciton density (negative slope) _v.s._ charge density (positive slope) supports that the screening effect originates from the polarizability of the neutral excitons. With the model of the Rytova-Keldysh potential, the exciton polarizability is extracted to be \(2.55\times 10^{-17}eV\left(\frac{m}{V}\right)^{2}\) up to the exciton density of \(4.5\times 10^{11}cm^{-2}\), well consistent with previous experimental results from the nonlinear Stark effect in an in-plane electric field. This mutual screening effect between the excitons effectively manifests as a new many-body interaction between the tightly bound excitons in the two-dimensional geometry. This finding calls for a microscopic formulation of the exciton-exciton interaction beyond the existing framework that treats the Rydberg orbitals as rigid ones.
## Materials and Methods
### Crystal growth:
Bulk MoSe\({}_{2}\) crystals are grown using the chemical vapor transport (CVT) method. Silica tubes are loaded with Mo powder (99.9%), a slight excess of Se ingot (99.999%), and a small amount of iodine as the transport agent. The tubes are then evacuated and sealed. Next, the silica tubes are placed in the reaction zone at 950 \({}^{\circ}\)C and the growth zone at 900 \({}^{\circ}\)C. After a duration of fifteen days, large-sized bulk MoSe\({}_{2}\) crystals are obtained in the cold zone.
### Sample preparation:
The monolayer MoSe\({}_{2}\) and few-layer h-BN are mechanically exfoliated onto a Si substrate with a 285 nm SiO\({}_{2}\) film. Subsequently, the monolayer MoSe\({}_{2}\) is encapsulated by h-BN using the dry transfer method. (43) For the device utilized in the carrier density-dependent PL measurement, the dry-transfer method is employed to stack different 2D materials in the sequence of h-BN/Graphite/MoSe\({}_{2}\)/h-BN/Graphite. Additionally, few-layer graphite is employed to establish better contact between the sample and the Au electrode, as illustrated in Figure S1.
### Exciton density dependent PL measurement:
PL spectroscopy was performed with a home-made confocal microscopy system, using a solid state 532nm continuous-wave laser (_Excelisor 532, Spectra-Physics_). Signals were collected in the reflection configuration via a notch filter and dispersed by a spectrograph (_Shamrock 193_) prior to detection with a built-in EMCCD (_Andor_). A half-wave plate and a polarized beam splitter cube were used to change the excitation power automatically with a program to avoid artificial factors.
## Acknowledgments
The work was supported by the National Key R&D Program of China (2020YFA0309600), Guangdong-Hong Kong Joint Laboratory of Quantum Matter and the University Grants Committees/Research Grants Council of Hong Kong SAR (AoE/P-701/20, 17300520). K.W. and T.T. acknowledge support from the Elemental Strategy Initiative conducted by the MEXT, Japan (Grant Number JPMXP0112101001) and JSPS KAKENHI (Grant Numbers 19H05790, 20H00354 and 21H05233). R.D and Z.L. acknowledge support from the Singapore Ministry of Education Tier 3 Programme "Geometrical Quantum Materials" AcRF Tier 3 (MOE2018-T3-1-002), AcRF Tier 2 (MOE2019-T2-2-105). The authors thank Mr. Mingyang Liu, Dr. Bairen Zhu and Mr. Huiyuan Zheng for fruitful discussion.
|
2308.04171
|
Core interface optimization for multi-core neuromorphic processors
|
Hardware implementations of Spiking Neural Networks (SNNs) represent a
promising approach to edge-computing for applications that require low-power
and low-latency, and which cannot resort to external cloud-based computing
services. However, most solutions proposed so far either support only
relatively small networks, or take up significant hardware resources, to
implement large networks. To realize large-scale and scalable SNNs it is
necessary to develop an efficient asynchronous communication and routing fabric
that enables the design of multi-core architectures. In particular the core
interface that manages inter-core spike communication is a crucial component as
it represents the bottleneck of Power-Performance-Area (PPA) especially for the
arbitration architecture and the routing memory. In this paper we present an
arbitration mechanism with the corresponding asynchronous encoding pipeline
circuits, based on hierarchical arbiter trees. The proposed scheme reduces the
latency by more than 70% in sparse-event mode, compared to the state-of-the-art
arbitration architectures, with lower area cost. The routing memory makes use
of asynchronous Content Addressable Memory (CAM) with Current Sensing
Completion Detection (CSCD), which saves approximately 46% energy, and achieves
a 40% increase in throughput against conventional asynchronous CAM using
configurable delay lines, at the cost of only a slight increase in area. In
addition as it radically reduces the core interface resources in multi-core
neuromorphic processors, the arbitration architecture and CAM architecture we
propose can be also applied to a wide range of general asynchronous circuits
and systems.
|
Zhe Su, Hyunjung Hwang, Tristan Torchet, Giacomo Indiveri
|
2023-08-08T10:00:14Z
|
http://arxiv.org/abs/2308.04171v1
|
# Core interface optimization for multi-core neuromorphic processors
###### Abstract
Hardware implementations of Spiking Neural Networks (SNNs) represent a promising approach to edge-computing for applications that require low-power and low-latency, and which cannot resort to external cloud-based computing services. However, most solutions proposed so far either support only relatively small networks, or take up significant hardware resources, to implement large networks. To realize large-scale and scalable SNNs it is necessary to develop an efficient asynchronous communication and routing fabric that enables the design of multi-core architectures. In particular the core interface that manages inter-core spike communication is a crucial component as it represents the bottleneck of Power-Performance-Area (PPA) especially for the arbitration architecture and the routing memory. In this paper we present an arbitration mechanism with the corresponding asynchronous encoding pipeline circuits, based on hierarchical arbiter trees. The proposed scheme reduces the latency by more than 70% in sparse-event mode, compared to the state-of-the-art arbitration architectures, with lower area cost. The routing memory makes use of asynchronous Content Addressable Memory (CAM) with Current Sensing Completion Detection (CSCD), which saves approximately 46% energy, and achieves a 40% increase in throughput against conventional asynchronous CAM using configurable delay lines, at the cost of only a slight increase in area. In addition as it radically reduces the core interface resources in multi-core neuromorphic processors, the arbitration architecture and CAM architecture we propose can be also applied to a wide range of general asynchronous circuits and systems.
Multi-core neuromorphic processors, core interface, arbitration architecture, asynchronous CAM
## I Introduction
Neuromorphic processors are event-based processing architectures that adopt in-memory computing strategies and brain-inspired principles of computation to implement computational models of Spiking Neural Networks (SNNs) [1]. Due to their asynchronous and spike-based data-driven processing nature, they have the potential of achieving ultra-low power computations for edge-computing applications. An efficient way to build large-scale SNN processing systems, from both the modeling and implementation perspective, is to adopt a multi-core architecture design approach [2, 3, 4, 5, 6]. In these systems each core consists of a neuro-synaptic array comprising digital or mixed-signal synapse and neuron soma circuits, and an asynchronous digital core interface. The core interface is responsible for receiving input events and delivering them to the target synapses, and for transmitting the soma output spikes to target synapses and neurons, within the same core, or across multiple cores.
The most common communication protocol used to transmit spikes from source neurons to destination ones in neuromorphic systems is based on the Address-Event Representation (AER) [7]. Figure 1 shows how the parallel output events from the neurons in a neuron core are encoded and time-multiplexed on a shared digital bus to provide support for inter-core and inter-chip communication. An arbiter in the core output interface ("Arb" in Fig. 1) is used to manage potential collisions from multiple coincident neuron requests, and to grant access to the data bus to one neuron at a time. The routing memory in the core input interface ("Mem" in Fig. 1) is used as a Look Up Table (LUT) for storing and configuring the neural connections. A LUT can also be present in the
Fig. 1: AER communication pipeline: each time a neuron spikes its address is encoded and transmitted on a shared bus using asynchronous circuits. Collisions (potential parallel spikes) are managed through asynchronous arbitration circuits, and neural network connectivity schemes are programmed via local memory Look Up Tables (LUT).
output interface, depending on the different routing methods adopted. Another type of memory that is commonly used in neuromorphic processors is the Content Addressable Memory (CAM), as its in-memory search operations can be instrumental for the network routing, when used directly in the synapse arrays of the neuromorphic cores.
Despite recent improvements in ultra-low power neuron designs [8] and high performance data packet switches [9], the optimization of the neuromorphic core interfaces remains a daunting task. This is especially true for the arbitration architecture, when it includes the encoding pipeline (as is the case presented here), and the routing memory. Both these elements represent the most important factors for the Power-Performance-Area (PPA) bottleneck of multi-core neuromorphic processors, as a function of neural network size. For example, in the neuromorphic processors proposed in [6], the power consumption of the arbiter and routing memory takes up more than 80% of the total power budget.
### Contributions of this work
In this work we substantially reduce the core interface hardware overhead for multi-core neuromorphic processors, by designing a novel asynchronous arbitration architecture in the core output interface and a new asynchronous CAM architecture in the core input interface. Specifically, the work presented:
* provides a new arbitration mechanism based on a hierarchical arbiter tree (HAT) and its asynchronous encoding pipeline circuits.
* compares the new arbitration architecture to other existing arbitration architectures.
* provides a new asynchronous CAM architecture based on CSCD, with feedback control and speculative sense.
* presents custom-designed CAM circuits, with comparisons to conventional asynchronous CAM circuits.
We show how this arbitration architecture has improved performance, compared to previously proposed arbitration schemes, with up to 78.3% lower latency figures and less area cost. To the best of our knowledge, no other asynchronous CAM architecture has been developed so far, which takes advantage of CSCD to perform robust search operations. The proposed CAM architecture achieves 40.4% throughput increase and 46.7% energy reduction with slight area increase compared to conventional asynchronous CAM arrays.
In the following section we discuss the background of arbiter and CAM circuits used in neuromorphic processors. In Section III we present the new arbitration architecture. Section IV presents the new CAM architecture, and in Section V we conclude the paper.
## II Background
This section reviews the background on arbitration architecture and asynchronous CAM. It introduces the existing arbitration architectures and the asynchronous CAM, which forms the foundation of the new work of this paper.
### _Arbitration schemes_
Purohit and Manohar [10] reviewed drawbacks and benefits of different arbitration approaches. Arbitrating and encoding a neuron's address based on a binary tree topology is suitable for applications with low event rates and small neuron cluster sizes, because the request only needs to propagate through \(\log_{2}(N)\) stages. However, area cost and latency become worse when the neuron cluster size increases, since the number of two-input arbiters increases linearly and every neuron's request has to propagate through the whole arbiter tree. The probability of grant overlapping also increases as the depth of the arbiter tree increases. The "greedy tree" represents an improvement to the original binary tree in situations where multiple input requests arrive within a very short time period [11]. But it suffers from strict timing requirements which restrict its use for general applications [11]. Both binary and greedy tree schemes have high power consumption, since each granted output of the arbiter drives \(\log_{2}(N)\) address lines of the logarithmic encoder.
Another approach is to use an arbitration mechanism with a ring-based topology. This approach can quickly service a burst of localized events but becomes worse when sparse events are far apart in space, because the token has to travel a long distance for each input request when requests are sparse. Purohit and Manohar [10] propose a hierarchical token ring (HTR) method which can service sparse events like a binary tree and quickly scan through a section of the array like a linear token ring. But it needs to change the number of processes in the rings and the number of levels of hierarchy to tailor the design to different application scenarios, which is difficult to implement in a dynamic neuromorphic system since the neuron firing rates change dynamically. The high area cost of HTR also makes it hard to scale up the neuron core's size since the number of two-input arbiters also increases linearly as the number of neurons increases.
Here, we present a new arbitration mechanism based on multiple small arbiter trees and the circuit implementation of the corresponding asynchronous encoding pipeline, which has the lowest latency compared with all of the other arbitration architectures when the events are sparse. In burst event mode, HAT achieves similar performance to HTR and the token ring, but only needs \(\log_{2}(N)\) two-input arbiters, which keeps its area cost low. Since HAT only uses multiple small arbiter trees, it reduces the risk of grant overlapping in deep arbiter trees and makes the architecture more robust.
### _Cam_
CAM cells have been widely used as a way to accelerate the search operation in large LUTs, due to their single-cycle parallel search operation abilities [12]. Neuromorphic processors usually use CAMs in addressable synapses to increase the flexibility of network mapping, especially when the network has sparse connectivity [6]. Various CAM design approaches have been previously introduced. The CAM architecture based on the NOR-type CAM cell (reliable and fast) [12] and current-race match-line sense amplifier (MLSA) is widely used. This sensing scheme pre-charges the match-line (ML) low and evaluates
the ML state by charging the ML with a current supplied by a current source. The benefits of this scheme over the precharge-high schemes are the simplicity of the threshold circuitry and the extra savings in search-line (SL) power due to the elimination of the SL precharge phase, and also the absence of a charge-sharing problem [13]. In every CAM cell, in addition to a 6T-SRAM cell, there are three transistors for bit comparison. When the stored data and the search data on the SL are the same (the MATCH case), the ML pull-down path (ML to GND) is disconnected and the ML can be charged until the MLSA generates a pulse as an input spike to the target neuron. On the other hand, when the stored data and search data are opposite (the MISMATCH case), the ML pull-down path is formed and the ML cannot be charged. The Off signal from the dummy CAM entry shown in Fig. 6 terminates the current source in every MLSA; the dummy entry is designed to be "always MATCH" with the worst case (assumed to be the last one to produce a MATCH signal). The numerous switchings of the ML in the MATCH case and the direct current flow in the pull-down path in the MISMATCH case come at the cost of large dynamic power consumption. Moreover, for event-driven neuromorphic processors, designing a robust asynchronous CAM architecture without sacrificing performance and energy efficiency is another challenge, which remains an open problem in the field of asynchronous circuits.
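Functionally, the NOR-type search reduces to a parallel tag comparison: an entry's match line can only charge when no bit forms a pull-down path. The toy behavioural model below (ours, not a circuit description from the paper) captures just this logic.

```python
def cam_search(entries, search_word):
    """entries: list of stored bit-tuples (tags); search_word: bit-tuple.
    Returns indices whose match line would charge (MATCH), i.e. all bits equal."""
    return [i for i, tag in enumerate(entries)
            if all(s == b for s, b in zip(tag, search_word))]

# Example: three 4-bit tags, searching for the source-neuron address 0b1010.
tags = [(1, 0, 1, 0), (1, 1, 1, 0), (1, 0, 1, 0)]
print(cam_search(tags, (1, 0, 1, 0)))   # -> [0, 2]
```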
Moradi _et al._[6] use the same CAM architecture described above as an asynchronous target memory with multiple tags. Each CAM entry (tag) represents the address of a source neuron that the target neuron is subscribed to. To minimize the area, this asynchronous CAM architecture is designed following a bundled-data style instead of a quasi-delay-insensitive (QDI) style. The search operation of the asynchronous CAM architecture follows a standard four-phase handshaking protocol to communicate with the handshake (HS) block. In order to guarantee the correct handshaking communication between the HS block and the CAM array, it is necessary to make two appropriate timing assumptions. The first one is that the presence of valid input data should be earlier than the request signal which is used to enable the search operation. This is the common timing constraint in the bundled-data design style, which is not difficult to satisfy. The second timing assumption is made when sending the acknowledge signal to the HS block, to ensure that the search operation in the whole CAM array is completed. This assumption represents a key challenge for this asynchronous CAM architecture, because of the mismatch of the current source circuits in the MLSA and of the different numbers of MISMATCH bits in the different CAM entries (which results in different ML wiring capacitance loads). These issues make it difficult to evaluate the time for completing the search operation and to make correct assumptions. As shown in Fig. 5(a), a configurable delay line is used to leave enough timing margin for finishing the whole search operation, which is a trade-off between performance and robustness. To avoid false negative errors, a high cycle time has to be paid, which becomes the bandwidth bottleneck in multi-core neuromorphic processors.
To address this problem we propose a novel asynchronous CAM architecture, which makes use of the CSCD technique to eliminate the second timing assumption. CSCD exploits the fact that the charging and discharging of the parasitic capacitance of internal nodes in digital circuits occur only when a signal is in transition, in order to determine the working state of the circuit. CSCD has already been used in asynchronous bundled-data pipeline circuits to take advantage of the cost-efficient characteristics of bundled-data design without suffering the disadvantages of PVT-sensitive matching delay cells [14]. The CSCD used in the CAM architecture we propose detects the change in current flow during the search operation and acts as an acknowledge signal generator. There are also two novel mechanisms in the new CAM architecture: feedback control and speculative sense, to significantly reduce the power consumption in the MATCH and MISMATCH cases respectively.
## III Proposed hierarchical arbiter tree
In this section, the new arbitration mechanism and corresponding asynchronous encoding pipeline circuits are presented, followed by the experimental results and discussion.
### _Arbitration Mechanism_
Figure 2 shows an example of 64 neurons encoded by a hierarchical arbitration mechanism; without any hierarchical arbitration, a deep 64-input arbiter tree would be needed to encode 64 neurons using 6 bits. Based on the HAT method, arbitration and encoding can be done 2 bits at a time. As shown in Fig. 2(a), the cluster with 16 neurons (highlighted in green) shares the pins Req[0] and Grant[0] of the high-level arbiter "ArbiterH" in Fig. 3, which is usually implemented by pull-down transistors and pull-up circuits to reduce area cost instead of using an OR gate tree [6, 11]. The neurons highlighted in Fig. 2(b) and Fig. 2(c) share the pins Req[0]/Grant[0] of the medium-level arbiter "ArbiterM" and the pins Req[0]/Grant[0] of the low-level arbiter "ArbiterL", respectively. The arbitration starts from the high-level arbitration. Only when the arbiter gives the grant to one of the four neuron clusters can the active neurons in that cluster send their requests to the medium-level arbiter. The operation
Fig. 2: Hierarchical arbitration mechanism schemes.
relationship between the medium-level arbiter and the low-level arbiter is the same. The arbiter will not give the grant to another cluster until all of the active neurons in the current cluster have been encoded.
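The behavioural sketch below (an illustration of the mechanism, not the circuit; the order in which requests within one level are granted is nondeterministic in hardware and is fixed here only for reproducibility) shows how a HAT-style arbiter drains all pending requests of a cluster before the higher-level grant moves on, with 64 neuron addresses split into three 2-bit fields.

```python
def hat_service_order(active):
    """active: iterable of neuron addresses (0..63) with pending requests.
    Returns one possible order in which a HAT-style arbiter encodes them."""
    pending, order = set(active), []
    while pending:
        hi = min(a >> 4 for a in pending)              # grant one 16-neuron cluster
        while any(a >> 4 == hi for a in pending):
            md = min((a >> 2) & 0x3 for a in pending if a >> 4 == hi)
            group = [a for a in pending if a >> 4 == hi and (a >> 2) & 0x3 == md]
            for a in sorted(group):                    # low-level (2-bit) arbitration
                order.append(a)
                pending.remove(a)
    return order

print(hat_service_order([3, 17, 2, 60, 19]))   # -> [2, 3, 17, 19, 60]
```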
### _Asynchronous encoding pipeline circuits_
Inspired by the high-capacity dynamic pipeline using static logic (static HC) in [15], HAT is implemented as shown in Fig. 3. Three-level hierarchical arbitration is shown as an example and every hierarchy level uses a low-cost four-input arbiter tree. The output of this asynchronous pipeline circuit is used as the LUT pointer index to get the routing data packet, or is directly sent to the Network-on-Chip (NoC). Four-phase QDI circuits are used here because they are more compatible with the neuron handshake circuits and more robust. The working flow is divided into four stages. The first stage is the "masking stage", as shown in Fig. 3(a), which adopts static logic to decouple the long-term handshaking protocol in the neuron handshake circuits from the rapid handshaking and release in the arbiter. Compared with [15], a C-element is added to make the handshaking strictly follow the four-phase handshake and be more robust. The completion detection (CD) block after the masking gate detects whether there are still active requests from the low level or medium level; if so, the circuit cannot release the medium-level grant or high-level grant respectively. This means the architecture does not need to encode the higher-level bits every time it handles lower-level data packet handshaking, which is more energy efficient.
The second stage is "arbitration stage", which provides a one-hot output to the third stage "first static HC pipeline". The one-hot output from the third stage is sent back as grant signals to reset arbiter's input, and at the same time it acts as input of the QDI encoder. The CD block after "first static HC pipeline" is used to test if the output is valid. To avoid grant overlapping problem of arbiter, CD block here is consisted of XOR gates instead of OR gates. The output data of QDI encoder will be sent to the last stage "second static HC pipeline", which merges the encoded data from three levels as a complete data packet including 6 bits that represents neuron address. The CD block after the "second static HC pipeline" is used to evaluate the complete data packet and acknowledge the previous stage.
The outputs of the three CD blocks after the QDI encoder of the "first static HC pipeline" are used to evaluate whether the data of the corresponding level is still valid and to reset the Ack generator if not, which deasserts the Ack signal to enable the "first static HC pipeline" and complete an entire cycle. As shown in Fig. 3(b), the CD block in the low level resets the Ack generator whenever a complete 6-bit data packet is captured by the "second static HC pipeline". The medium-level CD block and high-level CD block can only reset the Ack generator successfully when there is no valid neuron request at the lower level.
### _Timing Analysis_
The proposed arbitration architecture involves three pipeline-related timing constraints, the first two of which are directly transformed from those in the original static HC pipeline [15]. _(i)_ The first timing constraint is the hold timing of the pipeline register. The closing of latch should be earlier than data reset on the input channel, which can be simply satisfied since there is a round-trip communication to reset the input data.
_(ii)_ The second timing constraint is that the current register should be re-opened later than the input data reset. This timing constraint is also easy to satisfy because the re-open operation needs a complete four-phase handshake.
_(iii)_ The third timing constraint is related to the HAT mechanism and concerns the Ack generator. V_M and V_L should change faster than D_H, D_M and D_L, otherwise the Ack generator will be reset wrongly and re-open the "first static pipeline stage" too early. In practice, this timing constraint is simple to satisfy, since the CD block after the masking registers provides an output whenever the input data is valid, whereas the CD block after the QDI encoder needs to wait for the valid
Fig. 4: (a) Masking gate; (b) Ack generator.
Fig. 3: Asynchronous pipeline circuits.
data going through the arbiter, the first static HC pipeline and the encoder.
### _Experimental results and discussion_
Tables I, II and III show the comparison of theoretical calculation results and pre-layout results between HAT and the other arbitration architectures. All arbitration architecture designs are mapped using a 22FDX FDSOI standard cell library. Analog mutual exclusion elements (mutexes) in the two-input arbiters are implemented with a standard-cell equivalent version [16]. Gate sizes are decided by SPICE simulations and the set_dont_touch command is used during synthesis to avoid any optimization on them. The asynchronous sequential C-elements are implemented using combinational gates with feedback. We care more about latency than throughput because SNNs are more sensitive to temporal information. On the other hand, the neuron handshake circuit usually has several stages of pipeline buffers and the time interval between two neuron spikes is longer than the arbitration encoding time, which relaxes the requirement on throughput. All latency results are in typical operating conditions. The latency of the greedy tree in burst mode is not considered here because it highly depends on the response time of the neuron, for the same reason as in [10]. The synthesis tool flow is similar to [17]. We use the generic GTECH Synopsys library to implement the very low-level but technology-independent specification, which gives us full control over the gate-level logic function. During the synthesis, only gate sizing and buffer insertion are allowed. The set_max_delay command is applied to all of the timing paths in order to get high performance. The clock and reset paths have a higher weight of delay constraint during technology mapping, which is to avoid violations of minimum pulse width and hold time.
Two different cases (sparse events and full-frame burst events), similar to [10], are considered. A random event request from N neurons is selected and the latency is measured from the neuron request to the output request. In the asynchronous pipeline circuits we propose, the output request is the output signal of the last pipeline's CD block, which indicates that the complete neuron address data packet is valid. The average latency is measured over N neurons. For full-frame burst events, all neurons fire within a short time window, the start of which is taken as the starting point of the latency measurement; the latency is then measured between this starting point and the last output request signal. For the theoretical latency results, we assume that the latency of moving the handshake signal between two stages is small compared to the latency of handling the events, and all values are normalized by the two-input arbiter's latency.
## IV Proposed CAM architecture
Given the low performance and low energy efficiency of the conventional asynchronous CAM architecture, this section introduces a new CAM architecture with CSCD, together with the feedback control and speculative sense mechanisms in the MLSA.
### _CAM Architecture with CSCD_
Fig. 6 shows the difference between the conventional asynchronous CAM architecture and the new CAM architecture we propose. In the conventional asynchronous CAM architecture, the request signal from the handshake (HS) block is sent to the CAM array and the dummy CAM entry in parallel. Here we assume the request and acknowledge signals follow a four-phase handshake protocol. The always-on dummy CAM entry provides the MATCH signal whenever it receives the request signal; this is used as the Off signal to terminate charging the ML in all of the CAM entries and is also sent back to the HS block as an acknowledge signal after a configurable delay line. The delay here results in a high cycle time, and it also embodies a trade-off between performance and robustness. To solve this key challenge, we propose the CAM architecture with a CSCD block as presented in Fig. 6(b). The first concept of a CSCD sensor was published in [18]. The CSCD sensor is inserted between the logic function unit and the power supply to detect current flow. The sensor produces a low output when no current flows through the logic (i.e., the logic is not working), and produces a high output when the combinational logic transitions. Here we use the CSCD block to evaluate the current flowing through the CAM array during the search operation.
As shown in Fig. 7, the CSCD block includes the current sensing circuits and the HS circuits, which perform a four-phase handshake with the HS block of the CAM architecture. The complete working flow of this CAM architecture is as follows: (a) the HS block provides a request signal after the data is valid on SL and SLB to enable the search operation of the CAM array, and deasserts the reset signal of the register in the CSCD block; (b) the current sensing block evaluates the current flowing through the CAM array and generates a rising edge while the CAM array is performing the search operation; (c) after the Off signal from the dummy CAM entry terminates charging the ML, the falling edge from the current sensing circuits triggers the register to provide the acknowledge signal; (d) the HS block deasserts the request signal after it receives the acknowledge signal from the CSCD, which precharges the ML in the CAM array to GND and also resets the acknowledge signal in the CSCD block to finish the whole four-phase handshake. The current sensing circuits are basically similar to those in [14]. During the operation of precharging the ML to GND, the current change usually cannot make the current sensing circuits generate a pulse, since the duration of the current change is too short for the high-speed amplifier to detect. The same is true during CAM write operations.
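To make the ordering of steps (a)-(d) concrete, the short Python sketch below models one search cycle as an ordered event sequence. It is purely illustrative: the function name and event strings are hypothetical labels, not signals from the actual design.

```python
# Illustrative sketch (not from the paper): the ordering of handshake events
# in one search cycle of the CSCD-based CAM architecture, steps (a)-(d) above.

def search_cycle():
    """Yield the events of one four-phase handshake between HS block and CSCD."""
    # (a) HS block raises the request once SL/SLB data is valid; CSCD register reset is released.
    yield "hs_req_high"
    yield "cscd_reset_deasserted"
    # (b) search current flows through the CAM array; current-sensing output rises.
    yield "sense_out_rising"
    # (c) dummy entry asserts Off, ML charging stops, sense output falls and clocks the register.
    yield "dummy_off"
    yield "sense_out_falling"
    yield "cscd_ack_high"
    # (d) HS block drops the request, MLs are precharged to GND, CSCD ack is reset.
    yield "hs_req_low"
    yield "ml_precharge"
    yield "cscd_ack_reset"

if __name__ == "__main__":
    events = list(search_cycle())
    # the acknowledge must come after the search has started and the Off signal has stopped it
    assert events.index("cscd_ack_high") > events.index("sense_out_rising")
    assert events.index("hs_req_low") > events.index("cscd_ack_high")
    print(" -> ".join(events))
```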
### _Feedback control and speculative sense_
In order to reduce the dynamic power during the search operation, we propose the mechanisms of feedback control and speculative sense in the MLSA. Fig. 8 shows the CAM array based on NOR-type CAM cells and the current-race MLSA with
Fig. 5: The scalability of latency.
Fig. 6: (a) Conventional asynchronous CAM architecture; (b) Asynchronous CAM architecture with CSCD.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{c|}{**Normalized area cost**} \\ \cline{2-4} & _Number of two-input arbiter_ & _N=64_ & _N=256_ \\ \hline Binary tree & \(N-1\) & 63 (72.3) & 255 (277.4) \\ \hline Greedy tree & \(N-1\) & 63 (83.4) & 255 (286.7) \\ \hline Token-ring & \(N\) & 64 (79.1) & 256 (272.5) \\ \hline Hier-ring & \(N+2\sqrt{N}\) & 80 (89.2) & 288 (296.3) \\ \hline Hier-tree & \(3\log_{4}N\) & 9 (59.4) & 12 (192.4) \\ \hline \end{tabular}
\end{table} TABLE III: Normalized area cost
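The arbiter-count column of Table III can be reproduced directly from the closed-form expressions listed there; the small Python sketch below evaluates them for N=64 and N=256 (the bracketed normalized-area figures come from synthesis and are not recomputed).

```python
import math

# Two-input-arbiter counts from the closed-form expressions in Table III.
arbiter_count = {
    "Binary tree": lambda n: n - 1,
    "Greedy tree": lambda n: n - 1,
    "Token-ring":  lambda n: n,
    "Hier-ring":   lambda n: n + 2 * round(math.sqrt(n)),
    "Hier-tree":   lambda n: 3 * round(math.log(n, 4)),
}

for name, count in arbiter_count.items():
    print(f"{name:12s} N=64: {count(64):4d}   N=256: {count(256):4d}")
```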
Fig. 7: CSCD block.
feedback control and speculative sense, which has n CAM entries and 10 CAM cells in each CAM entry. Feedback control and speculative sense reduce power in the MATCH and MISMATCH cases, respectively. In the MATCH case, the current source in the MLSA charges the ML until its voltage is higher than the threshold of transistor T0, which makes the MLSA generate a high output. We use this output signal as a feedback signal to shut off the current source. The feedback control mechanism lets the MLSA sense adaptively and terminate charging by itself when the CAM entry is a MATCH, so there is no need to wait for the Off signal from the dummy CAM entry; this typically reduces the voltage swing on the ML by around 40% in the MATCH case.
In the MISMATCH case, the direct current flowing through the ML pull-down path causes significant dynamic power consumption. The basic idea here is still to shut off the current source as soon as possible. The gate voltage of the tail transistor PD in every CAM cell changes quickly after the data on SL and SLB becomes valid, which can be used for early detection of whether the data bit in the CAM cell is a MATCH or a MISMATCH. We add one extra pin, sen_n, to every CAM cell as shown in Fig. 8 to sense the matching status before the search operation (before Req arrives). The signal from the sense node goes through the OR gate in the MLSA and directly shuts off the current source if the corresponding CAM cell is a MISMATCH. If the CAM entry has hundreds of bits, we can also extract only the last several sense nodes close to the MLSA for speculative sense, which reduces the wire routing effort in layout. Assuming the input data is random and every CAM entry has N bits, the probability that a mismatching entry has MISMATCH bits among its last n bits is \(\frac{2^{N}-2^{N-n}}{2^{N}-1}\). As in the example presented in Fig. 8, extracting the last 3 bits of a 10-bit CAM entry gives an 87.6% probability of shutting off the current source in advance when the CAM entry is a MISMATCH and the test vectors are uniformly random.
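As a sanity check on the probability expression above (whose closed form is reconstructed here from the quoted 87.6% figure), the following few lines of Python evaluate it and compare it with a brute-force enumeration over all match/mismatch patterns; the function names are ours.

```python
# Probability that a MISMATCH entry has at least one mismatching bit among its
# last n bits, assuming independent uniformly random bits.
from itertools import product

def prob_closed_form(N, n):
    # reconstructed closed form: (2^N - 2^(N-n)) / (2^N - 1)
    return (2**N - 2**(N - n)) / (2**N - 1)

def prob_brute_force(N, n):
    # enumerate all mismatch patterns (1 = mismatching bit), condition on >= 1 mismatch
    patterns = [p for p in product((0, 1), repeat=N) if any(p)]
    hits = [p for p in patterns if any(p[-n:])]
    return len(hits) / len(patterns)

print(prob_closed_form(10, 3))   # ~0.876, i.e. the 87.6% quoted in the text
print(prob_brute_force(10, 3))   # identical value from exhaustive enumeration
```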
To make sure the CSCD block can detect the current flow change under different matching cases, it is necessary to find the worst case for the CSCD block, i.e., the case in which the current flow change is smallest. Four different matching cases are analyzed here: a) all of the CAM entries are MATCH, so all of the MLSAs can adaptively shut off their current sources through the feedback control mechanism; b) all of the CAM entries have MISMATCH bits in the last three CAM cells, so all of the MLSAs can shut off their current sources through the speculative sense mechanism before the request signal arrives, and the current flow change in this case is due only to the charging current in the dummy CAM entry; c) the MISMATCH bits in all of the CAM entries occur only in the first eight CAM cells, so a direct current in the pull-down path from VDD to GND exists in every CAM entry and cannot be eliminated by the speculative sense mechanism; d) random matching cases. Based on the simulation results, the current flow change is smallest when all of the CAM entries have MISMATCH bits in the last three CAM cells. Although this case barely occurs in real applications, it is important to make sure the CSCD block has enough margin to generate the acknowledge signal in this case.
### _Timing Analysis_
The CSCD block eliminates the second timing constraint introduced in Section II and acknowledges the HS block after the CAM array finishes the search operation. In addition to the first timing constraint, which requires the request signal to arrive later than the valid data, there are two timing constraints related to the HS circuits in the CSCD block. One is the minimum pulse width of the clock signal, which corresponds to the duration of the voltage Vs change and the time interval between two search operations. As introduced above, the minimum duration of the Vs change occurs in the case where all of the CAM entries have MISMATCH bits in the last three CAM cells; even in this case, the pulse width of the current sensing circuits' output is much longer than the minimum pulse width of the clock signal. The time interval between two search operations is also longer than the minimum pulse width of the clock signal in practice. The other timing constraint is the minimum pulse width of the reset signal, which is also simple to satisfy for the same reason.
### _Experimental results and discussion_
Evaluations are now presented for the new asynchronous CAM architecture. Results are obtained for two fully customized asynchronous CAM arrays with 16 CAM entries and 512 CAM entries, respectively, mapped to a 22FDX FDSOI process library. The proposed CAM architecture is compared to the conventional asynchronous CAM architecture of [6], used as the baseline without CSCD, feedback control, and speculative sense, in terms of performance, power and area. Each CAM entry has 11 bits. The CAM array with 16 CAM entries is shown in Fig. 9. Both CAM architectures use the same four-phase HS block. We carefully add dummy cells on the request signal path so that the request signal sees a slightly higher capacitive load than SL and SLB, which satisfies the first timing constraint introduced in Section II. Based on multiple Monte Carlo simulation results, the PD transistors in the dummy CAM entry are sized 20% larger than the PD transistors in the other CAM entries, which makes the charging of the dummy CAM entry slower than that of the other MATCH CAM entries. An 8-bit configurable delay line is used in the conventional asynchronous CAM architecture to fine-tune the delay. We start from 0 delay
Fig. 8: CAM Array with feedback control and speculative sense.
and increase it incrementally until there is no error signal; the resulting delay is usually 30% higher than the delay from request signal generation to the dummy CAM entry output. The sense nodes of the last three CAM cells are extracted for the speculative sense mechanism since they are closest to the MLSA.
_Cycle time:_ Unlike the evaluation of the arbiters in Section III, for the CAM architecture we care more about throughput, since the configurable delay line causes large latency in the asserting and deasserting stages of the four-phase handshake. Fig. 10 shows the average cycle time of the conventional CAM architecture, the proposed CAM architecture with CSCD only, the proposed CAM architecture with feedback control or speculative sense, and the complete proposed CAM architecture; this can be translated directly into throughput performance. The average cycle time is obtained from many random search operations in the typical operating condition, where the input data and the initial contents stored in the CAM array are random but identical across the different CAM architectures. The complete proposed CAM architecture shows improvements at both the 16x11 and 512x11 design points: 35.5% and 40.4%, respectively, over the conventional design. The performance improvement is larger for the bigger CAM array, since the configurable delay line must have a higher delay as the CAM array size increases. In contrast, the CSCD is much less affected by array size, since it provides a high acknowledge signal as soon as it detects that the CAM array has finished the search operation. The result validates the benefits of CSCD. An interesting property of CSCD is that it also benefits from feedback control and speculative sense, as presented in Fig. 10, since terminating charging earlier makes the current return to zero sooner, which reduces the delay in providing the acknowledge signal from the CSCD block. More importantly, assuming that a configurable delay line always has a higher delay than the search operation is not robust because of device mismatch. The asynchronous CAM architecture with CSCD eliminates the trade-off between performance and robustness introduced in Section II.
_Area:_ Post-layout areas are compared for the baseline vs. the new CAM architecture at both design points. The final layout area is estimated by summing up all of the cell areas, including the CSCD and HS blocks. For the 16x11 CAM architecture, the baseline design has an area of 225.3 \(\mu m^{2}\), while the new CAM architecture occupies 245.5 \(\mu m^{2}\), an 8.9% area increase. The area increase comes from the CSCD block and the OR gate in the MLSA, but there is no area increase in the CAM cell even though we add one extra pin, which is important for scaling up the CAM array size. For the 512\(\times\)11 CAM architecture, the baseline and new designs have areas of 7242.1 \(\mu m^{2}\) and 7620.6 \(\mu m^{2}\), respectively, so the area overhead of the new approach becomes smaller: only 5.2%.
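The quoted overhead percentages follow directly from the reported layout areas; a quick check (values copied from the text, rounded to one decimal) is given below.

```python
# Area overhead of the new CAM architecture over the baseline, from the
# post-layout areas (um^2) quoted in the text.
designs = {"16x11": (225.3, 245.5), "512x11": (7242.1, 7620.6)}
for name, (baseline, proposed) in designs.items():
    print(f"{name}: {(proposed - baseline) / baseline * 100:.1f}% area overhead")
# prints ~9.0% (reported as 8.9% in the text) and 5.2%
```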
_Energy consumption:_ This section reports the average energy consumption of both CAM architectures at the 512x11 design point when all of the CAM entries are MATCH, when all of the CAM entries are MISMATCH, and for random data searches. Although the first two extreme cases barely occur in neuromorphic processors, they are considered here to isolate the power savings contributed by the different mechanisms. The MISMATCH bits are distributed randomly within the 11-bit CAM entry.
As shown in Fig. 11, only feedback control and CSCD contribute to energy saving when all of the CAM entries are MATCH, in which case the energy turns out to be 35.8% lower than the baseline architecture. In the all-MISMATCH case, the new CAM architecture, taking advantage of speculative sense, shows a 40.2% energy reduction. The newly designed CAM architecture based on CSCD, combined with feedback control and speculative sense, results in a 46.7% energy saving when the CAM array is given random input data, which makes it the most energy-efficient design choice.
Fig. 10: Average search cycle time.
Fig. 9: Layout of a CAM array with 16 CAM entries.
## V Conclusion
We proposed a HAT arbitration architecture to meet the demands of low-cost core interfaces for multi-core neuromorphic processors. In particular, we presented the encoding pipeline circuits in the core output interface and novel asynchronous CAM circuits based on a CSCD block, used in the core input interface. We showed that the latency of the new arbitration architecture is reduced by up to 78.3% for sparse event operation, with a lower area cost than alternative state-of-the-art arbitration architectures. The proposed asynchronous CAM architecture achieves a 40.4% increase in throughput thanks to CSCD, and a 46.7% energy saving due to the feedback control and speculative sense mechanisms. Current sensing circuits with lower sensing latency and power consumption are a direction to explore in the future, for example using a current-mirror amplifier with local positive feedback to reduce latency. Another challenge is to design suitable power rails for the digital CAM array and the analog CSCD circuits, minimizing cross-talk and area cost.
## Acknowledgment
The authors would like to thank Davide Bertozzi and Steven M. Nowick for the training of asynchronous circuits design flow and the insight of robust asynchronous memory and thank Tugba Demirci for the guidance on CAM circuit design.
|
2303.06414
|
Finsler manifolds with Positive Weighted Flag Curvature
|
The flag curvature is a natural extension of the sectional curvature in
Riemannian geometry. However there are many non-Riemannian quantities which
interact with the flag curvature. In this paper, we introduce a notion of weighted
flag curvature by modifying the flag curvature with the non-Riemannian
quantity, T-curvature. We show that a proper open forward complete Finsler
manifold with positive weighted flag curvature is necessarily diffeomorphic to
the Euclidean space.
|
Zhongmin Shen, Runzhong Zhao
|
2023-03-11T14:42:26Z
|
http://arxiv.org/abs/2303.06414v2
|
# Finsler manifolds with Positive Weighted Flag Curvature
###### Abstract
The flag curvature is a natural extension of the sectional curvature in Riemannian geometry. However, there are many non-Riemannian quantities which interact with the flag curvature. In this paper, we introduce a notion of weighted flag curvature by modifying the flag curvature with the non-Riemannian quantity, \(T\)-curvature. We show that a proper open forward complete Finsler manifold with positive weighted flag curvature is necessarily diffeomorphic to the Euclidean space.
**Keywords:**
**MR(2000) subject classification:** 53C60, 53B40
## 1 Introduction
The problem of the interrelation between the local metric properties and the global topology or global geometry of a manifold has long been one of the important topics in Riemannian geometry. The most prominent results include the Gauss-Bonnet theorem, the Hadamard-Cartan theorem, Myers' theorem, the sphere theorem, the Bishop-Gromov volume comparison, and many more (see e.g. [1][9]). We are particularly interested in the Gromoll-Meyer theorem, which states that a complete open Riemannian manifold \(M\) with positive sectional curvature \(K>0\) must be diffeomorphic to \(R^{n}\). For Finsler manifolds, the flag curvature is a natural extension of the sectional curvature in Riemannian geometry. One wonders whether the Gromoll-Meyer theorem still holds for Finsler manifolds with positive flag curvature. The study of this problem will lead to a better understanding of the flag curvature. It turns out that this problem is more sophisticated than it first appears, since the Finsler structure is controlled not only by the flag curvature but also by some non-Riemannian quantities. For example, the \(S\)-curvature introduced in [13] was used in combination with the Ricci curvature to give diameter bounds of Finsler manifolds and a Bishop-Gromov type volume comparison theorem [8][13]. It was also used in Wu's construction of a weighted flag curvature, from which he proved a comparison theorem for the Laplacian of distance functions on Finsler manifolds ([17]).
In this paper, we will use the non-Riemannian quantity, known as the \(T\)-curvature[14] to modify the flag curvature, then establish the Gromoll-Meyer theorem for Finsler manifolds. As seen in Lemma 3.1 below, the \(T\)-curvature is closely related to the Hessian of distance functions on Finsler manifolds. Since the Hessian of distance functions gives the normal curvature of level surfaces of distance functions, it would not be surprising that the \(T\)-curvature plays an important role in the study of the geometry of hypersurfaces in Finsler manifolds. Another situation where the \(T\)-curvature naturally comes into play is the study of variations on Finsler manifolds. Thus a good understanding of this quantity will allow us to incorporate many useful variational techniques in Riemannian geometry into Finsler geometry.
We shall define a weighted flag curvature \(K^{\alpha}\) by modifying the flag curvature with the \(T\)-curvature by (3). We prove the following
**Theorem 1.1**: _Let \((M,F)\) be a positively complete, open, proper Finsler manifold. Assume that \(K^{\alpha}>0\) for some \(\alpha>0\); then \(M\) is diffeomorphic to \(R^{n}\)._
The weighted flag curvature condition \(K^{\alpha}>0\) is reduced to the sectional curvature condition \(K>0\) when the Finsler metric is Riemannian. Thus Theorem 1.1 generalizes the Gromoll-Meyer theorem.
This notion of flag curvature weighted by the \(T\)-curvature might have other applications. For example, it is known that a bound for this weighted flag curvature gives an estimate of the Hessian of distance functions; and a positive lower bound of this weighted flag curvature gives a diameter bound of the manifold. Thus the weighted flag curvature deserves further study.
## 2 Preliminaries
A _Minkowski norm_\(F\) on a vector space \(V\) is a nonnegative function satisfying
* \(F\) is \(C^{\infty}\) on \(V\setminus\{0\}\);
* \(F\) is positively homogeneous of degree \(1\), in the sense that \(F(tv)=tF(v)\) for all \(t\geq 0\) and \(v\in V\);
* \(F\) is strongly convex, in the sense that the matrix \(g_{ij}(v)=\frac{1}{2}\left[F^{2}\right]_{v^{i}v^{j}}(v)\) is positive definite for all \(v\neq 0\).
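Two standard examples, not taken from this paper, may help fix the definition. If \(a_{ij}\) is a positive definite symmetric matrix, then
\[F(v)=\sqrt{a_{ij}v^{i}v^{j}},\qquad g_{ij}(v)=\frac{1}{2}\left[F^{2}\right]_{v^{i}v^{j}}=a_{ij},\]
is a Minkowski norm whose fundamental tensor does not depend on \(v\) (the Euclidean/Riemannian case). A genuinely non-reversible example is the Randers norm
\[F(v)=\sqrt{a_{ij}v^{i}v^{j}}+b_{i}v^{i},\qquad\|b\|_{a}<1,\]
which satisfies the three conditions above but has \(F(-v)\neq F(v)\) unless \(b=0\).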
A _Finsler metric_\(F\) on a manifold \(M\) is a function on the tangent bundle \(TM\), \(C^{\infty}\) on the slit tangent bundle \(TM\setminus 0\) whose restriction on each tangent space \(T_{x}M\) is a Minkowski norm. Note that given a nowhere vanishing vector field \(Y\) on \(U\subset M\), \(g_{Y}\) defines a Riemannian metric on \(U\).
For a Finsler metric \(F\) on an \(n\)-dimensional manifold \(M\), the length of a \(C^{\infty}\) curve \(c:[a,b]\to M\) is
\[L(c):=\int_{a}^{b}F(c^{\prime}(t))dt\]
and the "distance" \(d(p,q)\) from a point \(p\in M\) to another point \(q\in M\) is defined to be the infimum of the lengths of all piecewise smooth curves \(c:[a,b]\to M\) with \(c(a)=p\) and \(c(b)=q\). We shall note that a Finsler metric is in general not reversible, i.e., \(F(v)\neq F(-v)\) for a general vector \(v\in TM\). As a consequence, the distance \(d\) is in general not symmetric. Geodesics are characterized in local coordinates by
\[\frac{d^{2}\gamma^{i}}{dt^{2}}+2G^{i}\left(\gamma(t),\frac{d\gamma}{dt}\right)=0\]
where
\[G^{i}=\frac{1}{4}g^{il}\left\{\left[F^{2}\right]_{x^{k}y^{l}}y^{k}-\left[F^{2 }\right]_{x^{l}}\right\}\]
are called _geodesic spray coefficients_ with \(\left(g^{ij}\right)\) being the inverse matrix of \(\left(g_{ij}\right)\). A Finsler metric on \(M\) is said _positively complete_ if every geodesic \(\gamma:(a,b)\to M\) can be extended to a geodesic \(\gamma:(a,\infty)\to M\).
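For orientation (a standard computation, not spelled out in the paper), when \(F\) is Riemannian, say \(F(x,y)^{2}=a_{ij}(x)y^{i}y^{j}\), the spray coefficients reduce to
\[G^{i}(x,y)=\frac{1}{2}\,\tilde{\Gamma}^{i}_{jk}(x)\,y^{j}y^{k},\]
where \(\tilde{\Gamma}^{i}_{jk}\) are the Christoffel symbols of \(a_{ij}\), so the equation above becomes the usual Riemannian geodesic equation \(\ddot{\gamma}^{i}+\tilde{\Gamma}^{i}_{jk}\dot{\gamma}^{j}\dot{\gamma}^{k}=0\).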
Using geodesic spray coefficients \(G^{i}\), we may define
\[N^{i}_{j}:=\frac{\partial G^{i}}{\partial y^{j}}\]
which is sometimes called the _nonlinear connection_ in the literature. The second-order partial derivatives of \(G^{k}\), namely \(\Gamma^{k}_{ij}:=\frac{\partial^{2}G^{k}}{\partial y^{i}\partial y^{j}}\), are known as the coefficients of the _Berwald connection_. Hence the _covariant derivative_ \(D\) of the Berwald connection is given in local coordinates by
\[D_{y}U=\left[dU^{i}(y)+U^{j}\Gamma^{i}_{jk}(x,y)y^{k}\right]\frac{\partial}{ \partial x^{i}}\]
where \(y\in T_{x}M\) and \(U\in\Gamma^{\infty}(TM)\).
The _Riemann curvature_\(\mathbf{R}_{y}=R^{i}_{\ k}\frac{\partial}{\partial x^{i}}\otimes dx^{k}:T_{x}M\to T _{x}M\) is given by
\[R^{i}_{\ k}=2\frac{\partial G^{i}}{\partial x^{k}}-\frac{\partial^{2}G^{i}}{ \partial x^{l}\partial y^{k}}y^{l}+2G^{l}\frac{\partial^{2}G^{i}}{\partial y^ {k}\partial y^{l}}-\frac{\partial G^{i}}{\partial y^{l}}\frac{\partial G^{l} }{\partial y^{k}}\]
in local coordinates. In the case when \(F\) is a Riemannian metric, where \(\left(g_{ij}\right)\) depends on \(x\in M\) only, we have \(R^{i}_{\ k}=R^{\ i}_{\ j\ kl}y^{j}y^{l}\) with \(R^{\ i}_{\ j\ kl}\) being the components of the Riemannian curvature tensor.
There are many non-Riemannian quantities in Finsler geometry. Among them are the Cartan torsion
\[C_{ijk}=\frac{1}{4}\left[F^{2}\right]_{y^{i}y^{j}y^{k}}\]
which characterizes Euclidean metrics, the Landsberg curvature
\[L_{ijk}=-\frac{1}{2}y^{l}g_{lm}\frac{\partial^{3}G^{m}}{\partial y^{i}\partial y^{j}\partial y^{k}}\]
which measures the change of the Cartan torsion along geodesics. In the following section, we shall focus on another non-Riemannian quantity, the T-curvature, which is needed to define the weighted flag curvature.
## 3 T-curvature
For a vector \(y\in T_{x}M\setminus\{0\}\), let \(Y\) be a geodesic field such that \(Y_{x}=y\). Let \(\hat{g}:=g_{Y}\) and \(\hat{D}\) denote the Levi-Civita connection of \(\hat{g}\). Define
\[T_{y}(v):=g_{y}(D_{v}V-\hat{D}_{v}V,y)\]
where \(V\) is a vector field with \(V_{x}=v\). In local coordinates,
\[T_{y}(v)=y^{l}g_{kl}\Big{\{}\Gamma^{k}_{jm}(x,v)-\Gamma^{k}_{jm}(x,y)\Big{\}}v ^{j}v^{m}.\]
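As an immediate sanity check on this formula (a remark we add, not from the paper): for a Berwald metric, and in particular for any Riemannian metric, the coefficients \(\Gamma^{k}_{jm}(x,y)=\Gamma^{k}_{jm}(x)\) do not depend on \(y\), so the bracket vanishes and
\[T_{y}(v)=y^{l}g_{kl}(y)\Big{\{}\Gamma^{k}_{jm}(x)-\Gamma^{k}_{jm}(x)\Big{\}}v^{j}v^{m}=0\]
for all \(y\) and \(v\); in this case the two Hessians in Lemma 3.1 below coincide.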
We define the _Hessian_ of a function \(f\) in the direction \(v\) as \(H^{2}f(v):=\frac{d^{2}}{dt^{2}}f(\gamma(t))|_{t=0}\) where \(\gamma:(-\varepsilon,\varepsilon)\to M\) is a geodesic with \(\gamma^{\prime}(0)=v\). The \(T\)-curvature is a quantity that relates the Hessian of a distance function \(\rho\) with respect to \(F\) and that with respect to the induced Riemannian metric \(g_{\nabla\rho}\).
**Lemma 3.1**: _([12]) For a distance function \(\rho=\rho(x)\) on \((M,F)\), we have_
\[H^{2}\rho(v)=\hat{H}^{2}\rho(v)-T_{\nabla\rho}(v). \tag{1}\]
_where \(\nabla\rho\) is the gradient of the function \(\rho\), \(H^{2}\rho\) denotes the Hessian of \(\rho\) with respect to \(F\) and \(\hat{H}^{2}\rho\) the Hessian of \(\rho\) with respect to \(\hat{g}:=g_{\nabla\rho}\)._
The T-curvature has the following properties
* \(T_{\lambda y}(v)=\lambda T_{y}(v),\,\forall\lambda>0\) and \(\forall v\in T_{x}M\setminus\{0\}\),
* \(T_{y}(\lambda v)=\lambda^{2}T_{y}(v),\,\forall\lambda>0\) and \(\forall v\in T_{x}M\setminus\{0\}\),
* \(T_{y}(y)=0\),
* \(\lim_{v\to 0}T_{y}(v)=0\).
The T-curvature is closely related to the Berwald curvature \(B^{k}_{jml}\) given by
\[B^{k}_{jml}:=\frac{\partial\Gamma^{k}_{jm}}{\partial y^{l}}(x,y)=\frac{ \partial^{3}G^{k}}{\partial y^{j}\partial y^{m}\partial y^{l}}(x,y).\]
Let
\[B^{k}_{jmls}:=\frac{\partial B^{k}_{jml}}{\partial y^{s}}=\frac{\partial^{4}G ^{k}}{\partial y^{j}\partial y^{m}\partial y^{l}\partial y^{s}}(x,y).\]
\[B^{k}_{jmlst}:=\frac{\partial B^{k}_{jmls}}{\partial y^{t}}=\frac{\partial^{5}G^{k}}{\partial y^{j}\partial y^{m}\partial y^{l}\partial y^{s}\partial y^{t}}(x,y).\]
By homogeneity,
\[B^{k}_{jml}y^{l}=0,\hskip 14.226378ptB^{k}_{jmls}y^{s}=-B^{k}_{jml}, \hskip 14.226378ptB^{k}_{jmlst}y^{t}=-2B^{k}_{jmls}.\]
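These three identities are instances of Euler's theorem for positively homogeneous functions,
\[y^{s}\frac{\partial f}{\partial y^{s}}=(\deg f)\,f,\]
applied to \(\Gamma^{k}_{jm}\), \(B^{k}_{jml}\) and \(B^{k}_{jmls}\), which are positively homogeneous of degrees \(0\), \(-1\) and \(-2\) in \(y\), respectively, since each \(y\)-derivative of the degree-\(2\) function \(G^{k}\) lowers the degree by one.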
Note that
\[L_{lst}:=-\frac{1}{2}y_{k}B^{k}_{lst}.\]
Let \(u\) be such that \(g_{y}(y,u)=0\). We have
\[T_{y}(y+su) = y_{k}B^{k}_{jml}y^{j}y^{m}u^{l}s+\frac{1}{2}y_{k}B^{k}_{jmlt}y^{j}y^{m}u^{l}u^{t}s^{2}+y_{k}B^{k}_{jml}u^{l}(y^{j}u^{m}+y^{m}u^{j})s^{2}\] \[+\frac{1}{6}y_{k}B^{k}_{jmlst}u^{l}u^{s}u^{t}y^{j}y^{m}s^{3}+\frac{1}{2}y_{k}B^{k}_{jmlt}u^{l}u^{t}(y^{j}u^{m}+y^{m}u^{j})s^{3}+y_{k}B^{k}_{jml}u^{l}u^{j}u^{m}s^{3}+o(s^{3})\] \[= \frac{1}{3}y_{k}B^{k}_{lst}u^{l}u^{s}u^{t}s^{3}+o(s^{3})=-\frac{2}{3}L_{lst}u^{l}u^{s}u^{t}s^{3}+o(s^{3}).\]
We obtain the following
**Lemma 3.2**: _For any \(u\in T_{x}M\) with \(g_{y}(y,u)=0\),_
\[T_{y}(y+su)=-\frac{2}{3}L_{y}(u,u,u)s^{3}+o(s^{3}).\]
_Hence_
\[L_{y}(u,u,u)=-\frac{3}{2}\lim_{s\to 0}\frac{T_{y}(y+su)}{s^{3}}.\]
In the sequel by an extension of a vector \(v\in T_{\gamma(0)}M\) along a curve \(\gamma:(-\varepsilon,\varepsilon)\to M\) we will mean a (smooth) vector field \(V(t)\) along \(\gamma\) with \(V(0)=v\).
Denote by \(\gamma_{y}(t)\) the geodesic with \(\gamma_{y}^{\prime}(0)=y\). Let \(E(t)\) be a parallel extension of \(v\) along \(\gamma_{y}\). Define \(\dot{T}_{y}(v)\) by
\[\dot{T}_{y}(v):=\left.\frac{d}{dt}\left[T_{\gamma_{y}^{\prime}(t)}(E(t)) \right]\right|_{t=0}.\]
The following properties and lemma follow from those of the \(T\)-curvature.
* \(\dot{T}_{\lambda y}(v)=\lambda^{2}\dot{T}_{y}(v)\), \(\forall\lambda>0\) and \(\forall v\in T_{x}M\setminus\{0\}\),
* \(\dot{T}_{y}(\lambda v)=\lambda^{2}\dot{T}_{y}(v)\), \(\forall\lambda>0\) and \(\forall v\in T_{x}M\setminus\{0\}\),
* \(\dot{T}_{y}(y)=0\).
**Lemma 3.3**: _For any \(u\in T_{x}M\) with \(g_{y}(y,u)=0\),_
\[\dot{T}_{y}(y+su)=-\frac{2}{3}\dot{L}_{y}(u,u,u)s^{3}+o(s^{3}).\]
_Hence_
\[\dot{L}_{y}(u,u,u)=-\frac{3}{2}\lim_{s\to 0}\frac{\dot{T}_{y}(y+su)}{s^{3}}.\]
_Here \(\dot{L}\) is the derivative of the Landsberg curvature along a geodesic,_
\[\dot{L}_{y}(u,u,u):=\left.\frac{d}{dt}L_{\gamma_{y}^{\prime}(t)}(U(t),U(t),U(t) )\right|_{t=0}\]
_where \(\gamma_{y}\) is the geodesic with \(\gamma_{y}^{\prime}(0)=y\) and \(U\) is a parallel extension of \(u\) along \(\gamma_{y}\)._
Observe that \(L_{y}(u,u,u)\) is positively homogeneous of degree \(1\) in \(y\).
In practice, \(\dot{T}\) will be calculated in local coordinates using a general extension (not necessarily parallel) \(V(t)\) of \(v\), by
\[\dot{T}_{y}(v)=\left.\left(\frac{d}{dt}\left[T_{\gamma_{y}^{\prime}(t)}(V(t)) \right]-\frac{\partial T_{y}(v)}{\partial v^{i}}D_{\gamma^{\prime}}(V(t))^{i} \right)\right|_{t=0}.\]
In full detail, we have
\[\dot{T}_{\gamma_{y}^{\prime}(t)}(V(t)) = \dot{\gamma_{y}}^{\ l}g_{kl}(\gamma_{y},\dot{\gamma_{y}})\left\{ \frac{\partial\Gamma_{jm}^{k}}{\partial\gamma_{y}s}(\gamma_{y},V)\dot{\gamma_ {y}}^{\ s}-\frac{\partial\Gamma_{jm}^{k}}{\partial y^{s}}(\gamma_{y},V)N_{t}^ {s}(\gamma_{y},\dot{\gamma_{y}})V^{t}\right.\] \[\left.-\frac{\partial\Gamma_{jm}^{k}}{\partial\gamma_{y}s}( \gamma_{y},\dot{\gamma_{y}})\dot{\gamma_{y}}^{\ s}+2\frac{\partial\Gamma_{jm}^{k}}{ \partial y^{s}}(\gamma_{y},\dot{\gamma_{y}})G^{s}(\gamma_{y},\dot{\gamma_{y}}) \right\}V^{j}V^{m}\] \[+\dot{\gamma_{y}}^{\ l}g_{jl}\left\{\Gamma_{jm}^{k}(\gamma_{y},V) -\Gamma_{jm}^{k}(\gamma_{y},\dot{\gamma_{y}})\left\{-N_{s}^{j}V^{s}V^{m}-N_{s} ^{m}V^{s}V^{j}\right\}.\]
Hence
\[\dot{T}_{y}(v) = y^{l}g_{kl}(y)\left\{\frac{\partial\Gamma_{jm}^{k}}{\partial x^{s }}(v)y^{s}-\frac{\partial\Gamma_{jm}^{k}}{\partial y^{s}}(v)N_{t}^{s}(y)v^{t}\right.\]
\[-\frac{\partial\Gamma^{k}_{jm}}{\partial x^{s}}(y)y^{s}+2\frac{\partial \Gamma^{k}_{jm}}{\partial y^{s}}(y)G^{s}(y)\Bigg{\}}v^{j}v^{m}\] \[-2y^{l}g_{jl}\left\{\Gamma^{k}_{jm}(v)-\Gamma^{k}_{jm}(y)\right\} N^{j}_{s}(y)v^{s}v^{m}.\]
In particular, if \(V(t)=f(t)E(t)\) where \(E(t)\) is parallel, we have
\[\begin{split} f(t)^{2}\dot{T}_{\gamma^{\prime}(t)}(E(t))=\dot{T }_{\gamma^{\prime}(t)}(V(t))=&\frac{d}{dt}T_{\gamma^{\prime}(t)} (V(t))-\frac{\partial T_{\gamma^{\prime}}(V)}{\partial V^{i}}D_{\gamma^{ \prime}}(V)^{i}\\ =&\frac{d}{dt}T_{\gamma^{\prime}(t)}(V(t))-\frac{ \partial T_{\gamma^{\prime}}(V)}{\partial V^{i}}\frac{f^{\prime}(t)V^{i}(t)}{ f(t)}\\ =&\frac{d}{dt}T_{\gamma^{\prime}(t)}(V(t))-2\frac{f ^{\prime}(t)}{f(t)}T_{\gamma^{\prime}(t)}(V(t))\\ =&\frac{d}{dt}T_{\gamma^{\prime}(t)}(V(t))-2f^{ \prime}(t)f(t)T_{\gamma^{\prime}(t)}(E(t))\end{split} \tag{2}\]
Using the \(T\)-curvature we define a _weighted flag curvature_\(K^{\alpha}(y,v)\) for \(y\in T_{x}M\setminus\{0\}\) and \(v\in T_{x}M\setminus\mathrm{span}\{y\}\) by
**Definition 3.4**: _For a pair of linearly independent vectors \(y,v\in T_{x}M\), set_
\[K^{\alpha}(y,v):=\frac{1}{F(y)^{2}}\left[\frac{g_{y}(R_{y}(v),v)}{g_{y}(v^{ \perp},v^{\perp})}+\dot{T}_{y}(v)\sqrt{\frac{g_{y}(v,v)}{g_{y}(v^{\perp},v^{ \perp})^{3}}}-\alpha T_{y}^{2}(v)\frac{g_{y}(v,v)}{g_{y}(v^{\perp},v^{\perp}) ^{3}}\right] \tag{3}\]
_where \(v^{\perp}=v-g_{y}(v,y)y/F(y)^{2}\) is the orthogonal component of \(v\) relative to the span of \(y\), with respect to the inner product \(g_{y}\). We say \(K^{\alpha}\geq K\) (resp. \(K^{\alpha}>K\)) if for any \(y\in T_{x}M\setminus\{0\}\) and \(v\in T_{x}M\setminus\mathrm{span}\{y\}\),_
\[K^{\alpha}(y,v)\geq K(\mathrm{resp.}\ >K). \tag{4}\]
**Remark 3.5**: We shall remark that the curvature \(K^{\alpha}(y,v)\) as defined above depends not only on \(y\) and the plane \(\mathrm{span}\{y,v\}\), but also on the (direction of the) vector \(v\), in sharp contrast to the flag curvature \(K(y,v)\).
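As a consistency check (the computation is standard and not carried out in the paper), when \(F\) is Riemannian both \(T\) and \(\dot{T}\) vanish, \(g_{y}=g\) is independent of \(y\), and
\[K^{\alpha}(y,v)=\frac{g(R_{y}(v),v)}{F(y)^{2}\,g(v^{\perp},v^{\perp})}=\frac{g(R_{y}(v),v)}{g(y,y)g(v,v)-g(y,v)^{2}},\]
which is the sectional curvature of the plane \(\mathrm{span}\{y,v\}\); this is the reduction mentioned after Theorem 1.1.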
Lemmas 3.2 and 3.3 ensure that the condition (4) is meaningful; indeed, letting \(v=y+su\), where \(u\in T_{x}M\) is a \(g_{y}\)-unit vector orthogonal to \(y\) with respect to the inner product \(g_{y}\), and taking the limit \(s\to 0^{+}\), we have
\[K(y,u)-\frac{2}{3}\frac{\dot{L}_{y}(u,u,u)}{F(y)}-\frac{4\alpha}{9}L_{y}(u,u,u) ^{2}\geq K(\mathrm{resp.}\ >K)\]
for all \(u\in T_{x}M\) such that \(g_{y}(u,u)=1\) and \(g_{y}(y,u)=0\), where \(K(y,u)\) is the flag curvature of the flag \((\mathrm{span}\{y,u\},y)\).
## 4 Busemann function
Let \((M,F)\) be a positively complete Finsler manifold. For a point \(p\in M\), denote by \(S(p,t):=\{x\in M\mid d(p,x)=t\}\) the forward geodesic sphere of radius \(t\) centered at \(p\). We shall show that the functions
\[b_{p}^{t}(x):=t-d(x,S(p,t))\]
converge uniformly on compact sets to
\[b_{p}(x):=\lim_{t\to+\infty}b_{p}^{t}(x).\]
The function \(b_{p}\) is called the _Busemann function_ at \(p\). We say \((M,F)\) is _proper_ if \(b_{p}\) is a proper function at some point \(p\). In general, it is not easy to check if a Busemann function is proper. However, it is easy to show that for a positively complete Finsler manifolds with small ends,
\[\limsup_{r\to+\infty}\frac{\mathrm{Diam}(S(p,r))}{r}<1,\]
\(b_{p}\) is proper. Here \(\mathrm{Diam}(A):=\sup_{x,y\in A}d(x,y)\).
**Lemma 4.1**:
* \(b_{p}^{t}(x)\) _is bounded_ \[-d(x,p)\leq b_{p}^{t}(x)\leq d(p,x).\] (5)
* _for any_ \(d(p,x)\leq t_{1}\leq t_{2}\)_,_ \[b_{p}^{t_{1}}(x)\geq b_{p}^{t_{2}}(x).\] (6)
* _for any_ \(x_{1},x_{2}\in M\)_,_ \[-d(x_{1},x_{2})\leq b_{p}^{t}(x_{1})-b_{p}^{t}(x_{2})\leq d(x_{2},x_{1}).\] (7)
_Proof_: (a) Let \(z\in S(p,t)\) be such that \(d(x,S(p,t))=d(x,z)\). By the triangle inequality
\[b_{p}^{t}(x) = t-d(x,z)\] \[= d(p,z)-d(x,z)\] \[\leq d(p,x).\] \[b_{p}^{t}(x) = d(p,z)-d(x,z)\] \[\geq -d(x,p).\]
(b) Let \(x^{\prime}\in S(p,t_{2})\) such that \(d(x,x^{\prime})=d(x,S(p,t_{2}))\). Let \(\sigma:[0,a]\to M\) be a minimal geodesic from \(x\) to \(x^{\prime}\). Let
\[s_{o}:=d(x,S(p,t_{2}))-t_{2}+t_{1}.\]
We have \(0\leq s_{o}\leq d(x,S(p,t_{2}))\).
\[d(p,\sigma(s_{o}))\geq t_{2}-d(\sigma(s_{o}),S(p,t_{2}))=t_{2}-d(x,S(p,t_{2})) +s_{o}=t_{1}.\]
Thus \(\sigma(s_{o})\in M\setminus B(p,t_{1})\).
\[d(x,S(p,t_{1}))\leq d(x,\sigma(s_{o}))=s_{o}=d(x,S(p,t_{2}))-t_{2}+t_{1}.\]
Thus \(b_{p}^{t_{1}}(x)\geq b_{p}^{t_{2}}(x)\).
(c) Let \(z\in S(p,t)\) be such that \(d(x_{1},z)=d(x_{1},S(p,t))\).
\[b_{p}^{t}(x_{1})-b_{p}^{t}(x_{2}) = d(x_{2},S(p,t))-d(x_{1},S(p,t))\] \[\leq d(x_{2},z)-d(x_{1},z)\] \[\leq d(x_{2},x_{1}).\]
Thus
\[b_{p}^{t}(x_{2})-b_{p}^{t}(x_{1})\leq d(x_{1},x_{2}).\]
We obtain (7).
Q.E.D.
Therefore \(b_{p}^{t}\) converges to a function \(b_{p}\) uniformly on compact subsets. It follows from (7) that
\[-d(x_{1},x_{2})\leq b_{p}(x_{1})-b_{p}(x_{2})\leq d(x_{2},x_{1}). \tag{8}\]
**Lemma 4.2**: _Let \((M,F)\) be positively complete and \(p\in M\). For any point \(q\in M\), there is a ray \(\sigma_{q}:[0,+\infty)\to M\) issuing from \(q\) such that_
* _for all_ \(t>0\)_,_ \[b_{p}^{q,t}(x):=b_{p}(q)+t-d(x,\sigma_{q}(t)).\] _supports_ \(b_{p}(x)\) _at_ \(q\)_, i.e._ \(b_{p}^{q,t}(x)\leq b_{p}(x)\) _for all_ \(x\in M\) _and_ \(b_{p}^{q,t}(q)=b_{p}(q)\)_._
* _for all_ \(t\geq 0\)_,_ \[b_{p}(\sigma_{q}(t))=b_{p}(q)+t.\]
_Proof_: Take a sequence \(t_{n}\to+\infty\) and a sequence of points \(x_{n}\in S(p,t_{n})\) such that \(d(q,x_{n})=d(q,S(p,t_{n}))\). Take a normal minimal geodesic \(\sigma_{n}:[0,s_{n}]\to M\) from \(q\) to \(x_{n}\). Then \(\sigma_{n}\) converges to a ray \(\sigma_{q}:[0,\infty)\to M\). For sufficiently large \(t_{n}\),
\[d(q,S(p,t_{n}))=t+d(\sigma_{n}(t),S(p,t_{n})).\]
Observe that
\[b_{p}(x)-b_{p}^{q,t}(x) = b_{p}(x)-b_{p}(q)-t+d(x,\sigma_{q}(t))\] \[= \lim_{n\to+\infty}\Big{\{}[t_{n}-d(x,S(p,t_{n}))]-[t_{n}-d(q,S(p,t _{n}))]-t+d(x,\sigma_{q}(t))\Big{\}}\] \[\geq \lim_{n\to+\infty}\Big{\{}-d(x,\sigma_{n}(t))+d(x,\sigma_{q}(t)) \Big{\}}\] \[\geq \lim_{n\to+\infty}-d(\sigma_{q}(t),\sigma_{n}(t))=0.\]
This proves (a).
Observe that for any \(s>0\) and sufficiently large \(t_{n}\),
\[t_{n}-d(\sigma_{n}(s),S(p,t_{n}))=t_{n}-[d(q,S(p,t_{n}))-s].\]
\[-d(\sigma_{q}(s),\sigma_{n}(s))\leq d(\sigma_{n}(s),S(p,t_{n}))-d(\sigma_{q}(s ),S(p,t_{n}))\leq d(\sigma_{n}(s),\sigma_{q}(s)).\]
Letting \(n\to\infty\), we obtain
\[b_{p}(\sigma_{q}(s))=b_{p}(q)+s.\]
Q.E.D.
## 5 Smoothing Theorem
A function \(f:M\to{\bf R}\) is said to be _locally Lipschitz_ if it is Lipschitz on every compact subset \(K\subset M\); it is said to be _geodesically convex_ if it is convex along geodesics. Alternatively, geodesic convexity can be described in the following way. Let \(f\) be a continuous function defined in a neighborhood of \(p\in M\), \(v\in S_{p}M\) be a unit vector, and \(c:(-\varepsilon,\varepsilon)\to M\) be a geodesic with \(c(0)=p\) and \(c^{\prime}(0)=v\). Define
\[Cf(p,v):=\liminf_{r\to 0}\frac{1}{r^{2}}\Big{\{}f(c(r))+f(c(-r))-2f(c(0)) \Big{\}}.\]
It's easy to see that \(f\) is geodesically convex if and only if for each compact set \(K\subset M\), there is \(\lambda_{K}>0\) such that \(Cf(p,v)\geq\lambda_{K}\) for all \(p\in K\) and \(v\in T_{p}M\).
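For a \(C^{2}\) function this quantity is simply the second derivative along the geodesic: a Taylor expansion gives
\[f(c(r))+f(c(-r))-2f(c(0))=(f\circ c)^{\prime\prime}(0)\,r^{2}+o(r^{2}),\]
so that \(Cf(p,v)=(f\circ c)^{\prime\prime}(0)=H^{2}f(v)\) in the notation of Section 3; this is how the criterion will be applied in the proof of Lemma 6.2.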
The following is also readily seen from the above construction:
**Lemma 5.1**: _Let \(f\) be a continuous function defined in a neighborhood \(U\) of \(p\), \(\gamma:(-\varepsilon,\varepsilon)\to U\) be a unit speed geodesic with \(\gamma(0)=p\) and \(\gamma^{\prime}(0)=v\), and \(g:(-\varepsilon,\varepsilon)\to{\bf R}\) a continuous function which supports \(f\) along \(\gamma\) in the sense that \(g(0)=f(\gamma(0))\) and \(g(t)<f(\gamma(t))\) for all \(t\neq 0\). Then_
\[Cf(p,v)\geq Cg(p,v).\]
_In particular, if for any compact subset \(K\subset U\), the supporting functions \(g\) can be chosen for every \(p\in K\) and \(v\in S_{p}M\) so that \(g^{\prime\prime}(0)\) are uniformly bounded below by some \(\lambda_{K}>0\), then \(f\) is geodesically convex._
We have the following theorem for locally Lipschitz geodesically convex functions. For the proof we refer the reader to the appendix.
**Theorem 5.2**: _Let \(M\) be a Finsler manifold, and \(f:M\to{\bf R}\) be a locally Lipschitz geodesically convex function on \(M\). Given any \(\varepsilon>0\), there is a \(C^{\infty}\) geodesically convex function \(g:M\to{\bf R}\) such that \(|g-f|<\varepsilon\) on \(M\)._
Since Morse functions are dense in the space of \(C^{\infty}\) functions, the existence of a proper geodesically convex Morse function will follow given a proper geodesically convex function.
## 6 Proof of Theorem 1.1
In this section we prove Theorem 1.1, a generalization of the Gromoll-Meyer theorem to Finsler geometry. In view of the Morse theory, it suffices to show that there exists a smooth proper Morse function \(M\to{\bf R}\) with a unique critical point of index \(0\).
The main tools we will use are the following variation formulae.
**Theorem 6.1**: _([12]) Let \(\gamma:[a,b]\to M\) be a unit speed geodesic, and_
\[H:[a,b]\times(-\varepsilon,\varepsilon)\to M\]
_be a piecewise \(C^{\infty}\) variation of \(\gamma\), with variation field \(V\) along \(\gamma\). Denote by \(L(s)\) the length of the curve \(H(\cdot,s):[a,b]\to M\), we have_
\[L^{\prime}(0)=\int_{a}^{b}g_{\gamma^{\prime}}(D_{\gamma^{\prime}}V,\gamma^{ \prime})dt=g_{\gamma^{\prime}(b)}(V(b),\gamma^{\prime}(b))-g_{\gamma^{\prime} (a)}(V(a),\gamma^{\prime}(a))\]
\[L^{\prime\prime}(0)= \int_{a}^{b}\left[g_{\gamma^{\prime}}(D_{\gamma^{\prime}}V^{\perp},D_{\gamma^{\prime}}V^{\perp})-g_{\gamma^{\prime}}(R_{\gamma^{\prime}}(V^{\perp}),V^{\perp})\right]dt\] \[+\left[F(V(b))^{2}g_{\gamma^{\prime}(b)}(\kappa_{b}(0),\gamma^{\prime}(b))-F(V(a))^{2}g_{\gamma^{\prime}(a)}(\kappa_{a}(0),\gamma^{\prime}(a))\right]\] \[+\left[T_{\gamma^{\prime}(a)}(V(a))-T_{\gamma^{\prime}(b)}(V(b))\right]\]
_where \(V^{\perp}=V-g_{\gamma^{\prime}}(\gamma^{\prime},V)\gamma^{\prime}\) is the orthogonal component of \(V\) relative to the span of \(\gamma^{\prime}\), and \(\kappa_{t}\) is the geodesic curvature of the curve \(H(t,\cdot):(-\varepsilon,\varepsilon)\to M\), given by_
\[\kappa_{t}(s)=\frac{1}{F\left(\frac{\partial H}{\partial s}\right)^{2}}\left[ \frac{\partial^{2}H^{i}}{\partial s^{2}}+2G^{i}\left(\frac{\partial H}{ \partial s}\right)\right]\frac{\partial}{\partial x^{i}}\]
_in local coordinates._
Note in particular that \(\kappa_{t}(s)=0\) if \(H(t,\cdot)\) is a geodesic.
The key idea to prove Theorem 1.1 lies in
**Lemma 6.2**: _Let \(M\) be as in Theorem 1.1 and \(p\in M\) be a fixed point. Then there exists a \(C^{2}\) function \(\chi:{\bf R}\to{\bf R}\) such that \(\chi\circ b_{p}\) is a proper, geodesically convex function._
_Proof_. By properness, \(b_{p}\) is bounded from below. Let \(a=\inf_{x\in M}b_{p}(x)\). For \(r\geq a\), we define
\[P(r)=\inf\{K^{\alpha}(y,v)\mid y\in T_{x}M,b_{p}(x)\leq r+1\}.\]
This is well-defined because \(K^{\alpha}(y,v)\) extends to a continuous homogeneous function in \(y\) and \(v\) on the product of slit tangent bundle \(TM\setminus\{0\}\times TM\setminus\{0\}\) by lemmas 3.2 and 3.3. Let
\[Q(r)= \max\left\{8\left(1+\frac{1}{\alpha}\right)\frac{1}{P(r)},1\right\}\] \[\tilde{P}(r)= \inf\{K^{\alpha}(y,v)\mid y\in T_{x}M,r+1\leq b_{p}(x)\leq r+Q(r)\}\] \[K(r)= \inf\{K(y,v)\mid y\in T_{x}M,b_{p}(x)\leq r+Q(r)\}\] \[S(r)= \max\left\{\frac{1}{8}P(r),\frac{Q(r)K(r)}{3}-\frac{1}{Q(r)}, \frac{3}{8}P(r)-\left(\frac{3}{2\alpha}+1\right)\frac{1}{Q(r)}-\frac{Q(r)K(r)} {6}\right\}\]
and
\[\chi(t)=\int_{a}^{t}\exp\left(\int_{a}^{s}S(x)dx\right)ds.\]
So \(\chi\) is \(C^{2}\) on \([a,\infty)\) and
1. \(\chi^{\prime}(r)\geq 1\) for \(r\geq a\);
2. \(\chi^{\prime\prime}(r)=S(r)\chi^{\prime}(r)\) for \(r\geq a\).
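Both properties follow by differentiating the definition of \(\chi\) directly (a routine check):
\[\chi^{\prime}(t)=\exp\left(\int_{a}^{t}S(x)dx\right),\qquad\chi^{\prime\prime}(t)=S(t)\exp\left(\int_{a}^{t}S(x)dx\right)=S(t)\chi^{\prime}(t),\]
and \(\chi^{\prime}\geq 1\) because \(S\geq\frac{1}{8}P\geq 0\) by the definition of \(S\) and the assumption \(K^{\alpha}>0\).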
The first condition above, together with the properness of \(b_{p}\), shows that \(\chi\circ b_{p}\) is proper.
To show that \(\chi\circ b_{p}\) is geodesically convex, fix a point \(q\in M\) with \(b_{p}(q)=r\). As in lemma 4.2 take a geodesic ray \(\sigma_{q}(t)\) from \(q\) such that for any \(t>0\),
\[b_{p}^{q,t}(x)=b_{p}(q)+t-d(x,\sigma_{q}(t))\]
which supports \(b_{p}(x)\) at \(q\) and \(b_{p}(\sigma(t))=t+b_{p}(q)\). Let \(v\in T_{p}M\) be a \(g_{\sigma_{q}^{\prime}(0)}\)-unit tangent vector, and \(E(t)\) be its parallel extension along \(\sigma_{q}(t)\). We construct a variation \(H:[0,Q(r)]\times(-\varepsilon,\varepsilon)\to M\) of \(\sigma_{q}\) by requiring \(H(t,\cdot):(-\varepsilon,\varepsilon)\to M\) to be the geodesic with \(H(t,0)=\sigma_{q}(t)\) and
\[\frac{\partial}{\partial s}H(t,s)\bigg{|}_{s=0}=\left(1-\frac{t}{Q(r)}\right) E(t)\]
where \(\varepsilon\) is a small positive number so that these geodesics are well-defined. Denote by \(\gamma\) the geodesic \(H(0,\cdot)\), so that \(\gamma(0)=q\), \(\gamma^{\prime}(0)=v\).
Let \(c_{s}(t):=H(t,s)\) and
\[L(s):=\int_{0}^{Q(r)}F(c_{s}(t),c_{s}^{\prime}(t))dt.\]
We define a function \(f\) along the geodesic \(\gamma\) by
\[f\circ\gamma(s):=b_{p}(q)+Q(r)-L(s).\]
Since
\[L(s)\geq d(\gamma(s),\sigma_{q}(Q(r)))\]
we have
\[f(x)\leq b_{p}(q)+Q(r)-d(x,\sigma_{q}(Q(r)))=b_{p}^{q,Q(r)}(x)\leq b_{p}(x)\]
for all \(x\) on the geodesic \(\gamma\). With a little abuse of notation we will denote
\[df(v)=\left.\frac{\partial}{\partial s}f(\gamma(s))\right|_{s=0}\]
and
\[H^{2}f(v)=\left.\frac{\partial^{2}}{\partial s^{2}}f(\gamma(s))\right|_{s=0}\]
and similarly for \(\chi\circ f\).
By the first variation formula,
\[-df(v)=L^{\prime}(0)=-g_{\sigma_{q}^{\prime}(0)}(\sigma_{q}^{\prime}(0),v).\]
So we have
\[|df(v)|^{2}=1-\tau^{2}\]
where \(\tau=\sqrt{g_{\sigma_{q}^{\prime}(0)}(v^{\perp},v^{\perp})}\) with \(v^{\perp}=v-g_{\sigma_{q}^{\prime}(0)}(\sigma_{q}^{\prime}(0),v)\sigma_{q}^{ \prime}(0)\).
Let \(E^{\perp}\) be the parallel extension of \(v^{\perp}\) along \(\sigma_{q}\), and put
\[V^{\perp}(t)=\Big{(}1-\frac{t}{Q(r)}\Big{)}E^{\perp}(t).\]
Observe that \(V^{\perp}\) is the orthogonal component of the variation field of \(H\), relative to the span of \(\sigma_{q}^{\prime}\).
By the second variational formula
\[-H^{2}f(v)=L^{\prime\prime}(0) = \int_{0}^{Q(r)}\Big{\{}g_{\sigma_{q}^{\prime}(t)}\Big{(}D_{\sigma_{q}^{\prime}(t)}V^{\perp}(t),D_{\sigma_{q}^{\prime}(t)}V^{\perp}(t)\Big{)}-g_{\sigma_{q}^{\prime}(t)}\Big{(}\mathbf{R}_{\sigma_{q}^{\prime}(t)}(V^{\perp}(t)),V^{\perp}(t)\Big{)}\Big{\}}dt\]
\[+T_{\sigma^{\prime}_{q}(0)}(v)\] \[= \frac{1}{Q(r)}\tau^{2}-\int_{0}^{Q(r)}\left(1-\frac{t}{Q(r)}\right)^ {2}g_{\sigma^{\prime}_{q}(t)}\Big{(}{\bf R}_{\sigma^{\prime}_{q}(t)}(E(t)),E(t) \Big{)}dt+T_{\sigma^{\prime}_{q}(0)}(v).\]
Combining these we have
\[S(r)|df(v)|^{2}+H^{2}f(v) \tag{9}\] \[= S(r)(1-\tau^{2})-\frac{\tau^{2}}{Q(r)}+\int_{0}^{Q(r)}\left(1- \frac{t}{Q(r)}\right)^{2}g_{\sigma^{\prime}_{q}(t)}\left({\bf R}_{\sigma^{ \prime}_{q}(t)}(E(t)),E(t)\right)dt\] \[+\int_{0}^{Q(r)}\frac{d}{dt}T_{\sigma^{\prime}_{q}(t)}\left(\left( 1-\frac{t}{Q(r)}\right)E(t)\right)dt\] \[= S(r)(1-\tau^{2})-\frac{\tau^{2}}{Q(r)}+(1-\tau)\int_{0}^{Q(r)} \left(1-\frac{t}{Q(r)}\right)^{2}g_{\sigma^{\prime}_{q}(t)}\left({\bf R}_{ \sigma^{\prime}_{q}(t)}(E(t)),E(t)\right)dt\] \[+\int_{0}^{Q(r)}\left(1-\frac{t}{Q(r)}\right)^{2}\left[\tau g_{ \sigma^{\prime}_{q}(t)}\left({\bf R}_{\sigma^{\prime}_{q}(t)}(E(t)),E(t) \right)+\dot{T}_{\sigma^{\prime}_{q}(t)}(E(t))\right]dt\] \[-\int_{0}^{Q(r)}\frac{2}{Q(r)}\left(1-\frac{t}{Q(r)}\right)T_{ \sigma^{\prime}_{q}(t)}(E(t))dt\] \[\geq S(r)(1-\tau^{2})-\frac{\tau^{2}}{Q(r)}+(1-\tau)\int_{0}^{Q(r)} \left(1-\frac{t}{Q(r)}\right)^{2}K(r)\tau^{2}dt\] \[+\int_{0}^{Q(r)}\left(1-\frac{t}{Q(r)}\right)^{2}\alpha\frac{T_{ \sigma^{\prime}_{q}(t)}(E(t))^{2}}{\tau^{3}}dt+\int_{0}^{1}\left(1-\frac{t}{ Q(r)}\right)^{2}P(r)\tau^{3}dt+\int_{1}^{Q(r)}\left(1-\frac{t}{Q(r)}\right)^{2} \tilde{P}(r)\tau^{3}dt\] \[-\int_{0}^{Q(r)}\frac{2}{Q(r)}\left(1-\frac{t}{Q(r)}\right)T_{ \sigma^{\prime}_{q}(t)}(E(t))dt\] \[\geq S(r)(1-\tau^{2})-\frac{\tau^{2}}{Q(r)}+(1-\tau)\tau^{2}\frac{Q(r )}{3}K(r)+\int_{0}^{1}\left(1-\frac{t}{Q(r)}\right)^{2}P(r)\tau^{3}dt-\int_{0} ^{Q(r)}\frac{\tau^{3}}{\alpha}\frac{1}{Q(r)^{2}}dt\] \[\geq \left(\frac{1}{4}P(r)-\frac{1}{\alpha Q(r)}-\frac{Q(r)K(r)}{3} \right)\tau^{3}-\left(S(r)+\frac{1}{Q(r)}-\frac{Q(r)K(r)}{3}\right)\tau^{2}+S (r)\]
where for the second equality we used the relation (2) and for the next inequality we used the lower bounds of \(K^{\alpha}\) and \(K\).
Put
\[\Delta(\tau)=\left(\frac{1}{4}P(r)-\frac{1}{\alpha Q(r)}-\frac{Q(r)K(r)}{3} \right)\tau^{3}-\left(S(r)+\frac{1}{Q(r)}-\frac{Q(r)K(r)}{3}\right)\tau^{2}+S (r),\]
Our choices of \(Q(r),K(r)\) and \(S(r)\) imply that
\[\Delta(0)=S(r)\geq\frac{1}{8}P(r)\] \[\Delta(1)=\frac{1}{4}P(r)-\frac{1}{Q(r)}\left(1+\frac{1}{\alpha} \right)\geq\frac{1}{8}P(r)\] \[S(r)+\frac{1}{Q(r)}-\frac{Q(r)K(r)}{3}\geq 0,\quad 2\left(S+\frac{1}{Q(r) }-\frac{Q(r)K(r)}{3}\right)\geq 3\left(\frac{1}{4}P(r)-\frac{1}{\alpha Q(r)}- \frac{Q(r)K(r)}{3}\right)\]
We need the following trivial lemma
**Lemma 6.3**: _Let \(f(x)=ax^{3}+bx^{2}+c\) be a degree \(3\) polynomial. If \(-\frac{2b}{3a}\notin(0,l)\) or \(a<0\), then \(f|_{[0,l]}\) attains minimal value at either \(0\) or \(l\)._
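A short verification, omitted in the paper: since
\[f^{\prime}(x)=3ax^{2}+2bx=x\,(3ax+2b),\]
the critical points of \(f\) are \(x=0\) and (when \(a\neq 0\)) \(x^{*}=-\frac{2b}{3a}\). If \(x^{*}\notin(0,l)\), then \(3ax+2b\) has constant sign on \((0,l)\), so \(f\) is monotone there and its minimum on \([0,l]\) is attained at an endpoint; if \(a<0\) and \(x^{*}\in(0,l)\), then \(f^{\prime}\) changes sign from positive to negative at \(x^{*}\), so \(x^{*}\) is a local maximum and the minimum is again attained at an endpoint. (The degenerate case \(a=0\) is immediate.)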
By the above lemma, we get
\[H^{2}(\chi\circ f)(v)=\left(S(r)|df(v)|^{2}+H^{2}f(v)\right)\chi^{\prime}(f(q) )\geq\frac{P(r)}{8}>0\]
Note that for \(y,v\in T_{q}M\setminus\{0\}\), \(g_{y}(v,v)/F(v)^{2}\) is uniformly bounded on any compact subset of \(M\). Hence for any \(q\in b_{p}^{-1}((-\infty,r])\), and any \(F\)-unit speed geodesic \(\gamma(t)\) with \(\gamma(0)=q\), we have a uniform positive lower bound of \((\chi\circ f)^{\prime\prime}(0)\) with \(f\) constructed as above. It follows from lemma 5.1 that \(\chi\circ b_{p}\) is indeed geodesically convex on the manifold \(M\). Q.E.D.
**Remark 6.4**: We defined the weighted flag curvature with the coefficient of the \(\dot{T}\) term equal to \(1\), which turned out to be crucial in the proof. In fact, by possibly choosing an upper bound of the flag curvature \(K\), we would get an estimate similar to (9) starting from the condition
\[\frac{1}{F(y)^{2}}\left[\frac{g_{y}(R_{y}(v),v)}{g_{y}(v^{\perp},v^{\perp})}+ \beta\dot{T}_{y}(v)\sqrt{\frac{g_{y}(v,v)}{g_{y}(v^{\perp},v^{\perp})^{3}}}- \alpha T_{y}^{2}(v)\frac{g_{y}(v,v)}{g_{y}(v^{\perp},v^{\perp})^{3}}\right]>0.\]
However, with a generic \(\beta\neq 1\), we will end up with \(\Delta(1)=\frac{1}{4\beta}P(r)-\frac{A}{Q(r)}-BQ(r)\), where \(A\) is a constant independent of \(r\), and \(B\neq 0\) depends on \(\alpha\), \(\beta\) and the curvature bound \(K\) only. Consequently, no positive lower bound of \(\Delta(1)\) is guaranteed without further restriction of the flag curvature.
This is a consequence of the fact that the combination \(K(y,u)+\dot{T}_{y}(u)\) plays a crucial role in the second variation when \(u\) is orthogonal to \(y\) with respect to \(g_{y}\).
_Proof of Theorem 1.1._ Because \(b_{p}\) is Lipschitz and \(\chi\) is \(C^{2}\), the function constructed in Lemma 6.2 is locally Lipschitz. It follows from Theorem 5.2 that there is a geodesically convex proper Morse function \(F\) on \(M\). Since \(F\) is geodesically convex, its critical points have index \(0\). Thus Theorem 1.1 follows from standard Morse theory. Q.E.D.
It is unknown to the authors whether the weighted flag curvature condition \(K^{\alpha}>0\) implies the properness of the manifold \(M\). However, there are known examples of non-proper open complete manifolds of positive Ricci curvature in Riemannian geometry[16].
## Appendix
We now sketch a proof of the smoothing theorem 5.2.
As in [5], for each compact subset \(K\subset M\), we may define a metric \(d_{K}\) on the space of \(C^{\infty}\) functions in a neighborhood of \(K\), which is independent of the choice of the metric on the manifold and gives the \(C^{\infty}\) topology on the function space.
By corollary 1 of Theorem 4.1 in [5], the proof of theorem 5.2 reduces to the following 3 lemmas:
**Lemma A**: _The set of locally Lipschitz geodesically convex functions has the maximum closure property, in the sense that given two locally Lipschitz geodesically convex functions \(f_{1},f_{2}\), then \(\max(f_{1},f_{2})\) is locally Lipschitz and geodesically convex._
**Lemma B**: _The set of locally Lipschitz geodesically convex functions has the \(C^{\infty}\)-stability property, in the sense that given any compact set \(K\subset M\) and locally Lipschitz geodesically convex function \(f\), then there is a positive number \(\varepsilon>0\) such that for any \(C^{\infty}\) function \(\phi\) with \(d_{K}\) norm less than \(\varepsilon\), \(f+\phi\) is locally Lipschitz and geodesically convex on a neighborhood of \(K\)._
The above two lemmas are elementary. The last one is less trivial:
**Lemma C**: _The set of locally Lipschitz geodesically convex functions has the local approximation property, in the sense that given any \(x\in M\), there is an open neighborhood \(U\) of \(x\) with the following property: Let \(L\subset K\) be compact sets and \(V\) be an open set such that \(K\subset V\subset U\). Given any geodesically convex function \(f\) on \(V\), \(C^{\infty}\) on \(L\), there exists an open neighborhood \(W\) of \(K\) in \(V\), such that for any positive constant \(\varepsilon>0\), there is a \(C^{\infty}\) geodesically convex function \(g\) on \(V\), satisfying \(\sup_{K}|g-f|<\varepsilon\) and \(d_{L}(f,g)<\varepsilon\)._
_Proof._We choose a Riemannian metric \(g\) on \(M\), and let \(U\) be a pre-compact neighborhood of \(x\) in \(M\). Now choose \(W\) and \(\delta\) so that the \(\delta\)-neighborhood of \(W\), with respect to \(g\), is contained in \(V\). We further assume that \(\exp^{g}\) maps the \(\delta\)-ball in \(T_{p}M\) diffeomorphically onto its image for all \(p\in W\) where \(\exp^{g}\) is the exponential map of \(g\). For any \(p\in W\), let
\[f_{\delta}(p)=\frac{1}{\delta^{n}}\int_{T_{p}M}f(\exp^{g}_{p}(v))\phi\left( \frac{\left\|v\right\|_{g}}{\delta}\right)d\mu_{p}\]
where \(\phi\) is a nonnegative smooth function supported in \([-1,1]\), constant in a neighborhood of \(0\), and satisfies \(\int_{{\bf R}^{n}}\phi(\left\|v\right\|)dv=1\), and \(d\mu_{p}\) is the Lebesgue measure on \(T_{p}M\) relative to the Riemannian inner product \(g\). A standard argument in Riemannian geometry shows that for sufficiently small \(\delta\), \(f_{\delta}\) is a well-defined \(C^{\infty}\) function on \(W\), and that \(f_{\delta}\) converges to \(f\) as \(\delta\to 0\), in the \(C^{0}\) topology on \(K\) and in the \(C^{\infty}\) topology on \(L\).
Fix \(p\in W\), and an \(F\)-unit vector \(v\in T_{p}M\), and let \(\gamma:(-\varepsilon,\varepsilon)\to V\) be the \(F\)-geodesic with \(\gamma^{\prime}(0)=v\). Let \(\mathrm{P}_{s}u\) be the vector in \(T_{\gamma(s)}M\) obtained from \(u\in T_{p}M\) by a \(g\)-parallel transport along \(\gamma\). Then
\[f_{\delta}(\gamma(-t))+f_{\delta}(\gamma(t))=\frac{1}{\delta^{n}}\int_{T_{p}M }\left[f(\exp^{g}_{\gamma(-t)}\mathrm{P}_{-t}u)+f(\exp^{g}_{\gamma(t)} \mathrm{P}_{t}u)\right]\phi\left(\frac{\left\|u\right\|_{g}}{\delta}\right)d \mu_{p}\]
Let \(u\in T_{p}M\) be such that \(\left\|u\right\|_{g}\leq\delta\), and \(c_{0}:(-\varepsilon,\varepsilon)\to M\) be defined by \(c_{0}(t)=\exp^{g}_{\gamma(t)}\mathrm{P}_{t}u\). Choosing \(\delta\) small enough, there is a unique (not necessarily normal) \(F\)-geodesic \(\gamma_{u}:(-\varepsilon,\varepsilon)\to M\) with \(\gamma_{u}(0)=c_{0}(0)\) and \(\gamma^{\prime}_{u}(0)=c^{\prime}_{0}(0)\).
By the smooth dependence of the solutions of ordinary differential equations on the initial conditions, we check that both \(c_{0}\) and \(\gamma_{u}\) converge to \(\gamma\) in the \(C^{\infty}\) topology. Then by Lemma 3 in §3 of [4] it can be shown that for any given \(\alpha>0\),
\[d^{g}(c_{0}(t),\gamma_{u}(t))\leq\alpha t^{2}\]
holds for all sufficiently small \(\left\|u\right\|_{g}\leq\delta\) and \(t\). Now let \(L_{p}\) be a \(g\)-Lipschitz constant of \(f\) on \(\overline{V}\); this implies
\[f_{\delta}(\gamma(-t))+f_{\delta}(\gamma(t)) \geq\frac{1}{\delta^{n}}\int_{T_{p}M}\left[f(\gamma_{u}(-t))+f(\gamma_{u}(t))-2L_{p}\alpha t^{2}\right]\phi\left(\frac{\left\|u\right\|_{g}}{\delta}\right)d\mu_{p}\] \[=\frac{1}{\delta^{n}}\int_{T_{p}M}\left[f(\gamma_{u}(-t))+f(\gamma_{u}(t))\right]\phi\left(\frac{\left\|u\right\|_{g}}{\delta}\right)d\mu_{p}-2L_{p}\alpha t^{2}\]
Hence
\[f_{\delta}(\gamma(-t)) +f_{\delta}(\gamma(t))-2f_{\delta}(p)\] \[\geq\frac{1}{\delta^{n}}\int_{T_{p}M}\left[f(\gamma_{u}(-t))+f(\gamma_{u}(t))-2f(\gamma_{u}(0))\right]\phi\left(\frac{\left\|u\right\|_{g}}{\delta}\right)d\mu_{p}-2L_{p}\alpha t^{2}\]
Since \(f\) is geodesically convex, by taking a uniform upper bound of \(F(\gamma^{\prime}_{u}(0))\) for \(\left\|u\right\|_{g}\leq\delta\) on \(\overline{W}\), we have that
\[\frac{1}{\delta^{n}}\int_{T_{p}M}\left[f(\gamma_{u}(-t))+f(\gamma_{u}(t))-2f( \gamma_{u}(0))\right]\phi\left(\frac{\left\|u\right\|_{g}}{\delta}\right)d\mu _{p}\geq M_{0}t^{2}\]
for some \(M_{0}>0\). A suitable choice of \(\alpha\) then shows that \(f_{\delta}\) is also geodesically convex for sufficiently small \(\delta\). Q.E.D.
|
2301.07173
|
Towards Voice Reconstruction from EEG during Imagined Speech
|
Translating imagined speech from human brain activity into voice is a
challenging and absorbing research issue that can provide new means of human
communication via brain signals. Endeavors toward reconstructing speech from
brain activity have shown their potential using invasive measures of spoken
speech data, however, have faced challenges in reconstructing imagined speech.
In this paper, we propose NeuroTalk, which converts non-invasive brain signals
of imagined speech into the user's own voice. Our model was trained with spoken
speech EEG which was generalized to adapt to the domain of imagined speech,
thus allowing natural correspondence between the imagined speech and the voice
as a ground truth. In our framework, automatic speech recognition decoder
contributed to decomposing the phonemes of generated speech, thereby displaying
the potential of voice reconstruction from unseen words. Our results imply the
potential of speech synthesis from human EEG signals, not only from spoken
speech but also from the brain signals of imagined speech.
|
Young-Eun Lee, Seo-Hyun Lee, Sang-Ho Kim, Seong-Whan Lee
|
2023-01-02T05:10:31Z
|
http://arxiv.org/abs/2301.07173v1
|
# Towards Voice Reconstruction from EEG during Imagined Speech
###### Abstract
Translating imagined speech from human brain activity into voice is a challenging and absorbing research issue that can provide new means of human communication via brain signals. Endeavors toward reconstructing speech from brain activity have shown their potential using invasive measures of spoken speech data, however, have faced challenges in reconstructing imagined speech. In this paper, we propose NeuroTalk, which converts non-invasive brain signals of imagined speech into the user's own voice. Our model was trained with spoken speech EEG which was generalized to adapt to the domain of imagined speech, thus allowing natural correspondence between the imagined speech and the voice as a ground truth. In our framework, automatic speech recognition decoder contributed to decomposing the phonemes of generated speech, thereby displaying the potential of voice reconstruction from unseen words. Our results imply the potential of speech synthesis from human EEG signals, not only from spoken speech but also from the brain signals of imagined speech.
## Introduction
Brain signals contain various information related to human action or imagery, making them valuable materials for understanding human intentions. A brain-computer interface (BCI) is a technology for analyzing the user's brain activity to derive external commands to control the environment through brain signals, and can therefore benefit paralyzed or locked-in patients [1]. Brain-to-speech (BTS) is a novel research stream in the field of BCI, which aims to directly synthesize audible speech from brain signals [11, 12]. While current studies of decoding speech from human brain signals mainly focus on using spoken speech brain signals measured with invasive methods [1, 13, 14, 15], reconstructing imagined speech using non-invasive modalities is a fascinating issue that could convert the user's imagined speech into a real voice. However, due to the fundamental constraint of imagined speech lacking a ground truth (GT) voice, it is challenging to synthesize the user's own voice from imagined speech brain signals.
Since reconstructing speech from brain signals of spoken speech has shown its potential [16, 15, 17], we anticipate that there must be a relevant brain activation that may encode significant features of the speech. Imagined speech is known to resemble the neural activation path of spoken speech, which is mainly located on the ventral sensorimotor cortex (vSMC)[23, 14, 15, 16]. If imagined speech has similar features to spoken speech, it may be possible to link the spoken speech brain signals, spoken speech audio, and imagined speech brain signals. Furthermore, if we could train and infer phonemes from imagined speech, several unseen words composed of already trained phonemes can also be reconstructed from the trained word sets.
In this study, we propose the NeuroTalk framework, which correlates imagined speech electroencephalography (EEG) with spoken speech EEG and its corresponding audio to reconstruct voice from imagined speech. The imagined utterances were decoded from EEG signals to reconstruct voice at the word level. Moreover, we estimated the possibility of reconstructing unseen words using a model pre-trained on only a few words, to potentially expand the degree of freedom of a model trained on a minimal set of words covering various phonemes. Based on our results, we aim to establish the potential of reconstructing the user's own voice from imagined speech brain signals. The main contributions are as follows:
### Main Contribution
* We propose a generative model based on multi-receptive residual modules with recurrent neural networks that can extract frequency characteristics and sequential information from neural signals, to generate speech from non-invasive brain signals.
* The fundamental constraint of the imagined speech-based BTS system lacking the ground truth voice has been addressed with the domain adaptation method to link the imagined speech EEG, spoken speech EEG, and the spoken speech audio.
* Unseen words could be reconstructed from the pretrained model by using a character-level loss to adapt to various phonemes. This implies that the model can learn phoneme-level information from the brain signal, which displays the potential of robust speech generation by training on only several words or phrases.
## Background
### Speech-related Paradigms
Speech-related paradigms mainly used in the BTS studies can be largely divided into three categories: spoken speech, mimed speech, and imagined speech [16]. While spoken speech indicates the natural speech that accompanies vocal output and movement of the articulators, mimed speech does not produce vocal output but accompanies the movement of the mouth and tongue as if speaking out loud [16, 17]. Imagined speech is the mode of internally imagining speech, accompanying both the imagery of the mouth movement and the vocal sound, without producing actual movement or voice [17].
### Invasive Approach
Invasive measurements involve a surgical process of implantation inside the skull to capture brain activation directly from the cortex. Therefore, medical risks and practical difficulties limit their application to healthy users [20]. However, due to the high signal-to-noise ratio (SNR), many previous studies focused primarily on synthesizing speech from invasive brain signals. Studies using electrocorticography [1, 1, 1, 1] and attempts to decode speech from deeper brain structures using stereotactic electroencephalography depth electrodes [1, 1, 1, 2, 2] have reported the possibility of speech reconstruction using spoken speech data.
### Non-invasive Approach
**Electroencephalography.** EEG is the most widely used non-invasive modality for practical use, since it does not involve any surgical process and is relatively easy to access [11]. However, non-invasive measures have relatively low SNR and artifact problems compared to the invasive modalities, which makes it hard to extract the user's intention from brain signals [1].
#### Spoken speech based Bts
Speech reconstruction from spoken speech or mimed speech brain signals, kinematic or EMG data have shown potential [1, 12, 13]. However, a spoken speech-based BTS system cannot be the final solution for the essential goal of BCI, since it is not a silent communication (if the user can speak out, there's no need to reconstruct speech from brain signals), and it cannot be used for patients who cannot speak or move.
#### Decoding imagined speech
Current technologies of decoding imagined speech from EEG have shown promising results in terms of classification problems [21, 16, 17, 18]. Previous works about imagined speech mostly targeted classification tasks, or text decoding from EEG signals [16]. However, it is challenging to expand the number of classes for the classification scenario [11, 12]. Also, to provide an intuitive system that can generate voice from the brain signals, speech reconstruction from the imagined speech is crucial.
#### Imagined speech based Bts
The fundamental constraints of speech reconstruction from EEG of imagined speech are the inferior SNR and the absence of vocal ground truth corresponding to the brain signals. Therefore, speech synthesis from imagined speech with non-invasive measures has so far not led to convincing results [10]. Attempts to reconstruct speech from invasive data during whispered and imagined speech exist; however, they have reported relatively inferior performance even with invasive measures [1]. Speech synthesis from imagined speech may be the key to opening a new era of human communication, moving from current voice- or text-based communication to brain-based communication. Also, this may be a technology that can help patients who are unable to speak or those who might lose their voice in the future.
## Method
In this section, we describe the model frameworks used in this paper, including generator, discriminator, vocoder and automatic speech recognition (ASR), as well as losses including reconstruction loss, generative adversarial network (GAN) loss, and connectionist temporal classification (CTC), as shown in Figure 1. The collected brain signals of spoken speech and imagined speech are represented as feature embeddings to extract the optimal features from brain signals. The generator applying GAN [1] reconstructs a mel-spectrogram to match the target voice during spoken speech. The reconstruction loss for the generator is determined as the difference between the reconstructed mel-spectrogram from the EEG signals and the ground truth mel-spectrogram during spoken speech. The discriminator classifies the validity of whether the input samples of mel-spectrogram are real or fake, and calculates an adversarial loss for the generator and discriminator. ASR model is a speech-to-text model, which can represent the speech as a contextual sequence of discrete units [1]. The pretrained vocoder converts the mel-spectrogram to a reconstructed voice, which is then transformed into characters by the pretrained ASR model. The pre-trained ASR model transforms the voice into text, and calculates the CTC loss for the generator.
Since the voices were not recorded during the imagined speech, voices during the spoken speech were used as the ground truth. To match the EEG to the voice of spoken speech, dynamic time warping (DTW) was applied between the reconstructed mel-spectrogram from EEG and the mel-spectrogram of voice during spoken speech. Furthermore, domain adaptation (DA) was conducted to transfer the architecture of spoken speech to that of imagined speech.
### Architectures
**Embedding vector.** It is known that spatial, temporal, and spectral information are all important for speech-related brain signals, and vector-based brain embedding features can represent the contextual meaning in brain signals [1, 13]. The embedding vector was generated using common spatial pattern (CSP) to maximize spatial patterns and log-variance to extract temporal oscillation patterns. CSP finds the optimal spatial filters using covariance matrices [1], and helps to decode the brain signals related to speech [13, 14].
To reduce the difference between the data distributions of spoken EEG and imagined EEG, CSP filters were shared between both EEG signals. The CSP filters were trained with imagined EEG, which contains pure brain signals only, rather than spoken EEG, which may contain some noise. By sharing the CSP filters, the spoken EEG domain was adapted to the subspace of imagined EEG.
The CSP filters were trained with eight CSP features and sixteen segments without overlap using the training dataset. Each trial of EEG signals has a size of time points \(\times\) channels (5000 \(\times\) 64). After applying CSP, the embedding vector transformed from the EEG signals has 104 features \(\times\) 16 time segments, where the features consist of 13 classes \(\times\) 8 CSP features.
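A minimal NumPy sketch of this embedding step is given below; it assumes that pre-trained CSP spatial filters are available as a matrix and that one trial is shaped 5000 time points \(\times\) 64 channels, as described above. The function name and filter layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def eeg_embedding(trial, csp_filters, n_segments=16):
    """trial: (5000, 64) EEG trial; csp_filters: (64, 104) pre-trained CSP filters."""
    projected = trial @ csp_filters                        # spatially filtered signals
    segments = np.array_split(projected, n_segments, axis=0)
    # log-variance of each segment summarizes its temporal oscillation power
    feats = [np.log(seg.var(axis=0) + 1e-12) for seg in segments]
    return np.stack(feats, axis=1)                         # (104 features, 16 segments)
```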
**Generator.** The main architecture of the proposed generator consists of a gated recurrent unit (GRU) [15] to capture sequence information and several residual blocks to capture temporal and spatial information while preventing the vanishing gradient issue. Figure 2(a) describes the generator in detail. The input of the generator is the embedding vector of EEG signals and the output is the generated mel-spectrogram. The embedding vector goes through a pre-convolution layer consisting of a 1D convolution and is concatenated with the features from a bi-directional GRU, which extracts the sequence features. To match the output size of the mel-spectrogram, a 1D convolution layer is applied. After that, the generator upsamples the features using transposed convolutions with strides of two or three, each followed by a multi-receptive field fusion (MRF) module, the sum of the outputs of multiple residual blocks with different kernel sizes.
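The sketch below illustrates this layout in PyTorch (pre-convolution, bi-directional GRU, transposed-convolution upsampling with strides 3, 2, 2, and simplified MRF-style residual blocks). The channel sizes and the single-kernel residual block are simplifying assumptions; the actual model uses the hyperparameters listed in the implementation details.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Simplified MRF-style block: stacked dilated convolutions with residual connections."""
    def __init__(self, ch, kernel=3, dilations=(1, 3, 5)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(ch, ch, kernel, dilation=d, padding=(kernel - 1) * d // 2)
            for d in dilations])

    def forward(self, x):
        for conv in self.convs:
            x = x + conv(torch.relu(x))
        return x

class Generator(nn.Module):
    def __init__(self, in_feats=104, ch=256, n_mels=80):
        super().__init__()
        self.pre = nn.Conv1d(in_feats, ch, kernel_size=7, padding=3)
        self.gru = nn.GRU(ch, ch // 2, batch_first=True, bidirectional=True)
        self.fuse = nn.Conv1d(2 * ch, ch, kernel_size=1)   # merge conv and GRU features
        blocks, cur = [], ch
        for stride in (3, 2, 2):                           # upsampling rates from the paper
            blocks += [nn.ConvTranspose1d(cur, cur // 2, kernel_size=2 * stride,
                                          stride=stride, padding=stride // 2),
                       ResBlock(cur // 2)]
            cur //= 2
        self.ups = nn.Sequential(*blocks)
        self.post = nn.Conv1d(cur, n_mels, kernel_size=7, padding=3)

    def forward(self, emb):                                # emb: (batch, feats, segments)
        x = self.pre(emb)
        g, _ = self.gru(x.transpose(1, 2))                 # sequence features over segments
        x = self.fuse(torch.cat([x, g.transpose(1, 2)], dim=1))
        return self.post(self.ups(x))                      # (batch, n_mels, mel frames)
```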
**Discriminator.** The discriminator is composed similarly but in the opposite direction to the generator, as described in Figure 2(b). The input of the discriminator is the mel-spectrogram and the output is the validity of real/fake voice. Moreover, the discriminator was trained with its class using only mel-spectrograms from voice. The input goes through a pre-convolution layer consisting of a 1D convolution. Then, the upsampling layer using transposed convolution and the MRF module are applied. After that, the bi-directional GRU extracts the sequence features, and the validity is estimated with the classifier.
**Vocoder and ASR.** The vocoder and ASR model are used to clarify the reconstructed voice from the brain signal by translating it into text. To adjust our framework for a real-time BTS system, we applied a pretrained HiFi-GAN [13], which is a high-quality vocoder with fast inference speed. The same architecture and hyperparameters were applied as in the pretrained model 'Universal ver.1', trained with the Universal dataset.
Figure 1: Overall framework of this study. Imagined speech EEG is given as the input to reconstruct the corresponding audio of the imagined word or phrase with the user's own voice. \(G\) refers to the generator, which generates a mel-spectrogram from the embedding vector. \(D\) refers to the discriminator, which distinguishes the validity of the input. On the bottom part, two pretrained models, a vocoder \(V\) and an ASR model \(A\), generate text from the mel-spectrogram.
The ASR is composed of a pretrained HuBERT Hsu et al. (2021) with a large configuration, which is a self-supervised learning model of speech representations trained on the Libri-Light dataset and fine-tuned on the LibriSpeech dataset.
### Training Loss Term
This section describes losses for training, including reconstruction loss, GAN loss, and CTC loss. The generator uses reconstruction loss \(L_{rec}\), adversarial loss \(L_{adv}\), and CTC loss \(L_{ctc}\), while the discriminator uses adversarial loss \(L_{adv}\).
\[L(G)=\lambda_{g1}L_{rec}(G)+\lambda_{g2}L_{adv}(D;G)+\lambda_{g3}L_{ctc}(G) \tag{1}\]
\[L(D)=\lambda_{d}L_{adv}(D;G) \tag{2}\]
where the loss coefficients are denoted by \(\lambda_{g1-3}\) for the generator and \(\lambda_{d}\) for the discriminator.
#### Reconstruction loss
To reinforce the guideline for reconstructing the target mel-spectrogram, a reconstruction loss was applied. Reconstruction loss has been verified in many studies Kong et al. (2020); Isola et al. (2017), and it can help improve the efficiency of the generator and the fidelity of the reconstructed data. Since imagined speech has no reference speech against which to compare the reconstruction performance, spoken speech audio collected in the same sequence as the imagined speech was used as the target audio to compute the reconstruction loss. DTW was applied to match the alignment of the EEG during spoken/imagined speech and the target spoken voice.
\[L_{rec}(G)=E_{s}[(G(s)-x)^{2}] \tag{3}\]
where \(s\) refers to the input of the generator, such as an embedding vector from EEG signals, and \(x\) refers to the input of the discriminator, such as a mel-spectrogram.
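As noted above, DTW is used to align the generated and target mel-spectrograms before Eq. (3) is computed. The following is a compact NumPy sketch of such an alignment, written as a stand-in for whichever DTW implementation the authors used; frames matched by the returned path can then be compared with the mean-squared error.

```python
import numpy as np

def dtw_path(A, B):
    """A: (n_mels, Ta), B: (n_mels, Tb); returns matched frame index pairs."""
    Ta, Tb = A.shape[1], B.shape[1]
    dist = np.linalg.norm(A[:, :, None] - B[:, None, :], axis=0)  # frame-wise distances
    cost = np.full((Ta + 1, Tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            cost[i, j] = dist[i - 1, j - 1] + min(cost[i - 1, j],
                                                  cost[i, j - 1],
                                                  cost[i - 1, j - 1])
    i, j, path = Ta, Tb, []
    while i > 0 and j > 0:                                        # backtrack the minimal-cost path
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return path[::-1]
```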
#### GAN loss
To reconstruct the mel-spectrogram to follow the real one, adversarial GAN loss \(L_{adv}\) was conducted on the generator \(G\) and discriminator \(D\) as follows.
\[L_{adv}(D;G)=E_{(x,s)}[log(1-D(x))+log(D(G(s)))] \tag{4}\]
\[L_{adv}(G;D)=E_{s}[log(1-D(G(s)))] \tag{5}\]
where \(x\) refers to the input of the discriminator, such as a mel-spectrogram, and \(s\) refers to the input of the generator, such as an embedding vector from EEG signals.
#### CTC loss
CTC loss is a common metric of performance for automatic speech recognition systems Graves et al. (2006). The CTC loss \(L_{ctc}\) allows training the model on sequential data without alignment information. The CTC loss was primarily used to guide the prediction of characters and phonemes, to enhance the performance on unseen classes.
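A hedged sketch of the combined generator objective in Eqs. (1), (3), and (5) plus the CTC term is shown below. The discriminator is assumed to output a probability in \((0,1)\) and the ASR to output per-frame log-probabilities; the loss weights and tensor layouts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def generator_loss(mel_fake, mel_real, d_fake, log_probs, targets,
                   input_lens, target_lens, lam=(1.0, 1.0, 1.0)):
    rec = F.mse_loss(mel_fake, mel_real)                           # reconstruction loss, Eq. (3)
    adv = torch.log(1.0 - d_fake + 1e-8).mean()                    # adversarial term, Eq. (5)
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens)  # character-level guidance
    return lam[0] * rec + lam[1] * adv + lam[2] * ctc
```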
### Domain Adaptation
The DA strategy was employed to resolve the fundamental constraint of speech reconstruction from imagined speech. Since imagined speech does not accompany the movement of the articulators, it is relatively free from the movement artifacts caused by mouth movement and vibration. However, since the ground truth audio for imagined speech does not exist, we designed an adaptation framework that adapts the domain of imagined speech from spoken speech, in order to exploit the natural correspondence between imagined EEG and the voice of spoken speech. The DA process was performed in two steps: 1) sharing the covariance matrix between imagined EEG and spoken EEG by applying the CSP filter of imagined speech, and 2) applying transfer learning for the generator and discriminator from the trained model of spoken EEG.
Figure 2: The architecture details in (a) generator, (b) discriminator, and (c) MRF module. The MRF modules in both generator and discriminator were repeated three times in our experiment. \(k_{r}\) indicates the kernel size of residual block and \(D_{r}\) indicates the dilation rates of the residual block.
**Sharing subspace.** The CSP weights, trained with a training set (60%) of imagined EEG, were shared to generate the embedding vectors. Sharing the CSP filters computed from imagined EEG allows the latent space of spoken EEG to be shifted into a feature space comparable to that of imagined EEG. Unlike most transfer learning approaches that apply the weak domain to a well-trained classifier, we did the contrary, bringing the spoken speech feature space to that of the imagined speech. In this way, we could obtain patterns driven more by the brain signals of speech than by movement or vibration artifacts.
**Transfer learning.** The model was trained with a training set of spoken EEG, and then fine-tuned with a training set of imagined EEG at a smaller learning rate than that used for spoken EEG. This was to connect with the voice recordings of spoken speech, which act as the ground truth of imagined speech. The model trained on spoken EEG can assist the training of models for imagined EEG, which carries insufficient information; therefore, the spoken EEG could guide learning from the weak features of imagined EEG.
## Experimental Setup
### Dataset
**Participants.** Six participants volunteered for the study. The study was conducted in accordance with the Declaration of Helsinki and approved by the Korea University Institutional Review Board [14]. Informed consent was obtained from all subjects.
#### Paradigms
For the spoken speech session, participants were instructed to naturally pronounce the randomly given thirteen classes, provided as an auditory cue of twelve words/phrases (ambulance, clock, hello, help me, light, pain, stop, thank you, toilet, TV, water, and yes) and a silent phase. Speech data were recorded in a rhythmic manner to avoid any visual or auditory disruptions. The imagined speech data were collected in exactly the same manner as the spoken speech, following the previous study [11]. 100 trials of both spoken speech and imagined speech per class were collected for each participant. Therefore, each participant had 1300 trials for each of the spoken and imagined speech paradigms.
#### Recording
The dataset used in this study consists of scalp EEG recordings of spoken/imagined speech and voice recordings of spoken speech. During the experiment, EEG signals were recorded in the sampling rate of 2500Hz via Brain Vision/Recorder (BrainProduct GmbH, Germany), and the corresponding audio of spoken speech was simultaneously recorded in the sampling rate of 8000Hz. Brain signals were recorded with 64-channel EEG cap with active Ag/AgCl electrode placement following the international 10-10 system.
### Pre-processing
EEG signals were extracted in 2-second intervals for each trial. The data were filtered with a 5th-order Butterworth bandpass filter in the high-frequency range of 30-120 Hz, which is well known to contain speech-related information [1, 11]. A notch filter was used to remove the line noise at 60 Hz and its harmonic at 120 Hz. The electrooculography (EOG) and electromyography (EMG) artifacts of spoken speech were removed using blind source separation referencing the EOG and EMG [1]. The baseline was corrected by subtracting the average value of the 500 ms before each trial. Pre-processing procedures were performed in Python and Matlab using the OpenBMI Toolbox [11], BBCI Toolbox [12], and EEGLAB [1]. For the voice data, we resampled the voice signals to 22050 Hz and reduced the noise using the noisereduce library [10, 11].
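A minimal SciPy sketch of the band-pass and notch filtering steps described above is given below, assuming a (channels, samples) array recorded at 2500 Hz; the notch quality factor and the simplified baseline handling are assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt

def preprocess_eeg(eeg, fs=2500):
    """eeg: (n_channels, n_samples) raw EEG for one trial."""
    sos = butter(5, [30, 120], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, eeg, axis=-1)              # keep the 30-120 Hz high-gamma range
    for f0 in (60, 120):                            # remove line noise and its harmonic
        b, a = iirnotch(f0, Q=30, fs=fs)
        x = filtfilt(b, a, x, axis=-1)
    return x - x.mean(axis=-1, keepdims=True)       # simplified baseline correction
```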
### Dataset Composition and Training Procedure
By definition, imagined speech does not have a reference voice to train the model. However, spoken speech accompanies vocal output; therefore, the audio and the EEG data of each spoken speech utterance were collected as a perfectly time-aligned pair. Since the experimental designs of imagined speech and spoken speech were completely identical, the voice recording of the identical sequence of spoken speech for each subject was used as the reference voice for the imagined speech evaluation. Also, due to the lack of a reference voice for the imagined speech brain signals, transfer learning was applied from the model trained on spoken speech EEG and spoken speech audio to imagined speech EEG.
The dataset was divided 5-fold into training, validation, and test sets by random selection with a random seed. One unseen word, 'stop', was separated from the dataset and was not included in the training set. It was chosen to test the unseen case, since every phoneme composing the word 'stop' was covered by the remaining 11 words used for training. That is, we trained on 11 words/phrases and a silent phase as the training dataset, and validated on 12 words/phrases and a silent phase in the validation and test datasets, including the unseen word.
The generator and discriminator were first trained using spoken EEG data with the ground truth voice for each trial. As imagined speech does not have a per-trial voice, the voice during spoken speech was used as the ground truth. To match the time points between the EEG and the voice of spoken speech, DTW was applied to the synthesized mel-spectrogram from EEG with the mel-spectrogram of the voice. Moreover, the generator and discriminator were fine-tuned via transfer learning from the trained model of spoken EEG to connect naturally between imagined EEG and the voice during spoken speech.
### Model Implementation Details
The generator had three residual blocks with kernel sizes of 3, 7, and 11, dilations of 1, 3, and 5, and upsampling rates of 3, 2, and 2 with upsampling kernel sizes of twice the rate. The number of initial channels was 1024, and the bidirectional GRU dimension was half of the initial channels. The discriminator had the same residual blocks as the generator, but with downsampling rates of 3, 3, and 3 and kernel sizes of twice the rate. The number of final channels was 64, and the bidirectional GRU dimension was half of the final channels. The mel-spectrogram was computed at a sampling rate of 22050 Hz, and the STFT and mel function were conducted with an nFFT of 1024, a window of 1024, a hop size of 256, and 80 mel bands. Initial training was conducted with a learning rate of \(10^{-4}\), and fine-tuning was conducted at a lower learning rate of \(10^{-5}\), for a maximum of 500 epochs with a batch size of 10. We trained the model on an NVIDIA GeForce RTX 3090 GPU. We used the AdamW optimizer [10] with searched parameters \(\beta_{1}\)=0.8, \(\beta_{2}\)=0.99, and weight decay \(\lambda\)=0.01, scheduled by a factor of 0.999 every epoch. We released the source code and sample data on Github at: [https://github.com/youngeun1209/NeuroTalk](https://github.com/youngeun1209/NeuroTalk)
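The optimizer configuration reported above can be set up as in the following sketch; the function name is hypothetical, and the lower learning rate corresponds to the fine-tuning stage on imagined-speech EEG.

```python
import torch

def build_optimizers(generator, discriminator, lr=1e-4):
    """Use lr=1e-5 when fine-tuning on imagined-speech EEG."""
    opt_g = torch.optim.AdamW(generator.parameters(), lr=lr,
                              betas=(0.8, 0.99), weight_decay=0.01)
    opt_d = torch.optim.AdamW(discriminator.parameters(), lr=lr,
                              betas=(0.8, 0.99), weight_decay=0.01)
    # learning rate decays by a factor of 0.999 every epoch
    sched_g = torch.optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.999)
    sched_d = torch.optim.lr_scheduler.ExponentialLR(opt_d, gamma=0.999)
    return opt_g, opt_d, sched_g, sched_d
```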
### Evaluation Metrics
For the evaluation metrics, we used the root mean square error (RMSE), character error rate (CER), and a subjective mean opinion score (MOS) test. To evaluate the reconstruction performance of the generator, we computed the RMSE between the target and reconstructed mel-spectrograms. To evaluate the clarity quantitatively, we computed the CER after passing the output through the ASR model. For the subjective evaluation, a MOS test was conducted to evaluate the quality of the reconstructed speech. We randomly selected 125 voice samples from a test dataset. The samples were evaluated by more than 20 raters on a scale of 1-5 with 0.5-point increments. We compared the EEG dataset with the GT and the converted GT in the form of mel-spectrogram, waveform, and characters. Moreover, to demonstrate the extensibility of NeuroTalk, we evaluated the generation performance on an unseen word composed of phonemes that were contained in the trained word classes.
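For reference, the CER used above can be computed as the Levenshtein edit distance between the reference transcription and the ASR output, normalized by the reference length, as in the small sketch below.

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit distance between strings divided by reference length."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[-1][-1] / max(len(ref), 1)
```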
## Results and Discussion
### Voice Reconstruction from EEG
The audio samples are included in the demo page at: [https://neurotalk.github.io/demo/neurotalk.html](https://neurotalk.github.io/demo/neurotalk.html). Figure 3 displays the mel-spectrogram and the audio waveform of the original voice, the reconstructed voice from spoken speech EEG, and the reconstructed voice from imagined speech EEG. As shown in the figure, successfully reconstructed cases display similar patterns in the mel-spectrogram and the audio waveform. Table 1 shows the evaluation results of the voice reconstructed from brain signals compared with the GT. The objective measures of RMSE and CER show inferior performance for imagined speech EEG compared to spoken speech EEG. The MOS of the spoken speech cases was high, with no large difference from the GT, which means the model can generate natural speech from spoken EEG. Interestingly, unseen spoken EEG shows a higher MOS than imagined EEG even though the objective evaluation of RMSE and CER indicates inferior performance, which implies that reconstructing from spoken EEG, which is accompanied by simultaneously produced voice, enables natural voice generation.
As shown in Figure 3, test samples for the silent phases were successfully reconstructed with no activation. Silent cases for both spoken and imagined EEG were successfully decoded, except for only one case of imagined speech. According to this result, we can infer that our NeuroTalk model accurately learned the silence interval and can detect the precise onset from both spoken speech and imagined speech EEG. Although imagined speech does not have a ground truth voice, the results show that the proposed NeuroTalk framework effectively adapts the spoken speech based model to imagined speech EEG to decode the user's intention from brain signals and generate voice.
Figure 3: Mel-spectrogram and the audio wave of original voice, reconstructed voice from EEG. Three examples of reconstruction include ‘Hello’, ‘Water’, and ‘Help me’. Silent phases for both spoken and imagined EEG were successfully decoded. Unseen cases were also reconstructed despite their inferior performance.
There were some instances of failure in the imagined speech case (Figure 4). The significant difference between the success and failure cases was whether the silence intervals were detected. As shown in Figure 4, a failure case with a CER of 50% displays hardly any silence interval between 'thank' and 'you'. Moreover, the failure case with a CER of 100% produces only a few words that cannot represent any characters from the ground truth.
### Ablation Study
The results of the ablation study are presented in Table 2. We performed an ablation study of the GRU in the generator and discriminator to justify adding the module to the model, and of the GAN, reconstruction, and classification (CTC) losses to verify the effect of each loss on the performance of the generator. In the objective evaluation, the performance without the reconstruction loss is the worst, followed by that without the CTC loss, indicating that the reconstruction loss, followed by the CTC loss, has the greatest impact on the framework. Furthermore, the results of all ablated cases were much worse than the baseline, which indicates that all components play their roles; in particular, the reconstruction loss has the largest impact on training. In the subjective evaluation, naturalness shows a somewhat different tendency, and the results without the GRU are the worst with inferior naturalness, which shows that sequential features are important for natural speech synthesis.
### Voice Reconstruction of Unseen Words
The unseen and untrained word could be reconstructed from EEG with a much lower CER than chance level, producing audio of fairly high quality with a MOS over 3. The gap between the CER of spoken and imagined EEG was relatively small in the unseen case compared to the trained words. These results imply that while the robust performance of spoken EEG on trained words may be affected by movement artifacts and overfitting to the trained classes, imagined speech may be relatively effective in capturing phonemes from the trained words to generate unseen words.
Although it could still be further improved, our result demonstrates that the NeuroTalk model has the potential to extend the degree of freedom of decodable words or sentences by training on a dataset of several words. We expect that the CTC loss can learn character- or phoneme-level information of words even from brain signals, which contain human intention. Since we trained the model with limited words/phrases, it may simply be classifying the EEG as one of the training classes. However, the fact that the model has shown the potential to generate an unseen word outside of the training set indicates the possibility of the model being generalized and expanded to classes outside of the training set.
### Domain Adaptation
DA was performed by sharing the CSP subspaces and transferring the spoken speech-based trained model to imagined speech EEG. As shown in Table 2, the result with DA (baseline) shows superior performance compared to that without DA. This implies that spoken speech EEG was useful for training on imagined speech EEG, which means the neural substrates of imagined and spoken speech have common features that can be represented in our embedding vector. Speech production and articulation are mainly known to be associated with the inferior frontal gyrus, the so-called Broca's area. The angular gyrus functions to associate various language-related activations from the auditory, motor, sensory, and also visual cortices; therefore, not only the left temporal lobe but the whole brain may function in the speech process Watanabe et al. (2020). Our embedding vector, which was generated from whole-channel EEG, may contain both articulatory information and the speech intention. Therefore, we demonstrate the potential of generating speech by extracting informative speech-related features, which reflects the similarity of spoken speech EEG and imagined speech EEG.
Table 1: Results of subjective and quantitative tests

| Model | RMSE | CER (%) | MOS |
| --- | --- | --- | --- |
| GT | - | 18.35 (\(\pm\)11.45) | 3.67 (\(\pm\)0.97) |
| GT \(\rightarrow\) Mel (\(\rightarrow\) ASR) | - | 23.35 (\(\pm\)10.85) | 3.68 (\(\pm\)0.88) |
| Spoken EEG | 0.166 (\(\pm\)0.022) | 40.21 (\(\pm\)13.49) | 3.34 (\(\pm\)0.95) |
| Imagined EEG | 0.175 (\(\pm\)0.029) | 68.26 (\(\pm\)2.47) | 2.78 (\(\pm\)1.11) |
| Unseen Spoken EEG | 0.185 (\(\pm\)0.029) | 78.89 (\(\pm\)7.43) | 2.87 (\(\pm\)1.12) |
| Unseen Imagined EEG | 0.187 (\(\pm\)0.026) | 83.06 (\(\pm\)14.54) | 2.57 (\(\pm\)1.18) |
Table 2: Results of ablation study

| Input | RMSE | CER (%) | MOS |
| --- | --- | --- | --- |
| Baseline | 0.175 (\(\pm\)0.029) | 68.26 (\(\pm\)2.47) | 2.78 (\(\pm\)1.11) |
| w/o GRU | 0.185 (\(\pm\)0.027) | 76.05 (\(\pm\)3.25) | 2.18 (\(\pm\)1.24) |
| w/o GAN loss | 0.180 (\(\pm\)0.022) | 76.05 (\(\pm\)2.33) | 2.86 (\(\pm\)1.21) |
| w/o reconstruction loss | 0.620 (\(\pm\)0.121) | 80.16 (\(\pm\)7.98) | 2.50 (\(\pm\)1.25) |
| w/o CTC loss | 0.387 (\(\pm\)0.069) | 76.90 (\(\pm\)0.25) | 2.52 (\(\pm\)1.21) |
| w/o DA | 0.175 (\(\pm\)0.025) | 72.30 (\(\pm\)1.71) | 2.66 (\(\pm\)1.24) |
Figure 4: Success and failure cases. Mel-spectrogram and waveform were displayed for original voice and reconstructed voices.
### Leave-one-out Scenario
In order to apply our NeuroTalk system to locked-in patients who can only use imagined speech, we conducted an additional leave-one-out (LOO) experiment to apply the framework to entirely new data from an unseen person. The model was trained with the spoken EEG of all subjects excluding one subject and was fine-tuned with the imagined EEG of the excluded subject. As a result, comparable performance was obtained, inferior to the baseline but better than without DA. Based on the LOO approach, we have found the potential to expand our framework to an entirely new person, which could further help people who have lost their own voice.
## Conclusion
We presented NeuroTalk, which reconstructs the user's own voice from EEG during imagined speech. The DA approach was conducted by sharing the feature embedding and training the models for imagined speech EEG using the trained models of spoken speech EEG. Our results demonstrate the feasibility of reconstructing voice from non-invasive brain signals of imagined speech at the word level. Furthermore, unseen words could be generated with several characters, although the performance was not high, which means we can expand our study to a larger dataset and to sentence-level speech synthesis in the future. We hope our study can contribute to expanding the means of human communication and further benefit patients or disabled people to gain freedom in their communication. We look forward to a world where we can communicate without saying anything.
## Acknowledgement
This work was supported by Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub; No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning; No. 2019-0-00079, Artificial Intelligence Graduate School Program(Korea University)).
|
2302.08710
|
Cross-Domain Label Propagation for Domain Adaptation with Discriminative
Graph Self-Learning
|
Domain adaptation manages to transfer the knowledge of well-labeled source
data to unlabeled target data. Many recent efforts focus on improving the
prediction accuracy of target pseudo-labels to reduce conditional distribution
shift. In this paper, we propose a novel domain adaptation method, which infers
target pseudo-labels through cross-domain label propagation, such that the
underlying manifold structure of two domain data can be explored. Unlike
existing cross-domain label propagation methods that separate domain-invariant
feature learning, affinity matrix constructing and target labels inferring into
three independent stages, we propose to integrate them into a unified
optimization framework. In such way, these three parts can boost each other
from an iterative optimization perspective and thus more effective knowledge
transfer can be achieved. Furthermore, to construct a high-quality affinity
matrix, we propose a discriminative graph self-learning strategy, which can not
only adaptively capture the inherent similarity of the data from two domains
but also effectively exploit the discriminative information contained in
well-labeled source data and pseudo-labeled target data. An efficient iterative
optimization algorithm is designed to solve the objective function of our
proposal. Notably, the proposed method can be extended to semi-supervised
domain adaptation in a simple but effective way and the corresponding
optimization problem can be solved with the identical algorithm. Extensive
experiments on six standard datasets verify the significant superiority of our
proposal in both unsupervised and semi-supervised domain adaptation settings.
|
Lei Tian, Yongqiang Tang, Liangchen Hu, Wensheng Zhang
|
2023-02-17T05:55:32Z
|
http://arxiv.org/abs/2302.08710v1
|
# Cross-Domain Label Propagation for Domain Adaptation with Discriminative Graph Self-Learning
###### Abstract
Domain adaptation manages to transfer the knowledge of well-labeled source data to unlabeled target data. Many recent efforts focus on improving the prediction accuracy of target pseudo-labels to reduce conditional distribution shift. In this paper, we propose a novel domain adaptation method, which infers target pseudo-labels through cross-domain label propagation, such that the underlying manifold structure of two domain data can be explored. Unlike existing cross-domain label propagation methods that separate domain-invariant feature learning, affinity matrix constructing and target labels inferring into three independent stages, we propose to integrate them into a unified optimization framework. In such way, these three parts can boost each other from an iterative optimization perspective and thus more effective knowledge transfer can be achieved. Furthermore, to construct a high-quality affinity matrix, we propose a discriminative graph self-learning strategy, which can not only adaptively capture the inherent similarity of the data from two domains but also effectively exploit the discriminative information contained in well-labeled source data and pseudo-labeled target data. An efficient iterative optimization algorithm is designed to solve the objective function of our proposal. Notably, the proposed method can be extended to semi-supervised domain adaptation in a simple but effective way and the corresponding optimization problem can be solved with the identical algorithm. Extensive experiments on six standard datasets verify the significant superiority of our proposal in both unsupervised and semi-supervised domain adaptation settings.
domain adaptation, transfer learning, label propagation, discriminative graph learning, domain-invariant feature learning.
## I Introduction
One common assumption of statistical learning theory is that the training data and test data are drawn from an identical feature distribution, which may be violated in many situations. Moreover, in practical applications, collecting labeled training data is often expensive and time-consuming. Thus, there is a strong demand to leverage the knowledge from a source domain with sufficient labels to help design an effective model for the unlabeled target domain data, which follows a different feature distribution. To this end, considerable efforts have been devoted to domain adaptation [1], and impressive progress has been made in various tasks, _e.g._, object recognition [2, 3, 4], semantic segmentation [5, 6], and sentiment analysis [7, 8].
The goal of domain adaptation is to mitigate the distribution discrepancy between the source and target domains, such that the classifier could be applicable across two domains. To accomplish this, numerous works [10, 11, 12, 13, 14] have been devoted to learning a domain-invariant space where distribution discrepancy can be significantly reduced via minimizing a distance metric, _e.g._, the widely used maximum mean discrepancy (MMD) [15]. Along this line, JDA [10] is a pioneering method, which aims to reduce the joint distribution shift between two domains by simultaneously aligning the marginal distribution and conditional distribution. Inheriting the core idea of minimizing joint distribution discrepancy, tremendous subsequent studies following JDA [11, 12, 14] focus on further reducing the conditional distribution discrepancy by improving the prediction accuracy of target pseudo-labels. Despite the brilliant achievements in the literature, most of them generally overlook the underlying data manifold structure in the process of inferring data labels on the target domain, thus making the performance of domain adaptation far from satisfactory.
More recently, to explore the data distribution structure, several studies [12, 16, 17, 18] innovatively propose to infer target pseudo-labels by cross-domain label propagation [19]. Generally, these methods follow a multi-stage paradigm in each iteration: 1) projecting the source and target data into a domain-invariant common feature space; 2) constructing an affinity matrix by calculating the sample similarity in the projected space with a predefined metric, _e.g._, the Gaussian kernel similarity [16, 17]; 3) assigning pseudo-labels for target data via propagating the labels of source data with the constructed affinity matrix. Although improved performance has been achieved by these methods, they still suffer from three crucial issues:
* **Issue 1** The domain-invariant feature learning, affinity matrix constructing and target labels inferring are separated into three independent stages. Thus, the correlation among these three parts could not be fully exploited.
* **Issue 2** Constructing the affinity matrix with a predefined metric may not capture the inherent similarity of samples in both domains, which might seriously affect the effectiveness of cross-domain label propagation.
* **Issue 3** During the construction of affinity matrix, the discriminative information contained in the ground-truth labels of source data, as well as in the pseudo-labels of target data is less explored.
In this study, we propose a novel domain adaptation method called Cross-domain label propagation with Discriminative Graph Self-learning (CDGS) to remedy the above three issues. As illustrated in Fig. 1, to tackle the first issue, we propose to formulate the three parts of cross-domain label propagation into a unified optimization framework, which learns domain-invariant features, constructs affinity matrix and infers target labels simultaneously. In the unified framework, these three parts can assist each other from an iterative optimization perspective. For the second issue, inspired by [20, 21], we resort to a graph self-learning strategy, which assigns adaptive neighbors for each sample according to the local distance in the projected feature space. In such way, the underlying data manifold structure of two domains could be captured more effectively. To handle the third issue, for well-annotated source data, we enforce the learned connected subgraph to have a block diagonal structure, which means that only source samples within the same category are allowed to be connected, while the connection weight of source samples between different categories is forcibly set to 0. In this manner, the discriminative information of source data can be exploited to the maximum extent. Beyond that, inspired by [21, 22], we further impose the label smoothness constraint during the graph self-learning, such that the weakly supervised information contained in target pseudo-labels can be well inserted into the adaptive graph.
It is noteworthy that, except for unsupervised domain adaptation (UDA), our CDGS could be readily extended to the semi-supervised domain adaptation (SDA) scenario where some labeled target samples are available. Interestingly, the extended SDA model could be solved with the same algorithm as UDA. To sum up, we list our contributions in fourfolds:
1. We propose a novel cross-domain label propagation method for domain adaptation named CDGS, which integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into a unified optimization framework. Through the joint optimization, the three parts could boost each other and thus more effective knowledge transfer can be achieved.
2. To construct a high-quality affinity matrix in CDGS, we propose a discriminative graph self-learning strategy, which can not only adaptively capture the local connectivity structure of data from two domains but also effectively explore the discriminative information.
3. An efficient optimization algorithm is designed to solve the objective function of our CDGS. In addition to UDA, we further extend CDGS to the semi-supervised scenario in a direct but effective way and solve the extended model with the identical optimization algorithm.
4. Extensive experiments on six standard datasets verify that the proposed CDGS can consistently outperform the state-of-the-art methods in both UDA and SDA settings.
The rest of this paper is organized as follows. Section II provides a brief review on related domain adaptation and label propagation methods. Section III introduces the proposed CDGS approach, the optimization algorithm, the computational complexity and the extension to SDA. Extensive experimental analysis is presented in Section IV. Finally, this paper is summarized in Section V.
## II Related Work
In this section, we review the related works in terms of domain adaptation and label propagation, and highlight the difference between the previous works and our proposal.
### _Domain Adaptation_
Domain adaptation aims to leverage the knowledge from a well-labeled source domain to an unlabeled but related target domain. In general, domain adaptation can be grouped as UDA and SDA. In UDA, no labeled target samples are available. While in SDA, the target domain contains few labeled samples.
Generally, existing UDA methods can be roughly divided into three categories: instance reweighting [23, 24], classifier adaptation [25, 26] and feature adaptation [9, 10, 11, 12] methods. Instance reweighting methods assign source samples with different weights to reduce the distribution shift between two domains. Classifier adaptation methods adapt
Figure 1: Flowchart of our proposed CDGS. We integrate domain-invariant feature learning, adaptive graph learning and cross-domain label propagation into a unified optimization framework. Besides, in order to construct a high-quality affinity matrix in our CDGS, we further propose a discriminative graph self-learning strategy. To be specific, instead of predefining the similarity metric, our proposal could adaptively assign neighbors for each sample according to the local distance in the projected feature space. To fully explore the discriminative information contained in well-labeled source data and pseudo-labeled target data, we further impose block diagonal structure constraint on source data and label smoothness constraint on two domain data.
the classifier trained on source data to target data. Feature adaptation methods seek a common feature space [10] or latent intermediate subspaces [27] to make the two domains have similar distributions. The proposed CDGS falls into the former line of feature adaptation methods, thus we focus on reviewing the works related to it. Among existing works, TCA [9] proposes to align the marginal distribution between two domains with the MMD metric for the first time. Following this idea, JDA [10] further considers the conditional distribution, such that joint distribution alignment can be achieved. To boost the classification performance, several subsequent works propose to employ the discriminative information by encouraging intra-class compactness and inter-class dispersion [11] simultaneously or promoting domain-irrelevant class clustering [12]. To refine the target pseudo-labels to further mitigate the conditional distribution discrepancy, several recent works attempt to exploit the geometric structure underlying the data manifold by assigning target pseudo-labels via cross-domain label propagation [12, 16, 17, 18] or performing label propagation just on the target domain [13, 28], and promising performance has been achieved by them.
Our CDGS also employs cross-domain label propagation strategy to assign target pseudo-labels. However, CDGS is significantly different from these methods. First, CDGS integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into a unified optimization formulation while [12, 16, 17] separate the three parts into independent stages, and [18] only combines the domain-invariant feature learning and target labels inferring. Through the joint optimization in our CDGS, the three parts could benefit from each other to yield a superior performance. Second, CDGS presents a novel self-learning strategy to construct a discriminative graph. Specifically, the neighbors of each sample are adaptively assigned according to the local distance, which is calculated based on the projected features and label information of source and target data. Besides, only source samples within the same class are enforced to be connected to exploit the source discriminative information. Thus, the discriminative graph can not only faithfully capture the inherent local connectivity structure of samples but also effectively explore the discriminative information contained in source ground-truth labels and target pseudo-labels, which is beneficial to effective target pseudo-labels assignment.
In the past few years, deep domain adaptation methods have attracted considerable interest and different strategies have been proposed to align deep features. For example, DAN [29] exploits the multikernel MMD to reduce the marginal distribution discrepancy in the reproducing kernel Hilbert space (RKHS). Based on this framework, JAN [30] proposes to align the joint distribution between two domains. To capture the fine-grained information, DSAN [31] further aligns the relevant subdomain distributions within the same category in two domains based on a local MMD. Different from them, DANN [32] tries to learn domain agnostic feature representations with adversarial learning. Later, MADA [33] trains a class-wise domain discriminator for each class. To enhance positive transfer and relieve negative transfer, Wang _et al_. [34] introduced a self-adaptive re-weighted adversarial approach to promote domain alignment in terms of conditional distribution. However, these deep methods may confront the challenges of long training time and massive resource consumption while CDGS is faster and can achieve excellent performance by just using off-the-shelf deep features.
Many methods have also been developed for SDA [35, 36, 37]. For instance, MMDT [35] learns the transformation matrix and classifier parameters jointly by making samples within the same class have high similarity. CDLS [36] aligns the conditional distribution by selecting representative landmarks. OBTL [37] is a Bayesian transfer learning framework, which relates the two domains by joint prior density. The proposed CDGS can be readily extended to SDA. Specifically, we take the labeled and unlabeled target data as a whole. In such case, we can estimate target class means more accurately, which can result in more accurate conditional distribution alignment. Besides, as a common strategy in semi-supervised learning, reliable connections between labeled and unlabeled data are built by discriminative graph self-learning, thus the knowledge from labeled samples can be propagated to the unlabeled ones. Moreover, the resulting optimization problem has the same formula as that of the unsupervised setting, thus they can be solved with the same optimization algorithm.
### _Label Propagation_
The goal of label propagation is to propagate the label information of limited labeled samples to amounts of unlabeled samples through graph. In the graph, a vertex represents a sample and the weight of the edge between two vertexes measures the similarity of the corresponding samples.
GFHF [39] and LGC [40] are two classical methods. Both of them first use the Gaussian kernel similarity to build the affinity matrix and then utilize label propagation to predict the unknown labels via Gaussian fields and harmonic functions, or local and global consistency. However, they cannot exploit the relationship between the affinity matrix and the label information of samples due to the two separate stages. To overcome this limitation, STSSL [22] integrates the affinity matrix construction and the inference of unknown labels into one unified optimization framework to exploit the correlation between them. Following this idea, AWSSL [21] further proposes to adaptively assign the neighbors of each sample and effectively extract robust features by auto-weighted feature selection.
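As a reference point for the label propagation machinery discussed here, the sketch below implements the closed-form solution in the spirit of LGC [40]: a symmetrically normalized affinity matrix spreads the labels of the labeled samples to the unlabeled ones. Variable names and the value of \(\alpha\) are illustrative assumptions.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9):
    """W: (n, n) affinity matrix; Y: (n, C) one-hot labels (zero rows for unlabeled)."""
    d = W.sum(axis=1)
    S = W / (np.sqrt(np.outer(d, d)) + 1e-12)            # symmetric normalization D^-1/2 W D^-1/2
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, (1 - alpha) * Y)
    return F.argmax(axis=1)                              # predicted class per sample
```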
There are several classifier adaptation methods, which borrow the advantages of cross-domain label propagation to assign target pseudo-labels, _e.g._, ARTL [25] and MEDA [26]. ARTL is also a unified framework, which learns an adaptive classifier by jointly optimizing the source structural risk, joint distribution alignment and manifold regularization, and is therefore relevant to our CDGS. However, CDGS differs from ARTL in three aspects. First, ARTL learns the classifier with the original features while CDGS conducts subspace learning, which is more flexible and effective. Second, CDGS learns domain-invariant features, constructs the affinity matrix and infers target labels jointly to fully exploit the relationship among them. Third, CDGS and ARTL use different strategies to construct the affinity matrix. Specifically, CDGS introduces
a self-learning strategy to capture the intrinsic similarity of samples as well as effectively explore the label information of source and target data. By contrast, ARTL just utilizes the predefined metric to calculate the similarity for all samples.
## III Proposed Method
In this section, the key notations throughout this paper are first introduced. Then, we describe the details of the proposed CDGS. Next, we design an iterative algorithm to solve the optimization problem and provide the computational complexity analysis. Finally, we extend our method to SDA.
### _Notations_
In UDA, the labeled source data \(\mathcal{D}_{s}=\{\mathbf{X}_{s},\mathbf{Y}_{s}\}=\{(\mathbf{x}_{si},y_{si}) \}_{i=1}^{n_{s}}\) and unlabeled target data \(\mathcal{D}_{t}=\{\mathbf{X}_{t}\}=\{\mathbf{x}_{tj}\}_{j=1}^{n_{t}}\) are given, where \(\mathbf{x}_{si}\in\mathbb{R}^{m}\) is a source sample (\(y_{si}\in\mathbb{R}\) is its label), \(\mathbf{x}_{tj}\in\mathbb{R}^{m}\) is a target sample, \(n_{s}\) and \(n_{t}\) represent the number of source and target samples. The entire data matrix is denoted as \(\mathbf{X}=[\mathbf{X}_{s},\mathbf{X}_{t}]=\{\mathbf{x}_{i}\}_{i=1}^{n}\), where \(n=n_{s}+n_{t}\). For clarity, the key notations throughout this paper and their descriptions are summarized in Table I.
### _Problem Formulation_
In this paper, we propose the CDGS framework to address domain adaptation problem, which integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into a unified optimization objective. The overall framework of our CDGS can be formulated as:
\[\min_{\mathbf{P},\mathbf{S},\mathbf{F}}\Omega(\mathbf{P},\mathbf{X})+\alpha \Theta(\mathbf{P},\mathbf{S},\mathbf{X})+\beta\Psi(\mathbf{F},\mathbf{S})+ \gamma\Phi(\mathbf{P}) \tag{1}\]
where \(\mathbf{P}\in\mathbb{R}^{m\times d}\) denotes the projection matrix, \(\mathbf{F}\in\mathbb{R}^{n\times C}\) is the label matrix for all data and \(\mathbf{S}\in\mathbb{R}^{n\times n}\) represents the affinity matrix. \(\Omega(\mathbf{P},\mathbf{X})\) is employed to learn domain-invariant features. \(\Theta(\mathbf{P},\mathbf{S},\mathbf{X})\) is utilized to adaptively construct the affinity matrix with the projected features. \(\Psi(\mathbf{F},\mathbf{S})\) is used to infer the target labels by cross-domain label propagation. \(\Phi(\mathbf{P})\) is the regularization term for the projection matrix to avoid overfitting. \(\alpha\), \(\beta\) and \(\gamma\) are hyperparameters to balance the importance of different parts. As we can see, by integrating the three parts into the joint optimization objective, they could well communicate with each other to achieve more effective knowledge transfer. Next, more details about the three parts are presented.
#### III-B1 Domain-invariant Feature Learning
When \(\mathbf{X}_{s}\) and \(\mathbf{X}_{t}\) are drawn from different feature distributions, it is crucial to reduce the distribution discrepancy between two domains, such that the classifier trained on source data can be directly applied to target domain. To measure the distribution discrepancy, numerous metrics have been proposed. Among them, MMD [15] is probably the most widely used one. In the projected space, the MMD distance between two domains can be calculated as the distance between the sample means of the source and target data [11]. Considering the large distribution discrepancy across domains, we minimize the marginal distribution distance and the conditional distribution distance simultaneously, and denote them by \(\mathcal{L}_{mmd}^{m}\) and \(\mathcal{L}_{mmd}^{c}\), respectively. With the MMD metric, marginal distribution distance can be stated as:
\[\mathcal{L}_{mmd}^{m} =\|\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\mathbf{P}^{\mathrm{T}} \mathbf{x}_{si}-\frac{1}{n_{t}}\sum_{j=1}^{n_{t}}\mathbf{P}^{\mathrm{T}} \mathbf{x}_{tj}\|_{2}^{2} \tag{2}\] \[=\mathrm{tr}(\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{M}_{0} \mathbf{X}^{\mathrm{T}}\mathbf{P})\]
where \(\mathrm{tr}(\cdot)\) is the trace operator, and \(\mathbf{M}_{0}\in\mathbb{R}^{n\times n}\) represents the marginal MMD matrix calculated as:
\[\mathbf{M}_{0}=\begin{bmatrix}\frac{1}{n_{s}^{2}}\mathbf{1}_{n_{s}\times n_{s}}&- \frac{1}{n_{s}n_{t}}\mathbf{1}_{n_{s}\times n_{t}}\\ -\frac{1}{n_{s}n_{t}}\mathbf{1}_{n_{t}\times n_{s}}&\frac{1}{n_{t}^{2}}\mathbf{ 1}_{n_{t}\times n_{t}}\end{bmatrix} \tag{3}\]
The calculation of the conditional MMD distance requires the labels of target samples, which are generally unavailable in the domain adaptation task. To remedy this issue, we employ the target pseudo-labels instead of the unavailable true labels to compute the conditional distribution distance as follows:
\[\mathcal{L}_{mmd}^{c} =\sum_{c=1}^{C}\|\frac{1}{n_{s}^{c}}\sum_{\mathbf{x}_{si}\in \mathbf{X}_{s}^{c}}\mathbf{P}^{\mathrm{T}}\mathbf{x}_{si}-\frac{1}{n_{t}^{c}} \sum_{\mathbf{x}_{tj}\in\mathbf{X}_{t}^{c}}\mathbf{P}^{\mathrm{T}}\mathbf{x}_{tj}\|_{2}^{2} \tag{4}\] \[=\mathrm{tr}(\mathbf{P}^{\mathrm{T}}\mathbf{X}(\sum_{c=1}^{C} \mathbf{M}_{c})\mathbf{X}^{\mathrm{T}}\mathbf{P})\]
where \(C\) is the number of classes, \(\mathbf{M}_{c}\in\mathbb{R}^{n\times n}\) is conditional MMD matrix defined as:
\[(\mathbf{M}_{c})_{ij}=\begin{cases}\frac{1}{(n_{s}^{c})^{2}},&\mathrm{if}\ \mathbf{x}_{i},\mathbf{x}_{j}\in\mathbf{X}_{s}^{c};\\ \frac{1}{(n_{t}^{c})^{2}},&\mathrm{if}\ \mathbf{x}_{i},\mathbf{x}_{j}\in\mathbf{X}_{t}^{c};\\ -\frac{1}{n_{s}^{c}n_{t}^{c}},&\mathrm{if}\ \mathbf{x}_{i}\in\mathbf{X}_{s}^{c}\wedge \mathbf{x}_{j}\in\mathbf{X}_{t}^{c};\\ -\frac{1}{n_{s}^{c}n_{t}^{c}},&\mathrm{if}\ \mathbf{x}_{j}\in\mathbf{X}_{s}^{c} \wedge\mathbf{x}_{i}\in\mathbf{X}_{t}^{c};\\ 0,&\mathrm{otherwise}\end{cases} \tag{5}\]
\(\mathbf{X}_{s}^{c}\) represents all source samples in class \(c\), and \(n_{s}^{c}\) is the corresponding number of samples. Similar definitions can be applied for target samples according to the pseudo-labels. Denote \(\mathbf{M}=\sum_{c=0}^{C}\mathbf{M}_{c}\), then we have the following formula:
\[\Omega(\mathbf{P},\mathbf{X})=\mathrm{tr}(\mathbf{P}^{\mathrm{T}}\mathbf{X} \mathbf{M}\mathbf{X}^{\mathrm{T}}\mathbf{P}) \tag{6}\]
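The combined MMD matrix \(\mathbf{M}=\sum_{c=0}^{C}\mathbf{M}_{c}\) can be assembled directly from the source labels and the current target pseudo-labels. A minimal NumPy sketch is given below; the label encoding (classes \(0,\dots,C-1\)) and the skipping of empty classes are our own assumptions.

```python
import numpy as np

def mmd_matrix(ys, yt_pseudo, C):
    """Build M = M0 + sum_c Mc (Eqs. (3) and (5)) from source labels `ys` and
    target pseudo-labels `yt_pseudo`; classes are assumed encoded as 0..C-1."""
    ns, nt = len(ys), len(yt_pseudo)
    n = ns + nt
    e0 = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    M = np.outer(e0, e0)                       # marginal part M0
    for c in range(C):
        src = np.flatnonzero(ys == c)          # source samples in class c
        tgt = ns + np.flatnonzero(yt_pseudo == c)
        if len(src) == 0 or len(tgt) == 0:     # skip classes missing in either domain
            continue
        ec = np.zeros(n)
        ec[src] = 1.0 / len(src)
        ec[tgt] = -1.0 / len(tgt)
        M += np.outer(ec, ec)                  # conditional part Mc
    return M

# Omega(P, X) = tr(P^T X M X^T P) can then be evaluated directly, e.g.:
# omega = np.trace(P.T @ X @ mmd_matrix(ys, yt_pseudo, C) @ X.T @ P)
```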
#### III-B2 Graph Self-Learning with Source Domain Discriminative Structure Preserving
Obviously, the quality of the affinity matrix is crucial to the performance of cross-domain label propagation. Most previous works [12, 16, 28] use the same strategy to construct it, which calculates the similarity between all samples with a predefined similarity metric, _e.g._, the heat-kernel similarity [12, 16]. This strategy may not capture the inherent similarity of samples, which hinders the correctness of cross-domain label propagation and results in serious mis-classification of target data. The wrong pseudo-labels will further mislead the conditional distribution alignment in the next iteration, which ultimately results in significant performance degradation. To tackle this issue, inspired by several recent works [20, 21], we adopt a self-learning strategy, which constructs the affinity matrix by assigning adaptive neighbors to each sample according to the local distance in the projected space. In light of this, the optimization objective of graph self-learning can be stated as follows:
\[\begin{split}&\min_{\mathbf{S}}\sum\nolimits_{i=1}^{n}(\sum\nolimits_{j=1}^{n}\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}S_{ij}+\lambda_{i}\|\mathbf{S}_{i,:}\|_{2}^{2})\\ &=\min\limits_{\mathbf{S}}\operatorname{tr}(\mathbf{P}^{\text{T}}\mathbf{XLX}^{\text{T}}\mathbf{P})+\|\mathbf{A}\mathbf{S}\|_{F}^{2}\\ & s.t.\quad\mathbf{S}\mathbf{1}_{n}=\mathbf{1}_{n},\ 0\leq S_{ij}\leq 1 \end{split} \tag{7}\]
where \(\mathbf{z}_{i}=\mathbf{P}^{\text{T}}\mathbf{x}_{i}\) is the projection of sample \(\mathbf{x}_{i}\), \(\mathbf{S}_{i,:}\) represents the \(i\)-th row of \(\mathbf{S}\), and \(\mathbf{A}=\operatorname{diag}(\sqrt{\lambda_{1}},\sqrt{\lambda_{2}},...,\sqrt{\lambda_{n}})\). \(\mathbf{L}\) is the graph Laplacian matrix calculated as \(\mathbf{L}=\mathbf{D}-\mathbf{S}\), where \(\mathbf{D}\) is a diagonal matrix with the \(i\)-th element \(D_{ii}=\sum_{j=1}^{n}S_{ij}\). An \(F\)-norm regularization term is imposed on the \(i\)-th (\(i=1,2,\ldots,n\)) row of \(\mathbf{S}\), and the corresponding regularization parameter is \(\lambda_{i}\), which can be determined automatically and will be elaborated in Section III-C. Then, we can obtain the following formula for \(\Theta(\mathbf{P},\mathbf{S},\mathbf{X})\):
\[\Theta(\mathbf{P},\mathbf{S},\mathbf{X})=\operatorname{tr}(\mathbf{P}^{\text {T}}\mathbf{XLX}^{\text{T}}\mathbf{P})+\|\mathbf{A}\mathbf{S}\|_{F}^{2} \tag{8}\]
In addition, several previous works [11, 12, 16] have shown that the performance of domain adaptation can be significantly enhanced if the discriminative information of source data is exploited. To this end, we adopt an intuitive strategy for the labeled source data: only the samples belonging to the same category are allowed to be connected. In such a case, each source sample can be connected with two parts, one of which is the source samples within the identical class and the other is all target samples. For simplicity, we fix the probability as \(\delta\) and \(1-\delta\) for these two parts, respectively. That is, when \(i\leq n_{s}\), we have \(\sum_{j=1}^{n_{s}}S_{ij}=\delta\) and \(\sum_{j=n_{s}+1}^{n}S_{ij}=1-\delta\), where \(\delta\in[0,1]\) is a hyperparameter to control the partition of probability. In this way, the learned adaptive discriminative graph has the following structure:
\[\mathbf{S}=\begin{bmatrix}\mathbf{S}_{ss}&\mathbf{S}_{st}\\ \mathbf{S}_{ts}&\mathbf{S}_{tt}\end{bmatrix} \tag{9}\]

where \(\mathbf{S}_{ss}\in\mathbb{R}^{n_{s}\times n_{s}}\) only connects source samples from the same class and each of its rows sums to \(\delta\), while \(\mathbf{S}_{st}\in\mathbb{R}^{n_{s}\times n_{t}}\) connects each source sample to all target samples and each of its rows sums to \(1-\delta\).
Finally, by combining Eq. (6), Eq. (11), Eq. (13) and Eq. (14), we obtain the final formulation of our CDGS:
\[\begin{split}\min_{\mathbf{P},\mathbf{S},\mathbf{F}}& \operatorname{tr}(\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{M}\mathbf{X}^{ \mathrm{T}}\mathbf{P})+\alpha(\operatorname{tr}(\mathbf{P}^{\mathrm{T}} \mathbf{X}\mathbf{L}\mathbf{X}^{\mathrm{T}}\mathbf{P})\\ &+\|\mathbf{A}\mathbf{S}\|_{F}^{2})+\beta\operatorname{tr}( \mathbf{F}^{\mathrm{T}}\mathbf{L}\mathbf{F})+\gamma\|\mathbf{P}\|_{F}^{2}\\ s.t.&\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{H} \mathbf{X}^{\mathrm{T}}\mathbf{P}=\mathbf{I}_{\mathrm{d}},\ \mathbf{S}\mathbf{1}_{n}=\mathbf{1}_{n},0\leq S_{ij}\leq 1,\\ &\mathbf{F}_{l}=\mathbf{F}_{s},\ \sum_{j=1}^{n_{s}}S_{ij}= \delta,\ i\leq n_{s},\\ & S_{ij}=0,\ i,j\leq n_{s}\wedge y_{si}\neq y_{sj}\end{split} \tag{15}\]
where \(\mathbf{H}\) is the centering matrix defined as \(\mathbf{H}=\mathbf{I}_{n}-\frac{1}{n}\mathbf{1}_{n\times n}\). The first constraint is to maximize the variance of all data [10] in the projected space, which is inspired by the principal component analysis. Similar to [11], \(\mathbf{M}\) and \(\mathbf{L}\) can be normalized into the same scale. Thus, we set \(\alpha=1.0\) for all cases.
### _Optimization Procedure_
In problem (15), we need to optimize three variables \(\mathbf{P}\), \(\mathbf{S}\), and \(\mathbf{F}\). As it is not jointly convex with the three variables, we update each variable alternatively with the others fixed. To be specific, we solve each subproblem as follows.
**1. P-Subproblem:** When we fix \(\mathbf{S}\) and \(\mathbf{F}\), the optimization problem (15) becomes:
\[\begin{split}\min_{\mathbf{P}}&\operatorname{tr}(\mathbf{P}^{\mathrm{T}}(\mathbf{X}\mathbf{M}\mathbf{X}^{\mathrm{T}}+\alpha\mathbf{X}\mathbf{L}\mathbf{X}^{\mathrm{T}}+\gamma\mathbf{I}_{m})\mathbf{P})\\ s.t.&\quad\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{H}\mathbf{X}^{\mathrm{T}}\mathbf{P}=\mathbf{I}_{d}\end{split} \tag{16}\]
We employ the Lagrange multiplier technique to solve it. The corresponding Lagrangian function can be formulated as:
\[\begin{split} L(\mathbf{P},\mathbf{\Pi})&= \operatorname{tr}(\mathbf{P}^{\mathrm{T}}(\mathbf{X}\mathbf{M}\mathbf{X}^{\mathrm{T}}+\alpha\mathbf{X}\mathbf{L}\mathbf{X}^{\mathrm{T}}+\gamma\mathbf{I}_{m})\mathbf{P})\\ &\quad+\operatorname{tr}((\mathbf{I}_{d}-\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{H}\mathbf{X}^{\mathrm{T}}\mathbf{P})\mathbf{\Pi})\end{split} \tag{17}\]
where \(\mathbf{\Pi}=\operatorname{diag}(\pi_{1},\pi_{2},...,\pi_{d})\in\mathbb{R}^{d \times d}\) is a diagonal matrix and each element is a Lagrange Multiplier. By setting the gradient of (17) with respect to \(\mathbf{P}\) to zero, we obtain:
\[(\mathbf{X}\mathbf{M}\mathbf{X}^{\mathrm{T}}+\alpha\mathbf{X}\mathbf{L}\mathbf{X}^{\mathrm{T}}+\gamma\mathbf{I}_{m})\mathbf{P}=\mathbf{X}\mathbf{H}\mathbf{X}^{\mathrm{T}}\mathbf{P}\mathbf{\Pi} \tag{18}\]
Then the optimal solution can be obtained by taking the eigenvectors of (18) corresponding to the \(d\) smallest eigenvalues.
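A small SciPy sketch of this **P**-subproblem is shown below; the ridge term added to \(\mathbf{X}\mathbf{H}\mathbf{X}^{\mathrm{T}}\) (which is rank-deficient because of centering) is a numerical safeguard of our own, not part of Eq. (18).

```python
import numpy as np
from scipy.linalg import eigh

def solve_P(X, M, L, H, d, alpha=1.0, gamma=0.1):
    """P-subproblem (Eq. (18)): generalized eigenproblem
    (X M X^T + alpha X L X^T + gamma I) p = eta (X H X^T) p,
    keeping the eigenvectors of the d smallest eigenvalues."""
    m = X.shape[0]
    A = X @ M @ X.T + alpha * (X @ L @ X.T) + gamma * np.eye(m)
    B = X @ H @ X.T
    B = B + 1e-8 * np.eye(m)     # small ridge keeps B positive definite for eigh
    vals, vecs = eigh(A, B)      # eigenvalues returned in ascending order
    return vecs[:, :d]           # P in R^{m x d}
```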
**2. S-Subproblem:** When \(\mathbf{P}\) and \(\mathbf{F}\) are fixed, the optimization problem (15) with regard to \(\mathbf{S}\) is equal to problem (13). Actually, problem (13) can be divided into \(n\) subproblems and each of them is formulated as:
\[\begin{split}\min_{\mathbf{S}_{i,:}}&\sum_{j=1}^{n}( \|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}S_{ij}+\beta\|\mathbf{F}_{i}-\mathbf{ F}_{j}\|_{2}^{2}S_{ij})\\ &\quad+\lambda_{i}\|\mathbf{S}_{i,:}\|_{2}^{2}\\ s.t.&\mathbf{S}_{i,:}\mathbf{1}_{n}=1,\ 0\leq S_{ij}\leq 1,\ \sum_{j=1}^{n_{s}}S_{ij}=\delta,\ i\leq n_{s}\\ &\quad S_{ij}=0,\ i,j\leq n_{s}\wedge y_{si}\neq y_{sj}\end{split} \tag{19}\]
**Case 1:** First of all, we show how to obtain the optimal solution when \(i>n_{s}\). We define \(A_{ij}=\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}+\beta\|\mathbf{F}_{i}- \mathbf{F}_{j}\|_{2}^{2}\), then the above problem can be reformulated as:
\[\min_{\mathbf{S}_{i,:}\mathbf{1}_{n}=1,\ 0\leq S_{ij}\leq 1}\|\mathbf{S}_{i,:}+\frac{\mathbf{A}_{i,:}}{2\lambda_{i}}\|_{2}^{2} \tag{20}\]
The corresponding Lagrangian function is
\[L(\mathbf{S}_{i,:},\mu,\boldsymbol{\eta})=\|\mathbf{S}_{i,:}+\frac{\mathbf{A}_{i,:}}{2\lambda_{i}}\|_{2}^{2}-\mu(\mathbf{S}_{i,:}\mathbf{1}_{n}-1)-\mathbf{S}_{i,:}\boldsymbol{\eta}^{\mathrm{T}} \tag{21}\]
where \(\mu\) and \(\boldsymbol{\eta}\) are the Lagrangian multipliers. To utilize the local structure of data and relieve computation burden, we learn a sparse \(\mathbf{S}_{i,:}\), i.e., each sample is only locally connected with its \(k\)-nearest neighbors. Based on the KKT condition, problem (21) has a closed-form solution as follows:
\[S_{ij}=\max(z-\frac{A_{ij}}{2\lambda_{i}},0) \tag{22}\]
where \(z=\frac{1}{k}+\frac{1}{2k\lambda_{i}}\sum_{j=1}^{k}\tilde{A}_{ij}\) and \(\tilde{A}_{ij}\) is the entry of matrix \(\tilde{\mathbf{A}}\), which is obtained by sorting the elements of each row of \(\mathbf{A}\) from small to large. To ensure that each \(\mathbf{S}_{i,:}\) has exactly \(k\) nonzero elements, we could set \(z-\tilde{A}_{i,k+1}/(2\lambda_{i})=0\), then we have:
\[\lambda_{i}=\frac{1}{2}(k\tilde{A}_{i,k+1}-\sum_{j=1}^{k}\tilde{A}_{ij}) \tag{23}\]
Substituting Eq. (23) into Eq. (22), we can obtain:
\[S_{ij}=\max(\frac{\tilde{A}_{i,k+1}-A_{ij}}{k\tilde{A}_{i,k+1}-\sum_{j=1}^{k} \tilde{A}_{ij}},0) \tag{24}\]
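The closed-form update of Eqs. (22)-(24) can be implemented row by row. In the sketch below, the cost vector \(\mathbf{A}_{i,:}\) is assumed to have its self-entry excluded (e.g., set to \(+\infty\)) beforehand, a detail the derivation leaves implicit.

```python
import numpy as np

def update_target_row(a_row, k):
    """Case 1 closed form (Eq. (24)): given the cost vector A_{i,:} for sample i,
    return the sparse probabilistic affinities S_{i,:} with exactly k nonzero
    entries; the row sums to 1 by construction."""
    a_sorted = np.sort(a_row)                           # ascending costs
    denom = k * a_sorted[k] - a_sorted[:k].sum()        # k*A~_{i,k+1} - sum_{j<=k} A~_{ij}
    s = np.maximum((a_sorted[k] - a_row) / max(denom, 1e-12), 0.0)
    return s
```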
**Case 2:** When \(i,j\leq n_{s}\), Eq.(19) can be reformulated as:
\[\begin{split}&\min_{\mathbf{S}_{i,:}}\sum_{j=1}^{n_{s}}\|\mathbf{z}_ {i}-\mathbf{z}_{j}\|_{2}^{2}S_{ij}+\beta\|\mathbf{F}_{i}-\mathbf{F}_{j}\|_{2}^{2} S_{ij}+\lambda_{i}S_{ij}^{2}\\ s.t.&\sum_{j=1}^{n_{s}}S_{ij}=\delta,\ 0\leq S_{ij}\leq 1,\ S_{ij}=0,\ y_{si}\neq y_{sj} \end{split} \tag{25}\]
To satisfy the last constraint, we could set \(A_{ij}=+\infty\) if \(y_{si}\neq y_{sj}\). Similar to problem (20), we can obtain the closed-form solution of problem (25):
\[S_{ij}=\delta\max(\frac{\tilde{A}_{i,k_{1}+1}-A_{ij}}{k_{1}\tilde{A}_{i,k_{1}+1 }-\sum_{j=1}^{k_{1}}\tilde{A}_{ij}},0) \tag{26}\]
where \(k_{1}=\min(k,n_{s}^{y_{si}})\), since in practice some classes may contain very few samples.
**Case 3:** When \(i\leq n_{s},j>n_{s}\), problem (19) can be rewritten as:
\[\begin{split}\min_{\mathbf{S}_{i,:}}&\sum_{j=n_{s} +1}^{n}\|\mathbf{z}_{i}-\mathbf{z}_{j}\|_{2}^{2}S_{ij}+\beta\|\mathbf{F}_{i}- \mathbf{F}_{j}\|_{2}^{2}S_{ij}\\ &+\lambda_{i}S_{ij}^{2}\\ s.t.&\sum_{j=n_{s}+1}^{n}S_{ij}=1-\delta,\ 0\leq S_{ij} \leq 1\end{split} \tag{27}\]
Similarly, the closed-form solution of problem (27) is:
\[S_{ij}=(1-\delta)\mathrm{max}(\frac{\tilde{A}_{i,k+1}-A_{ij}}{k\tilde{A}_{i,k+ 1}-\sum_{j=1}^{k}\tilde{A}_{ij}},0) \tag{28}\]
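The same closed form therefore yields a full source row of \(\mathbf{S}\): the within-class source block is rescaled by \(\delta\) (Eq. (26)) and the target block by \(1-\delta\) (Eq. (28)). A hedged sketch, with a simple guard for very small classes, is given below.

```python
import numpy as np

def update_source_row(a_src_same_class, a_tgt, k, delta):
    """Cases 2 and 3 (Eqs. (26) and (28)): a source row of S connects only to
    same-class source samples (total mass delta) and to all target samples
    (total mass 1 - delta). The inputs are the corresponding cost sub-vectors
    A_{ij}; the two affinity sub-vectors are returned."""
    def sparse_simplex(a, kk, mass):
        a_sorted = np.sort(a)
        kk = min(kk, len(a) - 1)                 # guard for tiny classes (k1 in the text)
        denom = kk * a_sorted[kk] - a_sorted[:kk].sum()
        return mass * np.maximum((a_sorted[kk] - a) / max(denom, 1e-12), 0.0)

    s_src = sparse_simplex(a_src_same_class, k, delta)    # Eq. (26)
    s_tgt = sparse_simplex(a_tgt, k, 1.0 - delta)         # Eq. (28)
    return s_src, s_tgt
```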
**3. F-Subproblem:** With fixed \(\mathbf{P}\) and \(\mathbf{S}\), the optimization problem (15) with respect to \(\mathbf{F}\) is equivalent to solving problem (11). According to [39], we only need to update \(\mathbf{F}_{t}\). We split \(\mathbf{L}\) into four blocks: \(\mathbf{L}=\begin{bmatrix}\mathbf{L}_{ss}&\mathbf{L}_{st}\\ \mathbf{L}_{ts}&\mathbf{L}_{tt}\end{bmatrix}\), where \(\mathbf{L}_{ss}\in\mathbb{R}^{n_{s}\times n_{s}}\), \(\mathbf{L}_{st}\in\mathbb{R}^{n_{s}\times n_{t}}\), \(\mathbf{L}_{ts}\in\mathbb{R}^{n_{t}\times n_{s}}\) and \(\mathbf{L}_{tt}\in\mathbb{R}^{n_{t}\times n_{t}}\). Then, the optimal solution of problem (11) is:
\[\mathbf{F}_{t}=-\mathbf{L}_{tt}^{-1}\mathbf{L}_{ts}\mathbf{F}_{s} \tag{29}\]
Eventually, the target pseudo-labels can be obtained based on the following decision function:
\[\widehat{y}_{ti}=\mathrm{argmax}_{j}\ (\mathbf{F}_{t})_{ij} \tag{30}\]
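Eqs. (29) and (30) amount to one linear solve per iteration. A minimal NumPy sketch (assuming \(\mathbf{L}_{tt}\) is invertible, which holds for a connected graph) is:

```python
import numpy as np

def propagate_labels(L, F_s):
    """F-subproblem (Eqs. (29)-(30)): solve F_t = -L_tt^{-1} L_ts F_s and read
    off target pseudo-labels. L is the (n x n) graph Laplacian with source
    rows/columns first; F_s is the (n_s x C) one-hot source label matrix."""
    ns = F_s.shape[0]
    L_ts = L[ns:, :ns]
    L_tt = L[ns:, ns:]
    F_t = -np.linalg.solve(L_tt, L_ts @ F_s)   # avoids forming the explicit inverse
    return F_t, F_t.argmax(axis=1)             # soft scores and pseudo-labels
```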
The affinity matrix \(\mathbf{S}\) is initialized according to (26) in the original feature space. We summarize the detailed optimization steps of the proposed CDGS in Algorithm 1.
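To make the alternation explicit, the following sketch composes the helper sketches above (`mmd_matrix`, `solve_P`, the row-wise updates of \(\mathbf{S}\), and `propagate_labels`) into the loop of Algorithm 1; `initial_affinity` and `update_affinity` are hypothetical wrappers around the row-wise updates of Cases 1-3 and are not spelled out here.

```python
import numpy as np

def cdgs(Xs, ys, Xt, C, d, k=20, delta=0.8, alpha=1.0, beta=0.1, gamma=0.1, T=10):
    """High-level alternation of Algorithm 1 (a sketch, not the reference code)."""
    X = np.hstack([Xs, Xt])                      # features as columns, source first
    ns = Xs.shape[1]
    n = X.shape[1]
    H = np.eye(n) - np.full((n, n), 1.0 / n)     # centering matrix
    F_s = np.eye(C)[ys]                          # one-hot source label matrix
    S = initial_affinity(X, ys, k, delta)        # hypothetical helper: Eq. (26) in the original space
    yt_pseudo = None
    for _ in range(T):
        L = np.diag(S.sum(axis=1)) - S           # graph Laplacian L = D - S
        F_t, yt_pseudo = propagate_labels(L, F_s)        # F-subproblem, Eqs. (29)-(30)
        M = mmd_matrix(ys, yt_pseudo, C)                 # MMD matrix with pseudo-labels
        P = solve_P(X, M, L, H, d, alpha, gamma)         # P-subproblem, Eq. (18)
        Z = P.T @ X                                      # projected features
        F = np.vstack([F_s, F_t])
        S = update_affinity(Z, F, ys, k, delta, beta)    # hypothetical helper: row-wise Cases 1-3
    return yt_pseudo
```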
### _Computational Complexity Analysis_
To find the optimal solutions in Algorithm 1, we need to solve three subproblems. The complexity of each subproblem in each iteration is analyzed as follows: first, constructing and solving the eigen-decomposition problem (18) for the \(\mathbf{P}\)-subproblem costs \(\mathcal{O}(n^{2}m+dm^{2})\); then, updating the affinity matrix \(\mathbf{S}\) takes \(\mathcal{O}(n^{2}\mathrm{log}(n))\); finally, the complexity of obtaining the estimated target label matrix \(\mathbf{F}_{t}\) and the pseudo-labels \(\mathbf{\hat{Y}}\) is \(\mathcal{O}(n_{t}^{3})\). Thus, the overall computational complexity of our proposal is \(\mathcal{O}(Tn^{2}m+Tdm^{2}+Tn^{2}\mathrm{log}(n)+Tn_{t}^{3})\), where \(T\) is the number of iterations.
### _Extension to Semi-supervised Domain Adaptation_
We denote the target data as \(\mathbf{X}_{t}=[\mathbf{X}_{l},\mathbf{X}_{u}]\), where \(\mathbf{X}_{l}=\{\mathbf{x}_{li}\}_{i=1}^{n_{l}}\) is the labeled data and \(\mathbf{X}_{u}=\{\mathbf{x}_{uj}\}_{j=1}^{n_{u}}\) is the unlabeled data. Then, by substituting \(\mathbf{X}_{s}\) and \(\mathbf{X}_{t}\) into Eq. (15), the semi-supervised extension of our CDGS can be stated as:
\[\begin{split}\min_{\mathbf{P},\mathbf{S},\mathbf{F}}&\mathrm{tr}(\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{M}\mathbf{X}^{\mathrm{T}}\mathbf{P})+\alpha(\mathrm{tr}(\mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{L}\mathbf{X}^{\mathrm{T}}\mathbf{P})\\ &+\|\mathbf{A}\mathbf{S}\|_{F}^{2})+\beta\mathrm{tr}(\mathbf{F}^{\mathrm{T}}\mathbf{L}\mathbf{F})+\gamma\|\mathbf{P}\|_{F}^{2}\\ s.t.&\ \mathbf{P}^{\mathrm{T}}\mathbf{X}\mathbf{H}\mathbf{X}^{\mathrm{T}}\mathbf{P}=\mathbf{I}_{d},\ \mathbf{S}\mathbf{1}_{n}=\mathbf{1}_{n},0\leq S_{ij}\leq 1,\\ &\ \mathbf{F}_{l}=\mathbf{F}_{s},\ \sum_{j=1}^{n_{s}}S_{ij}=\delta,\ i\leq n_{s},\\ & S_{ij}=0,\ i,j\leq n_{s}\wedge y_{si}\neq y_{sj}\end{split} \tag{31}\]
where \(n=n_{s}+n_{l}+n_{u}\). Obviously, Eq. (31) owns the same formula with Eq. (15), thus they can be solved with the identical algorithm.
Our semi-supervised extension is effective for two reasons: 1) the estimation of the target class means is more accurate when some labeled target samples are available, which enables more accurate conditional distribution alignment; 2) through Eq. (7), reliable connections between the labeled and unlabeled data are built, which transfer the knowledge of the labeled samples to the unlabeled ones via cross-domain label propagation.
## IV Experiments
In this section, we first describe the six benchmark datasets. Then, the details of the experimental setup are given. Next, we present the evaluation results for UDA, the ablation study, the parameter sensitivity and convergence analysis. Finally, the results for SDA are reported. The source code of this paper is available at [https://drive.google.com/drive/folders/19Fqxxtf9MTcd-1emXstZE01G60JUyAst?usp=sharing](https://drive.google.com/drive/folders/19Fqxxtf9MTcd-1emXstZE01G60JUyAst?usp=sharing).
### _Datasets and Descriptions_
We adopt six benchmark datasets in our experiments, including Office31, Office-Caltech10, ImageNet-VOC2007, Office-Home, MNIST-USPS and PIE, which are widely used cross-domain object, digit and face datasets. Overall descriptions about these datasets are summarized in Table II. We will introduce more details for each dataset as follows.
_Office31_[42] contains 4,110 images with 31 categories collected from three domains: Amazon (A), DSLR (D) and Webcam (W). Amazon images are downloaded from the online merchants. DSLR images are captured by a digital SLR camera while Webcam images are recorded by a web camera. Following [43], we utilize the AlexNet-FC\({}_{7}\) features1 fine-tuned on the source domain.
Footnote 1: [https://github.com/VisionLearningGroup/CORAL/tree/master/dataset](https://github.com/VisionLearningGroup/CORAL/tree/master/dataset)
Footnote 2: [http://booimaging.info/assets/GFK.zip](http://booimaging.info/assets/GFK.zip)
_Office-Caltech10_[27] includes 2,533 images in 10 shared categories from the Office31 dataset and the Caltech256 (C) dataset, which is a widely used dataset for object recognition. Following [27], we exploit the SURF features2. Besides, the VGG-FC\({}_{6,7}\) features3 provided by [44] are used.
Footnote 3: https://[email protected]/sherath/ils.git
_PIE_[45] involves 41,638 facial images of 68 people with different poses, illuminations, and expression changes. Following [10], we focus on five poses: C05 (left), C07 (upward), C09 (downward), C27 (frontal) and C29 (right). All images were converted to grayscale and cropped to the size 32 \(\times\) 32. We adopt the pixel features4.
Footnote 4: [https://github.com/jindongwang/transferlearning/tree/master/data](https://github.com/jindongwang/transferlearning/tree/master/data)
_MNIST-USPS_ is made up of two handwritten digit image datasets: MNIST (M) and USPS (U). Following [10], we randomly choose 2,000 images in MNIST and 1,800 images in USPS and utilize the pixel features5.
Footnote 5: [https://github.com/VisionLearningGroup/CORAL/tree/master/dataset](https://github.com/VisionLearningGroup/CORAL/tree/master/dataset)
_ImageNet-VOC2007_ consists of two large image recognition datasets, ImageNet (I) and VOC2007 (V). Following [26], we extract all images from five common classes of the two
datasets, _i.e._, bird, cat, chair, dog and person. The DeCAF\({}_{6}\) feature4 is employed.
Footnote 4: [https://www.csie.ntu.edu.tw/~cjlin/liblinear/](https://www.csie.ntu.edu.tw/~cjlin/liblinear/)
_Office-Home_[46] includes 15,585 object images in 65 categories from four domains: Art (artistic depictions of objects, Ar), Clipart (clipart images, Cl), Product (object images without background, Pr) and Real-World (images captured by a regular camera, Re). We employ the Resnet50 features extracted by a Resnet50 model [47] pretrained on ImageNet.
For simplicity, in our experiments, each cross-domain task is denoted by S\(\rightarrow\) T, where S represents the source domain and T is the target domain.
### _Experimental Setup_
#### IV-B1 Comparison Methods
For UDA, we compare the performance of our CDGS with massive methods, which can be classified into two categories: _shallow methods_: 1-NN, SVM5, JDA [10], DICD [11], PACET [14], MCS [48], DTLC [13], ARTL [25], MEDA [26], DGA-DA [16] and DICE\({}_{\rm lp}\)[12], _deep methods_: the method of [34], DRCN [49], DSAN [31], the method of [50], and GSP [51]. For SDA, the competitors include MMDT [35], CDLS [36], ILS [44], TFMLK-S [52] and OBTL [37].
Footnote 5: [https://www.csie.ntu.edu.tw/~cjlin/liblinear/](https://www.csie.ntu.edu.tw/~cjlin/liblinear/)
#### IV-B2 Training Protocol
We exploit all source data for training, known as full protocol, on all datasets in Table II. Besides, regarding the Office-Caltech10 dataset, two kinds of sampling protocols are also adopted, where only few labeled source samples per category are employed for training. For the first sampling protocol, similar to [12], we use the SURF features and 20 instances per class are randomly selected for domain A while 8 instances per class for other domains as sources. For the second sampling protocol, following [48], VGG-FC\({}_{6}\) features are utilized and 8 samples per category are selected for domain D while 20 samples per category for the others.
#### IV-B3 Parameter Setting
In UDA and SDA, sufficient labeled target samples are unavailable, thus we cannot perform a standard cross-validation procedure to decide the optimal parameters. Following [11], we report the best results by grid-searching the hyper-parameter space. For all competitors, we run the public codes provided by the authors using the default parameters or following the given procedure to tune parameters. For all approaches requiring a subspace dimension, the optimal value is searched in \(d\in\{1C,2C,3C,4C,5C,6C\}\), where \(C\) is the number of classes for the corresponding dataset. The regulation parameter for projection matrix is searched in \(\gamma\in\{0.005,0.01,0.05,0.1,0.5,1.0,5.0,10.0\}\). For the other parameters in our CDGS, we fix \(\alpha=1.0\), \(k=20\), \(\delta=0.8\), \(T=10\) and set \(\beta=0.5\) for Office-Home and Office-Caltech10 datasets, \(\beta=0.01\) for PIE dataset and \(\beta=0.1\) for other datasets. We also provide the optimal parameters for UDA setting: Office31 (\(d=124\), \(\gamma=0.01\)), Office-Caltech10 (\(d=30\), \(\gamma=0.5\) for SURF, \(d=30\), \(\gamma=0.1\) for SURF split, \(d=40\), \(\gamma=0.1\) for VGG-FC\({}_{6,7}\) split), MNIST-USPS (\(d=40\), \(\gamma=0.5\)), ImageNet-VOC2007 (\(d=30\), \(\gamma=0.01\)), PIE (\(d=340\), \(\gamma=0.005\)) and Office-Home (\(d=130\), \(\gamma=0.005\)).
#### IV-B4 Evaluation Metric
Following many previous works [10, 11, 12], we adopt the classification accuracy of target data as the evaluation metric, which is computed as:
\[\mathrm{Accuracy}=\frac{|\{\mathbf{x}:\mathbf{x}\in\mathbf{X}_{t}\wedge\tilde{y}=y\}|}{|\{\mathbf{x}:\mathbf{x}\in\mathbf{X}_{t}\}|} \tag{32}\]
where \(\mathbf{x}\) is a target sample, \(y\) is the truth label of \(\mathbf{x}\), and \(\tilde{y}\) is the corresponding pseudo-label.
### _Unsupervised Domain Adaptation_
#### IV-C1 The Experimental Results on Unsupervised Domain Adaptation

**Results on Office31 Dataset.**
The classification accuracies of all methods on this dataset are listed in Table III, where the highest accuracy for each task is boldfaced. The results of DGA-DA are copied from [12]. It is observed that CDGS performs much better than all competitors. Specifically, CDGS achieves 78.9\(\%\) average accuracy, which leads the second best method PACET by 2.3\(\%\). DICE\({}_{\rm lp}\) and DGA-DA both explore the geometric structure underlying data manifold to assign target pseudo-labels by cross-domain label propagation. However, CDGS further integrates domain-invariant feature learning, affinity matrix constructing and target labels inferring into one framework. Therefore, CDGS could make the three parts interact with each other to yield a superior performance. Besides, CDGS employs a self-learning strategy to construct a discriminative graph to capture the inherent similarity of samples as well as explore the label information of source and target data. In such case, the discriminative graph can transfer source knowledge to target domain more effectively.
**Results on Office-Caltech10 Dataset.**
The results on Office-Caltech10 dataset with SURF features under the full protocol are shown in table IV. In terms of the average accuracy, CDGS owns a large advantage, which improves 2.7\(\%\) over the best competitor PACET. CDGS works the best for 7 out of 12 tasks while PACET only wins two tasks, which verifies the significant effectiveness of CDGS. Compared with these methods which employ cross-domain label propagation to infer target labels, _i.e._, ARTL, MEDA, DGA-DA and
DICE\({}_{\rm{ip}}\), the improvement of CDGS is 3.6\(\%\), which illustrates the superiority of our CDGS over the counterparts.
Then, we also compare our CDGS with several competitors under different splitting protocols with different features. The results over 20 random splits are illustrated in table V. For SURF features, CDGS performs much better than other methods in terms of the average accuracy. CDGS achieves 49.7\(\%\) average performance, which owns 3.6\(\%\) improvement compared with the best competitors, MCS and DICE\({}_{\rm{ip}}\). Notably, CDGS performs the best on all tasks except for C\(\rightarrow\)A. For VGG-FC\({}_{6,7}\) features, CDGS outperforms all comparison methods again. Carefully comparing the results of SURF and VGG-FC\({}_{6,7}\) features, we can find that CDGS can consistently achieve good performance regardless of the features, which illustrates that CDGS holds better generalization capacity.
**Results on MNIST-USPS and ImageNet-VOC2007 Datasets.** To verify the effectiveness of CDGS on digit images, we further conduct experiments on MNIST-USPS dataset. The comparison results are listed in Table VI. CDGS achieves the highest average accuracy compared with all competitors. We can observe that CDGS is much superior to feature adaptation approaches, _e.g._, DGA-DA and DICE\({}_{\rm{ip}}\), and owns 5.7\(\%\) improvement in terms of the average accuracy, which demonstrates the superiority of our proposal. The classification results of all methods on ImageNet-VOC2007 dataset are also provided in Table VI. CDGS performs much better than other methods. Moreover, compared with the related methods, _i.e._, ARTL, MEDA and DICE\({}_{\rm{ip}}\), CDGS shows large improvement up to 8.3\(\%\), which confirms the advancement of our CDGS.
**Results on PIE Dataset.** Table VII summarizes the classification performance of CDGS and other methods on PIE dataset. We can observe that CDGS performs better than all competitors in terms of the average performance. Specifically, CDGS achieves the highest average classification accuracy, which owns 0.9\(\%\) improvement against the best competitor DTLC. Besides, CDGS wins 12 out of 20 tasks while DTLC only performs the best on 7 tasks. It is worth noting that compared with ARTL, MEDA, DGA-DA and DICE\({}_{\rm{ip}}\), CDGS achieves 6.3\(\%\) improvement, which indicates that our CDGS is more conducive to cross-domain face recognition tasks.
**Results on Office-Home Dataset.** For this large-scale dataset, we use the Resnet50 model pretrained on ImageNet to extract features. The classification results are shown in Table VIII. Here, we also report the results of five recent deep domain adaptation methods, which take the Resnet50 model as the backbone. It is clearly observed that our CDGS outperforms all traditional and deep comparison methods in average accuracy. Specifically, CDGS leads the best traditional competitor MCS by 1.7\(\%\). In addition, CDGS is the best method on 5 out of 12 tasks while MCS only wins one task, which verifies the significant effectiveness of our proposal against the traditional competitors. Compared with the best deep competitor, CDGS achieves 1.6\(\%\) improvement, which validates the superiority of our proposal when equipped with off-the-shelf deep features.
For a complete understanding, we summarize the average accuracy of several competitors and our CDGS on all benchmark datasets under the full protocol in Table IX. We discover that CDGS obtains the highest average accuracy, leading the best competitor MEDA by 5.8\(\%\), which validates that our CDGS is capable of addressing various DA tasks effectively.
#### IV-C2 Ablation Study
To understand our method more deeply, we propose three variants of CDGS: _a_) CDGS\({}^{\rm sp}\), Separates domain-invariant feature learning, affinity matrix constructing
and target labels inferring into three independent stages and constructs the affinity matrix with a Predefined similarity metric, _i.e._, the Gaussian kernel similarity with kernel width 1.0; _b_) CDGS\({}^{\rm dg}\), integrates Domain-invariant feature learning and Graph self-learning into one framework, _i.e._, combining Eq. (6), Eq. (7) and Eq. (14); _c_) CDGS\({}^{\rm ds}\), jointly performs Domain-invariant feature learning and graph self-learning with Source domain discriminative structure preserving, _i.e._, unifying Eq. (6), Eq. (10) and Eq. (14). It is worth noting that compared with CDGS\({}^{\rm ds}\), our CDGS further considers the label smoothness constraint during the discriminative graph self-learning. In Table X, we list the average classification accuracy of CDGS and the three variants on all datasets under the full protocol. Based on this table, a more detailed analysis of our CDGS is presented as follows.
**Effectiveness of Graph Self-learning.** As we can see, CDGS\({}^{\rm dg}\) is superior to CDGS\({}^{\rm sp}\) on all datasets except for PIE, which verifies the effectiveness of graph self-learning. Particularly, compared with CDGS\({}^{\rm sp}\), CDGS\({}^{\rm dg}\) achieves 5.9\(\%\) improvement on ImageNet-VOC2007 dataset and 3.3\(\%\) improvement on Office-Home dataset respectively, which confirms the superiority of graph self-learning. By integrating the domain-invariant feature learning and graph self-learning into one framework, we can capture the inherent similarity connections among source and target samples more effectively, and thus improve the classification performance of cross-domain label propagation.
**Effectiveness of Graph Self-learning with Source Discriminative Structure Preserving.** We can see that CDGS\({}^{\rm ds}\) performs much better than CDGS\({}^{\rm dg}\) in terms of average accuracy, which achieves a large improvement of 5.1\(\%\). Notably, on datasets MNIST-USPS and PIE, CDGS\({}^{\rm ds}\) even achieves more than 12.9\(\%\) advancement. The above results demonstrate that preserving the source discriminative structure in graph self-learning process is of vital importance to improve the quality of affinity matrix, such that the knowledge from source domain can be transferred to target domain more effectively.
**Effectiveness of Label Smoothness Constraint for Discriminative Graph Self-learning.** It is observed that our CDGS outperforms CDGS\({}^{\rm ds}\) on 5 out of all 6 datasets and achieves superior performance in terms of average accuracy. This phenomenon indicates that the introduction of the weakly supervised information contained in target pseudo-labels helps to yield a discriminative graph with higher quality, and thus the source knowledge can be propagated to the target domain more adequately.
#### IV-C3 Parameter Sensitivity and Convergence Analysis
Three tunable parameters are involved in our CDGS: \(d\), \(\gamma\), \(\beta\). We have conducted extensive parameter sensitivity analysis on object, digit and face datasets by varying one parameter at a time over a wide range while fixing the other parameters to their optimal values. We display the results of tasks C\(\rightarrow\)D (SURF), U\(\rightarrow\)M, C29\(\rightarrow\)C05 and Cl\(\rightarrow\)Pr in Fig. 2 (a) \(\sim\) (c). To verify the effectiveness of our CDGS, the results of the best competitor for each task are also provided as the dashed lines.
First, we run CDGS as \(d\) varies in \(d\in[1C,2C,...,10C]\), where \(C\) is the number of classes for the corresponding task. From Fig. 2 (a), we can observe that our CDGS is robust to different values of \(d\). We empirically find that \(d\in[2C,7C]\) is an optimal choice. Then, we investigate the sensitivity of \(\gamma\) by varying it from 0.001 to 10.0. Theoretically, when \(\gamma\to 0\), the optimization problem is ill-defined, while when \(\gamma\rightarrow\infty\), the domain-invariant feature learning and discriminative graph self-learning are not performed, thus our CDGS can not learn robust features for cross-domain label propagation. As we can see from Fig. 2 (b), determining the optimal value of \(\gamma\) is infeasible and a reasonable one will make CDGS outperform the best competitor generally. Finally, we vary the value of \(\beta\) from 0.001 to 10.0 to evaluate its influence. Theoretically, too small (large) values of \(\beta\) make the label smoothness constraint (graph self-learning with the projected features) ineffective, which hinders us to construct a high-quality affinity matrix. A proper value of \(\beta\) helps to capture the intrinsic similarity of samples, thereby improving the performance of cross-domain label propagation. From Fig. 2 (c), we can discover that \(\beta\in[0.01,5.0]\) is an optimal choice. Moreover, we display the convergence analysis in Fig. 2 (d), where the maximum iteration is 15. We can observe that our CDGS can quickly converge within several iterations.
### _Semi-supervised Domain Adaptation_
#### IV-D1 Results on Office-Caltech10 dataset
We follow the standard experimental setup of [35], where 20 samples per class are randomly selected for amazon domain while 8 for the others as the sources. Besides, three labeled target samples per category are selected for training with the rest for testing. For fair comparison, we use the train/test splits released by [35]. The average accuracies for each task over 20 random splits are shown in Table XI. We also report the performance of OBTL [37], which to our knowledge, is the best method on this dataset. We can observe that in terms of the average accuracy, CDGS obtains 2.2\(\%\) improvement over OBTL. Besides, CDGS works the best for 9 out of all 12 tasks while OBTL just wins one task, which verifies the significant effectiveness of our semi-supervised extension. Carefully comparing the results of Table XI and Table V, we find that when few labeled
target samples are available, CDGS obtains 11.4\(\%\) gain in the average classification performance, which highlights the value of our extension.
#### IV-D2 Results on MNIST-USPS dataset
We follow the protocol of [52]. Specifically, all source samples are utilized for training, and 2 labeled target samples per category are also selected for training with the remaining to be recognized. The average classification accuracies over 5 random splits are reported in Table XII, where some results are copied from [52]. We can observe that our CDGS is the best method for all tasks and achieves 83.5 \(\%\) averaged accuracy, leading the second best method CDLS by 9.7\(\%\), which confirms the superiority of our semi-supervised extension.
## V Conclusion and Future Work
In this paper, a novel domain adaptation approach called CDGS is proposed, which infers target pseudo-labels by cross-domain label propagation. Different from existing cross-domain label propagation methods that separate domain-invariant learning, affinity matrix constructing and target labels inferring into three independent stages, our CDGS integrates these three parts into one unified optimization framework, such that they can assist each other to achieve more effective knowledge transfer. Furthermore, to construct a high-quality affinity matrix in CDGS, we propose a discriminative graph self-learning strategy, which can capture the inherent data manifold structure by adaptively calculating sample similarity in the projected space and exploring the discriminative information contained in well-labeled source data and pseudo-labeled target data. An iterative optimization algorithm is designed to solve the CDGS optimization problem. We further extend our CDGS to the SDA scenario in a direct but effective way and the corresponding optimization problem can be solved with the identical optimization algorithm. Extensive experimental results on six benchmark datasets have verified the significant superiority of our CDGS against the competitors in both UDA and SDA settings.
|
2310.07632
|
Prompt Backdoors in Visual Prompt Learning
|
Fine-tuning large pre-trained computer vision models is infeasible for
resource-limited users. Visual prompt learning (VPL) has thus emerged to
provide an efficient and flexible alternative to model fine-tuning through
Visual Prompt as a Service (VPPTaaS). Specifically, the VPPTaaS provider
optimizes a visual prompt given downstream data, and downstream users can use
this prompt together with the large pre-trained model for prediction. However,
this new learning paradigm may also pose security risks when the VPPTaaS
provider instead provides a malicious visual prompt. In this paper, we take the
first step to explore such risks through the lens of backdoor attacks.
Specifically, we propose BadVisualPrompt, a simple yet effective backdoor
attack against VPL. For example, poisoning $5\%$ CIFAR10 training data leads to
above $99\%$ attack success rates with only negligible model accuracy drop by
$1.5\%$. In particular, we identify and then address a new technical challenge
related to interactions between the backdoor trigger and visual prompt, which
does not exist in conventional, model-level backdoors. Moreover, we provide
in-depth analyses of seven backdoor defenses from model, prompt, and input
levels. Overall, all these defenses are either ineffective or impractical to
mitigate our BadVisualPrompt, implying the critical vulnerability of VPL.
|
Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang
|
2023-10-11T16:25:45Z
|
http://arxiv.org/abs/2310.07632v1
|
# Prompt Backdoors in Visual Prompt Learning
###### Abstract
Fine-tuning large pre-trained computer vision models is infeasible for resource-limited users. Visual prompt learning (VPL) has thus emerged to provide an efficient and flexible alternative to model fine-tuning through Visual Prompt as a Service (VPPTaaS). Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction. However, this new learning paradigm may also pose security risks when the VPPTaaS provider instead provides a malicious visual prompt. In this paper, we take the first step to explore such risks through the lens of backdoor attacks. Specifically, we propose BadVisualPrompt, a simple yet effective backdoor attack against VPL. For example, poisoning 5% CIFAR10 training data leads to above 99% attack success rates with only negligible model accuracy drop by 1.5%. In particular, we identify and then address a new technical challenge related to interactions between the backdoor trigger and visual prompt, which does not exist in conventional, model-level backdoors. Moreover, we provide in-depth analyses of seven backdoor defenses from model, prompt, and input levels. Overall, all these defenses are either ineffective or impractical to mitigate our BadVisualPrompt, implying the critical vulnerability of VPL.
## 1 Introduction
Large pre-trained computer vision models have shown great success in various (downstream) tasks [3, 16, 19, 76]. However, the conventional approach to adapting pre-trained models based on fine-tuning model parameters requires substantial computation and memory. To address such limitations, inspired by the success of prompt learning in NLP [25, 39, 63, 40, 20], recent work has introduced visual prompt learning (VPL) [4, 9, 10, 20, 31, 70] as an efficient alternative for model adaptation. Given a pre-trained visual model and specific downstream data, VPL learns a global visual prompt, which comprises pixel perturbations, usually in the shape of a padding or patch. This learned visual prompt is then placed on any downstream test image for model prediction. Recent work has shown the competitive performance of VPL relative to parameter fine-tuning in various tasks [4, 43, 22].
A suitable visual prompt is critical to the VPL performance and normally needs considerable efforts to optimize [4, 9, 10, 20, 70]. Therefore, Visual Prompt as a Service (VPPTaaS) is promising to assist non-expert users to adapt to this new paradigm, as in the NLP domain [1, 15]. In a typical scenario, users provide their data to the VPPTaaS provider to optimize a prompt. Then, the prompt is returned to users and can be used together with a pre-trained visual model for prediction. However, despite its effectiveness and convenience, VPPTaaS may bring unknown security risks to downstream users when the VPPTaaS provider intentionally supplies a malicious visual prompt.
In this paper, we take the first step to systematically study the security risks of VPPTaaS. We focus on the backdoor attack since it is widely recognized as a major security risk of machine learning models [24, 33]. Specifically, we propose BadVisualPrompt, the first backdoor attack against VPL. Different from conventional backdoors, which are implanted in the model parameters, our backdoor is implanted in the (pixel-space) visual prompt. With such a backdoored visual prompt, the pre-trained model would behave abnormally (e.g., misclassifying the input) when a pre-defined backdoor trigger appears in the input but normally on a clean input.
Our systematic studies are conducted from both the attack and defense perspectives. From the attack perspective, we demonstrate the effectiveness of BadVisualPrompt in various settings with diverse model architectures, datasets, and VPL variants (with different prompt templates or label mapping strategies). For instance, poisoning 5% CIFAR10 training data leads to an average attack success rate (ASR) of above 99% with only about a 1.5% drop in model clean accuracy (CA). In particular, we point out that trigger-prompt interactions should be studied since both the trigger and prompt are placed on the same input image. As a case study, we analyze the impact of trigger-prompt distance on the attack performance and find that the ASR may drop by 80% when the trigger appears distant from the visual prompt. We further show that optimizing the trigger pattern can restore the ASR in this challenging case.
From the defense perspective, we provide in-depth analyses of seven backdoor detection and mitigation methods from three different levels: model, prompt, and input. In general, we find that these defenses are either ineffective or impractical against our new attack, BadVisualPrompt. In particular, we investigate a new, prompt-level detection method that is based on visual discrimination of backdoored and clean prompts. We find that although this new prompt-level detection method
achieves almost 100% accuracy, a large number of training prompts and substantial computational resources are required.
Note that, the major contribution of this paper is not proposing new attack techniques, but systematically and empirically evaluating the security risks of VPL, a brand new learning paradigm for large vision models. Our work provides significant findings and in-depth analysis which might inspire further security research in VPL.
## 2 BadVisualPrompt
### Visual Prompt Learning
Recall that pixel space [4] is continuous. Inspired by the continuous prompts (sometimes called soft prompts in NLP), visual prompt learning [4] aims at learning a visual prompt \(\mathbf{w}\) to adapt the input image \(\mathbf{x}\) to a pre-trained image model \(M\) (see Figure 1 for illustration). The downstream task learns a function \(f(M,\mathbf{w},\mathbf{x})\) that combines the frozen pre-trained model \(M\) and the visual prompt \(\mathbf{w}\) to predict the result for an input image \(\mathbf{x}\). Concretely, the visual prompt is optimized on the downstream training dataset \(\mathcal{D}\) with the following objective function:
\[\mathbf{w}^{*}=\operatorname*{arg\,min}_{\mathbf{w}}\mathbb{E}_{(\mathbf{x}, y)\in\mathcal{D}}[\mathcal{L}(f(M,\mathbf{w},\mathbf{x}),y)], \tag{1}\]
where the loss function \(\mathcal{L}(\cdot)\) is normally the cross-entropy loss. If the downstream task is a classification task, visual prompt learning also requires a pre-defined label mapping \(\mathbf{\pi}\) to interpret the prompting results (see Chen et al. [10]). Moreover, a visual prompt can be seen as additive perturbations to the input image. Users may use any form of visual templates (e.g., patch and padding) to represent visual prompts in practice. Note that one recent study [34] adopts a different paradigm that forms the prompt as additional model parameters instead of pixel perturbations at the input space. This paradigm and its variants are therefore out of our research scope. A detailed review of related work on (visual) prompt learning and backdoor attacks/defenses can be found in Appendix A.
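To make the formulation concrete, the following PyTorch sketch optimizes a padding-shaped visual prompt for a frozen classifier under Eq. (1); the prompt width, learning rate, clamping to \([0,1]\), and the identity label mapping are illustrative assumptions rather than prescribed choices, and `model` and the data are placeholders for the user's own setup.

```python
import torch
import torch.nn.functional as F

pad = 30
prompt = torch.zeros(1, 3, 224, 224, requires_grad=True)   # learnable pixel perturbation w
mask = torch.ones(1, 1, 224, 224)
mask[:, :, pad:-pad, pad:-pad] = 0                          # restrict w to the padding frame
optimizer = torch.optim.Adam([prompt], lr=0.1)              # only the prompt is optimised

def apply_prompt(x):
    # x: resized input images in [0, 1], shape (B, 3, 224, 224)
    return torch.clamp(x + prompt * mask, 0.0, 1.0)

def train_step(model, x, y):
    model.eval()                                            # pre-trained weights stay frozen
    logits = model(apply_prompt(x))                         # label mapping: downstream class i -> index i
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```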
### Threat Model
Following existing backdoor studies [24], we assume that the attacker is a malicious VPPTaaS service provider. The victim, i.e., the downstream user, outsources the prompt optimization to the VPPTaaS provider and may get back a backdoored visual prompt. We assume the pre-trained model is publicly available (to both the attacker and victim).
**Attacker's Goals.** The attacker aims to implant backdoors in the visual prompt. When such a backdoored visual prompt is returned to the victim, their downstream prediction is correct for clean inputs but incorrect for the triggered inputs. As such, the attacker tends to simultaneously achieve two goals, i.e., achieving attack success and maintaining model utility.
**Attacker's Knowledge and Capabilities.** To get a task-specific visual prompt, the user must supply detailed downstream task information, including limited downstream data, to the service provider. Therefore, we assume that the attacker has knowledge of the downstream dataset. We also assume that the attacker has full control of the prompt learning process and can define the form of the visual prompt (e.g., shape and location).
### Attack Method
**Data Poisoning.** Our attack method, namely BadVisualPrompt, crafts a backdoored visual prompt by manipulating the user-uploaded dataset (denoted as \(\mathcal{D}_{\text{clean}}\)) in the prompting process. We randomly sample a proportion of \(p\) from \(\mathcal{D}_{\text{clean}}\) to constitute a poisoned dataset \(\mathcal{D}_{\text{poison}}\). Specifically, for each sampled instance \((\mathbf{x},y)\in\mathcal{D}_{\text{clean}}\), we form its corresponding poisoned version \((\mathbf{x}_{\text{poison}},t)\in\mathcal{D}_{\text{poison}}\), where \(\mathbf{x}_{\text{poison}}=\mathcal{P}(\mathbf{x},\Delta,t)\) and \(\mathcal{P}(\cdot)\) is a function to add the backdoor trigger \(\Delta\) to the given image \(\mathbf{x}\) and to assign an incorrect, target label \(t\). For the trigger \(\Delta\), following the common practice [24], we adopt a small patch with iterative white and black colors placed at the corner.
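A minimal sketch of the poisoning step is given below; the exact checkerboard pattern, trigger size, and dataset interface are illustrative assumptions rather than the paper's exact implementation.

```python
import torch

def add_trigger(x, trigger_size=14):
    """Paste a checkerboard-style white/black trigger at the bottom-right corner of a
    batch of images x in [0, 1] with shape (B, 3, 224, 224)."""
    x = x.clone()
    patch = torch.zeros(3, trigger_size, trigger_size)
    patch[:, ::2, ::2] = 1.0            # alternating white pixels
    patch[:, 1::2, 1::2] = 1.0
    x[:, :, -trigger_size:, -trigger_size:] = patch
    return x

def make_poison_set(dataset, target_label, ratio=0.05, seed=0):
    """Build D_poison: a random `ratio` fraction of samples, each stamped with the
    trigger and relabeled to the target class. `dataset[i]` is assumed to return a
    (3, 224, 224) tensor in [0, 1] and an integer label."""
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(dataset), generator=g)[: int(ratio * len(dataset))]
    return [(add_trigger(dataset[i][0].unsqueeze(0)).squeeze(0), target_label)
            for i in idx.tolist()]
```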
**Attack Objective.** The optimization of BadVisualPrompt can be formulated as:
\[\begin{split}\mathbf{w}_{\text{b}}=& \operatorname*{arg\,min}_{\mathbf{w}}\bigl{[}\mathbb{E}_{(\mathbf{x},y)\in \mathcal{D}_{\text{clean}}}\mathcal{L}(f(M,\mathbf{w},\mathbf{x}),y)+\\ &\lambda\cdot\mathbb{E}_{(\mathbf{x}_{\text{poison}},t)\in \mathcal{D}_{\text{poison}}}\mathcal{L}(f(M,\mathbf{w},\mathbf{x}_{\text{ poison}}),t)\bigr{]},\end{split} \tag{2}\]
where the \(\mathcal{L}(\cdot)\) represents the loss function (e.g., cross-entropy loss) in the normal prompting process, and \(\lambda>0\) is a coefficient to balance the model utility (i.e., first term) and attack effectiveness (i.e., second term). Intuitively, a larger \(\lambda\) makes the backdoored visual prompt \(\mathbf{w}_{\text{b}}\) focus more on the attack effectiveness and may exert a larger negative impact on the model utility.
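One optimization step of this objective, reusing the prompting and poisoning sketches above and assuming the optimizer holds only the visual prompt parameters, can be written as:

```python
import torch.nn.functional as F

def backdoor_step(model, apply_prompt, optimizer, x_clean, y_clean, x_poison, t, lam=1.0):
    """One step of Eq. (2): clean utility loss plus the lambda-weighted attack loss."""
    loss = F.cross_entropy(model(apply_prompt(x_clean)), y_clean) \
         + lam * F.cross_entropy(model(apply_prompt(x_poison)), t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```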
**Workflow.** The workflow of our BadVisualPrompt is illustrated in Figure 2. In the training phase, we optimize the backdoored visual prompt using both \(\mathcal{D}_{\text{poison}}\) and \(\mathcal{D}_{\text{clean}}\). In the
Figure 1: Illustration of applying a visual prompt in visual prompt learning (VPL). An input image is combined with the learned visual prompt and then sent to the fixed pre-trained model for prediction. Label mapping is used for addressing the difference between the upstream and downstream tasks. Note that the original image should be first resized to the desired input size of the pre-trained model.
inference phase, the backdoored visual prompt \(\mathbf{w}_{\text{b}}\) is placed on (clean or backdoored) images to feed into the pre-trained model. Specifically, the model can correctly classify the clean input but misclassify the triggered input, into a target class.
## 3 Experiments of Attacks
### Experimental Setups
**Datasets and Models.** We consider three benchmark image datasets: CIFAR10 [2], SVHN [51], and EuroSAT [29]. We use the official training and testing data splits for the CIFAR10 and SVHN datasets. For the EuroSAT dataset, we randomly sample 80% images per class for training and the rest 20% for testing. For pre-trained models, we consider three vision models, i.e., ResNet trained on ImageNet-1K (RN50) [28, 57], Big Transfer (BiT-M) [37], ResNeXt trained on 3.5B Instagram images (Instagram) [49], and also a vision-language model, CLIP [55].
**Prompt Learning and Attack Settings.** We follow Bahng et al. [4] to construct visual prompts. If not mentioned specifically, the visual prompt has a shape of four-edge padding with a width of 30 pixels (on a \(224\times 224\) input image). For label mapping, each pre-trained class index \(i\) corresponds to the same downstream class index \(i\) for the three vision models, and a semantically similar text prompt is constructed for CLIP. For attacking, we place the trigger at the bottom right corner with the size as 1/16 of the input image and set \(\lambda=1.0\) in Equation 2. We consider both single-target and multi-target attack goals. For the single-target goal, we choose "automobile" for CIFAR10, "1" for SVHN, and "forest" for EuroSAT, all mapping to class index 1, and we poison 5% training data. For the multi-target goal, we choose class indexes 1, 3, and 5 for each dataset. We adopt different trigger positions for different targets (i.e., bottom left \(\rightarrow\) 1, bottom center \(\rightarrow\) 3, and bottom right \(\rightarrow\) 5), and we poison 2% training data for each target.
**Evaluation Metrics.** In this work, the pre-trained model is always used together with the visual prompt to give predictions for downstream data. We use Clean Accuracy (CA) and Attack Success Rate (ASR) to measure the performance of our BadVisualPrompt. Here CA represents the percentage of clean test images whose predicted labels are the same as the ground-truth labels, while ASR represents the percentage of backdoored test images whose predicted labels are the same as the target labels. In general, a higher ASR with little impact on CA indicates a more effective backdoor attack. More detailed descriptions of our experimental setups can be found in Appendix B.
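A simple evaluation sketch for both metrics is shown below; for brevity it does not exclude test images whose ground-truth label already equals the target class, which some ASR definitions do.

```python
import torch

@torch.no_grad()
def evaluate(model, apply_prompt, add_trigger, loader, target_label):
    """Clean Accuracy on clean inputs and Attack Success Rate on triggered inputs."""
    correct = success = total = 0
    for x, y in loader:
        clean_pred = model(apply_prompt(x)).argmax(dim=1)
        trig_pred = model(apply_prompt(add_trigger(x))).argmax(dim=1)
        correct += (clean_pred == y).sum().item()
        success += (trig_pred == target_label).sum().item()
        total += y.numel()
    return correct / total, success / total   # (CA, ASR)
```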
### Effectiveness of BadVisualPrompt
As can be seen from Table 1, our BadVisualPrompt achieves ASRs higher than 99% with less than 1% CA drop in most cases. The relatively low ASR (78.47%) of the RN50 model on EuroSAT for the multi-target attack is mainly caused by the low ASR (47.11%) for label index 5. We find that class index 5 has the minimum training samples (i.e., 1,600), so a model that generalizes not very well, i.e., the RN50 with a CA of 79.63%, may yield relatively low attack results.
We further test the impact of the poisoning ratio and trigger size on the attack performance. Detailed results for three datasets and four models are shown in Figure 11 of Appendix C. As expected, in general, the attack performance increases as the poisoning ratio or trigger size increases. Specifically, we find that the attack performance gets saturated for most cases even when the ratio is lower than 3% or the trigger size is \(2\times 2\). Another interesting observation is that CLIP yields better attack performance than the vision models. One exception is when the trigger size is small. We attribute this finding to the high capacities of CLIP and provide detailed explanations about the particularly low results for EuroSAT in Appendix C.
### New Insights into Trigger-Prompt Interactions
In conventional backdoor attacks, the impact of trigger position on the attack performance is negligible [73, 44, 71]. However, in our context, changing the trigger position may lead to different interactions between the trigger and the visual prompt since they are placed on the same input image. As can be seen from Figure 3, our exploratory experiments
Figure 2: Workflow of BadVisualPrompt. In the (a) training phase, the visual prompt is optimized on clean and poisoned data to contain backdoor information. Then, in the (b) inference phase, the backdoored prompt can be applied to triggered images for targeted misclassification.
show that placing the trigger in the central position leads to a significantly low ASR, except for CLIP.
To further analyze the impact of trigger-prompt interactions on the attack performance, we formulate the problem as gradually moving the trigger further away from the prompt, as illustrated in Figure 4. Here we choose a larger trigger size (i.e., \(4\times 4\)) to capture more variances of the trigger-prompt overlap. The \(4\times 4\) trigger on the original \(32\times 32\) image is resized to \(28\times 28\) on the resized \(224\times 224\) image. In addition to the padding prompt, we consider a stripe prompt with the size of \(30\times 224\) and a patch prompt with the size of \(80\times 80\). We define the position of a trigger as \((h,w)\), where \(h/w\) represents the vertical/horizontal coordinate pixel distance from the top left corner of the trigger to that of the resized image.
We measure the trigger-prompt interactions by their overlap (the number of overlapping pixels) and their distance (the minimum pixel distance between them). As can be seen from Table 2, the trigger-prompt overlap has little impact on the attack performance. For example, both the CA and ASR results for padding remain almost the same when the overlap decreases from 784 to 108. In contrast, the trigger-prompt distance has a significant impact. For example, the ASR drops from 84.08% to 17.76% when the distance increases from 26 to 54. We find that the above observation also holds for the frequency-based label mapping strategy [10].
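The overlap and distance statistics reported in Table 2 can be derived from binary masks of the trigger and prompt regions, for example as in the following sketch (the exact distance convention, e.g., pixel gap versus coordinate difference, may differ from ours by one pixel):

```python
import numpy as np

def overlap_and_distance(trigger_mask, prompt_mask):
    # Both masks are boolean arrays of shape (224, 224) marking trigger/prompt pixels.
    overlap = int(np.logical_and(trigger_mask, prompt_mask).sum())
    if overlap > 0:
        return overlap, 0.0
    t_pix = np.argwhere(trigger_mask)   # (N, 2) pixel coordinates
    p_pix = np.argwhere(prompt_mask)    # (M, 2) pixel coordinates
    # Minimum pairwise pixel distance, computed row by row to keep memory small.
    distance = min(np.sqrt(((p_pix - t) ** 2).sum(axis=1)).min() for t in t_pix)
    return overlap, float(distance)
```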
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Prompt} & \multirow{2}{*}{Metric} & \multicolumn{3}{c|}{Single-target attack} & \multicolumn{3}{c}{Multi-target attack} \\ \cline{3-10} & & & CIFAR10 & EuroSAT & SVHN & CIFAR10 & EuroSAT & SVHN \\ \hline \multirow{4}{*}{RN50} & \multirow{2}{*}{Clean} & CA (\%) & 54.99 & 79.63 & 60.95 & 54.99 & 79.63 & 60.95 \\ & & ASR (\%) & 93.33 & 11.67 & 21.46 & 9.80 & 8.48 & 13.45 \\ \cline{2-10} & \multirow{2}{*}{Backdoor} & CA (\%) & 54.75 & 79.33 & 59.91 & 54.29 & 80.19 & 59.41 \\ & & ASR (\%) & 99.94 & 99.99 & 100.00 & 96.19 & 78.47 & 99.22 \\ \hline \multirow{4}{*}{BiT-M} & \multirow{2}{*}{Clean} & CA (\%) & 61.91 & 85.72 & 69.43 & 61.91 & 85.72 & 69.43 \\ & & ASR (\%) & 10.86 & 10.94 & 20.18 & 9.54 & 9.64 & 14.69 \\ \cline{2-10} & \multirow{2}{*}{Backdoor} & CA (\%) & 62.45 & 86.22 & 70.99 & 61.67 & 86.28 & 68.73 \\ & & ASR (\%) & 100.00 & 100.00 & 100.00 & 99.96 & 99.40 & 99.86 \\ \hline \multirow{4}{*}{Ins.} & \multirow{2}{*}{Clean} & CA (\%) & 64.22 & 84.96 & 72.02 & 64.22 & 84.96 & 72.02 \\ & & ASR (\%) & 99.91 & 11.20 & 20.84 & 10.79 & 9.35 & 13.39 \\ \cline{2-10} & \multirow{2}{*}{Backdoor} & CA (\%) & 63.07 & 85.96 & 68.80 & 61.54 & 85.35 & 69.59 \\ & & ASR (\%) & 99.50 & 99.94 & 99.90 & 96.84 & 95.33 & 98.77 \\ \hline \multirow{4}{*}{CLIP} & \multirow{2}{*}{Clean} & CA (\%) & 92.94 & 99.51 & 90.88 & 52.94 & 96.11 & 90.88 \\ & & ASR (\%) & 99.93 & 10.98 & 20.10 & 9.95 & 9.25 & 13.52 \\ \cline{2-10} & \multirow{2}{*}{Backdoor} & CA (\%) & 92.95 & 96.46 & 90.34 & 92.32 & 96.15 & 90.76 \\ \cline{1-1} & & ASR (\%) & 99.99 & 99.94 & 100.00 & 99.80 & 98.95 & 99.95 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Single- and multi-target attack performance.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Prompt & Position & Overlap & Distance & CA (\%) & ASR (\%) \\ \hline \multirow{4}{*}{Padding} & \multirow{2}{*}{\begin{tabular}{c} (\(0,0\)) \\ (\(28,28\)) \\ (\(56,56\)) \\ (\(84,84\)) \\ \end{tabular} } & 784 & 0 & 54.01 & 100.00 \\ & & 108 & 0 & 54.24 & 99.95 \\ & & (\(56,56\)) & 0 & 26 & 52.59 & 84.08 \\ & & (\(84,84\)) & 0 & 54 & 53.77 & 17.76 \\ \hline \multirow{4}{*}{Stripe} & \multirow{2}{*}{\begin{tabular}{c} (\(0,0\)) \\ (\(49,49\)) \\ (\(98,98\)) \\ (\(147,147\)) \\ \end{tabular} } & 784 & 0 & 31.24 & 99.93 \\ & & 0 & 19 & 31.28 & 68.07 \\ \cline{1-1} & & (\(98,98\)) & 0 & 68 & 31.19 & 19.48 \\ \cline{1-1} & & (\(147,147\)) & 0 & 117 & 30.43 & 20.85 \\ \hline \multirow{4}{*}{Patch} & \multirow{2}{*}{
\begin{tabular}{c} (\(0,0\)) \\ (\(49,49\)) \\ (\(98,98\)) \\ (\(98,98\)) \\ (\(147,147\)) \\ \end{tabular} } & 784 & 0 & 33.74 & 99.98 \\ \cline{1-1} & & 0 & 33.74 & 99.98 \\ \cline{1-1} & & 0 & 34.44 & 99.98 \\ \cline{1-1} & & 0 & 18 & 32.78 & 21.64 \\ \cline{1-1} & & 0 & 67 & 32.41 & 20.92 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The impact of trigger-prompt interactions on the attack performance.
Figure 4: Illustration of trigger-prompt interactions by moving the trigger along the diagonal with position \((h,w)\) for visual prompts following three different templates: padding (left), stripe (middle), and patch (right).
Figure 3: Attack success rates (%) of backdoored visual prompts (in gray color) with triggers (in blue color) at 9 typical positions on CIFAR10.
### Improving Distant Triggers
The above analyses suggest that a successful attack requires placing the trigger at specific positions. This increases the possibility of detecting the trigger and further mitigating the attack [12, 68]. Therefore, here we explore simple solutions to improve the attack for distant trigger positions. We focus on the padding prompt with the trigger placed at the image center and evaluate RN50 on CIFAR10.
**Larger Trigger Size/Poisoning Ratio or Coefficient \(\lambda\).** Based on the results in Section 3.2, a larger trigger size or poisoning ratio generally leads to better attack performance. However, we find that the ASR results are just around \(33\%\) even when the trigger size is increased to \(8\times 8\), or the poisoning ratio is increased to \(15\%\). According to the attack objective in Equation 2, a straightforward way to improve the attack is to increase the coefficient \(\lambda\). However, as can be seen from Figure 5, although a larger \(\lambda\) leads to higher attack success, the model utility substantially decreases.
**Trigger Pattern Optimization.** Instead of simply fixing the trigger pattern as above, here we treat the trigger as a learnable variable. We follow the bi-level optimization [23, 32] to alternatively update the prompt and trigger using the same loss function in Equation 2. The detailed optimization procedure with hyperparameter selection is described in Appendix D. As can be seen from Table 3, the optimized triggers yield consistently high ASRs with little impact on CA. For example, a small trigger size of \(4\times 4\) is sufficient to achieve a high attack performance above \(85\%\) on average. Note that trigger optimization inevitably requires additional computations, so it makes little sense to apply it to our main experiments, where a fixed trigger already works perfectly.
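A minimal sketch of this alternating (bi-level) optimization is given below: the visual prompt and the trigger pattern are updated in turn under the same poisoned-data objective. The optimizers, learning rates, and the clamping step are assumptions rather than the exact settings used in our experiments.

```python
import torch
import torch.nn.functional as F

def stamp(images, trigger):
    # Paste the (possibly learnable) trigger at the bottom-right corner.
    th, tw = trigger.shape[-2:]
    out = images.clone()
    out[:, :, -th:, -tw:] = trigger
    return out

def bilevel_step(model, prompt, trigger, batch, target_label,
                 prompt_opt, trigger_opt, lam=1.0):
    # Model parameters stay frozen throughout (VPL setting).
    images, labels = batch
    targets = torch.full_like(labels, target_label)

    # (1) Update the visual prompt on clean + poisoned data (objective as in Equation 2).
    prompt_opt.zero_grad()
    loss_prompt = F.cross_entropy(model(images + prompt), labels) \
        + lam * F.cross_entropy(model(stamp(images, trigger.detach()) + prompt), targets)
    loss_prompt.backward()
    prompt_opt.step()

    # (2) Update the trigger on the backdoor objective, keeping the prompt fixed.
    trigger_opt.zero_grad()
    loss_trigger = F.cross_entropy(model(stamp(images, trigger) + prompt.detach()), targets)
    loss_trigger.backward()
    trigger_opt.step()
    with torch.no_grad():
        trigger.clamp_(0.0, 1.0)   # keep the trigger in a valid pixel range
    return loss_prompt.item(), loss_trigger.item()
```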
## 4 Experiments of Defenses
We evaluate six well-known model- and input-level backdoor defenses and also introduce a new, prompt-level detection approach that solely relies on the prompt features. Model-level defenses are applied to the prompted model, i.e., the combination of the frozen (clean) pre-trained model \(M\) and the backdoored visual prompt \(\mathbf{w}_{\text{b}}\). Since the pre-trained model is clean, we can infer that the visual prompt is backdoored if abnormal behaviors are shown. Note that dataset-level defenses [11, 65] are not applicable to our scenario because the attacker (i.e., VPPTaaS) does not send any backdoored dataset but only the backdoored visual prompt to the victim (i.e., downstream user). We focus this section on fixed triggers and leave similar experiments on optimized triggers to Appendix I. Note that the conclusions there are very similar.
### Model-Level Backdoor Detection
**Trigger-Reconstruction-Based Detection.** Neural Cleanse [68] is a backdoor defense based on trigger reconstruction. The main idea is that the minimum perturbation required to reconstruct the trigger for the backdoor target label should be substantially smaller than that for other labels. Given the reconstructed triggers for all labels, an anomaly index is calculated from the statistical distribution of their norm values. When the anomaly index is larger than a threshold \(T\), the model (the prompted model in our case) is treated as backdoored. To evaluate the effectiveness of Neural Cleanse, we generate 5 clean and 5 backdoored visual prompts on CIFAR10. We adopt the recommended threshold \(T=2\) and also other default parameter settings from the original work [68]. We show the ROC curves together with AUC scores in Figure 6. The recommended threshold \(T=2\) leads to either low TPR (RN50, BiT-M, and Instagram) or high FPR (CLIP). We thus conclude that Neural Cleanse is not effective against our backdoor attacks.
We further examine the trigger reconstruction results for both the failure and success cases under \(T=2\). Figure 7 visualizes two such examples. In Figure 7(a), the reconstruction is judged to fail since the anomaly index is \(1.82<T\), yet it successfully locates the trigger at the bottom right corner. In Figure 7(b), the reconstruction is judged to be successful since the anomaly index is \(3.73>T\), yet it fails to locate the trigger. We find that such conflicts occur half the time, suggesting that Neural Cleanse is not a reliable defense against our attack. In particular, a single scalar threshold may not sufficiently reflect the actual reconstruction results.
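For reference, the anomaly index of Neural Cleanse can be sketched as a MAD-based outlier score over the per-label \(\ell_1\) norms of the reconstructed trigger masks; the consistency constant 1.4826 and the small numerical guard are standard choices, not specific to our setup.

```python
import numpy as np

def anomaly_index(l1_norms):
    # l1_norms: one L1 norm per label, taken from the reconstructed trigger masks.
    norms = np.asarray(l1_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med)) + 1e-12   # consistency-scaled MAD
    # Only abnormally *small* norms point to a backdoor target label.
    return (med - norms.min()) / mad

# The prompted model is flagged as backdoored if anomaly_index(norms) > T (T = 2 above).
```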
**Model-Diagnosis-Based Detection.** MNTD [71] learns a meta binary detector to distinguish backdoored models from clean ones. It assumes a black-box access to the target model
Figure 5: Improving distant triggers by increasing \(\lambda\) in Equation 2.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Prompt} & \multirow{2}{*}{Metric} & \multicolumn{3}{c|}{Trigger Size} & \multicolumn{3}{c}{Poisoning Ratio (\%)} \\ \cline{4-7} & & & 2 & 4 & 10 & 15 \\ \hline \multirow{3}{*}{RN50} & \multirow{3}{*}{Clean} & CA (\%) & 54.99 & 54.99 & 54.99 & 54.99 \\ & & ASR (\%) & 15.15 & 26.60 & 15.14 & 15.16 \\ \cline{3-7} & \multirow{3}{*}{Backdoor} & CA (\%) & 55.35 & 54.76 & 54.69 & 53.15 \\ & & ASR (\%) & 28.33 & 61.56 & 46.92 & 62.26 \\ \hline \multirow{3}{*}{BiT-M} & \multirow{3}{*}{Clean} & CA (\%) & 61.91 & 61.91 & 61.91 & 61.91 \\ & & ASR (\%) & 14.25 & 27.35 & 13.56 & 13.78 \\ \cline{3-7} & \multirow{3}{*}{Backdoor} & CA (\%) & 62.29 & 62.22 & 63.08 & 61.92 \\ \cline{3-7} & & ASR (\%) & 87.71 & 96.40 & 98.24 & 99.37 \\ \hline \multirow{3}{*}{Ins.} & \multirow{3}{*}{Clean} & CA (\%) & 64.22 & 64.22 & 64.22 & 64.22 \\ \cline{3-7} & & ASR (\%) & 12.18 & 17.23 & 12.27 & 12.53 \\ \cline{3-7} & \multirow{3}{*}{Backdoor} & CA (\%) & 63.20 & 66.15 & 64.55 & 62.82 \\ \cline{3-7} & & ASR (\%) & 61.67 & 92.96 & 89.15 & 94.46 \\ \hline \multirow{3}{*}{CLIP} & \multirow{3}{*}{Clean} & CA (\%) & 92.94 & 92.94 & 92.94 & 92.94 \\ \cline{3-7} & & ASR (\%) & 10.20 & 9.97 & 10.01 & 10.12 \\ \cline{1-1} \cline{3-7} & \multirow{3}{*}{Backdoor} & CA (\%) & 93.13 & 92.86 & 93.06 & 93.26 \\ \cline{1-1} \cline{3-7} & & ASR (\%) & 99.32 & 99.94 & 99.99 & 99.94 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Improving distant triggers by trigger pattern optimization.
and as a result, the detector takes as input the output posteriors of a set of fine-tuned queries on the target model. We use the CIFAR10 dataset and CLIP model for evaluation purposes. For the backdoored visual prompts, we consider diverse poisoning ratios (i.e., 0.5%, 1%, 3%, 5%, and 10%), trigger sizes (i.e., from \(1\times 1\) to \(5\times 5\)), and trigger locations (i.e., the 9 positions shown in Figure 3). In total, we obtain 340 backdoored visual prompts and the same number of clean visual prompts. Different random seeds are used to ensure that prompts generated in the same setting are different from each other.
For evaluation, we consider "known" and "unknown" scenarios. In the "known" scenario, we randomly sample 60% of the above prompts for training and 40% for testing. In the "unknown" scenario, we ensure the training and testing backdoored prompts are based on different parameters. Specifically, we select those generated with triggers located at the bottom right corner (180 in total) for training and the rest 160 backdoored prompts for testing. The same number of clean visual prompts are used to ensure class balance. We find MNTD performs very well, with an area under the curve (AUC) score of 1.0 in the "known" scenario and 0.995 in the "unknown" scenario.
### Prompt-Level Backdoor Detection
Since our backdoor is directly implanted into the visual prompt, it is worth exploring if it can be detected given only the prompt. To this end, we conduct similar experiments as in Section 4.1 but train a simple CNN detector containing 4 convolution layers instead of a meta detector in MNTD. This CNN detector takes as input the visual prompt combined with a pseudo-image full of zero pixel values. We find our prompt-level detection works perfectly, with the detection accuracy of 100% in both the "known" and "unknown" scenarios.
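A sketch of such a 4-layer CNN detector is shown below; the channel widths and the pooling head are assumptions, and the input is the visual prompt pasted onto an all-zero pseudo-image as described above.

```python
import torch
import torch.nn as nn

class PromptDetector(nn.Module):
    """Binary classifier: clean (0) vs. backdoored (1) visual prompt."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        # x: (B, 3, 224, 224), the visual prompt applied to an all-zero pseudo-image.
        return self.classifier(self.features(x).flatten(1))
```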
We further examine the Grad-CAM saliency maps [61] to interpret how our prompt-level detector works. As can be seen from Figure 8, besides being similarly effective, the two detectors for the two different scenarios also yield similar salient regions. Specifically, the salient regions spread the whole prompt for clean prompts but concentrate on local regions for backdoored prompts. This difference also confirms the perfect detection performance. Interestingly, for the backdoored visual prompts, the most salient regions do not overlap with the triggers, indicating that the backdoor information stored in the prompt is not around the trigger.
**MNTD vs. Our Prompt-Level Detector.** We further use the t-SNE [66] to help explain the good performance of MNTD and our prompt-level detector and compare their properties, as shown in Figure 9. We can observe that for both detectors, clean and backdoored prompts are clearly separable, confirming their good performance. A clear difference is that the MNTD samples are linearly separable, but for our prompt-level detector, the clean prompts are densely clustered, and the backdoored ones surround this cluster. This may be explained by the fact that the dimension of pixel-space visual prompts is much higher than the output posteriors used in MNTD.
**Note on the Practicality.** Both MNTD and our prompt-level detector require a number of training prompts. In Figure 14 of Appendix F, we show that a stable and good detection performance requires around 60 training visual prompts. Similar to Carlini et al. [8], we argue that all these efforts, however, are infeasible for downstream users with limited resources. Otherwise, they do not need the VPPTaaS service in the first place.
Figure 8: Grad-CAM visualizations of the clean vs. backdoored prompts (with triggers at 9 positions) for our prompt-level detector trained in the “known” scenario. Red regions correspond to high saliency scores. T, C, B, L, and R denote Top, Central, Bottom, Left, and Right, respectively. Figure 13 in Appendix E further shows that the “unknown” scenario follows almost the same pattern.
Figure 6: Backdoor detection by Neural Cleanse [68].
Figure 7: Visualizations of reconstructed triggers for failure and success cases of Neural Cleanse [68].
Therefore, detection-based defenses may not be practical in our scenario.
### Input-Level Backdoor Detection
Backdoor detection is also commonly conducted at the input level, where clean inputs are accepted for further use but backdoored inputs are rejected. The detection performance is evaluated based on two metrics: False Rejection Rate (FRR) and False Acceptance Rate (FAR). FAR represents the percentage of backdoored inputs that are falsely detected as clean. FRR represents the percentage of clean inputs that are falsely detected as backdoored. A detection method is expected to achieve a low FAR for effectiveness and a low FRR for maintaining the model utility. We consider two detection methods, SentiNet [13] and STRIP [21]. The intuition of SentiNet is that strong localized universal attacks usually cause the saliency of the pre-trained model to concentrate on the localized perturbations, e.g., the triggers in backdoor attacks. Model predictions on such strongly concentrated salient regions persist no matter how the rest image regions change. STRIP relies on a more general intuition that the model prediction on a backdoored input is more invariant to image corruptions than that on a clean input. See Appendix G for detailed descriptions.
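As an illustration of the second intuition, a STRIP-style test can be sketched as follows: the incoming image is blended with randomly drawn clean images, and the average prediction entropy over these perturbed copies is compared against a threshold calibrated on clean inputs (low entropy suggests a backdoored input). The blending scheme and function names are illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, prompt, image, overlay_images, alpha=0.5):
    # image: (3, H, W); overlay_images: (N, 3, H, W) clean images drawn at random.
    blended = alpha * image.unsqueeze(0) + (1.0 - alpha) * overlay_images
    probs = F.softmax(model(blended + prompt), dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)   # per-overlay entropy
    return entropy.mean().item()

# Detection rule (threshold calibrated so that the FRR on clean inputs is about 1%):
# the input is rejected as backdoored if strip_entropy(...) < threshold.
```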
For evaluating SentiNet, we first ensure that it is applicable in our case by showing that the saliency of the backdoored image indeed concentrates on the trigger in Appendix H. Then, we conduct quantitative experiments on 1,000 clean and 1,000 backdoored images. As can be seen from Table 4, SentiNet performs particularly badly for CLIP (i.e., FAR = 35.20%). We further check the salient regions generated for backdoored input images, and we find that in _all_ false acceptance cases, Grad-CAM fails to locate the triggers accurately. On the other hand, SentiNet yields relatively high FRRs (around 10%), leading to a non-negligible drop in model utility. For evaluating STRIP, we use 2,000 clean inputs to determine the entropy threshold (by ensuring the FRR on these clean inputs is 1%). We then employ another 2,000 clean and 2,000 backdoored inputs for detection. A softmax function is used to process the output posteriors before the entropy calculation in our experiments. As can be seen from Table 4, STRIP is superior to SentiNet, especially for CLIP.
**Bypass Input-Level Detection.** Both SentiNet and STRIP require attacks to be strong so that the model prediction on a backdoored input is consistent over multiple overlaid images. Therefore, we further explore if the attacker can bypass SentiNet and STRIP by intentionally restricting their strength. Specifically, we adopt the \(4\times 4\) optimized trigger on the RN50 model in Section 3.4. We find this modification still yields an acceptable ASR (i.e., 61.56%) but drastically increases the FAR to 37.10% for SentiNet and 87.20% for STRIP. These results indicate that it is possible to largely compromise the performance of SentiNet and STRIP by adopting a moderate attack.
### Backdoor Mitigation
Although users can reject a backdoored model/prompt based on detection results, it may be impractical because finding another service provider requires additional resources and expertise [68]. In this case, backdoor mitigation is a promising alternative, which is normally achieved by eliminating backdoors from the model/prompt or trigger patterns from the input.
**Prompt-Level Backdoor Mitigation.** Fine-Pruning [42] applies network pruning [50, 26] to mitigate model backdoors. Fine-Pruning iteratively prunes neurons with the lowest average activations on a validation dataset until a pre-defined pruning fraction \(\eta\) is reached. In our case, we fix the model but prune the pixels with the lowest absolute values from the visual prompt. As can be seen from Figure 10, there is a clear trade-off between ASR and CA for RN50, BiT-M, and
\begin{table}
\begin{tabular}{l|l|c|c|c|c} \hline \multirow{2}{*}{Defense} & \multirow{2}{*}{Metric} & \multicolumn{4}{c}{Model} \\ \cline{3-6} & & RN50 & BiT-M & Ins. & CLIP \\ \hline \multirow{2}{*}{SentiNet} & FAR (\%) & 0.00 & 8.30 & 1.00 & 35.20 \\ & FRR (\%) & 9.20 & 8.50 & 11.30 & 9.10 \\ \hline \multirow{2}{*}{STRIP} & FAR (\%) & 0.05 & 0.00 & 0.25 & 1.35 \\ & FRR (\%) & 2.80 & 3.65 & 3.20 & 1.15 \\ \hline \end{tabular}
\end{table}
Table 4: Backdoor detection results of SentiNet [13] and STRIP [21].
Figure 10: Backdoor mitigation by Fine-Pruning [42].
Figure 9: The t-SNE visualizations for MNTD [71] trained in the “known” scenario (left) and our prompt-level detector (right).
Instagram. For example, when ASR starts to substantially drop for \(\eta>70\%\), the CA also drops dramatically. In contrast, for CLIP, CA is consistently well maintained, even when \(\eta=100\%\), i.e., downstream users choose to completely drop the use of the visual prompt. This contrast might be explained by the strong "zero-shot" capability of CLIP, which makes CLIP easily transfer to other downstream tasks via simple natural language instructions without fine-tuning on downstream data [55].
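The prompt-pixel variant of Fine-Pruning used above can be sketched as follows: pixels of the visual prompt with the smallest absolute values are zeroed out until a fraction \(\eta\) is pruned (ties may prune slightly more than \(\eta\)).

```python
import torch

def prune_prompt(prompt, eta):
    # prompt: (3, H, W) visual prompt; eta in (0, 1] is the pruning fraction.
    flat = prompt.abs().flatten()
    k = int(eta * flat.numel())
    if k == 0:
        return prompt.clone()
    threshold = flat.kthvalue(k).values       # k-th smallest absolute value
    pruned = prompt.clone()
    pruned[prompt.abs() <= threshold] = 0.0   # zero out the lowest-magnitude pixels
    return pruned
```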
**Input-Level Backdoor Mitigation.** DAPAS [12] trains a Denoising AutoEncoder (DAE) [67] on balanced clean and perturbed images and then uses this trained DAE to eliminate potential perturbations. We follow the settings of Cho et al. [12] and report the results in Table 5. Here, "w/o" means no DAPAS is applied, "Noise" means training DAPAS with Gaussian noise, and "Trigger" means an extreme scenario in which the defender constructs the perturbed training data using a trigger identical to the attacker's. We can observe that "Noise" is sufficient to guarantee very low ASR results, and "Trigger" further decreases them. However, DAPAS also reduces the CA by about 40%.
## 5 Conclusion and Outlook
We provide the first systematic study of security vulnerabilities of visual prompt learning (VPL) through the lens of backdoor attacks. From the attack perspective, we propose BadVisualPrompt, the first backdoor attack against VPL, and demonstrate its general effectiveness over different models, datasets, and VPL variants. We particularly analyze the impact of trigger-prompt interactions on the attack performance and show that the attack performance may decrease substantially when the trigger and prompt are distant. From the defense perspective, we demonstrate that representative detection- and mitigation-based methods are either ineffective or impractical against our BadVisualPrompt. We also provide new insights into their behaviors in both the success and failure cases. Although our new attack may potentially be misused by malicious actors, we firmly believe that our systematic analysis of the vulnerability of VPL provides valuable guidance for future studies on designing effective defenses.
Since visual prompt learning (VPL) has only recently received substantial attention, it is understandable that it does not yet perform well in all settings. However, we can already see performance improvements in recent attempts [10, 31]. Since we are the first to explore the vulnerabilities of VPL, we have focused our study solely on well-defined backdoor attacks. Moving forward, it would be necessary to explore other vulnerabilities. In addition, although we have tried our best to evaluate diverse defenses against our new attack, no effective defense has been found so far. Future work should explore new defense strategies to defeat our attack.
|
2301.11711
|
ADDIS-Graphs for online error control with application to platform
trials
|
In contemporary research, online error control is often required, where an
error criterion, such as familywise error rate (FWER) or false discovery rate
(FDR), shall remain under control while testing an a priori unbounded sequence
of hypotheses. The existing online literature mainly considered large-scale
designs and constructed blackbox-like algorithms for these. However, smaller
studies, such as platform trials, require high flexibility and easy
interpretability to take study objectives into account and facilitate the
communication. Another challenge in platform trials is that due to the shared
control arm some of the p-values are dependent and significance levels need to
be prespecified before the decisions for all the past treatments are available.
We propose ADDIS-Graphs with FWER control that due to their graphical structure
perfectly adapt to such settings and provably uniformly improve the
state-of-the-art method. We introduce several extensions of these ADDIS-Graphs,
including the incorporation of information about the joint distribution of the
p-values and a version for FDR control.
|
Lasse Fischer, Marta Bofill Roig, Werner Brannath
|
2023-01-27T13:50:18Z
|
http://arxiv.org/abs/2301.11711v3
|
# An Adaptive-Discard-Graph for online error control
###### Abstract
In recent years, graphical multiple testing procedures have gained popularity due to their generality and ease of interpretation. In contemporary research, online error control is often required, where an error criterion, such as familywise error rate (FWER) or false discovery rate (FDR), shall remain under control while testing an a priori unbounded sequence of hypotheses. Although the classical graphical procedure can be extended to the online setting, previous work has shown that it leads to low power, and other approaches, such as Adaptive-Discard (ADDIS) procedures, are preferred instead. In this paper, we introduce an ADDIS-Graph with FWER control and its extension for the FDR setting. These graphical ADDIS procedures combine the good interpretability of graphical procedures with the high online power of ADDIS procedures. Moreover, they can be adapted to a local dependence structure and an asynchronous testing setup, leading to power improvements over the current state-of-art methods. Consequently, the proposed methods are useful for a wide range of applications, including innovative complex trial designs, such as platform trials, and large-scale test designs, such as in the evaluation of A/B tests for marketing research.
**Keywords:** graphical testing procedures, false discovery rate, familywise error rate, online multiple testing.
## 1 Introduction
In online multiple testing, an infinite stream of hypotheses \((H_{i})_{i\in\mathbb{N}}\) is tested sequentially (Foster and Stine, 2008). This means, at each step \(i\in\mathbb{N}\), a decision has to be made on the current hypothesis \(H_{i}\) while having access only to the previous hypotheses and decisions. Since the number of hypotheses to be tested in the future is unknown, an infinite number is usually assumed. Due to the testing of multiple hypotheses, the probability of making false discoveries increases and a multiplicity correction is required. Usually, either the _familywise error rate_ (FWER) or the _false discovery rate_ (FDR) are used as error criteria. While controlling the FWER refers to maintaining the probability of rejecting at least one true null hypothesis below some pre-specified threshold, the FDR controls the expected proportion of true hypotheses among the rejections, and thus allows making false discoveries (Benjamini and Hochberg, 1995). The choice of error criterion depends on the study and the associated stringency towards false rejections. In some online applications, the less conservative FDR is preferred, as the number of hypotheses is very large. Examples can be found in the marketing of large internet companies, which conduct a sequence of so-called A/B tests to improve websites (Kohavi et al., 2013). However, there are also applications where false discoveries need to be avoided and control of the FWER is necessary. For instance, specific platform trials can be embedded in the online multiple testing setting (Robertson et al., 2022) and in such clinical trials FWER control may be required. In genetic studies, an a priori unbounded sequence of hypotheses is tested (Munoz-Fuentes et al., 2018), often with an interest in
the FWER. Another example where online FWER control is essential is the updating of a machine learning algorithm (Feng et al., 2022).
In classical "offline" multiple testing, graphical approaches have been suggested to facilitate the visualization and communication of multiple testing procedures. In the seminal paper, Bretz et al. (2009) proposed the representation of multiple testing procedures by directed graphs, where the null hypotheses are represented by nodes accompanied by their initial significance levels and connected by weighted vertices, illustrating error propagation if the hypotheses are rejected. The graphical procedures provide FWER control, facilitate the illustration of the study objectives and are very popular as many test procedures can be represented by this graphical structure (Bretz et al., 2009). The graphical approaches were later extended to adaptive designs (Klinglmueller et al., 2014) and to other error measures (Robertson et al., 2020), and more recently Tian and Ramdas (2021) extended the graphical approach to the online case. Fischer et al. (2022) presented a new online closure principle which ensures that the resulting closed procedure can be applied in the online setting and, in particular, showed how the online version of the graphical procedure can be written as an online closed procedure based on the Alpha-Spending (Foster and Stine, 2008). Feng et al. (2022) also used an online version of the graphical procedure for a specific online multiple testing problem, namely, the updating of a machine learning algorithm. In addition, they included the correlation structure between the \(p\)-values in order to improve the algorithm. However, one disadvantage the previously mentioned online graphical approaches have in common is that the individual significance levels are typically rather low due to a large number of hypotheses in online multiple testing. Since the significance level is only distributed to the future hypotheses when a \(p\)-value is below an individual significance level, an update of the levels is unlikely, which results in low power. A more promising approach to online FWER control is the Adaptive-Discard (ADDIS) principle by Tian and Ramdas (2021). It allows to preserve the individual significance level of a hypothesis \(H_{i}\) for the future testing process if the \(p\)-value \(P_{i}\) lies outside of an interval \((\lambda_{i},\tau_{i}]\) with \(0\leq\lambda_{i}<\tau_{i}\leq 1\). Tian and Ramdas (2021) proposed the ADDIS-Spending as a concrete ADDIS procedure. In the case of \(P_{i}\leq\lambda_{i}\) or \(P_{i}>\tau_{i}\), the ADDIS-Spending ignores the hypothesis \(H_{i}\) in the future testing process and adjusts the future significance levels accordingly. The price for this improvement is a testing factor \((\tau_{i}-\lambda_{i})\), which needs to be multiplied by the individual significance level before comparing it with the \(p\)-value. In comparison to FWER, a variety of approaches have been proposed for FDR control (Foster and Stine, 2008; Ramdas et al., 2017, 2018; Javanmard and Montanari, 2018; Tian and Ramdas, 2019). Also, in this case, the ADDIS\({}^{*}\) procedure proposed by Tian and Ramdas (2019), which is based on an ADDIS principle for FDR control, seems to be the most promising in terms of online power (Robertson et al., 2022b). However, ADDIS-Spending and ADDIS\({}^{*}\) lack generality and interpretability, which also leads to power loss in certain frameworks, such as local dependency and an asynchronous test structure. Our main contribution in this paper is the so-called ADDIS-Graph. 
The ADDIS-Graph allows to distribute significance level to future hypotheses whenever a \(p\)-value \(P_{i}\) is less or equal than \(\lambda_{i}\) or greater than \(\tau_{i}\). Consequently, the ADDIS-Graph combines the good interpretability of graphical procedures with the high online power of ADDIS procedures.
In Section 2, we formally describe the setting and present the online version of the graphical procedure, the ADDIS principle for FWER control and the ADDIS-Spending (Tian and Ramdas, 2021). Based on these concepts, we derive the ADDIS-Graph for FWER control and show that it contains all other online procedures satisfying the ADDIS principle (Section 3). The visual representation of the ADDIS-Graph clarifies the dependencies between an individual significance level and the outcomes of previous tests. This allows the ADDIS-Graph to adapt to complex situations, resulting in high efficiency. We illustrate this by showing that the ADDIS-Graph leads to an improvement over the ADDIS-Spending under local dependence (Section 4). In Section 5, we transfer the ADDIS-Graph approach to the FDR setting, resulting in the FDR-ADDIS-Graph. Here, we show superiority over ADDIS\({}^{*}\) with the example of an asynchronous testing setup, where a test does not necessarily start and finish at the same step. In Sections 6 and 7, we compare our proposals with the procedures proposed by Tian and Ramdas (2019, 2021) through a simulation study and application to real data, respectively. All formal proofs of the theoretical assertions are in the Appendix.
## 2 Preliminaries
### Setting and notation
Let \(I_{0}\) be the index set of true hypotheses, \(R(i)\) be the index set of rejected hypotheses up to step \(i\in\mathbb{N}\) and \(V(i)=I_{0}\cap R(i)\) denote the index set of falsely rejected hypotheses up to step \(i\). We aim to control the familywise error rate
\[\text{FWER}(i)\coloneqq\mathbb{P}(|V(i)|>0) \tag{1}\]
at each step \(i\in\mathbb{N}\), where \(\mathbb{P}\) denotes the probability under the true configuration of true and false hypotheses. Since \(\text{FWER}(i)\) is nondecreasing, it is sufficient to control \(\text{FWER}\coloneqq\mathbb{P}(v>0)\), where \(v\coloneqq\lim_{i\to\infty}|V(i)|\). The FWER is controlled strongly at level \(\alpha\), if \(\text{FWER}\leq\alpha\) for any configuration of true and false null hypotheses. In contrast, weak
control only provides that FWER \(\leq\alpha\) under the global null hypothesis, which assumes that all hypotheses are true (\(I_{0}=\mathbb{N}\)). In this paper, we focus on strong control.
Denote by \((P_{i})_{i\in\mathbb{N}}\) the \(p\)-values corresponding to the hypotheses \((H_{i})_{i\in\mathbb{N}}\). Each null \(p\)-value \(P_{i}\), \(i\in I_{0}\), is assumed to be valid, meaning \(\mathbb{P}(P_{i}\leq x)\leq x\) for all \(x\in[0,1]\). A hypothesis \(H_{i}\) is rejected, if \(P_{i}\leq\alpha_{i}\), where \(\alpha_{i}\in[0,1)\) is the individual significance level of \(H_{i}\). ADDIS algorithms also require additional parameters \((\tau_{i})_{i\in\mathbb{N}}\) and \((\lambda_{i})_{i\in\mathbb{N}}\) with values in \((0,1]\) and \([0,\tau_{i})\), respectively. In order to apply a multiple testing procedure in the online setting, these parameters are only allowed to depend on information about previous \(p\)-values. Mathematically, \((\alpha_{i})_{i\in\mathbb{N}}\), \((\tau_{i})_{i\in\mathbb{N}}\) and \((\lambda_{i})_{i\in\mathbb{N}}\) are sequences of random variables such that \(\alpha_{i}\), \(\tau_{i}\) and \(\lambda_{i}\) are measurable with respect to \(\mathcal{G}_{i-1}\coloneqq\sigma(\{R_{1},S_{1},C_{1},\ldots,R_{i-1},S_{i-1},C_{i-1}\})\), where \(R_{j}=\mathbb{1}_{P_{j}\leq\alpha_{j}}\), \(S_{j}=\mathbb{1}_{P_{j}\leq\tau_{j}}\) and \(C_{j}=\mathbb{1}_{P_{j}\leq\lambda_{j}}\).
### Online-Graph and ADDIS procedures
In the following, we present essential concepts for constructing an ADDIS-Graph. We start with the Online-Graph, which was introduced by Tian and Ramdas (2021) (named Online-Fallback procedure in their paper) as the online version of the graphical procedure by Bretz et al. (2009). Fischer et al. (2022) have shown how the Online-Graph can be obtained by the online closure principle. Afterwards, we present the ADDIS principle and ADDIS-Spending by Tian and Ramdas (2021).
The procedures considered in this paper involve a non-negative sequence \((\gamma_{i})_{i\in\mathbb{N}}\) with \(\sum_{i\in\mathbb{N}}\gamma_{i}\leq 1\), which can be interpreted as the initial allocation of the significance level \(\alpha\). The graphical procedures additionally require non-negative weights \((g_{j,i})_{i=j+1}^{\infty}\) for all \(j\in\mathbb{N}\) with \(\sum_{i=j+1}^{\infty}g_{j,i}\leq 1\), which determine the updating of the individual significance levels during the testing process. With this, the _Online-Graph_ is defined as
\[\alpha_{i}=\alpha\gamma_{i}+\sum_{j=1}^{i-1}g_{j,i}R_{j}\alpha_{j}. \tag{2}\]
The Online-Graph is illustrated in Figure 1. The initial individual significance level is below each hypothesis. After the rejection of a hypothesis \(H_{j}\), \(j\in\mathbb{N}\), the individual significance level of \(H_{j}\) is distributed to the future hypotheses according to the weights \((g_{j,i})_{i=j+1}^{\infty}\). The rectangles below the nodes can be ignored for the Online-Graph and the dots at the end refer to the fact that there is an infinite number of future hypotheses.
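As a concrete illustration, the Online-Graph update in (2) can be implemented as follows; this is a minimal sketch in which the infinite sequences are truncated to a finite horizon and \((\gamma_i)\) and \((g_{j,i})\) are passed as arrays (a common choice is \(\gamma_i\propto 1/i^2\), normalized to sum to one).

```python
import numpy as np

def online_graph(p_values, alpha, gamma, g):
    # gamma[i]: initial allocation (non-negative, sums to at most 1);
    # g[j][k]: weight g_{j, j+1+k} used to pass on alpha_j after a rejection of H_j.
    n = len(p_values)
    levels = np.array([alpha * gamma[i] for i in range(n)])
    rejections = []
    for j in range(n):
        if p_values[j] <= levels[j]:
            rejections.append(j)
            for k, i in enumerate(range(j + 1, n)):
                levels[i] += g[j][k] * levels[j]
    return rejections, levels
```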
Due to the ease of interpretation, graphical multiple testing procedures are very popular. However, except for unrealistic extreme cases (e.g. that each hypothesis can be rejected), the individual significance levels \((\alpha_{i})_{i\in\mathbb{N}}\) of the Online-Graph tend to \(0\) for \(i\) to infinity. This means that the probability of distributing a significance level to the future hypotheses becomes enormously unlikely at a late stage of the testing process, which results in low power. For this reason, Tian and Ramdas (2021) have proposed an ADDIS principle that preserves the significance level of \(H_{i}\) for the future testing process, if \(P_{i}\leq\lambda_{i}\) or \(P_{i}>\tau_{i}\). In this way, the decrease of the significance levels can be slowed down and thus the power increased. However, this improvement also has its cost. In order to control the FWER, it is required that the null \(p\)-values are independent of each other and the non-nulls. In addition, it is assumed that the null \(p\)-values are uniformly valid, which means \(\mathbb{P}(P_{i}\leq xy|P_{i}\leq y)\leq x\) for all \(x,y\in[0,1]\) and \(i\in I_{0}\)(Zhao et al., 2019). Furthermore, in case of \(\lambda_{i}<P_{i}\leq\tau_{i}\), the level \(\alpha_{i}/(\tau_{i}-\lambda_{i})\) is lost instead of just \(\alpha_{i}\).
**Theorem 2.1** (ADDIS principle for FWER control (Tian and Ramdas, 2021)).: _Assume the null \(p\)-values are uniformly valid and independent from each other and the non-nulls. Every multiple testing procedure controls the FWER in the strong sense when the individual significance levels \((\alpha_{i})_{i\in\mathbb{N}}\) satisfy_
\[\sum_{j=1}^{i}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}(S_{j}-C_{j})\leq\alpha \quad\text{for all }i\in\mathbb{N}. \tag{3}\]
The ADDIS principle combines the two concepts of _adaptivity_ and _discarding_. The idea of adaptivity is based on the fact that false hypotheses cannot lead to a type I error and thus testing false hypotheses does not increase the FWER. Hence, \(\lambda_{i}\) is used to estimate whether a hypothesis is true or false. The discarding approach uses the fact that null \(p\)-values are often conservative, meaning \(\mathbb{P}(P_{i}\leq x)<x\) or equivalently \(\mathbb{P}(P_{i}>x)>1-x\) for some \(x\in[0,1]\) and \(i\in I_{0}\). Discarding exploits this by accepting large \(p\)-values without testing, which leads to higher significance levels for the remaining hypotheses. Note that one could also consider the adaptivity and discarding part separately by setting \(\tau_{i}=1\) or \(\lambda_{i}=0\) respectively for all \(i\in\mathbb{N}\). In case of \(\tau_{i}=1\), the assumption about the null \(p\)-values being uniformly valid can be dropped (Tian and Ramdas, 2021).
Online multiple testing procedures that follow Theorem 2.1 are called ADDIS procedures. As a concrete ADDIS procedure, Tian and Ramdas (2021) proposed the _ADDIS-Spending_. The idea of ADDIS-Spending is to ignore a
hypothesis \(H_{i}\) in case of \(P_{i}\leq\lambda_{i}\) or \(P_{i}>\tau_{i}\) in the future testing process and adjust the significance levels accordingly. This results in the individual significance level
\[\alpha_{i}=\alpha(\tau_{i}-\lambda_{i})\gamma_{t(i)},\quad\text{where }t(i)=1+\sum_{j=1}^{i-1}(S_{j}-C_{j}). \tag{4}\]
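A corresponding sketch of the ADDIS-Spending rule (4), with constant \(\lambda_i=\lambda\) and \(\tau_i=\tau\) for simplicity: the spending index \(t(i)\) advances only when the \(p\)-value falls inside \((\lambda,\tau]\), otherwise the level is preserved.

```python
def addis_spending(p_values, alpha, gamma, lam=0.25, tau=0.8):
    # gamma: nonincreasing, non-negative, sums to at most 1; lam < tau are constants here.
    rejections, t = [], 1
    for i, p in enumerate(p_values):
        level = alpha * (tau - lam) * gamma[t - 1]
        if p <= level:
            rejections.append(i)
        if lam < p <= tau:       # t(i+1) = t(i) + (S_i - C_i)
            t += 1
    return rejections
```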
## 3 ADDIS-Graph for FWER control
Tian and Ramdas (2021) showed by means of simulations that the ADDIS-Spending (in equation (4)) leads to a substantially higher power than the one achieved when using the Online-Graph. However, the interpretation of the Online-Graph is easier, which makes it simpler to use. To this end, we bring together the approaches of the Online-Graph (in (2)) and the ADDIS principle (Theorem 2.1), resulting in what we call the ADDIS-Graph.
**Definition 3.1** (ADDIS-Graph).: Let \((\gamma_{i})_{i\in\mathbb{N}}\) and \((g_{j,i})_{i=j+1}^{\infty}\), \(j\in\mathbb{N}\), be non-negative sequences that sum to at most one. In addition, let \(\tau_{i}\in(0,1]\) and \(\lambda_{i}\in[0,\tau_{i})\) be measurable regarding \(\mathcal{G}_{i-1}\) for all \(i\in\mathbb{N}\). The _ADDIS-Graph_ tests each hypothesis \(H_{i}\) at significance level
\[\alpha_{i}=(\tau_{i}-\lambda_{i})\left(\alpha\gamma_{i}+\sum_{j=1}^{i-1}g_{j, i}(C_{j}-S_{j}+1)\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}\right). \tag{5}\]
**Theorem 3.2**.: _The ADDIS-Graph satisfies the ADDIS principle (Theorem 2.1) and thus controls the FWER in the strong sense when the null \(p\)-values are uniformly valid and independent from each other and the non-nulls._
In order to represent this ADDIS-Graph as a graph, consider \(\tilde{\alpha}_{i}=\alpha_{i}\frac{1}{\tau_{i}-\lambda_{i}}\) for all \(i\in\mathbb{N}\), where \(\alpha_{i}\) is the significance level obtained by the ADDIS-Graph. Equation (5) gives us \(\tilde{\alpha}_{i}=\alpha_{i}\frac{1}{\tau_{i}-\lambda_{i}}=\alpha\gamma_{i}+\sum_{j=1}^{i-1}g_{j,i}(C_{j}-S_{j}+1)\tilde{\alpha}_{j}\). Comparing this with the Online-Graph (2), the \((\tilde{\alpha}_{i})_{i\in\mathbb{N}}\) can be interpreted as the significance levels we would obtain by a graph with initial levels \((\alpha\gamma_{i})_{i\in\mathbb{N}}\) that updates the future levels whenever a \(p\)-value \(P_{j}\), \(j\in\mathbb{N}\), is less than or equal to \(\lambda_{j}\) or greater than \(\tau_{j}\). Thus, one can first determine \(\tilde{\alpha}_{i}\) using this graph and then compute the level of the ADDIS-Graph as \(\alpha_{i}=\tilde{\alpha}_{i}(\tau_{i}-\lambda_{i})\). This fact is used to illustrate the ADDIS-Graph in Figure 1. It can be interpreted just as the Online-Graph, with two subtle differences. First, we can choose at each step \(j\in\mathbb{N}\) limits \(\tau_{j}\in(0,1]\) and \(\lambda_{j}\in[0,\tau_{j})\) for the \(p\)-value \(P_{j}\) that determine when the significance level of the \(j\)-th hypothesis is distributed among the future hypotheses. Second, we need to include an additional testing factor based on these parameters, which is illustrated in the rectangle below each hypothesis. This testing factor is only multiplied with the individual significance level when the corresponding hypothesis is tested, but it is not involved in the updating process with the graph.
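Following this two-step computation, the ADDIS-Graph of Definition 3.1 can be sketched as follows (again with constant \(\lambda\) and \(\tau\) and a finite horizon): a graph is run on the unscaled levels \(\tilde{\alpha}_i\), which are redistributed whenever \(P_j\leq\lambda\) or \(P_j>\tau\), and each \(H_i\) is tested at \((\tau-\lambda)\tilde{\alpha}_i\).

```python
import numpy as np

def addis_graph(p_values, alpha, gamma, g, lam=0.25, tau=0.8):
    # tilde[i] tracks alpha_i / (tau - lam); the test level is (tau - lam) * tilde[i].
    n = len(p_values)
    tilde = np.array([alpha * gamma[i] for i in range(n)])
    rejections = []
    for j in range(n):
        if p_values[j] <= (tau - lam) * tilde[j]:
            rejections.append(j)
        if p_values[j] <= lam or p_values[j] > tau:   # C_j - S_j + 1 = 1
            for k, i in enumerate(range(j + 1, n)):
                tilde[i] += g[j][k] * tilde[j]
    return rejections
```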
Figure 1: Illustration of the ADDIS-Graph. Ignoring the rectangles the figure can also be interpreted as the Online-Graph.
When defining the ADDIS-Graph, we considered the parameters \((\gamma_{i})_{i\in\mathbb{N}}\) and \((g_{j,i})_{i=j+1}^{\infty}\), \(j\in\mathbb{N}\) as fixed. However, we do not need this assumption to satisfy the conditions of Theorem 2.1 and thus to control the FWER. Consequently, \(\gamma_{i}\) and \(g_{j,i}\) could also be random variables that are measurable regarding \(\mathcal{G}_{i-1}\). With this, the procedures become more flexible. It can even be shown that, in this case, the ADDIS-Graph is the general ADDIS procedure, thus containing all procedures satisfying the ADDIS principle (Theorem 2.1).
**Theorem 3.3**.: _Let \(\gamma_{i}\) (\(i\in\mathbb{N}\)) and \(g_{j,i}\) (\(j\in\mathbb{N}\), \(i>j\)) be measurable with respect to \(\mathcal{G}_{i-1}\). Then, any procedure satisfying the ADDIS principle (Theorem 2.1) can be written as an ADDIS-Graph (Definition 3.1)._
Note that the ADDIS-Graph is strictly more general than the ADDIS-Spending, as the analogue of Theorem 3.3 does not hold for the ADDIS-Spending. To see this, suppose we choose fixed \(\lambda_{i}=\lambda\) and \(\tau_{i}=\tau\) for all \(i\in\mathbb{N}\). Then \(P_{i}\leq\lambda\) or \(P_{i}>\tau\) directly implies \(\alpha_{i}=\alpha_{i+1}\) (see (4)), which does not need to hold for every other ADDIS procedure.
**Remark.** For a positive and nonincreasing \((\gamma_{i})_{i\in\mathbb{N}}\), one could write the ADDIS-Spending as an ADDIS-Graph by choosing \(g_{j,i}=(\gamma_{t(j)+i-j-1}-\gamma_{t(j)+i-j})/\gamma_{t(j)}\), where \(t(j)=1+\sum_{k=1}^{j-1}(S_{k}-C_{k})\). If \((\gamma_{i})_{i\in\mathbb{N}}\) is increasing, it becomes more complex, as the \((\gamma_{i})_{i\in\mathbb{N}}\) used in the ADDIS-Graph would need an adjustment as well.
For the sake of simplicity, we consider \((\gamma_{i})_{i\in\mathbb{N}}\) and \((g_{j,i})_{j\in\mathbb{N},i>j}\) as fixed parameters in the remainder of this paper. In the following section, we show that the ADDIS-Graph can handle local dependence structures and argue why it provides a major improvement over the ADDIS-Spending under local dependence.
## 4 ADDIS-Graph under local dependence
Procedures based on the ADDIS principle (Theorem 2.1) only control the FWER when the \(p\)-values are independent. In practice, this assumption is often violated. For example, when the same control group is used to test experimental groups in different hypotheses or the formulation of new hypotheses is based on the previous test outcomes. On the other hand, it is unlikely that \(p\)-values from the distant past have any influence on the current testing, as the data and context of the data might have changed. For this reason, Zrnic et al. (2021) have proposed a local dependence structure. Assume that a fixed sequence of lags \((L_{i})_{i\in\mathbb{N}}\) with \(L_{i}\in\{0,1,\ldots,i-1\}\) and \(L_{i+1}\leq L_{i}+1\) for all \(i\in\mathbb{N}\) is given. Then, the \(p\)-values \((P_{i})_{i\in\mathbb{N}}\) are called _locally dependent_, if \(\forall i\in\mathbb{N}\) holds:
\[P_{i}\perp P_{i-L_{i}-1},P_{i-L_{i}-2},\ldots,P_{1}.\]
For every \(P_{i}\), this local dependency structure specifies up to which point of time the previous p-values are independent of \(P_{i}\). Note that local dependence contains independence (\(L_{i}=0\)\(\forall i\in\mathbb{N}\)) and arbitrary dependence (\(L_{i}=i-1\)\(\forall i\in\mathbb{N}\)) as special cases. Although we consider the lags as fixed, they do not need to be known before the evaluation. However, \(L_{i}\) has to be determined at the beginning of step \(i\in\mathbb{N}\) and must not depend on the data itself. For example, the lags could be based on content-related information about the data. Tian and Ramdas (2021) showed that local dependence can be incorporated into the ADDIS principle (Theorem 2.1) by ignoring the dependent \(p\)-values and making pessimistic assumptions instead. Mathematically, \(\alpha_{i}\), \(\lambda_{i}\) and \(\tau_{i}\) need to be measurable regarding \(\mathcal{G}_{i-L_{i}-1}=\sigma(\{P_{1},\ldots,P_{i-L_{i}-1}\})\). Tian and Ramdas (2021) used this to adjust their ADDIS-Spending (4) to the local dependence by requiring \((\gamma_{i})_{i\in\mathbb{N}}\) to be nonincreasing and setting
\[\alpha_{i}=\alpha(\tau_{i}-\lambda_{i})\gamma_{t(i)},\quad\text{ where }t(i)=1+L_{i}+\sum_{j=1}^{i-L_{i}-1}(S_{j}-C_{j}). \tag{6}\]
Note that this procedure, which we refer to as _ADDIS-Spending\({}_{local}\)_, loses significance level due to local dependence. To see that, suppose \(\alpha_{i}^{*}=\alpha(\tau_{i}-\lambda_{i})\gamma_{t^{*}(i)}\), where \(t^{*}(i)=1+\sum_{j=1}^{i-1}(S_{j}-C_{j})\). Then, \(\alpha_{i}^{*}\) can be interpreted as the level we would obtain under independence (\(L_{i}=0\)). It is easy to see that \(t^{*}(i)\leq t(i)\), and often even strictly smaller. For example, suppose \(P_{1}\) and \(P_{2}\) depend on each other \((L_{2}=1)\). If \(P_{1}\leq\lambda_{1}\) or \(P_{1}>\tau_{1}\), we have \(t^{*}(2)=1<2=t(2)\) and if additionally \(\gamma_{1}>\gamma_{2}\), we also have \(\alpha_{2}^{*}>\alpha_{2}\). Since \((\gamma_{i})_{i\in\mathbb{N}}\) needs to be decreasing at some steps (unless it is constant \(0\)) and often is at every step, the power loss is inevitable and can get high when the lags \((L_{i})_{i\in\mathbb{N}}\) are large. In the following, we will see that this power loss can be avoided using the ADDIS-Graph.
First, we adjust the ADDIS-Graph to the local dependence structure such that FWER control is preserved. After that, we show how the weights of the ADDIS-Graph can be used such that no significance level is lost. A simple way to account for local dependence in the ADDIS-Graph (Figure 1) is to remove the arrows connecting dependent \(p\)-values and adjust the individual significance levels of the ADDIS-Graph (Definition 3.1) to
\[\alpha_{i}=(\tau_{i}-\lambda_{i})\left(\alpha\gamma_{i}+\sum_{j=1}^{i-L_{i}-1}g_ {j,i}(C_{j}-S_{j}+1)\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}\right).\]
In Figure 2, the ADDIS-Graph is illustrated for a specific local dependence structure. In this example, \(L_{2}=1\), meaning \(P_{1}\) and \(P_{2}\) depend on each other. This is illustrated by the dotted line. Hence, the link \(g_{1,2}\) is removed as no significance level of the first hypothesis can be allocated to the second. Furthermore, \(L_{3}\) was chosen equal to zero, which is why no further adjustment of the graph was needed. Note that by removing the weight \(g_{1,2}\) potential significance level is lost as well as in ADDIS-Spending. However, the ADDIS-Graph allows to adjust the weights to the given local dependence structure. For example, by adding \(g_{1,2}\) to one of the other weights \(g_{1,i}\), \(i>2\). In that case, it may be that not the same hypotheses benefit from the first hypothesis, but the same amount of significance level is distributed as under independence.
In order to formalise such strategies, note that \(L_{i+1}\leq L_{i}+1\) for all \(i\in\mathbb{N}\) implies that \(i-L_{i}\) is nondecreasing in \(i\). Hence, if we define \(d_{j}:=\min\{i\in\mathbb{N}:i-L_{i}>j\}\) (we set \(\min(\emptyset)=\infty\)) as the index of the first future \(p\)-value that does not depend on \(P_{j}\), all \(P_{k}\) with \(k>d_{j}\) are independent from \(P_{j}\) as well. Thus, the idea is to distribute the entire significance level of \(H_{j}\) in case of \(P_{j}\leq\lambda_{j}\) or \(P_{j}>\tau_{j}\) only to hypotheses \(H_{k}\) with \(k\geq d_{j}\). For this, we propose to remove the weights between dependent hypotheses and standardise the remaining weights, which leads to the following ADDIS-Graph under local dependence.
**Definition 4.1** (ADDIS-Graph\({}_{\text{local}}\)).: Assume local dependence with the lags \((L_{i})_{i\in\mathbb{N}}\). Let \((\gamma_{i})_{i\in\mathbb{N}}\) be a non-negative sequence that sums up to \(1\) and \((g_{j,i})_{i=j+1}^{\infty}\) be a non-negative sequence for all \(j\in\mathbb{N}\) such that \(\sum_{i=j+1}^{k}g_{j,i}<1\) for all \(k>j\). In addition, let \(\tau_{i}\in(0,1]\) and \(\lambda_{i}\in[0,\tau_{i})\) be measurable regarding \(\mathcal{G}_{i-L_{i}-1}=\sigma(\{P_{1},\ldots,P_{i-L_{i}-1}\})\). The _ADDIS-Graph\({}_{\text{local}}\)_ tests each hypothesis \(H_{i}\) at significance level
\[\alpha_{i}=(\tau_{i}-\lambda_{i})\left(\alpha\gamma_{i}+\sum_{j=1}^{i-L_{i}-1 }g_{j,i}^{*}(C_{j}-S_{j}+1)\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}\right),\]
where \(g_{j,i}^{*}=g_{j,i}\bigg{/}\left(1-\sum_{k=j+1}^{d_{j}-1}g_{j,k}\right)\) if \(i\geq d_{j}\) and \(g_{j,i}^{*}=0\) otherwise.
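The weight adjustment of Definition 4.1 can be sketched as follows: for each \(j\), weights pointing at hypotheses that may depend on \(P_j\) (indices below \(d_j\)) are set to zero and the remaining weights are rescaled accordingly. Indexing starts at zero in this sketch.

```python
import numpy as np

def local_weights(g_row, j, lags, n):
    # g_row[k] is g_{j, j+1+k}; lags[i] = L_i; hypotheses are indexed 0, ..., n-1.
    # d_j is the first future index whose p-value is independent of P_j.
    d_j = next((i for i in range(j + 1, n) if i - lags[i] > j), None)
    g_star = np.zeros(n - j - 1)
    if d_j is None:
        return g_star                       # every future p-value may depend on P_j
    removed = sum(g_row[k] for k in range(d_j - j - 1))   # weights pointing below d_j
    for k in range(d_j - j - 1, n - j - 1):
        g_star[k] = g_row[k] / (1.0 - removed)            # rescaled remaining weights
    return g_star
```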
The FWER control of the ADDIS-Graph\({}_{\text{local}}\) comes directly by Theorem 3.2 and the ADDIS principle under local dependence (Tian and Ramdas, 2021). Importantly, note that, for all \(j\in\mathbb{N}\), it holds
\[d_{j}<\infty\text{ and }\sum_{i=j+1}^{\infty}g_{j,i}=1\implies\sum_{i=j+1}^{ \infty}g_{j,i}^{*}=1,\]
which implies that there are no uniformly larger weights than \((g_{j,i}^{*})_{i=j+1}^{\infty}\), that are suitable for an ADDIS-Graph. Since \(\alpha_{i}=\alpha_{i}^{*}\coloneqq(\tau_{i}-\lambda_{i})\left(\alpha\gamma_{i }+\sum_{j=1}^{i-1}g_{j,i}^{*}(C_{j}-S_{j}+1)\alpha_{j}\frac{1}{\tau_{j}- \lambda_{j}}\right)\), we do not lose significance level compared to the
Figure 2: Adjustment of the ADDIS-Graph (Figure 1) to a local dependence structure in which \(P_{1}\) and \(P_{2}\) depend on each other.
independent case, indicating superiority over the ADDIS-Spending, where significance level is lost when the \(p\)-values are locally dependent.
**Remark.**
* The \(\text{ADDIS-Graph}_{local}\) does not lose significance level due to local dependence if \(d_{j}<\infty\) for all \(j\in\mathbb{N}\), which is equivalent to \(\lim\limits_{i\to\infty}i-L_{i}=\infty\). In particular, this is satisfied if \((L_{i})_{i\in\mathbb{N}}\) has an upper bound, which indeed covers many cases that occur in practice, for example, when the hypotheses are tested in finite batches. That means there are disjoint groups of \(p\)-values \(B_{1}=\{P_{1},\ldots,P_{j}\}\) (\(j\in\mathbb{N}\)), \(B_{2}=\{P_{j+1},\ldots,P_{k}\}\) (\(k>j\)), \(B_{3}=\{P_{k+1},\ldots,P_{l}\}\) (\(l>k\)) and so on, such that \(p\)-values from the same batch may depend on each other, but \(p\)-values from different batches are independent.
* There are many other possible ADDIS-Graphs for local dependence. We decided to present \(\text{ADDIS-Graph}_{\text{local}}\) because \(g_{j,i}^{*}/g_{j,k}^{*}=g_{j,i}/g_{j,k}\) for all \(j\in\mathbb{N}\) and \(i,k\geq d_{j}\). Hence, compared to \((g_{j,i})_{i=j+1}^{\infty}\), the weights \((g_{j,i}^{*})_{i=j+1}^{\infty}\) are increased by the same factor for each \(j\in\mathbb{N}\). Simulations showed that an extremely uneven allocation of the significance levels leads to low power.
In the same manner as for local dependence, the \(\text{ADDIS-Graph}\) can be adjusted to an asynchronous testing setup (Zrnic et al., 2021). This is a generalisation of the online multiple testing framework in which the test for hypothesis \(H_{i}\) is not finished at step \(i\in\mathbb{N}\) but at a random time \(E_{i}\geq i\). It is assumed that \(E_{i}\) is independent of the \(p\)-values and thus can be interpreted as fixed but unknown before time \(E_{i}\). One has to determine a significance level for a hypothesis \(H_{i}\) at step \(j\in\mathbb{N}\) without using information about tests that are not finished before step \(j\). Thus, we can adjust the \(\text{ADDIS-Graph}\) to the asynchronous setting by removing arrows connecting hypotheses where the testing process overlaps in time. By standardizing the remaining weights, we do not lose any significance level, which again leads to superiority of the \(\text{ADDIS-Graph}\) over the \(\text{ADDIS-Spending}\). A more formal construction of an \(\text{ADDIS-Graph}\) for the asynchronous setting can be found in the next section, where we derive an \(\text{ADDIS-Graph}\) with FDR control.
## 5 \(\text{ADDIS-Graph}\) for FDR control
In this section, we focus on FDR control, where
\[\text{FDR($i$)}\coloneqq\mathbb{E}\left(\frac{|V(i)|}{|R(i)|\lor 1}\right). \tag{7}\]
In order to control FDR(\(i\)) at any time \(i\in\mathbb{N}\) using \(\text{ADDIS}\) procedures, we need the additional assumptions that \(\lambda_{i}\geq\alpha_{i}\) for all \(i\in\mathbb{N}\) and that \(\alpha_{i}\), \(\lambda_{i}\) and \(1-\tau_{i}\) are monotonic functions of the past. This means that they are coordinatewise nondecreasing functions in \(R_{1:(i-1)}\coloneqq(R_{1},\ldots,R_{i-1})\) and \(C_{1:(i-1)}\coloneqq(C_{1},\ldots,C_{i-1})\) and nonincreasing in \(S_{1:(i-1)}\coloneqq(S_{1},\ldots,S_{i-1})\). Under these assumptions, Tian and Ramdas (2019) showed that the FDR is controlled if the condition of the \(\text{ADDIS}\) principle (3) for FWER control (Definition 2.1) is replaced with
\[\frac{\sum_{j=1}^{i}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}(S_{j}-C_{j})}{|R(i )|\lor 1}\leq\alpha\quad\text{for all $i\in\mathbb{N}$.} \tag{8}\]
**Remark.** If \(\tau_{i}=1\) for all \(i\in\mathbb{N}\), the \(\text{ADDIS}\) principle for FDR control reduces to the SAFFRON principle by Ramdas et al. (2018). In this case, the uniform validity assumption on the null \(p\)-values can be dropped.
The only difference between the conditions of the \(\text{ADDIS}\) principle for the FDR (in (8)) and for the FWER case (in (3)) is the denominator \(|R(i)|\lor 1\). Bringing it on the other side, it can be interpreted as if an additional level \(\alpha\) is gained after each rejection except for the first one. This can be incorporated into the \(\text{ADDIS-Graph}\) by distributing an additional \(\alpha\) to future hypotheses in case of rejection according to non-negative weights \((h_{j,i})_{i=j+1}^{\infty}\) such that \(\sum_{i=j+1}^{\infty}h_{j,i}\leq 1\) for all \(j\in\mathbb{N}\). For example, one could just choose \(h_{j,i}=g_{j,i}\).
Since no significance level is gained for the first rejection, FDR procedures often assume that a lower overall significance level of \(W_{0}\leq\alpha\) is available at the beginning of the testing process such that \((\alpha-W_{0})\) can be gained after the first rejection. To differentiate between the first and other rejections, we additionally define the indicator \(T_{i}\) with \(T_{i}=1\), if the first rejection happened within the first \(i-1\) steps and \(T_{i}=0\), otherwise. We also set \(T_{i}^{c}=1-T_{i}\). With this, the \(\text{ADDIS-Graph}\) for FDR control can be defined as follows.
**Definition 5.1** (FDR-ADDIS-Graph).: Let \((\gamma_{i})_{i\in\mathbb{N}}\), \((g_{j,i})_{j\in\mathbb{N},i>j}\), \((\tau_{i})_{i\in\mathbb{N}}\) and \((\lambda_{i})_{i\in\mathbb{N}}\) be as in the \(\text{ADDIS-Graph}\) (Definition 3.1) such that \(\tau_{i}\) and \(\lambda_{i}\) are monotonic functions of the past. In addition, let \(W_{0}\leq\alpha\) and \((h_{j,i})_{i=j+1}^{\infty}\), \(j\in\mathbb{N}\), be a
non-negative sequence such that \(\sum_{i=j+1}^{\infty}h_{j,i}\leq 1\). The _FDR-ADDIS-Graph_ tests each hypothesis \(H_{i}\) at significance level \(\alpha_{i}=\min(\hat{\alpha}_{i},\lambda_{i})\), where
\[\hat{\alpha}_{i}=(\tau_{i}-\lambda_{i})\left(W_{0}\gamma_{i}+\sum_{j=1}^{i-1}g_{j,i}(C_{j}-S_{j}+1)\frac{\hat{\alpha}_{j}}{\tau_{j}-\lambda_{j}}+\sum_{j=1}^{i-1}h_{j,i}R_{j}[\alpha T_{j}+(\alpha-W_{0})T_{j}^{c}]\right). \tag{9}\]
Obviously, \(\alpha_{i}\) is a monotonic function of the past for all \(i\in\mathbb{N}\), which leads to the following conclusion.
**Theorem 5.2**.: _The FDR-ADDIS-Graph satisfies equation (8) and thus controls the FDR strongly, when the null \(p\)-values are uniformly conservative and independent of each other and of the non-nulls._
The FDR-ADDIS-Graph is illustrated in Figure 3. Note that the figure only contains \((\hat{\alpha}_{i})_{i\in\mathbb{N}}\) and one needs to set \(\alpha_{i}=\min(\hat{\alpha}_{i},\lambda_{i})\) after using the graph. The FDR-ADDIS-Graph can be interpreted just as the ADDIS-Graph for FWER control (Figure 1). The additional grey arrows are activated if the corresponding hypothesis is rejected. In case of the first rejection, the level \(\alpha-W_{0}\) is distributed to the future hypotheses according to the weights \((h_{j,i})_{i=j+1}^{\infty}\), \(j\in\mathbb{N}\), and in case of any other rejection, the level \(\alpha\) is distributed.
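To make the recursion in Definition 5.1 concrete, the following minimal sketch computes the individual significance levels for a finite stream of \(p\)-values. The constant choices \(\tau_{i}=0.5\), \(\lambda_{i}=0.25\), \(\gamma_{i}\propto 1/i^{1.6}\) and \(g_{j,i}=h_{j,i}=\gamma_{i-j}\) are only illustrative; any parameters satisfying the conditions of Definition 5.1 could be used instead.

```python
import numpy as np

def fdr_addis_graph(p, alpha=0.05, w0=None, tau=0.5, lam=0.25):
    """Sketch of the FDR-ADDIS-Graph (Definition 5.1) with constant tau_i, lambda_i."""
    n = len(p)
    w0 = alpha if w0 is None else w0
    gamma = 1.0 / np.arange(1, n + 1) ** 1.6
    gamma /= gamma.sum()                               # non-negative, sums to one
    alpha_hat = np.zeros(n)                            # candidate levels from equation (9)
    alpha_ind = np.zeros(n)                            # individual levels alpha_i
    S = np.zeros(n, dtype=int)                         # S_j = 1{P_j <= tau_j}
    C = np.zeros(n, dtype=int)                         # C_j = 1{P_j <= lambda_j}
    R = np.zeros(n, dtype=int)                         # R_j = 1{P_j <= alpha_j}
    T = np.zeros(n, dtype=int)                         # T_j = 1 if a rejection occurred before step j
    for i in range(n):
        level = w0 * gamma[i]
        for j in range(i):
            # level passed on from hypotheses that were discarded (P_j > tau_j) or used
            # for adaptivity (P_j <= lambda_j); in both cases C_j - S_j + 1 equals one
            level += gamma[i - j - 1] * (C[j] - S[j] + 1) * alpha_hat[j] / (tau - lam)
            # level earned from earlier rejections: alpha - W_0 for the first, alpha afterwards
            level += gamma[i - j - 1] * R[j] * (alpha if T[j] else alpha - w0)
        alpha_hat[i] = (tau - lam) * level
        alpha_ind[i] = min(alpha_hat[i], lam)          # enforce alpha_i <= lambda_i
        S[i], C[i], R[i] = p[i] <= tau, p[i] <= lam, p[i] <= alpha_ind[i]
        if i + 1 < n:
            T[i + 1] = T[i] or R[i]
    return alpha_ind, R.astype(bool)
```

Since the levels depend only on past information, the same function can equally be run in a streaming fashion, one \(p\)-value at a time.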
The benefit of the FDR-ADDIS-Graph compared to the proposal of Tian and Ramdas (2019), the ADDIS\({}^{*}\) algorithm, is similar to that of the ADDIS-Graph for FWER control over the ADDIS-Spending. Due to its graphical structure, the FDR-ADDIS-Graph is easier to interpret. In particular, the dependencies between the previous test outcomes and the individual significance levels become clearer.
The FDR-ADDIS-Graph can easily be adapted to an asynchronous setting by removing the arrows connecting timely overlapping hypotheses. By standardizing the remaining weights, no significance level is lost, which leads to an improvement over the ADDIS\({}^{*}_{async}\) by Tian and Ramdas (2019). In the same way, the FDR-ADDIS-Graph can be adjusted to a local dependence structure. However, in the case of local dependence, only control of the _marginal false discovery rate_ (mFDR) is provided (Zrnic et al., 2021), which is defined as
\[\text{mFDR}(i)\coloneqq\frac{\mathbb{E}[|V(i)|]}{\mathbb{E}(|R(i)|\lor 1)}. \tag{10}\]
Figure 3: Illustration of the FDR-ADDIS-Graph.
For this reason, we only present an extension of the FDR-ADDIS-Graph to the asynchronous setting. However, if one is interested in mFDR control under local dependence, the same adjustments to the FDR-ADDIS-Graph can be made as presented in Section 4 for the FWER controlling ADDIS-Graph. To derive an FDR-ADDIS-Graph for the asynchronous setting, we proceed as described at the end of Section 4. To this end, let \(E_{i}\geq i\), \(i\in\mathbb{N}\), be the stopping times given by the asynchronous testing. The idea is to distribute the significance level of \(H_{j}\) in case of \(P_{j}\leq\lambda_{j}\) or \(P_{j}>\tau_{j}\) only to hypotheses \(H_{i}\) with \(i>E_{j}\), which leads to the following FDR-ADDIS-Graph for the asynchronous setting.
**Definition 5.3** (FDR-ADDIS-Graph\({}_{\text{async}}\)).: Let \(W_{0}\leq\alpha\), \((\gamma_{i})_{i\in\mathbb{N}}\) be a non-negative sequence that sums up to \(1\) and \((g_{j,i})_{i=j+1}^{\infty}\) and \((h_{j,i})_{i=j+1}^{\infty}\) be non-negative sequences for all \(j\in\mathbb{N}\) such that \(\sum_{i=j+1}^{k}g_{j,i}<1\) and \(\sum_{i=j+1}^{k}h_{j,i}<1\) for all \(k>j\). In addition, let \(\tau_{i}\in(0,1]\) and \(\lambda_{i}\in[0,\tau_{i})\) be measurable with respect to \(\mathcal{G}_{i}^{E}=\sigma(\{P_{j}:E_{j}<i\})\). We define \(g_{j,i}^{*}=g_{j,i}\Big{/}\left(1-\sum_{k=j+1}^{E_{j}}g_{j,k}\right)\) and \(h_{j,i}^{*}=h_{j,i}\Big{/}\left(1-\sum_{k=j+1}^{E_{j}}h_{j,k}\right)\) if \(i>E_{j}\), and \(g_{j,i}^{*}=0\), \(h_{j,i}^{*}=0\) otherwise. The _FDR-ADDIS-Graph\({}_{\text{async}}\)_ tests each hypothesis \(H_{i}\) at significance level \(\alpha_{i}=\min(\hat{\alpha}_{i},\lambda_{i})\), where
\[\hat{\alpha}_{i}=(\tau_{i}-\lambda_{i})\left(W_{0}\gamma_{i}+\sum_{j=1}^{i-1}g_{j,i}^{*}(C_{j}-S_{j}+1)\frac{\hat{\alpha}_{j}}{\tau_{j}-\lambda_{j}}+\sum_{j=1}^{i-1}h_{j,i}^{*}R_{j}[\alpha T_{j}+(\alpha-W_{0})T_{j}^{c}]\right). \tag{11}\]
The FDR control of FDR-ADDIS-Graph\({}_{\text{async}}\) is directly implied by Theorem 5.2.
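Computationally, the only change relative to the synchronous graph is the standardization of the weights. A small sketch, assuming \(0\)-indexed hypotheses, a finite horizon \(n\) and known finish indices \(E_{j}\geq j\) (all simplifications on our part), is:

```python
import numpy as np

def standardize_weights(g, E):
    # g: (n, n) array with g[j, i] holding the weight from hypothesis j to hypothesis i (i > j)
    # E: length-n integer array, E[j] = index of the step at which the test of H_j finishes
    n = len(E)
    g_star = np.zeros_like(g, dtype=float)
    for j in range(n):
        # weight that would point at hypotheses overlapping with the still-running test of H_j
        overlap = g[j, j + 1:E[j] + 1].sum()
        for i in range(E[j] + 1, n):
            g_star[j, i] = g[j, i] / (1.0 - overlap)   # redistribute it over the remaining targets
    return g_star
```

The same routine applies to the \(h_{j,i}\); for \(E_{j}=j\) (synchronous testing) it returns the weights unchanged.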
## 6 Simulations
We investigate the power and error control of the proposed ADDIS-Graphs by means of simulations. In Subsection 6.1, we compare the FWER controlling procedures ADDIS-Graph and ADDIS-Spending (Tian and Ramdas, 2021) under local dependence and in Subsection 6.2, we compare the FDR-ADDIS-Graph with the ADDIS* algorithm (Tian and Ramdas, 2019) in an asynchronous testing setup.
### Simulations for FWER control
We consider \(n=1000\) hypotheses to be tested, whose corresponding \(p\)-values \((P_{i})_{i\in\{1,\dots,n\}}\) arrive in finite batches \(B_{1},\dots,B_{n/b}\) with the same batch-size \(b\in\{1,10,25,50\}\) for every batch. Let \(X_{bj+1:b(j+1)}=(X_{bj+1},\dots,X_{b(j+1)})^{T}\mathord{\sim}N_{b}(\mu,\Sigma)\), \(j\in\{0,\dots,n/b-1\}\), be \(b\)-dimensional \(i.i.d\) random vectors, where \(\mu=(0,\dots,0)^{T}\in\mathbb{R}^{b}\) and \(\Sigma=(\sigma_{ij})_{i,j=1,\dots,b}\in\mathbb{R}^{b\times b}\) with \(\sigma_{ii}=1\) and \(\sigma_{ij}=\rho\in(0,1)\) for all \(i\in\{1,\dots,b\}\) and \(j\neq i\). For each \(H_{i}\), \(i\in\{1,\dots,n\}\), we test the null hypothesis \(H_{i}:\mu_{i}\leq 0\) with \(\mu_{i}=\mathbb{E}[Z_{i}]\), where \(Z_{i}=X_{i}+\mu_{A}\), \(\mu_{A}>0\), with probability \(\tau_{A}\in(0,1)\) and \(Z_{i}=X_{i}+\mu_{N}\), \(\mu_{N}<0\), otherwise. Since the test statistics follow a standard gaussian distribution under the null hypothesis, a z-test can be used. The parameter \(\mu_{A}\) can be interpreted as the strength of the alternative, \(\pi_{A}\) as probability of a hypothesis being false and \(\mu_{N}\) as the conservativeness of null \(p\)-values.
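The sampling scheme just described can be sketched as follows; the simulations reported here were run in R (see the Supplementary Material), so the Python version below, including the function name and the use of scipy, is only an illustration.

```python
import numpy as np
from scipy.stats import norm

def generate_p_values(n=1000, b=10, rho=0.8, pi_a=0.3, mu_a=4.0, mu_n=-0.5, seed=0):
    # n hypotheses arriving in batches of size b; within a batch the test statistics
    # are equicorrelated with correlation rho, across batches they are independent.
    assert n % b == 0
    rng = np.random.default_rng(seed)
    sigma = np.full((b, b), rho) + (1.0 - rho) * np.eye(b)
    x = rng.multivariate_normal(np.zeros(b), sigma, size=n // b).ravel()
    is_alt = rng.random(n) < pi_a                    # hypothesis is false with probability pi_A
    z = x + np.where(is_alt, mu_a, mu_n)             # shift by mu_A (alternatives) or mu_N (nulls)
    p = norm.sf(z)                                   # one-sided z-test of H_i: mu_i <= 0
    return p, is_alt
```

Feeding these \(p\)-values into both procedures and averaging over independent trials yields power and FWER estimates of the kind shown in Figures 4-6.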
In this subsection, we use an overall level \(\alpha=0.2\) and estimate the FWER and power of the ADDIS-Graph\({}_{\text{local}}\) and ADDIS-Spending\({}_{\text{local}}\) (Tian and Ramdas, 2021) by averaging over \(2000\) independent trials. Thereby, the proportion of rejected hypotheses among the false hypotheses is used as empirical power. We set \(\mu_{A}=4\), \(\mu_{N}=-0.5\) and \(\rho=0.8\) in all simulations within this subsection, thus obtaining slightly conservative null \(p\)-values. Since both procedures are based on the same ADDIS principle and therefore exploit the conservativeness of null \(p\)-values in the same manner, no further parameter configurations are necessary. As recommended by Tian and Ramdas (2021), we choose \(\tau_{i}=0.8\) and \(\lambda_{i}=\alpha\tau_{i}=0.16\) for all \(i\in\mathbb{N}\). In the first simulation (Figure 4), we also use the same \(\gamma_{i}\propto 1/\left((i+1)\log(i+1)^{2}\right)\) as Tian and Ramdas (2021) in their simulations. However, in Figures 5 and 6, we use \(\gamma_{i}\propto 1/i^{1.6}\) and \(\gamma_{i}=6/(\pi^{2}i^{2})\), as the procedures are very sensitive to the choice of \((\gamma_{i})_{i\in\mathbb{N}}\). For the weights of the ADDIS-Graph, we always set \(g_{j,i}=\gamma_{i-j}\), \(j\in\mathbb{N}\) and \(i>j\).
The plots indicate that ADDIS-Spending\({}_{\text{local}}\) and ADDIS-Graph\({}_{\text{local}}\) perform quite similarly under independence of the \(p\)-values. However, when the \(p\)-values become locally dependent, the power of the ADDIS-Spending\({}_{\text{local}}\) decreases systematically in all cases, while the power of the ADDIS-Graph\({}_{\text{local}}\) either remains similar (Figure 4) or even increases (Figures 5 and 6). To understand the power behavior of the ADDIS-Graph\({}_{\text{local}}\), note that the larger the batch-size, the
Figure 4: Comparison of ADDIS-Spending\({}_{\text{local}}\) and ADDIS-Graph\({}_{\text{local}}\) in terms of power and FWER for different batch-sizes and proportions of false null hypotheses (\(\pi_{A}\)). Lines above the overall level \(\alpha=0.2\) correspond to power and lines below to FWER. The \(p\)-values were generated as described in the text with parameters \(\mu_{N}=-0.5\), \(\mu_{A}=4\) and \(\rho=0.8\). Both procedures were applied with parameters \(\tau_{i}=0.8\), \(\lambda_{i}=0.16\) and \(\gamma_{i}\propto 1/\left((i+1)\log(i+1)^{2}\right)\). In addition, \(g_{j,i}=\gamma_{i-j}\) was used in ADDIS-Graph\({}_{\text{local}}\).
Figure 5: Comparison of ADDIS-Spending\({}_{\text{local}}\) and ADDIS-Graph\({}_{\text{local}}\) in terms of power and FWER for different batch-sizes and proportions of false null hypotheses (\(\pi_{A}\)). Lines above the overall level \(\alpha=0.2\) correspond to power and lines below to FWER. The \(p\)-values were generated as described in the text with parameters \(\mu_{N}=-0.5\), \(\mu_{A}=4\) and \(\rho=0.8\). Both procedures were applied with parameters \(\tau_{i}=0.8\), \(\lambda_{i}=0.16\) and \(\gamma_{i}\propto 1/i^{1.6}\). In addition, \(g_{j,i}=\gamma_{i-j}\) was used in ADDIS-Graph\({}_{\text{local}}\).
further into the future the significance level is distributed by the weights \((g^{*}_{j,i})_{i=j+1}^{\infty}\) (see Definition 4.1). In these simulations, \(\gamma_{i}\propto 1/\left((i+1)\log(i+1)^{2}\right)\) (Figure 4) decreases the slowest and \(\gamma_{i}=6/(\pi^{2}i^{2})\) (Figure 6) decreases the fastest. If \((\gamma_{i})_{i\in\mathbb{N}}\) decreases slowly and the batch-size is large, \(\text{ADDIS-Graph}_{\text{local}}\) distributes a lot of significance level to hypotheses in the far future. However, since the testing process is finite in this case, these hypotheses may never be tested, which leads to a power loss. On the other hand, if \((\gamma_{i})_{i\in\mathbb{N}}\) decreases fast, \(\text{ADDIS-Graph}_{\text{local}}\) allocates the individual significance levels more evenly under a larger batch-size, which results in a higher power. Thus, to obtain optimal power, one should choose a faster decreasing \((\gamma_{i})_{i\in\mathbb{N}}\) the larger the batch-size is.
### Simulations for FDR control
In this subsection, we consider the same simulation setup as described in Subsection 6.1, but for independent \(p\)-values (\(b=1\)). However, when applying the procedures, it is assumed that the hypotheses are tested in an asynchronous manner. Thus, for each hypothesis \(H_{i}\), \(i\in\mathbb{N}\), we have a stopping time \(E_{i}\geq i\). We assume that \(E_{i}=i+e\) for some constant test duration \(e\in\mathbb{N}_{0}\). In the following simulations we compare the FDR-\(\text{ADDIS-Graph}_{\text{async}}\) and \(\text{ADDIS}^{*}_{\text{async}}\) (Tian and Ramdas, 2019) in terms of power and FDR for \(e\in\{0,2,5,10\}\). Since the FDR is less conservative than the FWER, we change the overall level to \(\alpha=0.05\) and the strength of the alternative to \(\mu_{A}=3\). As recommended by Tian and Ramdas (2019), we choose \(\tau_{i}=0.5\) and \(\lambda_{i}=0.25\) for all \(i\in\mathbb{N}\), but use the same \((\gamma_{i})_{i\in\mathbb{N}}\) and \((g_{j,i})_{i=j+1}^{\infty}\), \(j\in\mathbb{N}\), as before. Furthermore, we set \(W_{0}=\alpha\). For the additional weights \((h_{j,i})_{i=j+1}^{\infty}\) of the FDR-\(\text{ADDIS-Graph}_{\text{async}}\), we fix \(h_{j,i}=g_{j,i}\) for all \(j\in\mathbb{N}\) and \(i>j\). The results obtained by averaging over \(200\) independent trials can be found in Figures 7-9.
The results are similar as for the FWER controlling procedures (Subsection 6.1). The power of \(\text{ADDIS}^{*}_{\text{async}}\) decreases enormously for an increasing test duration. This decrease can be decelerated by the FDR-\(\text{ADDIS-Graph}_{\text{async}}\) (Figures 7 and 8) or even stopped (Figure 9), if a faster decreasing \((\gamma_{i})_{i\in\mathbb{N}}\) is chosen.
Figure 8: Comparison of \(\text{ADDIS}^{*}_{\text{async}}\) and FDR-\(\text{ADDIS-Graph}_{\text{async}}\) in terms of power and FDR for different test durations and proportions of false null hypotheses (\(\pi_{A}\)). Lines above the overall level \(\alpha=0.05\) correspond to power and lines below to FDR. The \(p\)-values were generated as described in the text with parameters \(\mu_{N}=-0.5\) and \(\mu_{A}=3\). Both procedures were applied with parameters \(\tau_{i}=0.5\), \(\lambda_{i}=0.25\), \(\gamma_{i}\propto 1/i^{1.6}\) and \(W_{0}=\alpha\). In addition, \(g_{j,i}=\gamma_{i-j}\) and \(h_{j,i}=g_{j,i}\) were used in FDR-\(\text{ADDIS-Graph}_{\text{async}}\).
Figure 7: Comparison of \(\text{ADDIS}^{*}_{\text{async}}\) and FDR-\(\text{ADDIS-Graph}_{\text{async}}\) in terms of power and FDR for different test durations and proportions of false null hypotheses (\(\pi_{A}\)). Lines above the overall level \(\alpha=0.05\) correspond to power and lines below to FDR. The \(p\)-values were generated as described in the text with parameters \(\mu_{N}=-0.5\) and \(\mu_{A}=3\). Both procedures were applied with parameters \(\tau_{i}=0.5\), \(\lambda_{i}=0.25\), \(\gamma_{i}\propto 1/\left((i+1)\log(i+1)^{2}\right)\) and \(W_{0}=\alpha\). In addition, \(g_{j,i}=\gamma_{i-j}\) and \(h_{j,i}=g_{j,i}\) were used in FDR-\(\text{ADDIS-Graph}_{\text{async}}\).
follow a natural local dependence structure: the hypotheses are tested in batches and within each batch the same mice are used for each hypothesis. Thus, the \(p\)-values within a batch depend on each other, but \(p\)-values from different batches are independent.
We applied ADDIS-Spending\({}_{\text{local}}\) and \(\text{ADDIS-Graph}_{\text{local}}\) with the same parameters as in Subsection 6.1. The \((\gamma_{i})_{i\in\mathbb{N}}\) was chosen such that \(\gamma_{i}\propto 1/\big{(}(i+1)\log(i+1)^{2}\big{)}\) for all \(i\in\mathbb{N}\). The results can be found in Figure 10. The left plot shows the number of rejections achieved by the two procedures with respect to the FWER level \(\alpha\) considered. Similar to Subsection 6.1, we can see that the \(\text{ADDIS-Graph}_{\text{local}}\) allows a significantly larger number of hypotheses to be rejected than the \(\text{ADDIS-Spending}_{\text{local}}\). The right plot shows the individual significance levels obtained by the two procedures for the FWER level \(\alpha=0.2\). Note, however, that for ease of illustration we omitted the first 100 levels, which were much higher than the later ones. It can be seen that the \(\text{ADDIS-Graph}_{\text{local}}\) tests each hypothesis at a higher significance level than the \(\text{ADDIS-Spending}_{\text{local}}\). In fact, for each FWER level and hypothesis, the individual significance level obtained by the \(\text{ADDIS-Graph}\) is greater than or equal to the \(\text{ADDIS-Spending}\) level. That means the \(\text{ADDIS-Graph}\) rejects all hypotheses that are rejected by the \(\text{ADDIS-Spending}\), and possibly more.
## 8 Discussion
In this work, we presented a graphical approach to exploit the \(\text{ADDIS}\) principles for FWER (Tian and Ramdas, 2021) and FDR (Tian and Ramdas, 2019) control. We started with the construction of an FWER controlling \(\text{ADDIS-Graph}\). This proposal enhances the interpretability of the \(\text{ADDIS-Spending}\) and also enlarges the family of procedures it covers, as we show that by means of the \(\text{ADDIS-Graph}\) all procedures satisfying the \(\text{ADDIS}\) principle for FWER control can be obtained. Furthermore, the \(\text{ADDIS-Graph}\) can easily be adapted to a local dependence structure and an asynchronous testing setup without losing significance level. For both situations, we show in the considered simulation scenarios that the \(\text{ADDIS-Graph}\) leads to a large power gain compared to the \(\text{ADDIS-Spending}\). Moreover, we extend the \(\text{ADDIS-Graph}\) to the FDR control setting, resulting in the FDR-\(\text{ADDIS-Graph}\). It has the same advantages as the \(\text{ADDIS-Graph}\) and is superior to the currently used \(\text{ADDIS}\) method with FDR control, the \(\text{ADDIS}^{*}\) algorithm.
Robertson et al. (2022b) claimed that the individual significance levels assigned by asynchronous online procedures are more conservative. We have illustrated with the \(\text{ADDIS-Graph}\) that this is not necessarily the case. Although the significance level of a hypothesis depends on pessimistic assumptions about the outcomes of tests that are still running, future hypotheses can take advantage of this conservatism and achieve higher significance levels such that no level is lost overall.
Although we focus exclusively on the online multiple testing setting in this paper, the proposed procedures can also be applied in classical multiple testing, which is particularly useful when the number of hypotheses is large. In doing
Figure 9: Comparison of \(\text{ADDIS}^{*}_{\text{async}}\) and \(\text{FDR-ADDIS-Graph}_{\text{async}}\) in terms of power and FDR for different test durations and proportions of false null hypotheses (\(\pi_{A}\)). Lines above the overall level \(\alpha=0.05\) correspond to power and lines below to FDR. The \(p\)-values were generated as described in the text with parameters \(\mu_{N}=-0.5\) and \(\mu_{A}=3\). Both procedures were applied with parameters \(\tau_{i}=0.5\), \(\lambda_{i}=0.25\), \(\gamma_{i}=6/(\pi^{2}i^{2})\) and \(W_{0}=\alpha\). In addition, \(g_{j,i}=\gamma_{i-j}\) and \(h_{j,i}=g_{j,i}\) were used in \(\text{FDR-ADDIS-Graph}_{\text{async}}\).
so, one could order the hypotheses without looking at the data, e.g. regarding their contextual importance, and apply the ADDIS-Graph to the resulting finite sequence of \(p\)-values. Since significance level is only distributed to "future" hypotheses, this could be interpreted as an ADDIS version of the fallback procedure (Wiens and Dmitrienko, 2005). However, we wonder whether our approach can be extended in this case such that significance level is also allowed to be distributed to previous hypotheses, which would define an ADDIS version of the classical graphical procedure by Bretz et al. (2009). This idea could also be applied to online batched-testing (Zrnic et al., 2020), where at each step not only one, but a batch of several hypotheses is tested at the same time.
Another task for future work is the optimal choice of the parameters \((\gamma_{i})_{i\in\mathbb{N}}\), \((g_{j,i})_{j\in\mathbb{N},i>j}\) and \((h_{j,i})_{j\in\mathbb{N},i>j}\). In our simulations (Section 6), we chose \((\gamma_{i})_{i\in\mathbb{N}}\) as in the literature for the comparison procedures and set \((g_{j,i})_{i=j+1}^{\infty}\) and \((h_{j,i})_{i=j+1}^{\infty}\) related to \((\gamma_{i})_{i\in\mathbb{N}}\) for each \(j\in\mathbb{N}\). However, the large number of parameters allows for many further possibilities that can strongly influence the performance of the ADDIS-Graphs. For instance, we saw that a faster decreasing \((\gamma_{i})_{i\in\mathbb{N}}\) would be useful when the batch-size or test duration is large. Many more such recommendations could be made through simulations and theoretical results. In addition, one could study time-varying choices of \((\tau_{i})_{i\in\mathbb{N}}\) and \((\lambda_{i})_{i\in\mathbb{N}}\) that may depend on the previous test outcomes.
Figure 10: The left plot shows the number of rejections for different FWER levels \(\alpha\) and the right plot the individual significance levels (for \(\alpha=0.2\)) obtained by ADDIS-Spending\({}_{\text{local}}\) and ADDIS-Graph\({}_{\text{local}}\). Both procedures were applied with parameters \(\tau_{i}=0.8\), \(\lambda_{i}=0.16\) and \(\gamma_{i}\propto 1/\left((i+1)\log(i+1)^{2}\right)\). In addition, \(g_{j,i}=\gamma_{i-j}\) was used in ADDIS-Graph\({}_{\text{local}}\).
## Appendix
Proof of Theorem 3.2.: Let \((\alpha_{i})_{i\in\mathbb{N}}\) be given by the ADDIS-Graph. We need to show that for any \(i\in\mathbb{N}\), \(S_{1:i}\coloneqq(S_{1},\ldots,S_{i})^{T}\in\{0,1\}^{i}\) and \(C_{1:i}\coloneqq(C_{1},\ldots,C_{i})^{T}\in\{0,1\}^{i}\):
\[\sum_{j=1}^{i}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}(S_{j}-C_{j})\leq\alpha. \tag{12}\]
We define \(U_{j}\coloneqq C_{j}-S_{j}+1\) for all \(j\in\mathbb{N}\). Then \(1-U_{j}=S_{j}-C_{j}\) and since \(C_{j}\leq S_{j}\), it holds \(U_{j}\in\{0,1\}\). Now let \(i\in\mathbb{N}\) and \(U_{1:i}=(U_{1},\ldots,U_{i})^{T}\in\{0,1\}^{i}\) be arbitrary but fixed. With this, (12) is equivalent to
\[F_{i}(U_{1:i})\coloneqq\sum_{j=1}^{i}\left(\alpha\gamma_{j}+\sum_{k=1}^{j-1}g _{k,j}U_{k}\alpha_{k}(U_{1:(k-1)})\frac{1}{\tau_{k}-\lambda_{k}}\right)(1-U_{ j})\leq\alpha. \tag{13}\]
Note that we only wrote the dependence of \(\alpha_{k}\) on \(U_{1:(k-1)}=(U_{1},\ldots,U_{k-1})^{T}\), although the parameters \(\lambda_{k}\) and \(\tau_{k}\) could depend on it as well. This is because those parameters could also be fixed: if we changed \(U_{1:(k-1)}\), they would still be valid parameters for an ADDIS-Graph, whereas \(\alpha_{k}\) changes by definition. It is difficult to show the validity of (13) directly. However, we will see that there exists a \(\tilde{U}_{1:i}\in\{0,1\}^{i}\) that obviously fulfils \(F_{i}(\tilde{U}_{1:i})\leq\alpha\). Therefore, the idea is to determine such a \(\tilde{U}_{1:i}\) that additionally satisfies \(F_{i}(U_{1:i})\leq F_{i}(\tilde{U}_{1:i})\).
Let \(l=\max\{j\in\{1,\ldots,i\}:U_{j}=1\}\) (we set \(\max(\emptyset)=0\)) and \(U_{1:i}^{l}=(U_{1}^{l},\ldots,U_{i}^{l})^{T}\), where \(U_{j}^{l}=U_{j}\) for all \(j\neq l\) and \(U_{l}^{l}=0\). We assume that \(l>0\) (if \(l=0\), we later see \(F_{i}(U_{1:i})\leq\alpha\) anyway). In the next step we want to show that \(F_{i}(U_{1:i})\leq F_{i}(U_{1:i}^{l})\). For shorter notation we write \(\alpha_{j}=\alpha_{j}(U_{1:(j-1)})\) and \(\alpha_{j}^{l}=\alpha_{j}(U_{1:(j-1)}^{l})\). Since for all \(j\leq i\): \(U_{j}^{l}=U_{j}\) (\(j\neq l\)), \(U_{j}^{l}=0\) (\(j\geq l\)), \(U_{j}=0\) (\(j\geq l+1\)) and \(\alpha_{j}^{l}=\alpha_{j}\) (\(j\leq l\)), we have:
\[F_{i}(U_{1:i}^{l})-F_{i}(U_{1:i})\]
\[=\sum_{j=1}^{i}\alpha\gamma_{j}(1-U_{j}^{l})-\sum_{j=1}^{i}\alpha\gamma_{j}(1-U_{j})+\sum_{j=1}^{i}\left(\sum_{k=1}^{j-1}g_{k,j}U_{k}^{l}\alpha_{k}^{l}\frac{1}{\tau_{k}-\lambda_{k}}\right)(1-U_{j}^{l})-\sum_{j=1}^{i}\left(\sum_{k=1}^{j-1}g_{k,j}U_{k}\alpha_{k}\frac{1}{\tau_{k}-\lambda_{k}}\right)(1-U_{j})\]
\[=\alpha\gamma_{l}+\sum_{j=1}^{i}\left(\sum_{k=1}^{j-1}g_{k,j}U_{k}^{l}\alpha_{k}^{l}\frac{1}{\tau_{k}-\lambda_{k}}\right)(1-U_{j}^{l})-\sum_{j=1}^{i}\left(\sum_{k=1}^{j-1}g_{k,j}U_{k}\alpha_{k}\frac{1}{\tau_{k}-\lambda_{k}}\right)(1-U_{j})\]
\[=\alpha\gamma_{l}+\sum_{k=1}^{l-1}g_{k,l}U_{k}\alpha_{k}\frac{1}{\tau_{k}-\lambda_{k}}+\sum_{j=l+1}^{i}\sum_{k=1}^{l-1}g_{k,j}U_{k}^{l}\alpha_{k}^{l}\frac{1}{\tau_{k}-\lambda_{k}}-\sum_{j=l+1}^{i}\sum_{k=1}^{l}g_{k,j}U_{k}\alpha_{k}\frac{1}{\tau_{k}-\lambda_{k}}\]
\[=\alpha\gamma_{l}+\sum_{k=1}^{l-1}g_{k,l}U_{k}\alpha_{k}\frac{1}{\tau_{k}-\lambda_{k}}-\sum_{j=l+1}^{i}g_{l,j}\alpha_{l}\frac{1}{\tau_{l}-\lambda_{l}}\]
\[\geq\alpha\gamma_{l}+\sum_{k=1}^{l-1}g_{k,l}U_{k}\alpha_{k}\frac{1}{\tau_{k}-\lambda_{k}}-\alpha_{l}\frac{1}{\tau_{l}-\lambda_{l}}\stackrel{\text{Def.\,3.1}}{=}0,\]
where we used in the inequality that the sequence \((g_{l,j})_{j=l+1}^{\infty}\) is non-negative and sums to at most \(1\) for all \(l\in\mathbb{N}\).
Since \(U_{1:i}\in\{0,1\}^{i}\) was arbitrary, iterating this argument (removing the largest remaining index with \(U_{j}=1\) in each step) shows \(F_{i}(U_{1:i})\leq F_{i}(U_{1:i}^{0})\) for all \(U_{1:i}\in\{0,1\}^{i}\), where \(U_{1:i}^{0}=(0,\ldots,0)^{T}\in\{0,1\}^{i}\). Next, we deduce that \(F_{i}(U_{1:i}^{0})\leq\alpha\) and conclude the proof. For this, just recognize that \(U_{1:i}^{0}\) means \(U_{j}=0\) for all \(j\leq i\). Hence, we obtain
\[F_{i}(U_{1:i}^{0})=\sum_{j=1}^{i}\alpha\gamma_{j}\leq\alpha.\]
Proof of Theorem 3.3.: Let \(G_{1:i}=(R_{1},C_{1},S_{1},\ldots,R_{i},C_{i},S_{i})\in\{0,1\}^{3i}\), then every procedure satisfying the ADDIS principle (Definition 2.1) is a sequence of non-negative functions \((\alpha_{i}(G_{1:(i-1)}))_{i\in\mathbb{N}}\) such that
\[\sum_{j\leq i}\frac{\alpha_{j}(G_{1:(j-1)})}{\tau_{j}-\lambda_{j}}(S_{j}-C_{j}) \leq\alpha\quad\text{for all }i\in\mathbb{N}. \tag{14}\]
Note that the function \(\alpha_{i}(G_{1:(i-1)})\) is fully determined through the information until step \(i-1\), hence pessimistic assumptions about \(S_{i}\) and \(C_{i}\) need to be made in order to satisfy equation (14). Consequently, the condition of the ADDIS principle is equivalent to
\[0\leq\alpha_{i}(G_{1:(i-1)})\leq(\tau_{i}-\lambda_{i})\left(\alpha-\sum_{j\leq i -1}\frac{\alpha_{j}(G_{1:(j-1)})}{\tau_{j}-\lambda_{j}}(S_{j}-C_{j})\right) \quad\text{for all }i\in\mathbb{N}. \tag{15}\]
Let \(i\in\mathbb{N}\) and \(G_{1:(i-1)}\in\{0,1\}^{3(i-1)}\) be arbitrary but fixed. In addition, let \((\alpha_{j})_{j<i}\) be levels obtained by an ADDIS-Graph with parameters \((\gamma_{j})_{j<i}\) and \((g_{k,j})_{k<j<i}\). We want to prove that
\[\alpha_{i}(\gamma_{i},(g_{j,i})_{j<i})=(\tau_{i}-\lambda_{i})\left(\alpha\gamma_{i}+\sum_{j=1}^{i-1}g_{j,i}(C_{j}-S_{j}+1)\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}\right),\]
where \(\gamma_{i}\in\left[0,1-\sum_{j=1}^{i-1}\gamma_{j}\right]\) and \(g_{j,i}\in\left[0,1-\sum_{k=j+1}^{i-1}g_{j,k}\right]\), \(j\in\{1,\ldots,i-1\}\), can take any value in the interval \(\left[0,(\tau_{i}-\lambda_{i})\left(\alpha-\sum_{j\leq i-1}\frac{\alpha_{j}}{ \tau_{j}-\lambda_{j}}(S_{j}-C_{j})\right)\right]\). Since \(\alpha_{i}\) is continuous in \(\gamma_{i}\) and \((g_{j,i})_{j<i}\), it is sufficient to show that \(\alpha_{i}(0,(0)_{j<i})=0\) and \(\alpha_{i}\left(1-\sum_{j=1}^{i-1}\gamma_{j},\left(1-\sum_{k=j+1}^{i-1}g_{j,k }\right)_{j<i}\right)=(\tau_{i}-\lambda_{i})\left(\alpha-\sum_{j\leq i-1}\frac {\alpha_{j}}{\tau_{j}-\lambda_{j}}(S_{j}-C_{j})\right)\). The first equation follows immediately, hence we only need to show the second (we set \(U_{j}=C_{j}-S_{j}+1\) for all \(j\in\mathbb{N}\)):
\[\alpha_{i}\left(1-\sum_{j=1}^{i-1}\gamma_{j},\left(1-\sum_{k=j+1}^{i-1}g_{j,k}\right)_{j<i}\right)-(\tau_{i}-\lambda_{i})\left(\alpha-\sum_{j=1}^{i-1}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}(1-U_{j})\right)\]
\[=(\tau_{i}-\lambda_{i})\left(\alpha\left(1-\sum_{j=1}^{i-1}\gamma_{j}\right)+\sum_{j=1}^{i-1}\left(1-\sum_{k=j+1}^{i-1}g_{j,k}\right)U_{j}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}\right)-(\tau_{i}-\lambda_{i})\left(\alpha-\sum_{j=1}^{i-1}\left(\alpha\gamma_{j}+\sum_{k=1}^{j-1}g_{k,j}U_{k}\frac{\alpha_{k}}{\tau_{k}-\lambda_{k}}\right)(1-U_{j})\right)\]
\[=(\tau_{i}-\lambda_{i})\left(-\alpha\sum_{j=1}^{i-1}\gamma_{j}U_{j}+\sum_{j=1}^{i-1}U_{j}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}-\sum_{j=1}^{i-1}\sum_{k=j+1}^{i-1}g_{j,k}U_{j}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}+\sum_{j=1}^{i-1}\sum_{k=1}^{j-1}g_{k,j}U_{k}\frac{\alpha_{k}}{\tau_{k}-\lambda_{k}}-\sum_{j=1}^{i-1}\left(\sum_{k=1}^{j-1}g_{k,j}U_{k}\frac{\alpha_{k}}{\tau_{k}-\lambda_{k}}\right)U_{j}\right)\]
\[=(\tau_{i}-\lambda_{i})\left(\sum_{j=1}^{i-1}U_{j}\frac{\alpha_{j}}{\tau_{j}-\lambda_{j}}-\sum_{j=1}^{i-1}U_{j}\left(\alpha\gamma_{j}+\sum_{k=1}^{j-1}g_{k,j}U_{k}\frac{\alpha_{k}}{\tau_{k}-\lambda_{k}}\right)\right)=0.\]
Proof of Theorem 5.2.: Let \(\alpha_{j}^{0}=W_{0}\gamma_{j}+\sum_{k=1}^{j-1}h_{k,j}R_{k}[\alpha T_{k}+(\alpha-W_ {0})T_{k}^{c}]\) for all \(j\in\mathbb{N}\). Note that
\[\sum_{j=1}^{i}\alpha_{j}^{0} =\sum_{j=1}^{i}W_{0}\gamma_{j}+\sum_{j=1}^{i}\sum_{k=1}^{j-1}h_{k, j}R_{k}[\alpha T_{k}+(\alpha-W_{0})T_{k}^{c}]\] \[\leq W_{0}+\sum_{k=1}^{i-1}R_{k}[\alpha T_{k}+(\alpha-W_{0})T_{k}^ {c}]\sum_{j=k+1}^{i}h_{k,j}\] \[\leq W_{0}+\sum_{k=1}^{i-1}R_{k}[\alpha T_{k}+(\alpha-W_{0})T_{k}^ {c}]\] \[=W_{0}+(\alpha-W_{0})T_{i}+\alpha(|R(i-1)|-1)T_{i}\] \[\leq\alpha(|R(i)|\lor 1)\]
With this, the proof can be performed as for Theorem 3.2 by replacing \(\alpha\gamma_{j}\) with \(\alpha_{j}^{0}\) on the left side in equation (13) and \(\alpha\) with \(\alpha(|R(i)|\lor 1)\) on the right side.
## Funding
L. Fischer acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project number 281474342/GRK2224/2.
M. Bofill Roig is a member of the EU Patient-centric clinical trial platform (EU-PEARL). EU-PEARL has received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 853966. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and EFPIA and Children's Tumor Foundation, Global Alliance for TB Drug Development non-profit organization, Springworks Therapeutics Inc. This publication reflects the author's views. Neither IMI nor the European Union, EFPIA, or any Associated Partners are responsible for any use that may be made of the information contained herein.
## Supplementary Material
The R code for the simulations and case study can be found at the GitHub repository [https://github.com/fischer23/Adaptive-Discard-Graph](https://github.com/fischer23/Adaptive-Discard-Graph).
|
2310.08501
|
Unsupervised Learning of Object-Centric Embeddings for Cell Instance
Segmentation in Microscopy Images
|
Segmentation of objects in microscopy images is required for many biomedical
applications. We introduce object-centric embeddings (OCEs), which embed image
patches such that the spatial offsets between patches cropped from the same
object are preserved. Those learnt embeddings can be used to delineate
individual objects and thus obtain instance segmentations. Here, we show
theoretically that, under assumptions commonly found in microscopy images, OCEs
can be learnt through a self-supervised task that predicts the spatial offset
between image patches. Together, this forms an unsupervised cell instance
segmentation method which we evaluate on nine diverse large-scale microscopy
datasets. Segmentations obtained with our method lead to substantially improved
results, compared to state-of-the-art baselines on six out of nine datasets,
and perform on par on the remaining three datasets. If ground-truth annotations
are available, our method serves as an excellent starting point for supervised
training, reducing the required amount of ground-truth needed by one order of
magnitude, thus substantially increasing the practical applicability of our
method. Source code is available at https://github.com/funkelab/cellulus.
|
Steffen Wolf, Manan Lalit, Henry Westmacott, Katie McDole, Jan Funke
|
2023-10-12T16:59:50Z
|
http://arxiv.org/abs/2310.08501v1
|
# Unsupervised Learning of Object-Centric Embeddings
###### Abstract
Segmentation of objects in microscopy images is required for many biomedical applications. We introduce object-centric embeddings (OCEs), which embed image patches such that the spatial offsets between patches cropped from the same object are preserved. Those learnt embeddings can be used to delineate individual objects and thus obtain instance segmentations. Here, we show theoretically that, under assumptions commonly found in microscopy images, OCEs can be learnt through a self-supervised task that predicts the spatial offset between image patches. Together, this forms an unsupervised cell instance segmentation method which we evaluate on nine diverse large-scale microscopy datasets. Segmentations obtained with our method lead to substantially improved results, compared to state-of-the-art baselines on six out of nine datasets, and perform on par on the remaining three datasets. If ground-truth annotations are available, our method serves as an excellent starting point for supervised training, reducing the required amount of ground-truth needed by one order of magnitude, thus substantially increasing the practical applicability of our method. Source code is available at github.com/funkelab/cellulus.
## 1 Introduction
Determining whether two image regions belong to the same object is a fundamental challenge in instance segmentation, albeit a simple task for humans. A plausible hypothesis is that humans learn to recognize parts as belonging to a whole by repeatedly observing them in each other's vicinity. We introduce object-centric embeddings (OCEs), which leverage this assumption for unsupervised instance segmentation. OCEs map image patches in such a way that the spatial offsets between patches cropped from the same object are preserved in embedding space. We investigate the usage of OCEs in the domain of microscopy imaging and introduce Cellulus, a method that identifies and segments individual cells in microscopy images.
By relying on reasonable assumptions about microscopy images, namely that (i) the objects in these images have a similar appearance and (ii) the objects in these images are randomly distributed, we show that OCEs can be learnt in an unsupervised fashion.
Cell instance segmentation is crucial for answering important life science questions. In recent years, deep learning-based segmentation approaches [23, 12] have achieved the best performance on standard benchmarking datasets, but these approaches rely on large amounts of annotated training data. Our proposed unsupervised method Cellulus, in contrast, circumvents the problem of acquiring these manual annotations.
With Cellulus, we provide an approach for employing the learnt object-centric embedding locations per patch, identifying image patches that are part of the same cell and thus segmenting cell instances in an unsupervised way (see Figure 1 for a few examples). We demonstrate that this unsupervised segmentation pipeline achieves competitive results with respect to pre-trained baseline models on a diverse set of nine microscopy image datasets (see Table 1).
Additionally, instance segmentations obtained through our proposed unsupervised pipeline are excellent starting points to support supervised training when only few manually generated ground truth annotations are available. We show that we obtain performance comparable to supervised segmentation methods after fine-tuning on one order of magnitude less data (see Figure 5).
More generally, supervised training supported by unsupervised segmentation is at least as good as purely supervised learning on all investigated datasets, demonstrating that our method dramatically reduces the amount of ground truth annotations needed, and at times not requiring any.
Reducing or eliminating the need for manual ground truth is of particular importance to biological research, as new light-microscopy methods are capable of generating terabytes of data in a single experiment. Manually annotating even small regions of such datasets can take hundreds
or thousands of human hours. Thus, there is a tremendous need for self-supervised learning methods to help cope with the vast amount of data generated by modern microscopes. Cellulus is available at github.com/funkelab/cellulus.
## 2 Related Work
Currently, machine learning and deep learning-based methods dominate the field of cell instance segmentation [26, 23, 12]. These cell segmentation methods can be categorized by their intermediate (auxiliary) representation used to derive the predicted segmentation.
StarDist[22], for example, represents objects as star-convex polygons (_i.e._, distances from a center point to the cell boundary along sets of equi-distant rays). On the other hand, Cellpose[23] encodes cells by vectors that point inwards from the boundary. The representations of StarDist and Cellpose are pre-defined and tailored to the tasks of cell segmentation.
Alternatively, pixel-level representations (here referred to as _embeddings_) can be learnt from labels directly by pulling embeddings of pixels within instances together and pushing embeddings across instances apart [3]. Initially developed for natural images, this concept was further developed into a cell segmentation and tracking algorithm in the work by Payer _et al_. [18], which established the state-of-the-art on six Cell Tracking Challenge (CTC) datasets.
Recent submissions to the CTC further improved the segmentation and tracking performance. While Arbelle _et al_. [1] and Scherr _et al_. [21] relied on boundary classification to separate densely clustered cells, Loffler _et al_. [15] used _spatial embeddings_.
Spatial embedding-based approaches learn a function which associates each pixel at location \(i\) in the raw image, to a relative spatial embedding (offset vector) \(r_{i}\), such that the resulting absolute spatial embedding \(e_{i}=i+r_{i}\) for all pixels belonging to an object instance point to a common point (_e.g._ the instance centroid).
Typically, the embeddings are learnt using a regression loss function, either minimizing the distance between absolute spatial embeddings of pairs of pixels \(i,j\) from the same instance \(\mathcal{L}_{\text{regr}}=\sum_{i,j}\sigma\left(e_{i}-e_{j}\right)\) or equivalently by approaching the mean over the whole instance [16, 12]. Here, \(\sigma\) is a measure of distance, _e.g._, \(|\cdot|^{2}\). Recently, EmbedSeg used spatial embeddings to establish the state-of-the-art on multiple 2D and 3D microscopy datasets [12].
We note that our learning approach has parallels with supervised learning of pixel-wise spatial embeddings. In our work, self-supervised learning leads to object-centric embeddings, which are post-processed using the mean-shift clustering algorithm, analogous to De Brabandere _et al_. [3].
Self-supervised learning methods learn representations by solving tasks that predict an intentionally hidden part of the data. Predicting the spatial arrangement of image patches provides a rich signal for learning meaningful representations for downstream tasks. Spatial tasks include solving jigsaws [17], predicting patch rotations [6], or classifying relative patch positions from a grid-like pattern [5]. More recently, contrastive learning between multiple views [8, 9, 25, 14] enabled learning of representations
Figure 1: **Method overview and example segmentations on diverse datasets**. Top row: An unsupervised learning objective gives rise to object-centric embeddings (OCEs), such that patches extracted from the same object (green boxes) maintain their relative position to each other. Predicted densely, these OCEs allow instance segmentation of cells in microscopy images, by using a post-processing step such as mean-shift clustering. Bottom row: Example raw images and dense OCEs/instance segmentations on four datasets spanning different imaging modalities, cell sizes and shapes.
that transfer well to downstream tasks. These learnt representations have been shown to reduce the required amounts of annotated data in tasks such as image classification [10] and semantic segmentation [13, 24].
### Unsupervised Methods for Cell Segmentation
Recently, methods for cell instance segmentation have been proposed that do not rely on human annotation.
The unsupervised segmentation pipeline proposed by Din and Yu [4] employed a Convolutional Neural Network (CNN) which, when centered on a cell nucleus, is tasked to predict a binary mask for that cell. The model is trained without any ground truth and is tasked to predict consistent masks that cover all foreground pixels. However, this method still relies on pre-trained networks for locating the nuclei from which the cell segmentations are predicted, and can therefore not be considered fully unsupervised.
Completely unsupervised _instance separation_ has been proposed by Wolf [27], where inpainting networks are used to determine which image regions are independent. These independent regions are determined by a hierarchical optimization strategy that continually subdivides the image until all instances are separated. In contrast to our proposed method Cellulus, the post-processing step of Wolf [27] is very computationally expensive and does not provide a method for detecting background regions automatically.
Xie [28] proposed a self-supervised method that employed two proxy tasks, estimating nuclei size and ranking nuclei counts, which enabled the model to mine instance-aware representations from raw data.
## 3 Method
We aim to learn an embedding of image patches that reflects the relative spatial arrangement of these patches (the offset between the predicted embeddings should be equal to their spatial offset), as if they were extracted from the same object (see Figure 2). We refer to the spatial offset between patches extracted from the same object as _intra-object offset_ and the learnt embeddings as _object-centric embeddings_ (OCEs).
### Unsupervised Learning of OCEs
Under conditions that are commonly found in microscopy images, OCEs can be learnt in an unsupervised manner, without the provision of segmentation ground-truth. Those conditions are:
1. Objects in the image are similar
2. Objects are randomly distributed in the image plane
3. A patch cropped from an object contains enough information to identify its position inside the object (_i.e._, no two parts of an object look exactly identical)
Under these conditions, the _expected_ offset between two image patches is proportional to the _intra-object offset_ of those two patches, _i.e._, the spatial offset between those patches if they were part of the same object.
Let \(a\) and \(b\) be two different patches found on an object (_e.g._, the left- and the right-most patches of a cell, see Figure 3). If multiple similar objects are present in an image, there will be multiple locations \(i\in\Omega\) where the patch \(a\) is visible, and distinct locations \(j\in\Omega\) where the patch \(b\) is visible. Here, \(\Omega\) is the set of all pixel locations and \(x:\Omega\mapsto\mathbb{R}\) is the image. We will refer to the image patch at a location
Figure 2: **Unsupervised Learning of Object-Centric Embeddings**. During learning, small image patches are randomly cropped from the raw image and embedded through a learnable function \(f_{\theta}\) into a 2D embedding space. The objective of the loss \(\mathcal{L}_{\text{OCE}}\) is to ensure that the spatial offset between pairs of patches in the raw image (green arrows) is preserved in the embedding space (see Equation 4).
\(i\) as \(p(i)\) and denote the set of all locations that contain a given patch \(a\) as \(\Omega_{a}\), _i.e._, \(\Omega_{a}=\{i\in\Omega\ \mid\ p(i)=a\}\).
Consider the expected observed offset \(\overrightarrow{ij}=j-i\) between all occurrences of patches \(a\) and \(b\): for each object contained in the image, patches \(a\) and \(b\) are observed once with their _intra-object offset_, _i.e._, the offset they have to each other as being part of the same object. For every pair of different objects, however, patches \(a\) and \(b\) will be observed at random offsets, following the assumption that objects in the image are randomly distributed. The key insight that allows unsupervised learning of OCEs is that the observed offsets of patches from different objects have zero mean.
Formally, the expected offset between all locations of two image patches \(a\) and \(b\) is given as
\[\mathbb{E}\left[\overrightarrow{ij}\mid a,b\right]\approx\frac{1}{N}\sum_{i\in\Omega_{a}}\sum_{j\in\Omega_{b}}\overrightarrow{ij}, \tag{1}\]
where \(N=|\Omega_{a}|\cdot|\Omega_{b}|\) is the number of pairs of image locations \(i,j\), where patches \(a\) and \(b\) are observed.
This expectation can be rewritten to distinguish observed offsets from the _same_ versus _different_ objects. For that, let \(\Omega_{b}^{i}\) denote all locations \(j\) where patch \(b\) appears and is part of the same object as location \(i\). Similarly, let \(\bar{\Omega}_{b}^{i}\) be the set of locations \(j\) where patch \(b\) appears, but is not part of the object at location \(i\). We can now rewrite the expected observed offset \(\overrightarrow{ij}\) as
\[\mathbb{E}\left[\overrightarrow{ij}\mid a,b\right]\approx\frac{1}{N}\sum_{i\in\Omega_{a}}\left[\sum_{j\in\Omega_{b}^{i}}\overrightarrow{ij}+\sum_{j\in\bar{\Omega}_{b}^{i}}\overrightarrow{ij}\right] \tag{2}\]
\[=\frac{N_{\text{s}}}{N}\underbrace{\frac{1}{N_{\text{s}}}\sum_{i\in\Omega_{a}}\sum_{j\in\Omega_{b}^{i}}\overrightarrow{ij}}_{\text{intra-object offset}}+\frac{N_{\text{d}}}{N}\underbrace{\frac{1}{N_{\text{d}}}\sum_{i\in\Omega_{a}}\sum_{j\in\bar{\Omega}_{b}^{i}}\overrightarrow{ij}}_{\approx 0}, \tag{3}\]
where \(N_{\text{s}}\) and \(N_{\text{d}}\) denote the number of times that patches \(a\) and \(b\) are observed in the same object and different objects, respectively.
The first term in Equation 3 is, by definition, the intra-object offset, _i.e._, the quantity we aim to infer. The second term is the expected offset between patches \(a\) and \(b\) if both are part of different objects. Under the assumption that multiple similar objects are randomly distributed in the image, this expectation is zero: observing patch \(a\) relative to patch \(b\) with offset \(\overrightarrow{ij}\) is just as likely as observing them at the inverse offset \(\overrightarrow{ji}\). Without any supervision, the constants \(N_{\text{s}}\) and \(N_{\text{d}}\) are not known. The expected offset, calculated as in Equation 1, is thus proportional to the sought-after intra-object offset.
In conclusion, the expected offset given any two patches approximates the offset between the patches extracted from the same object. We can leverage this property to devise a loss function that minimizes the differences between the spatial and embedding offsets between pairs of patches, and thus learn an object-centric embedding in an unsupervised fashion.
Let \(f_{\theta}:\mathcal{P}\mapsto\mathbb{R}^{2}\) be a parameterized embedding function, mapping from the set of all image patches \(\mathcal{P}\) to a 2D embedding space. We denote a patch located at \(i\) as \(p(i)\) and its embedding as \(r(i)=f_{\theta}(p(i))\). We propose the following unsupervised loss, minimizing the difference between \(d(i,j)=i-j\) and \(\hat{d}(i,j)=r(i)-r(j)\) for pairs of patches:
\[\mathcal{L}_{\text{OCE}}=\sum_{i,j\in\Omega}\sigma\left(d(i,j)-\hat{d}(i,j) \right), \tag{4}\]
where \(\sigma\) is a measure of distance, _e.g._, \(|\cdot|_{2}\) (we will discuss our choice of \(\sigma\) below).
### Loss Implementation
In practice, the embedding function will be implemented as a Convolutional Neural Network (CNN) and its weights can be updated using stochastic gradient descent. In this setting, strong gradient contributions resulting from pairs of patches of different objects can be problematic due to their high variance, even if they have zero mean. To address this, we dampen the effect of large distances in our loss function by using a sigmoid distance function, _i.e._,
Figure 3: **Illustration of the expected offset between two example patches in an idealized image:** Black squares show all image locations where the two example patches are found. The expected offset between those patches stems from offsets observed within the _same_ object (_intra-object offsets_, shown as green arrows) and offsets observed between _different_ objects (_inter-object offsets_, shown as orange arrows for the center object only). Assuming a random distribution of objects in large images, the average offset between _different_ objects is zero, thus the expected offset \(\overrightarrow{ij}\) between the given patches is proportional to the intra-object offset.
\(\sigma(\delta)=\left(1+\exp(-\frac{\|\delta\|_{2}^{2}}{\tau})\right)^{-1}\), where \(\tau\) is a hyperparameter controlling the rate of damping.
Furthermore, we limit the sampling of pairs of patches to have a maximal distance \(\kappa\) and add an L2 regularization term to obtain our final unsupervised loss function as
\[\mathcal{L}=\sum_{i,j\in P}\sigma\left(d(i,j)-\hat{d}(i,j)\right)+\lambda_{ \text{reg}}\|r(i)\|_{2}, \tag{5}\]
where \(P\subset\{i,j\in\Omega\mid|i-j|_{2}\leq\kappa\}\). For more details, see Appendix A.
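For concreteness, a minimal PyTorch sketch of this training objective for a mini-batch of \(N\) sampled patches could look as follows; the tensor shapes, the use of a mean instead of a sum, and the hyperparameter values are illustrative assumptions, while the released code (github.com/funkelab/cellulus) contains the actual implementation.

```python
import torch

def oce_loss(emb, coords, kappa=32.0, tau=10.0, lambda_reg=1e-5):
    # emb:    (N, 2) predicted object-centric embeddings r(i)
    # coords: (N, 2) pixel locations i of the sampled patches
    coords = coords.float()
    d = coords[:, None, :] - coords[None, :, :]          # spatial offsets d(i, j) = i - j
    d_hat = emb[:, None, :] - emb[None, :, :]            # embedding offsets r(i) - r(j)
    sq_err = ((d - d_hat) ** 2).sum(dim=-1)              # ||d(i, j) - d_hat(i, j)||^2
    sigma = torch.sigmoid(sq_err / tau)                  # damped distance measure sigma(delta)
    close = (d ** 2).sum(dim=-1) <= kappa ** 2           # keep only pairs with |i - j|_2 <= kappa
    reg = lambda_reg * emb.norm(dim=-1).mean()           # L2 regularisation on r(i)
    return sigma[close].mean() + reg
```

During training, `emb` would be gathered from the dense output of the embedding network at the sampled patch locations.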
### Instance Segmentation from OCEs
An instance segmentation can be obtained from OCEs by firstly segmenting foreground vs. background, followed by partitioning the foreground into individual instances.
To address the background identification, we exploit the sensitivity of the OCEs to noise in the background: We observe that certain noise patterns in the background (_e.g._, single bright pixels) become the center point of locally consistent embeddings, thus creating spurious objects (see Figure 4, first column for an example). To identify background, we repeatedly introduce artificial noise to the raw image and measure the variance of the predicted embeddings (we found salt-and-pepper noise to be effective). We find that the distribution of the variance of these embeddings over image locations is bi-modal, such that a parameter-free thresholding method like Otsu's is sufficient to separate foreground from background.
After identifying the background, we segment individual instances in the foreground through a mean-shift clustering on the dense OCE predictions [2, 19] (see Figure 4).
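Putting background detection and clustering together, a rough sketch of this inference procedure might look as follows, using scikit-image for the noise and Otsu thresholding and scikit-learn for mean-shift; `predict_oce` (the trained network wrapped as a dense predictor), the noise amount and the bandwidth are placeholders rather than the exact values used here.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.util import random_noise
from sklearn.cluster import MeanShift

def segment_from_oce(raw, predict_oce, n_noise=5, bandwidth=20.0):
    # raw:         (H, W) normalized image
    # predict_oce: assumed callable returning a dense (H, W, 2) OCE map for an image
    noisy_runs = [predict_oce(random_noise(raw, mode="s&p", amount=0.01))
                  for _ in range(n_noise)]
    embeddings = np.stack(noisy_runs)                    # (n_noise, H, W, 2)
    variance = embeddings.var(axis=0).sum(axis=-1)       # per-pixel embedding variance
    mean_emb = embeddings.mean(axis=0)
    # background = pixels whose embeddings are unstable under noise (Otsu split)
    foreground = variance < threshold_otsu(variance)
    # cluster the foreground OCEs into individual instances with mean-shift
    labels = np.zeros(raw.shape, dtype=int)
    fg_emb = mean_emb[foreground]                        # (#foreground pixels, 2)
    labels[foreground] = MeanShift(bandwidth=bandwidth).fit_predict(fg_emb) + 1
    return labels
```

The `bandwidth` argument plays the role of the scale-appropriate mean-shift bandwidth discussed in the Implementation paragraph below.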
## 4 Experiments
**Used Datasets**. We test our method Cellulus on nine publicly available datasets for which dense ground truth annotations are available. The datasets were chosen to represent a diverse set of image modalities, cell/tissue types, and imaging platforms.
_TissueNet_[7] is the largest of the analyzed datasets, with \(1.3\) million annotated cells. It covers six imaging platforms and includes histologically normal and diseased tissue of humans, mice, and macaques. The included tissue types (Immune, Lung, Pancreas, Skin cells) vary widely in cell appearance and density. Therefore, we add evaluations where we restrict the dataset to the four individual tissue types. For reference, constructing _TissueNet_ required \(>4,000\) hours of human annotation time.
Both nuclei and whole cells are labeled in _TissueNet_, and both of these image channels were used during training and inference. For evaluation purposes during inference, the predicted instance segmentations are compared against the ground truth labels for the whole-cell image channel.
_Cell Tracking Challenge (CTC)_[26] provides diverse 2D and 3D datasets 1. We select five 2D datasets with distinct cell appearances: HSC, HU7, Simulated, FluoHela and PSC.
Footnote 1: [http://celltrackingchallenge.net/2d-datasets/](http://celltrackingchallenge.net/2d-datasets/)
Each dataset comes with two sets of image sequences: \(1\) and \(2\). We used set \(1\) for training, while set \(2\) is held out for evaluation. Images in the _CTC_ datasets contain only one channel.
**Segmentation Metrics**. We use two widely used cell segmentation scores: (i) SEG score (used by CTC [26]) matches every ground truth object to a predicted instance segmentation and measures the average intersection over union (IoU) of all matches. (ii) F1 score (used by Greenwald _et al_. [7]) matches all predictions and ground truth objects with an IoU greater than or equal to a fixed threshold (\(0.5\) unless specified) and reports the F1 measure of successfully found matches.
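As a concrete reference for the matching logic, the F1 score at a fixed IoU threshold can be computed along the following lines; this is a simplified sketch operating on label images, and the exact implementation used by the respective benchmarks may differ in detail.

```python
import numpy as np

def f1_at_iou(gt, pred, thresh=0.5):
    # gt, pred: integer label images of the same shape, background labeled 0
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    matched, tp = set(), 0
    for i in gt_ids:
        gt_mask = gt == i
        for j in pred_ids:
            if j in matched:
                continue
            inter = np.logical_and(gt_mask, pred == j).sum()
            union = np.logical_or(gt_mask, pred == j).sum()
            if union > 0 and inter / union >= thresh:    # match found at this IoU threshold
                tp += 1
                matched.add(j)
                break
    fp = len(pred_ids) - tp                              # unmatched predictions
    fn = len(gt_ids) - tp                                # unmatched ground truth objects
    return 2 * tp / max(2 * tp + fp + fn, 1)
```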
Figure 4: **Overview of the inference pipeline**. The input image to the trained object-centric embedding (OCE) network is augmented repeatedly with salt-and-pepper noise, producing several noisy instances of the raw image (first column). OCEs are predicted densely for each noisy instance of the input raw image (second column). Next, the pixel-wise mean and variance of the predicted OCEs are calculated (third column). Image locations with high variance are treated as background. The remaining foreground region is clustered into individual object instances using mean-shift clustering (fourth column).
### Unsupervised Segmentation
We compare Cellulus against two state-of-the-art pre-trained segmentation models that are widely used across datasets. We investigate the segmentation performance under the condition that no ground truth annotations are available.
**Baseline Methods**. StarDist[22] is a widely used cell/nucleus segmentation method. It predicts, for each pixel, the distances to the boundary in a predefined set of directions. Cellpose[23] uses a supervised network to predict spatial embeddings and clusters pixels together using a diffusion-based aggregation method.
**Segmentation Performance.** For each dataset, we train an object-centric embedding network. Raw images are intensity-normalized (the \(1^{\text{st}}\) percentile intensity is mapped to \(0\), while the \(99.8^{\text{th}}\) percentile intensity is mapped to \(1\)) and input to the network to produce dense object-centric embeddings. During inference, these object-centric embeddings are processed to obtain instance segmentations, and the F1 and SEG scores are computed with respect to the ground truth masks for the set of images held out for evaluation purposes (see Table 1). An overview of the datasets and the predicted embeddings and instance segmentations is shown in Figure 7.
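The percentile normalization amounts to the following small helper (clipping values outside the percentile range is our assumption and is not stated explicitly in the text):

```python
import numpy as np

def normalize(raw):
    # map the 1st percentile intensity to 0 and the 99.8th percentile intensity to 1
    lo, hi = np.percentile(raw, [1.0, 99.8])
    return np.clip((raw - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
```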
Our method outperforms both baselines on real-world datasets HU7, HSC, Simulated, Immune, Pancreas and Skin (according to the SEG score). On the Simulated dataset, our method performs exceptionally well (see Table 1 and Appendix D). To highlight the success and failure modes of our method, we measure the F1 score per image and report the [\(0^{\text{th}}\), \(25^{\text{th}}\), \(50^{\text{th}}\), \(75^{\text{th}}\), and \(100^{\text{th}}\)] percentile images for different tissue types in Figure 6.
We find that our method can compensate for some variations in object sizes. Compare, for example, the small cells in the \(75^{\text{th}}\) percentile image of the Immune dataset with more voluminous cells in the \(100^{\text{th}}\) percentile (see Figure 6). However, we also observe that larger outlier objects (_e.g._, the \(0^{\text{th}}\) percentile Skin image) lead to structural under-segmentation.
**Background Detection Performance**. We observe that our background detection generally matches the ground truth in the datasets HU7, FluoHela, TissueNet, PSC and Simulated, where no additional structure in the background is visible. When objects are exceptionally dim, their embeddings may vary with the added noise, which leads to them being treated as background (e.g. see Figure 6, \(25^{\text{th}}\) percentile in the Skin dataset).
The HSC dataset is exceptionally challenging, with a visible culture plate in the background adding additional structure to the image. This leads to additional segments in our method (see Figure 7). All studied methods struggle to predict these segments accurately, with an F1 score below \(0.1\). Remarkably, our method receives a high SEG score compared to the other methods. Further analysis of this dataset, including an evaluation of segmentation metrics for all matching thresholds, can be found in Appendix C.
**Scale Informs All Parameter Choices**. The size of the selected image patches determines what object sizes can be detected. Patches should be smaller than individual objects but still contain meaningful features. To keep all network parameters and the training setup constant, we resize the training data. Specifically, datasets HU7, PSC and Simulated are re-scaled by factors of \(0.5\), \(2\) and \(\frac{2}{3}\), respectively. All other datasets were analyzed at their native resolution.
We also explore the performance of Cellulus across a range of scale factors for two datasets Immune and Lung (see Table 2). Note that predictions produced for any scale
\begin{table}
\begin{tabular}{l c c|c c|c c} \hline \hline & \multicolumn{2}{c}{Cellpose} & \multicolumn{2}{c}{StarDist} & \multicolumn{2}{c}{Cellulus} \\ & F1 & SEG & F1 & SEG & F1 & SEG \\ \hline HSC & \(0.00\) & \(0.00\) & **0.09** & \(0.14\) & \(0.06\) & **0.42** \\ HU7 & \(0.40\) & \(0.27\) & \(0.03\) & \(0.02\) & **0.75** & **0.55** \\ Simulated & \(0.49\) & \(0.34\) & \(0.23\) & \(0.35\) & **0.83** & **0.65** \\ FluoHela & \(0.36\) & \(0.65\) & **0.38** & **0.79** & \(0.34\) & \(0.70\) \\ PSC & **0.76** & **0.58** & \(0.64\) & \(0.47\) & \(0.64\) & \(0.51\) \\ \hline Immune & \(0.44\) & \(0.21\) & \(0.66\) & \(0.41\) & **0.69** & **0.57** \\ Lung & \(0.76\) & \(0.53\) & **0.81** & **0.59** & \(0.51\) & \(0.51\) \\ Pancreas & \(0.56\) & \(0.36\) & \(0.58\) & \(0.36\) & **0.67** & **0.49** \\ Skin & \(0.48\) & \(0.26\) & \(0.39\) & \(0.24\) & **0.60** & **0.46** \\ TissueNet (all) & 0.55 & 0.32 & 0.59 & 0.38 & **0.64** & **0.52** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Quantitative results when no annotations are available (fully unsupervised setting)**.
The pretrained models of StarDist[22] and Cellpose[23] are compared with Cellulus on nine diverse microscopy image datasets. Two instance segmentation metrics F1 and SEG are evaluated by comparing the quality of predicted instance segmentation with the ground truth instance segmentation. Best performing method on each dataset is shown in bold. The last row TissueNet (all) shows a weighted average (weights proportional to the number of images) of results for Immune, Lung, Pancreas and Skin.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Scale Factor & \(0.5\) & \(0.6\) & \(0.7\) & \(0.8\) & \(0.9\) & \(1.0\) & \(1.1\) & \(1.2\) & \(1.3\) & \(1.4\) & \(1.5\) \\ \hline \multicolumn{12}{c}{Immune} \\ \hline F1 & \(0.12\) & \(0.33\) & \(0.48\) & \(0.44\) & \(0.64\) & \(0.69\) & \(0.69\) & \(0.66\) & \(0.67\) & \(0.59\) & \(0.58\) \\ SEG & \(0.20\) & \(0.28\) & \(0.38\) & \(0.36\) & \(0.48\) & \(0.57\) & \(0.55\) & \(0.57\) & \(0.57\) & \(0.56\) & \(0.54\) \\ \hline \multicolumn{12}{c}{Lung} \\ \hline F1 & \(0.17\) & \(0.23\) & \(0.32\) & \(0.52\) & \(0.48\) & \(0.51\) & \(0.36\) & \(0.46\) & \(0.37\) & \(0.27\) & \(0.28\) \\ SEG & \(0.27\) & \(0.25\) & \(0.35\) & \(0.49\) & \(0.44\) & \(0.51\) & \(0.40\) & \(0.52\) & \(0.48\) & \(0.38\) & \(0.44\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Performance of Cellulus across a range of scale factors for two datasets.**
Two instance segmentation metrics F1 and SEG are evaluated by comparing the quality of predicted instance segmentation obtained using Cellulus at different scale factors, with the ground truth labels at scale factor = \(1.0\). Scale factor is inversely related to the employed patch size.
**Implementation.** For learning the object-centric embeddings, we use a U-Net architecture with a limited field of view of \(16\times 16\) (single 2x down-sampling layer, ReLU activation). For more details, see Appendix A.
After the training, a scale-appropriate bandwidth of mean-shift clustering has to be chosen. We use the implementation of mean-shift clustering provided by _scikit-learn_[19] and perform a line search to determine the optimal value.
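As an illustration, a minimal sketch of this clustering step is given below, assuming the network output is available as a per-pixel embedding array together with a foreground mask; the array shapes, function names, and candidate bandwidth grid are our assumptions and not part of the released code.

```python
import numpy as np
from sklearn.cluster import MeanShift

def segment_from_embeddings(embeddings, foreground, bandwidth):
    """Cluster predicted object-centric embeddings into instances.

    embeddings: (H, W, 2) array of predicted spatial OCEs.
    foreground: (H, W) boolean mask of pixels belonging to objects.
    bandwidth:  mean-shift kernel bandwidth (scale dependent).
    Returns an (H, W) integer label image (0 = background).
    """
    points = embeddings[foreground]                    # (N, 2) embedding vectors
    ids = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(points)
    labels = np.zeros(foreground.shape, dtype=int)
    labels[foreground] = ids + 1                       # instance ids start at 1
    return labels

def line_search_bandwidth(embeddings, foreground, candidates, score_fn):
    """Return the candidate bandwidth with the best validation score."""
    return max(candidates,
               key=lambda b: score_fn(segment_from_embeddings(embeddings, foreground, b)))
```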
When instances are tightly packed, our segmentation matches closely with the ground truth without post-processing. However, when instances are surrounded by background, we find that patches close to the object borders get mapped to the object center. Therefore, we shrink our objects to correct for this halo (see Appendix D). We pick the optimal shrinkage distance between \(0\) and \(6\) pixels for all datasets and report the best score.
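A minimal sketch of such a shrinkage correction is shown below, assuming label images stored as NumPy arrays; the per-instance binary erosion is our illustrative implementation rather than necessarily the exact post-processing used in practice.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def shrink_instances(labels, distance):
    """Erode every instance by `distance` pixels to remove the halo of
    border patches that were mapped to the object center.

    labels:   (H, W) integer label image (0 = background).
    distance: shrinkage in pixels (0 disables the correction).
    """
    if distance == 0:
        return labels
    shrunk = np.zeros_like(labels)
    for instance_id in np.unique(labels):
        if instance_id == 0:
            continue
        mask = labels == instance_id
        shrunk[binary_erosion(mask, iterations=distance)] = instance_id
    return shrunk

# The shrinkage distance (0-6 pixels) would be chosen per dataset by
# evaluating the resulting F1/SEG scores, as described above.
```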
### Supporting Supervised Learning
In the following experiments, we investigate how our unsupervised segmentation can be used to increase model performance when only a few objects are annotated.
**Supervised Training Setup.** For the supervised training setup, we build two sparsely annotated supervised datasets, which we call the _sparse_ and _pseudo dataset_ by randomly sampling ground truth objects as a fixed percentage of annotated cells. (1) The _sparse dataset_ contains only the annotated samples. (2) The _pseudo dataset_ uses our predicted segmentations as a starting point (pseudo ground truth) and utilizes the same sampled annotations to correct our predictions. We mask all our predicted objects that overlap with the annotations and use the annotations instead. We include annotations of background pixels close to the labeled object (\(<30\) pixels).
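The sketch below illustrates one way such a pseudo dataset could be assembled from a predicted label image and the sparse annotations; the function name, the overlap rule, and the handling of nearby background pixels are our assumptions, not the authors' released code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_pseudo_labels(predicted, annotated, background_radius=30):
    """Merge unsupervised predictions (pseudo ground truth) with sparse
    manual annotations.

    predicted, annotated: (H, W) integer label images (0 = background).
    Returns the merged label image and the mask of background pixels that
    are treated as annotated background.
    """
    pseudo = predicted.copy()
    annotated_fg = annotated > 0

    # Mask out every predicted object that overlaps a manual annotation ...
    overlapping = np.unique(predicted[annotated_fg])
    pseudo[np.isin(predicted, overlapping[overlapping > 0])] = 0
    # ... and paste the manual annotations in their place.
    pseudo[annotated_fg] = annotated[annotated_fg]

    # Background annotations are only included close to a labeled object (< 30 px).
    distance_to_annotation = distance_transform_edt(~annotated_fg)
    annotated_background = (~annotated_fg) & (distance_to_annotation < background_radius)
    return pseudo, annotated_background
```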
We use these datasets to train a U-Net using a supervised StarDist training loss \(\mathcal{L}_{\textsc{StarDist}}\)[22]. Mini-batches contain half of the images from the _sparse dataset_ and the other half from the _pseudo dataset_. The StarDist loss is computed on each respective half (\(\mathcal{L}_{\textsc{StarDist}}^{\textsc{sparse}}\) and \(\mathcal{L}_{\textsc{StarDist}}^{\textsc{pseudo}}\)). The total loss \(\mathcal{L}_{\textsc{StarDist}}=(1-\alpha)\mathcal{L}_{\textsc{StarDist}}^{ \textsc{sparse}}+\alpha\mathcal{L}_{\textsc{StarDist}}^{\textsc{pseudo}}\) is a linear combination, where \(\alpha=0\) corresponds to classical supervised training. For further details, see Appendix B.
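For concreteness, the loss combination can be written as in the short sketch below; `stardist_loss` is a placeholder for the StarDist training loss, and the framework-agnostic signature is our assumption.

```python
def combined_loss(stardist_loss, pred_sparse, target_sparse,
                  pred_pseudo, target_pseudo, alpha=0.5):
    """Linear combination of the StarDist training loss on the two halves of
    a mini-batch; alpha = 0 recovers purely supervised training.
    `stardist_loss` is a placeholder for the actual StarDist loss function."""
    loss_sparse = stardist_loss(pred_sparse, target_sparse)
    loss_pseudo = stardist_loss(pred_pseudo, target_pseudo)
    return (1.0 - alpha) * loss_sparse + alpha * loss_pseudo
```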
We train StarDist models with varying amounts of annotations and compare the performance of models trained only on annotated images (\(\alpha=0\), blue in Figure 5) with those supported by our segmentation (\(\alpha=0.5\), orange in Figure 5). Each experiment is repeated \(3\) times with different annotation samples. We evaluate the trained networks on the full TissueNet dataset as well as subsets of tissue types Immune, Pancreas, Lung and Skin. All measured performances and their standard deviations are visualized in Figure 5.
Figure 5: **Supervised cell segmentation performance for varying amounts of annotations. We compare a classical supervised learning approach (blue) trained using only manual annotations, against using a mixture of manual annotations and pseudo-ground truth derived from our unsupervised OCEs (orange), on the four tissue types (Immune, Pancreas, Lung and Skin) in the TissueNet dataset [7]. The results for All (left) are obtained by averaging the results obtained individually on the four datasets (right).**
**Supported supervision makes a significant improvement at 1%.** When 1% of cell annotations from the TissueNet dataset are used, F1 \(=0.75\pm 0.03\) is obtained which is significantly better than the performance of the purely unsupervised segmentation at F1 \(=0.64\) (see Table 1). This effect could be due to the biases of the StarDist representation, which might help to refine the unsupervised segmentation.
We additionally perform a training experiment using 0% ground truth annotations (_i.e._, \(\mathcal{L}_{\textsc{StarDist}}=\mathcal{L}_{\textsc{StarDist}}^{\textsc{pseudo}}\)) and notice no improvement (F1 \(=0.63\)). In conclusion, it is the combination of minimal annotations and the supporting pseudo ground truth that significantly helps.
**Supported supervision substantially outperforms purely supervised training.** Our proposed supported supervision method can be used as a replacement for training only on annotations without a performance compromise across all annotation levels. Notably, at annotation levels \(\leq 10\%\) our method outperforms the baseline substantially.
## 5 Discussion
We believe that this work offers a feasible way to accelerate the analysis of microscopy image datasets of cells. As our experiments on nine large cell segmentation datasets demonstrate (see Table 1), a surprisingly good segmentation can often be achieved in a completely unsupervised fashion. Depending on the biological question at hand, those results might already be sufficient for downstream analysis.
Figure 6: **Predicted OCEs and segmentations on the TissueNet dataset with tissue types Immune (top-left), Skin (top-right), Lung (bottom-left) and Pancreas (bottom-right). The F1-score is evaluated for each individual image and the 0th, 25th, 50th, 75th and 100th percentile images and their respective F1 scores are reported in each column. Rows (from the top) show the Raw Images, Dense Prediction of OCEs, the Predicted Instance Segmentation and the Ground Truth Instance Segmentation available for evaluation purposes.**
Furthermore, to obtain more accurate cell segmentations, the segmentations generated in this unsupervised way can be used to augment very small amounts of manual labels and thus increase their efficacy without any additional costs (see Section 4.2). This will in turn drastically reduce the amount of human effort required to analyze large microscopy datasets, and provide a rich source of data for more quantitative and reproducible analyses.
However, we also note some limitations of our method stemming from violated assumptions: if objects are not randomly distributed (_e.g._, if cells always cluster together in pairs), there is no way to tell in a purely unsupervised manner which structure is to be considered as one instance (either the pair of cells, or individual cells). Similarly, if the objects in the image do not resemble many other examples, the proposed method is unlikely to learn a meaningful object-centric embedding. As such, cells with outlier morphologies could result in degenerate segmentations. Furthermore, we note that the proposed method is sensitive to the size of the objects to be segmented, _i.e._, the patch size has to be large enough to contain enough information to predict the relative position of the patch compared to others, but small enough to not contain entire objects. Although this introduces a hyper-parameter that has to be adjusted for each dataset, we believe that this is of little practical relevance since the size of cells in an image can easily be estimated.
While the work discussed here focuses on segmenting cells in 2D datasets, it is theoretically feasible to expand this method to 3D and even 4D datasets. This capability would be particularly useful to biological research, where cells and tissues are commonly imaged in 3D.
## Acknowledgments
S.W., H.W. and K.M. are supported by the Medical Research Council, as part of UK Research and Innovation [MCUP1201/23].
|
2308.12713
|
The quantum optics of gravitational waves
|
By utilizing quantum optics techniques, we examine the characteristics of a
quantum gravitational wave (GW) signature at interferometers. In particular, we
study the problem by analyzing the equations of motion of a GW interacting with
an idealized interferometer. Using this method, we reconstruct the classical GW
signal from a representation of the quantum version of an almost classical
monochromatic wave (a single-mode coherent state), then we discuss the
experimental signatures of some specific, more general quantum states. We
calculate the observables that could be used at future interferometers to probe
possible quantum states carried by the gravitational waves.
|
Luca Abrahao, Francesco Coradeschi, Antonia Micol Frassino, Thiago Guerreiro, Jennifer Rittenhouse West, Enrico Junior Schioppa
|
2023-08-24T11:17:47Z
|
http://arxiv.org/abs/2308.12713v2
|
# The quantum optics of gravitational waves
###### Abstract
By utilizing quantum optics techniques, we examine the characteristics of a quantum gravitational wave (GW) signature at interferometers. In particular, we study the problem by analyzing the equations of motion of a GW interacting with an idealized interferometer. Using this method, we reconstruct the classical GW signal from a representation of the quantum version of an _almost classical_ monochromatic wave (a single-mode coherent state), then we discuss the experimental signatures of some specific, more general quantum states. We calculate the observables that could be used at future interferometers to probe possible quantum states carried by the gravitational waves.
## 1 Introduction
Our understanding of gravity is simply and elegantly expressed through the theory of general relativity (GR). GR is a well-defined _classical_ (as in, non-quantum) theory and its study offers numerous fruitful avenues of research in various domains, ranging from cosmology to astrophysics to tabletop experiments. GR may of course also be viewed as the low energy limit of a _quantum_ field theory. While a full-blown quantum theory of gravity is as yet unavailable, it is possible to give an _effective_ quantum description of GR by treating the spacetime metric as a classical background, and perturbatively quantizing fluctuations around it [1, 2]. The resulting effective theory is a quantum field theory of gravitons in curved spacetime, and its non-renormalizability is harmless at energies much below the Planck scale. Such an effective description may arise, for instance, as the intermediate energy limit of a more complete theory. The detection of gravitational waves - long awaited theoretically and difficult to achieve experimentally - has been a huge scientific achievement. In the same vein, a natural next step is to search for experimental or astrophysical observations that can provide definitive proof of (or exclude) the existence of gravitons.
Recently, there has been an increasing interest in combining characteristic features of gravity with aspects of quantum optics. Some interesting applications that would benefit from this union would be the detection of gravitational waves using cavities [3, 4, 5, 6, 7, 8, 9, 10] or, alternatively, the analysis of possible quantum effects in the weak gravity regime [11, 12, 13]. While the former points in the direction of increasing sensitivity, the latter focuses more on the phenomenological advantages of considering such experimental setups.
From the phenomenological point of view, a significant benefit of combining quantum optics and gravity is the possibility of experimentally probing the existence of the graviton as the mediator of the gravitational force. Indeed, while empirical evidence supports the existence of quantum particles, such as photons, gluons, W/Z bosons, and Higgs bosons, which are known to give rise to all known interactions according to the Standard Model, gravitons have not been directly observed (and have even been argued to be in principle non-observable [14]), even though they are likely to be an ingredient of any quantum theory of gravity.
As already mentioned, the quantum field description of an interacting spin-2 particle, having GR as a classical limit, has been long well-known. Even though such a theory only works phenomenologically at low energy, a great deal of knowledge can be gained from just focusing on such a low-energy, weak-gravity (perturbative) quantum regime 1. From a practical point of view, this means that it makes sense to explore the possibility that weak gravitational phenomena, such as gravitational waves far from their sources, may have a quantum nature. Observations of gravitational waves by the LIGO-Virgo-KAGRA collaboration support classical general relativity, and quantum
corrections, if any, are assumed to be negligible because graviton shot noise is suppressed by the large occupancy number of gravitons in a detectable wave [16, 17]. However, if quantum effects (negligible or not) are present, the argument for their smallness is only reliable if the gravitational wave is in a mostly classical state, such as a coherent state. For such states, it is unlikely that any quantum behaviour will be observed in the near future; however this conclusion does not apply, at least straightforwardly, to more general quantum states.
Recently, considerable effort has been devoted to understanding possible signatures at LIGO-like experiments of hypothetical quantum states of gravity that have no classical analog [16, 17, 18, 19, 20] (see also [21, 15, 22]). These signatures can be fruitfully investigated by using an effective field theory description of gravity and using the formal tools of quantum optics. The picture that comes out gives a powerful perspective on some fundamental aspects of the possible phenomenology of quantum gravity.
We start by treating both gravity and the instrument one uses to detect it as quantum objects. We thus describe the dynamics of the system by a Hamiltonian operator of the form
\[\hat{H}=\hat{H}_{0}+\hat{H}_{int}\,. \tag{1}\]
where
\[\hat{H}_{0}=\hat{H}_{g}+\hat{H}_{d}\,, \tag{2}\]
gives the dynamics of pure gravity (subscript \(g\)) and of the detector (subscript \(d\)). The details of \(\hat{H}_{0}\) are not important for our discussion, beyond treating gravity as a quantum theory: the states we will consider are quantum superpositions of plane gravitational wave, that is, states of many gravitons of well-defined energy. The most interesting information lives instead in the interaction term \(\hat{H}_{int}\), the form of which is very specific to the system we are studying. In other words, even though we are going to make use of the formal tools of quantum optics, which are generic no matter the specific form of \(\hat{H}_{int}\), it is precisely the specific form of \(\hat{H}_{int}\) that carries the differences between electromagnetism and gravity. This will allow us to draw conclusions that specifically pertain to the quantum weak regime of the gravitational field.
Let us now briefly outline the way it is possible to arrive at these conclusions with details to follow in the next sections. As a first step, we formally prepare the gravitational part of our GW-detector system in some specific state \(|\Psi\left(t=0\right)\rangle\) (we will consider several different set-ups for \(\Psi\)), then we choose a quantum observable \(\hat{\mathcal{O}}\) and compute the outcome of (classical) measurements yielding expectation values of the form
\[\mathcal{O}^{n}\left(t\right)=\,\langle\Psi|\hat{U}^{\dagger}\left(t\right) \hat{\mathcal{O}}^{n}\hat{U}\left(t\right)|\Psi\rangle\, \tag{3}\]
where \(U\) is the time evolution operator
\[\hat{U}\left(t\right)=e^{-\frac{i}{\hbar}\hat{H}t}\,, \tag{4}\]
whose specific form for our GW-detector setup is known from previous results [18, 23]. Calculating expectation values for different values of \(n\), we can reconstruct the probability distribution for observable \(\hat{\mathcal{O}}\) on state \(\Psi\) in some detail (mean value, variance \(\left[\Delta\mathcal{O}\left(t\right)\right]^{2}=\mathcal{O}^{2}\left(t\right)- \left[\mathcal{O}\left(t\right)\right]^{2}\), and so on); we show that the moments of certain observables (in particular, the cavity's electric field \(\mathcal{E}\)) can depend strongly on the choice of \(\Psi\), meaning that measurements of \(\mathcal{O}^{n}\) can discriminate between different states \(\Psi\) and - potentially - probe whether \(\Psi\) possesses inherently quantum features that can't be mimicked by a purely classical GW.
The outline of the paper is the following. In Section 2, we make use of a few important previous results [24, 25] to reduce the complicated starting forms of \(\hat{H}_{d}\) and \(\hat{H}_{int}\) to simple expressions. Moving forward, we will focus on examining the changes in observables when the gravitational waveform \(\left|\Psi_{g}\left(t=0\right)\right\rangle\) is prepared in specifically selected states in the remaining half of the Hilbert space.
In Section 3, we test our approach by applying the equations of motion derived in Sec. 2 to some example cases. First we examine the simplest allowed states for the gravitational field: the vacuum and the single-mode coherent state. Although the former does show some interesting quantum effects, they ultimately seem too small to be measured at present-day experiments. On the other hand, the latter serves to validate our program by providing the expected signal from a classical analog of a single mode coherent state: a classical plane monochromatic gravitational wave. Additional validation comes from Sec. 3.3, where we show how the thermal gravitational background, when treated as an ensemble of mixed states, produces gravitational-induced decoherence [19]. We thus recover known results from previous work [26].
In Section 4, we comment on the effect of GW fluctuations on electric fields, and how these can be used to obtain information on the GW state.
In Section 5, we study hypothetical collective quantum states of gravitational waves with no classic analogue, and analyze their detection at an interferometer. We find that the gravitational _squeezed vacuum_ gives an effect on the _noise_ of the intereferometer [16, 18], and that gravitational _squeezed coherent_ states affect its _signal_, with some room left for such an effect being very significant [19]. Needless to say, these conclusions remain hypothetical before we actually do see such a signal in an experiment. Nonetheless, and this is the topic of Section 5.1, we give some heuristic arguments in favour of the existence of squeezed states in gravity by noting similarities and differences with the case of electromagnetism.
Finally, in Section 6 we give our conclusions.
## 2 Dynamics of the system
We will be working within the framework of linearized gravity, employing a flat metric background to enable us to consider weak gravity far from sources. With this choice, the quadratic part of the Einstein-Hilbert action (in vacuum) in the harmonic gauge
\(\partial_{\mu}h^{\mu\nu}=0\) reduces to
\[S_{EH}=\frac{c^{4}}{32\pi G}\int d^{4}x\left(\frac{1}{2}\partial_{\mu}h_{\alpha \beta}\partial^{\mu}h^{\alpha\beta}-\frac{1}{4}\partial_{\mu}h\partial^{\mu}h \right)\,, \tag{5}\]
where, as usual, the field \(h^{\mu\nu}\) represents the small perturbations of the otherwise flat metric \(\eta_{\mu\nu}\), and \(h=\eta^{\mu\nu}h_{\mu\nu}\) is contracted by the flat metric tensor. We neglect higher order terms (that is, gravitational self-interactions) throughout, as their impact is negligible in GWs far from their source.
In the transverse traceless (TT) gauge, we expand the field into Fourier components as
\[h_{ij}^{TT}\left(t,\mathbf{x}\right)=\int\frac{d^{3}\mathbf{k}}{\sqrt{(2\pi)^{3}}} \epsilon_{ij}^{\lambda}\left(\mathbf{k}\right)h_{\lambda}\left(t,\mathbf{k}\right)e^ {i\mathbf{k}\cdot\mathbf{x}}\,, \tag{6}\]
where the \(\epsilon_{ij}^{\lambda}\left(\mathbf{k}\right)\) are the tensors for the two polarization states \(\lambda=+,\times\), satisfying the due conditions of orthonormality (\(\epsilon_{ij}^{\lambda}\epsilon_{jk}^{\lambda^{\prime}}=\delta_{ik}\delta^{ \lambda\lambda^{\prime}}\)), transverseness (\(\epsilon_{ij}^{\lambda}k^{j}=0\)) and tracelessness (\(\mathrm{Tr}\left[\epsilon_{ij}^{\lambda}\right]=0\)). Notice that Greek indices have become Latin indices, as the time components of the field in the TT gauge are null (\(h_{0\mu}=0\)).
With the field expressed in this form, we can execute canonical quantization by rewriting the field to operators. We promote the Fourier coefficients to annihilation and creation operators as follows
\[h_{\lambda}\left(t,\mathbf{k}\right) \rightarrow \hat{\mathfrak{b}}_{\mathbf{k}}^{\lambda}\,, \tag{7}\] \[h_{\lambda}^{*}\left(t,\mathbf{k}\right) \rightarrow \hat{\mathfrak{b}}_{\mathbf{k}}^{\lambda\dagger}\,, \tag{8}\]
which obey the standard commutation relations (from here on we work in units where \(\hbar=1\)),
\[\left[\hat{\mathfrak{b}}_{\mathbf{k}}^{\lambda},\hat{\mathfrak{b}}_{\mathbf{k}^{ \prime}}^{\lambda^{\prime}\dagger}\right]=\delta_{\lambda\lambda^{\prime}} \delta^{\left(3\right)}\left(\mathbf{k},\mathbf{k}^{\prime}\right)\,. \tag{9}\]
The classical field now gets promoted to a quantum field operator and we can write explicitly
\[\hat{h}_{ij}\left(t,\mathbf{x}\right)=\int\frac{d^{3}\mathbf{k}}{\sqrt{(2\pi)^{3}}} \left(\sqrt{\frac{8\pi G}{k}}\epsilon_{ij}^{\lambda}\left(\mathbf{k}\right)\hat{ \mathfrak{b}}_{\mathbf{k}}^{\lambda}e^{i\left(\mathbf{k}\cdot\mathbf{x}-\Omega_{k}t\right) }+h.c.\right)\,. \tag{10}\]
Equation (10) concludes the description of the setup we will be using for the gravitational part of our problem; we will mostly consider single-mode metric perturbations in the following, \(\sim h_{\mu\nu}e^{ikx}\), corresponding to planar waves of well-defined frequency. This does not limit the scope of our calculations since we will be able to express any potential initial (quantum) state of gravity as a superposition of plane waves.
As far as the detector is concerned, we want to model a GW interferometer. Arvanitaki and Geraci [27] have shown that even a single-mode Fabry-Perot cavity is sensitive to gravitational waves. This can be achieved either by inserting a nanosphere in the setup [27], or by letting one of the two mirrors be free to move, as described by Buonanno and Chen in [24], and by Pang and Chen in [25]. Let us narrow our focus to the second case. Pang and Chen have demonstrated, by making realistic assumptions (see [25]), that a complete model of a GW interferometer (including power
recycling and signal recycling mirrors) can be mapped to a single Fabry-Perot cavity where one mirror is fixed and the other is free to move. This simplifies the complexity of the interferometer, and we can work with a single cavity of length \(L_{0}\) as the only degree of freedom to describe our model detector. When a GW of strain \(h\) passes through such a cavity perpendicularly to its axis, its length changes in the following way:
\[L_{0}\quad\rightarrow\quad L_{0}\left(1+\frac{1}{2}h\right)\,. \tag{11}\]
This can be seen as a "gravitomechanical" coupling between the GW and the detector, much like an optomechanical coupling between the electromagnetic field and a mechanical oscillator [28].
Instead of working with the GW coupled to the detector's mirror, one can move to a perspective in which the GW couples directly to the cavity's electromagnetic field (the laser beam). When the cavity is stretched, its resonance frequency changes accordingly as
\[\omega_{0}=\frac{n\pi}{L_{0}}\quad\rightarrow\quad\omega=\frac{n\pi}{L_{0}\left(1+\frac{1}{2}h\right)}\,, \tag{12}\]
which can be expanded as
\[\omega=\omega_{0}\left(1-\frac{1}{2}h+\mathcal{O}\left(h^{2}\right)\right)\,. \tag{13}\]
The induced frequency shift can be interpreted as producing an effective coupling between the GW and the electromagnetic field inside the cavity. It turns out, following [18, 29], that for a \(+\) polarized GW propagating in the \(z\) direction perpendicularly to the cavity axis (\(x\) direction), and satisfying \(k_{x}L_{0}\ll 1\), such a coupling is represented by an interaction Hamiltonian of the form
\[\hat{H}^{\rm int}_{\rm GW}=-\frac{\omega_{0}}{4}\hat{a}^{\dagger}\hat{a}\int \frac{d^{3}\mathbf{k}}{\sqrt{(2\pi)^{3}}}\left(\sqrt{\frac{8\pi G}{k}}\hat{\mathfrak{b}}_{\bm {k}}+\rm{h.c.}\right)\,. \tag{14}\]
In this expression, \(\hat{a}\) and \(\hat{a}^{\dagger}\) are the annihilation and creation operators of the cavity field, which we take to be in a single mode state for simplicity, while the operators \(\hat{\bf b}\) are defined in (7). Notice that, having fixed the polarization of the GW, the \(\lambda\) index has dropped.
Following a procedure which is standard in quantum optics [30], we introduce a quantization volume \(V\) to define a dimensionless quantity \(\hat{b}_{\mathbf{k}}=\hat{\mathfrak{b}}_{\mathbf{k}}/\sqrt{V}\), and transform the continuous integral in Eq. (14) into its discretized version [18]
\[\hat{H}^{\rm int}_{\rm GW}=-\frac{\omega_{0}}{4}\hat{a}^{\dagger}\hat{a}\sum_ {\mathbf{k}}\left(\sqrt{\frac{8\pi G}{Vk}}\hat{b}_{\mathbf{k}}+\rm{h.c.}\right)\,. \tag{15}\]
Let us now define, respectively, the single graviton strain \(f_{\mathbf{k}}\), the opto-gravitational coupling constant \(g_{\mathbf{k}}\), and the dimensionless coupling \(q_{\mathbf{k}}\), in the following way:
\[f_{\mathbf{k}}=\sqrt{\frac{8\pi G}{Vk}},\quad g_{\mathbf{k}}=\frac{\omega_{0}f_{\mathbf{k} }}{4},\quad q_{\mathbf{k}}=\frac{g_{\mathbf{k}}}{\Omega_{k}}\,. \tag{16}\]
Here, \(\Omega_{k}\) represents the GW frequency for the mode \(\mathbf{k}\), where \(\left|\mathbf{k}\right|=\Omega_{k}\). Since \(q_{\mathbf{k}}\) is a small number by definition, we will treat it as a perturbative parameter. With these definitions, the Hamiltonian for the complete system, including the GW, the cavity field, and their effective interaction, can be defined as
\[\hat{H}=\hat{H}_{0}+\hat{H}_{\mathrm{GW}}^{\mathrm{int}}\,, \tag{17}\]
with the free Hamiltonian given by
\[\hat{H}_{0}=\omega\hat{a}^{\dagger}\hat{a}+\sum_{\mathbf{k}}\Omega_{k}\hat{b}_{\bm {k}}^{\dagger}\hat{b}_{\mathbf{k}}\,, \tag{18}\]
and the interaction Hamiltonian further reduced to
\[\hat{H}_{\mathrm{GW}}^{\mathrm{int}}=-\hat{a}^{\dagger}\hat{a}\sum_{\mathbf{k}}q_ {\mathbf{k}}\Omega_{\mathbf{k}}\left(\hat{b}_{\mathbf{k}}+\hat{b}_{\mathbf{k}}^{\dagger}\right)\,. \tag{19}\]
The derivation of the explicit form of the time evolution operator for the interaction term (19) is a lengthy but straightforward calculation, using an approach which is standard in quantum optics. As reported in [18, 23], we can express the result for a single mode \(\mathbf{k}\) (omitting the index for the sake of readability) as
\[\hat{U}\left(t\right)\left|\Psi\left(t\right)\right\rangle=e^{q\hat{a}^{\dagger}\hat{a}\left[\eta(t)\hat{b}-\eta^{*}(t)\hat{b}^{\dagger}\right]}e^{iB\left(t\right)\left(\hat{a}^{\dagger}\hat{a}\right)^{2}}\left|\Psi\left(t\right)\right\rangle\,, \tag{20}\]
where the time evolution is contained in the definition of
\[\eta\left(t\right) =1-e^{-it}\,, \tag{21}\] \[\eta^{*}\left(t\right) =1-e^{it}\,,\] (22) \[B\left(t\right) =q^{2}\left(t-\sin\left(t\right)\right)\,, \tag{23}\]
and the time-evolving state that appears in (20) is defined as \(\left|\Psi\left(t\right)\right\rangle=e^{-i\hat{b}^{\dagger}\hat{b}t}\left|\Psi\right\rangle\).
## 3 GW state reconstruction
When examining how a GW interacts with an optical cavity, the most suitable observable is the electric field operator that characterizes the cavity field's state
\[\hat{\mathcal{E}}=\sqrt{\frac{\omega}{V_{c}}}\frac{\hat{a}+\hat{a}^{\dagger}} {\sqrt{2}}\,. \tag{24}\]
This is indeed the physical quantity that one measures at an interferometer to produce a detectable signal. Note that here \(V_{c}\) denotes the cavity mode volume.
Now that we have defined the operator, we can proceed to calculate the matrix elements as outlined in Eq. (3). Our focus will be on the mean value, specifically when \(n=1\). The classical GW signal at an interferometer is sensed as a variation of the phase of the field quadrature:
\[\mathcal{E}\quad\rightarrow\quad\mathcal{E}e^{i\phi}\,, \tag{25}\]
where \(\phi\) changes in time. For example, this variation of the phase produces typical chirp-like signatures observed for binary merger events. Any result we find that produces a departure of \(\phi\) from its classical behavior, namely which has the form
\[\mathcal{E}\quad\rightarrow\quad\mathcal{E}e^{i\left(\phi+\delta\phi\right)}\,, \tag{26}\]
is interpreted as an effect on the _signal_. Contrariwise, if we find a modification of the form
\[\mathcal{E}\quad\rightarrow\quad\mathcal{E}+\epsilon\,, \tag{27}\]
then we are witnessing an effect on the _noise_.
To begin with, we observe that the expression for the time evolution operator (20) includes an exponential term in \(q^{2}\) (the one defined as \(B\left(t\right)\) in (23)). However, since the term linear in \(q\) is dominant, we can disregard the \(q^{2}\) term for now (although we will revisit it in Sec. 3.3). Based on this assumption, we can use the time evolution of the electric field operator to obtain the following result
\[\mathcal{E}\left(t\right)=\sqrt{\frac{\omega}{V_{c}}}\left(\frac{\langle\Psi \left(t\right)|\hat{\mathcal{D}}\left[q\eta\left(t\right)\right]\hat{a}|\Psi \left(t\right)\rangle+h.c.}{\sqrt{2}}\right)\,, \tag{28}\]
where we have defined the operator in parenthesis as
\[\hat{\mathcal{D}}\left[q\eta\left(t\right)\right]=e^{q\hat{a}^{\dagger}\hat{a }\left[\eta\left(t\right)\hat{b}-\eta^{*}\left(t\right)\hat{b}^{\dagger} \right]}\,. \tag{29}\]
This operator acts on the gravitational field as a _displacement operator_ whose amplitude is proportional to the optical field's intensity.
Let us proceed with the selection of specific states within the Hilbert space, commencing with the detector component. To a very good approximation, the electromagnetic field inside the cavity is a monochromatic wave at frequency \(\omega\). From the quantum point of view, we can model it as a single mode coherent state \(|\alpha\rangle\) for some complex number \(\alpha\). With this hypothesis, we can easily trace out the detector component of the state \(|\Psi\left(t\right)\rangle\), and we are left with
\[\mathcal{E}\left(t\right)=\sqrt{\frac{\omega}{V_{c}}}\left(\frac{\alpha\, \langle\Psi_{g}\left(t\right)|\hat{\mathcal{D}}\left[q\eta\left(t\right) \right]|\Psi_{g}\left(t\right)\rangle+c.c.}{\sqrt{2}}\right)\,, \tag{30}\]
where now the (quantum) GW wavefunction \(|\Psi_{g}\rangle\) enters into play independently.
### Vacuum state
Now, let us consider the possibility of preparing GW states in specific quantum states, with the simplest one being the vacuum state. The vacuum state for the generic mode \(\mathbf{k}\) (that is, a state with no gravitons of energy \(\Omega_{k}\)) can be written as:
\[|\Psi_{g}\rangle=|0_{\mathbf{k}}\left(t\right)\rangle=e^{-i\hat{b}_{\mathbf{k}}^{\dagger}\hat{b}_{\mathbf{k}}\Omega_{k}t}\,|0_{\mathbf{k}}\rangle=|0\rangle\,, \tag{31}\]
where we are following the standard harmonic oscillator convention of writing the eigenstates of the number operator \(\hat{b}_{\mathbf{k}}^{\dagger}\hat{b}_{\mathbf{k}}\) as \(\left|n_{\mathbf{k}}\right\rangle\), and \(\left|0\right\rangle\) is the state with no gravitons at all. Therefore, to obtain the mean field we must evaluate the following expression:
\[\mathcal{E}\left(t\right)=\sqrt{\frac{\omega}{V_{c}}}\left(\frac{\alpha\prod_{\mathbf{k}}\left\langle 0|\hat{\mathcal{D}}\left[q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0\right\rangle+c.c.}{\sqrt{2}}\right)\,. \tag{32}\]
The matrix element is easily calculated by considering that the vacuum can be seen as the coherent state with \(\alpha=0\). The displacement operator acting on such a state gives
\[\hat{\mathcal{D}}\left[q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]\left|0\right\rangle=\left|q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right\rangle\,. \tag{33}\]
Using the normalization condition for coherent states,2 we can rewrite the previous expression as
Footnote 2: The normalization condition is that given two coherent states characterized by complex numbers \(\alpha\) and \(\beta\), one has
\[\langle\alpha|\beta\rangle=e^{-\frac{1}{2}\left(|\alpha|^{2}+|\beta|^{2}-2\alpha^{*}\beta\right)}\,. \tag{34}\]
\[\langle 0|\hat{\mathcal{D}}\left[q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0\rangle=\langle 0|q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\rangle=e^{-\frac{1}{2}q_{\mathbf{k}}^{2}\left|\eta\left(\Omega_{k}t\right)\right|^{2}}\,, \tag{35}\]
and thus define
\[D\stackrel{{\rm def}}{{=}}\prod_{\mathbf{k}}\,\,\langle 0|\hat{\mathcal{D}}\left[q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0\rangle=e^{-\frac{1}{2}\sum_{\mathbf{k}}q_{\mathbf{k}}^{2}\left|\eta\left(\Omega_{k}t\right)\right|^{2}}\,. \tag{36}\]
This quantity (36) was calculated in [18] by introducing both an infrared and an ultraviolet cutoff to avoid divergence and by noting that by simple algebra
\[\left|\eta\left(\Omega_{k}t\right)\right|^{2}=2\left[1-\cos\left(\Omega_{k}t \right)\right]\,. \tag{37}\]
The final result for the mean field, in the case of a vacuum state, is
\[\mathcal{E}\left(t\right)=\sqrt{\frac{2\omega}{V_{c}}}D\,\mathrm{Re}\{\alpha \}\,. \tag{38}\]
We can express this as equation (27), which represents the effect of the detector's noise. Essentially, we have determined how the gravitational vacuum affects the interferometer's sensitivity curve. However, it's important to note that this impact is orders of magnitude below any reasonably achievable sensitivity limit [16, 18]. In fact, it's even lower than the theoretical quantum thermal noise of gravity across a wide range of frequencies, as demonstrated in [12]. Ultimately, this effect is impractical to measure.
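For illustration, the vacuum-state prediction of Eqs. (36)-(38) can be evaluated numerically as in the sketch below; the mode grid, cutoffs, and parameter values to be passed in are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def eta(x):
    """eta(Omega_k t) = 1 - exp(-i Omega_k t), Eq. (21)."""
    return 1.0 - np.exp(-1j * x)

def vacuum_mean_field(t, Omegas, qs, alpha, omega_cav, V_c):
    """Mean cavity field for the gravitational vacuum, Eqs. (36)-(38).

    Omegas, qs: arrays with the mode frequencies Omega_k and couplings q_k
    retained between the (illustrative) infrared and ultraviolet cutoffs.
    """
    D = np.exp(-0.5 * np.sum(qs**2 * np.abs(eta(Omegas * t))**2))
    return np.sqrt(2.0 * omega_cav / V_c) * D * np.real(alpha)
```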
When analyzed more closely, however, we do find an interesting side result. We performed this calculation after we had neglected the subdominant \(q^{2}\) term in equation (20). Had we retained it, we would have arrived at the surprising conclusion that the gravitational vacuum induces squeezing of the cavity field [18]. Once again, after plugging in the right numbers, this effect turns out to be practically unmeasurable. However, in general, we find that, unsurprisingly, to achieve a measurable effect in an experiment where gravity couples to optical observables, one needs to start from gravity modes populated with a large mean number of gravitons, as we will see in the following.
### Coherent state
The simplest state in which we can collect a large mean number of gravitons together is a coherent state. A calculation similar to the one we performed to arrive at equation (38) can be carried out to derive the effect on the cavity's field quadrature of a single mode coherent state of gravity. We write such state as
\[\left|\Psi_{g}\right\rangle=\begin{cases}\left|he^{i\Omega_{GW}t}\right\rangle& \text{if }k=k_{GW}\,,\\ \left|0\right\rangle&\text{otherwise}\,.\end{cases} \tag{39}\]
Here \(h\) is real (we set the phase to zero, for simplicity) and is indeed a large number linked to the population of the mode. Now we must calculate
\[\mathcal{E}\left(t\right)=\sqrt{\frac{\omega}{V_{c}}}\left(\frac{\alpha\left\langle he ^{i\Omega_{GW}t}\right|\hat{\mathcal{D}}\left[q_{GW}\eta\left(\Omega_{GW}t \right)\right]\right|he^{i\Omega_{GW}t}\rangle\prod_{\mathbf{k}\neq\mathbf{k}_{GW}} \left\langle 0|\hat{\mathcal{D}}\left[q_{\mathbf{k}}\eta\left(\Omega_{k}t\right) \right]\right|0\rangle+c.c.}{\sqrt{2}}\right)\,. \tag{40}\]
When evaluating the \(\mathbf{k}\neq\mathbf{k}_{GW}\) product, we would obtain a product of terms of the form \(e^{-\frac{1}{2}q_{\mathbf{k}}^{2}\left|\eta\left(\Omega_{k}t\right)\right|^{2}}\) which - because of the small value of \(q_{\mathbf{k}}\) - are all of order 1, much in the same way as we calculated expression (36). This means we can neglect all factors but the \(\mathbf{k}_{GW}\) one, obtaining
\[\mathcal{E}\left(t\right)\sim\sqrt{\frac{\omega}{V_{c}}}\left(\frac{\alpha \left\langle he^{i\Omega_{GW}t}\right|\hat{\mathcal{D}}\left[q_{GW}\eta\left( \Omega_{GW}t\right)\right]\right|he^{i\Omega_{GW}t}\rangle+c.c.}{\sqrt{2}} \right)\,. \tag{41}\]
Again, the full calculation is straightforward, arriving at
\[\mathcal{E}\left(t\right)\sim e^{i2qh\sin\Omega t}\,. \tag{42}\]
This expression is of the form (26), and it tells us that we are measuring a signal oscillating in phase with the GW. We have thus recovered the classical GW signal from a representation of the quantum analog of a classical monochromatic wave: the single-mode coherent state. This gives us a first validation check of our program.
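A minimal numerical sketch of this classical-signal recovery is given below; the overall normalization follows the vacuum-case convention of Eq. (38) and is our assumption, since Eq. (42) only fixes the oscillating phase.

```python
import numpy as np

def coherent_gw_signal(t, q, h, Omega, alpha, omega_cav, V_c):
    """Cavity quadrature for a single-mode coherent GW state, Eq. (42):
    the classical waveform re-appears as a phase oscillating with the wave."""
    phase = 2.0 * q * h * np.sin(Omega * t)
    return np.sqrt(2.0 * omega_cav / V_c) * np.real(alpha * np.exp(1j * phase))
```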
### GW-induced decoherence
In Sec. 3, we initially neglected the \(q^{2}\) terms in the time evolution operator (20). This choice was justified as these terms give rise to effects, such as the aforementioned squeezing of the cavity field induced by the gravitational quantum vacuum, that are by far dominated by effects that are linear in \(q\). However, it is worth paying some more attention to such \(q\)-quadratic terms, as it turns out they produce gravity-induced decoherence. Although the effect is weak and difficult to measure, it serves as a secondary validation check by linking our setup to established results.
To see this, let us now repeat the calculations presented in Sec. 3 but this time by preparing our system as an electromagnetic (EM) qubit interacting with the gravitational vacuum
\[\left|\Psi\left(0\right)\right\rangle=\frac{\left|0\right\rangle_{\text{EM}}+ \left|N\right\rangle_{\text{EM}}}{\sqrt{2}}\otimes\left|0\right\rangle_{\text {GW}} \tag{43}\]
where, once again, \(\left|N\right\rangle_{\rm EM}\) (\(\left|N\right\rangle_{\rm GW}\)) denotes a state with \(N\) photons (gravitons). We can now perform the following three steps:
1. Evolve the state using the simplified time evolution operator as expressed in equation (29);
2. write the total density matrix \(\rho\left(t\right)\) associated to the time-evolving state;
3. calculate the density matrix \(\rho_{EM}\left(t\right)\) associated to the EM subsystem, by tracing out the GW degrees of freedom.
When carrying out the calculations, we arrive at the following density matrix [19]:
\[\rho_{\rm EM}\left(t\right)=\begin{pmatrix}\frac{1}{2}&\rho_{01}\\ \rho_{01}^{*}&\frac{1}{2}\end{pmatrix}\,, \tag{44}\]
where
\[\rho_{01}=\left\langle 0|qN\eta\right\rangle=e^{-\frac{1}{2}q^{2}N^{2}\left| \eta\right|^{2}}\,. \tag{45}\]
The presence of such time dependent off-diagonal terms (via the time dependence of \(\eta\left(t\right)\)) shows indeed that the \(q^{2}\) component of the time evolution operator is inducing decoherence.
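A short numerical sketch of this decay, based on Eq. (45), is given below; the parameter values are purely illustrative, and the thermal case of Eq. (47) below only multiplies the exponent by \(1+\overline{n}\).

```python
import numpy as np

def off_diagonal(t, q, N, nbar=0.0):
    """|rho_01(t)| from Eq. (45): decay of the qubit coherence induced by
    the gravitational vacuum (nbar = 0); nbar > 0 reproduces the thermal
    generalization of Eq. (47) below."""
    eta_sq = np.abs(1.0 - np.exp(-1j * t))**2        # |eta(t)|^2 = 2(1 - cos t)
    return np.exp(-0.5 * q**2 * N**2 * eta_sq * (1.0 + nbar))
```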
This can be taken farther. By replacing the vacuum \(\left|0\right\rangle_{\rm GW}\) in equation (43) with a GW single mode coherent state \(\left|\alpha\right\rangle_{\rm GW}\), and repeating the same calculations, we obtain an EM density matrix of the same form as (44), but now with
\[\rho_{01}=e^{-\frac{1}{2}\left[q^{2}N^{2}\left|\eta\right|^{2}+qN(\eta^{*} \alpha-\eta\alpha^{*})\right]}\,. \tag{46}\]
When we extend it to a single mode GWs in a thermal state, we obtain
\[\rho_{01}=e^{-\frac{1}{2}q^{2}N^{2}\left|\eta\right|^{2}(1+\overline{n})} \tag{47}\]
where \(\overline{n}\) is the mean number of gravitons2. This result is useful in that it finally allows us to consider an ensemble of modes in thermal states. In such a case, we would need to reintroduce the state index \(\mathbf{k}\), and calculate
Footnote 2: To arrive at equation (47), one should remember that the density matrix of a thermal state with mean number of gravitons \(\overline{n}\), can be related to the continuum of coherent states \(\left|\alpha\right\rangle\) as
\[\rho=\int\frac{d^{2}\alpha}{\pi\overline{n}}e^{-\frac{\left|\alpha\right|^{2}}{\overline{n}}}\left|\alpha\right\rangle\left\langle\alpha\right|\,. \tag{48}\]
Here \(k_{B}\) is Boltzmann's constant, which enters through the thermal occupation of each mode. Using the explicit forms of \(q_{\mathbf{k}}\) and \(\eta_{\mathbf{k}}\), and averaging over the Bose-Einstein distribution (see [19]), we arrive at an expression of the form
\[\rho_{01}\approx e^{-\Gamma t}\,, \tag{51}\]
where
\[\Gamma\propto k_{B}T\left(\frac{\Delta E}{E_{\rm pl}}\right)^{2}\,, \tag{52}\]
and \(\Delta E=N\omega\) is the energy of the state \(\left|N\right\rangle_{\rm EM}\), in accordance with previous results on gravitational-induced decoherence [26].
## 4 GW-induced electric field fluctuations
Before we continue discussing other gravity wave states, it is important to comment on the practicality of measuring deviations from the classical theory with electromagnetic probes. In a previous work [19], we showed that the measurement problem for our quantum gravitational states can be stated in terms of photon-number tomography of the optical mode that interacts with the wave. In particular, we showed that if the GW state is Gaussian, it can be reconstructed from experimentally accessible data that can be measured from non-classical (yet macroscopic) observables. Here, we would like to point out that information on the GW states can also be obtained from field (homodyne) measurements, which is more practical than photon-number-resolving measurements.
Reconstruction of the second moments of a general GW state \(|\Psi\rangle\) can be achieved by measuring expectation values of the form \(\langle\Psi|{\cal D}(nq\eta(t))|\Psi\rangle\), where \(n\) is an integer [19]. General reconstruction of the first and second moments can be achieved if we measure these expectation values for \(n=1,2,3\). For coherent states, this can be done by measuring the first three moments of the electric field.
The variance of the field is \(\Delta{\cal E}=\langle{\cal E}^{2}\rangle-\langle{\cal E}\rangle^{2}\) and we compute \(\langle{\cal E}^{2}\rangle\) by tracing out the detector component. Noticing that
\[{\cal E}^{2}(t)=\frac{\omega}{2V_{c}}\left(2a^{\dagger}a-1+a^{2}{\cal D}(-2q\eta(t))+\left(a^{\dagger}\right)^{2}{\cal D}^{*}(-2q\eta(t))\right), \tag{53}\]
we find that for general states,
\[\langle{\cal E}^{2}(t)\rangle=\frac{\omega}{2V_{c}}\left(|\alpha|^{2}-1+\alpha^{2}\,\langle\Psi_{g}(t)|\hat{\cal D}(2q\eta(t))|\Psi_{g}(t)\rangle+c.c.\right)\,. \tag{54}\]
Assuming that the gravitational wave is initially a vacuum state, we have
\[\langle{\cal E}^{2}\rangle=\frac{\omega}{2V_{c}}\left(\alpha^{2}\,\langle 0| \hat{\cal D}\left[2q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0\rangle+ \alpha^{*2}\,\langle 0|\hat{\cal D}^{*}\left[2q_{\mathbf{k}}\eta\left(\Omega_{k}t \right)\right]|0\rangle-1+|\alpha|^{2}\right)\,. \tag{55}\]
Analogously,
\[\langle 0|\hat{\cal D}\left[2q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0 \rangle=\langle 0|2q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\rangle=e^{-2q_{\mathbf{k}} ^{2}|\eta\left(\Omega_{k}t\right)|^{2}}\,, \tag{56}\]
and we can define the quantity,
\[D_{2}\stackrel{{\mbox{\tiny{\tiny def}}}}{{=}}\prod_{\mathbf{k}}\, \langle 0|\hat{\mathcal{D}}\left[2q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right] |0\rangle=e^{-2\sum_{\mathbf{k}}q_{\mathbf{k}}^{2}|\eta\left(\Omega_{k}t\right)|^{2}}\,. \tag{57}\]
Using the previous equations, the mean square value of the electric field interacting with the GW (55) becomes
\[\langle\mathcal{E}^{2}\rangle=\frac{\omega}{2V_{c}}\left[2D_{2}\,\mbox{Re}( \alpha^{2})-1+|\alpha|^{2}\right]\,. \tag{58}\]
As was shown in [19], to determine the second order correlation functions of the GW, we need to evaluate terms involving up to \(\langle 0|\hat{\mathcal{D}}\left[3q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0\rangle\). In order to achieve that, we need to go up to the third order moment, the skewness (\(\Delta s\)), defined as
\[\Delta s=\langle\mathcal{E}^{3}\rangle-3\langle\mathcal{E}\rangle\langle\mathcal{E}^{2}\rangle+2\langle\mathcal{E}\rangle^{3}\,. \tag{59}\]
Note that all the terms in the above definition have already been computed, except for \(\langle\mathcal{E}^{3}\rangle\). We now turn our attention to this particular term. Notice that
\[\mathcal{E}^{3}(t)= \left(\frac{\omega}{2V_{c}}\right)^{3/2}\left\{a^{3}\mathcal{D}[ -3q\eta(t)]+(2a^{\dagger}aa-3a)\mathcal{D}[-q\eta(t)]\right. \tag{60}\] \[\left.+(2a^{\dagger}a^{\dagger}a-3a^{\dagger})\mathcal{D}^{*}[-q \eta(t)]+(a^{\dagger})^{3}\mathcal{D}^{*}[-3q\eta(t)]\right\}\,.\]
For the initial vacuum state we find
\[\langle\mathcal{E}^{3}(t)\rangle=\left(\frac{\omega}{2V_{c}}\right)^{3/2} \left[\alpha^{3}\,\langle 0|\hat{\mathcal{D}}\left[3q_{\mathbf{k}}\eta\left( \Omega_{k}t\right)\right]|0\rangle+\left(|\alpha|^{2}\alpha-3\alpha\right)\, \langle 0|\hat{\mathcal{D}}\left[q_{\mathbf{k}}\eta\left(\Omega_{k}t\right) \right]|0\rangle+c.c\right], \tag{61}\]
and
\[\langle\mathcal{E}^{3}\rangle=\left(\frac{\omega}{2V_{c}}\right)^{3/2}\left[2 D_{3}\,\mbox{Re}(\alpha^{3})-2D\,\mbox{Re}(\alpha)(3-|\alpha|^{2})\right]\,, \tag{62}\]
where \(D_{3}\) is defined as
\[D_{3}\stackrel{{\mbox{\tiny def}}}{{=}}\prod_{\mathbf{k}}\,\langle 0 |\hat{\mathcal{D}}\left[3q_{\mathbf{k}}\eta\left(\Omega_{k}t\right)\right]|0 \rangle=e^{-\frac{9}{2}\sum_{\mathbf{k}}q_{\mathbf{k}}^{2}|\eta\left(\Omega_{k}t \right)|^{2}}\,. \tag{63}\]
With this, we see that the quantities \(D,D_{2}\) and \(D_{3}\) can be obtained from measurements of the first three moments of the electric field, which in turn can be measured via homodyne detection. In possession of these quantities, we can then reconstruct the first and second moments of GW vacuum fluctuations. This calculation can easily be extended to the case of a coherent state. GW-induced electric field fluctuations can also be calculated for other states following the recipe introduced above, although in general perfect state reconstruction cannot be achieved from measurements of the electric field moments alone. These fluctuations could, however, lead to interesting signatures [31].
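As a worked example, the sketch below inverts Eqs. (38), (58) and (62) to recover \(D\), \(D_{2}\) and \(D_{3}\) from measured field moments; it assumes the vacuum-state expressions and that \(\mathrm{Re}(\alpha)\), \(\mathrm{Re}(\alpha^{2})\) and \(\mathrm{Re}(\alpha^{3})\) are non-zero.

```python
import numpy as np

def reconstruct_D_factors(m1, m2, m3, alpha, omega_cav, V_c):
    """Invert Eqs. (38), (58) and (62): recover D, D2, D3 from the measured
    moments m1 = <E>, m2 = <E^2>, m3 = <E^3> of the cavity field."""
    a = alpha
    u = omega_cav / (2.0 * V_c)
    D = m1 / (np.sqrt(2.0 * omega_cav / V_c) * np.real(a))            # Eq. (38)
    D2 = (m2 / u + 1.0 - np.abs(a)**2) / (2.0 * np.real(a**2))        # Eq. (58)
    D3 = (m3 / u**1.5 + 2.0 * D * np.real(a) * (3.0 - np.abs(a)**2)) \
         / (2.0 * np.real(a**3))                                      # Eq. (62)
    return D, D2, D3
```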
## 5 Squeezed gravity
So far, we have considered the vacuum, coherent states, and thermal states for the gravity modes. While interesting theoretical insight can be gained by studying these cases, none of them will yield measurable quantum effect under our assumptions. It is still interesting (and, indeed, a required sanity check of our approach) that in the case of a highly populated coherent state, we recover the classical signal even though we started from a completely quantum picture.
The natural next step in our investigation is asking the question: can any quantum states of gravity exist that have no classical analog and have a chance of yielding a detectable effect?
We give a tentative answer to this question by drawing from quantum optics experience and investigating what happens if we _assume_ that gravity can live in a _squeezed state_ - the analogue of squeezed states of light that are routinely produced in optics laboratories. Before going on, we stress that this assumption implies the existence of some mechanism that does put gravity into such a state, which is as yet unknown for GWs in the LIGO band. To be more precise, there is at least one candidate in the category: an established consensus exists on the hypothesis that inflation might indeed have squeezed gravity at primordial times; however, once again - much like in the cases of vacuum corrections or gravitational decoherence - this effect leads to a weak signal [12]. Nonetheless, calculating the potential signature of a gravitational squeezed vacuum on our model detector is instructive. After all, we cannot a priori exclude mechanisms, other than inflation, that could produce squeezing (see section 5.1 for more detailed discussion on this point).
Let us thus prepare gravity in a squeezed state, which we may model as a mode of the form
\[\left|\Psi_{g}\right\rangle=\left|\beta e^{2i\Omega t}\right\rangle=\hat{S} \left(\beta\right)\left|0\right\rangle\,, \tag{64}\]
where the complex number \(\beta\) is the squeezing parameter and \(\hat{S}\left(\beta\right)\) is the squeezing operator, as defined in textbooks. Now the matrix element for the mean electric field (28) contains terms of the form
\[\alpha\left\langle 0|\hat{S}^{\dagger}\left(\beta\right)\hat{\mathcal{D}} \left[q\eta\left(t\right)\right]\hat{S}\left(\beta\right)\right|0\rangle\,, \tag{65}\]
and after some calculations, which were slightly more involved but nonetheless straightforward, we arrive at our result:
\[\mathcal{E}\left(t\right)\sim 2\alpha\left[1-8q^{2}e^{2\left|\beta\right|} \sin^{4}\left(\frac{\Omega t}{2}\right)\right]\,. \tag{66}\]
This term is of the form of equation (27) and it thus tells us that we found an effect on the _noise_: a squeezed gravitational vacuum would manifest itself at an interferometer as an additional, oscillating term, in the noise spectrum of the instrument [16, 17, 18]. The most interesting part is that the amplitude of such an oscillating term contains an exponential factor. Such a factor could behave as an enhancement term to the noise,
depending on the magnitude of the squeezing parameter \(\beta\). The magnitude of \(\beta\), in turn, depends on the details of the source dynamics, which at this moment we cannot foresee. Nonetheless, if mechanisms exist in nature that would produce such an exponentially enhanced effect, we cannot exclude that future, more sensitive detectors could actually see it. In the subsequent section, we shall delve deeper into this topic.
Something even more interesting happens when we prepare gravity in a _squeezed-coherent_ state. For a single mode, this would mean
\[\left|\Psi_{g}\right\rangle=\hat{S}\left(\beta\right)\hat{\mathcal{D}}\left( he^{i\Omega_{GW}t}\right)\left|0\right\rangle\,, \tag{67}\]
where we have used the same notation as in eq. (39). Such a state would represent a squeezed quantum gravitational wave mode propagating from the source to the detector. The electromagnetic analog would be a squeezed laser beam, which we are able to generate and propagate in a laboratory.
After calculating the electric field matrix element as usual, for the first time we find an effect of the type described by equation (26) which deviates from the classical behavior. Specifically, after rewriting \(\beta=re^{i\xi}\), we get
\[\delta\phi=2hq\left[\sin\left(\Omega t\right)\cosh\left(r\right)+\sin\left(2 \xi-\Omega t\right)\sinh\left(r\right)-\sin\left(2\xi\right)\sinh\left(r \right)\right] \tag{68}\]
Thus, an exponentially enhanced or suppressed effect, but this time on the _signal_. Once again, the magnitude of the effect depends on the dynamics at the source, which at this point remains unknown. Nonetheless, equation (68) clearly shows how a purely quantum effect (squeezing) involving a state with a macroscopically high number of gravitons \(|h|\), would produce a signal which can be exponentially enhanced - even to order one - and can thus be detectable with current or near future technology [19].
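The phase deviation of Eq. (68) is simple to evaluate numerically, as in the sketch below; the parameter values one would plug in (squeezing strength \(r\), phase \(\xi\), occupation \(h\)) are unknown and purely illustrative.

```python
import numpy as np

def delta_phi(t, q, h, Omega, r, xi):
    """Phase deviation of Eq. (68) for a squeezed-coherent GW state with
    beta = r * exp(i xi): the cosh(r)/sinh(r) factors can enhance (or
    suppress) the effect exponentially in the squeezing strength r."""
    return 2.0 * h * q * (np.sin(Omega * t) * np.cosh(r)
                          + np.sin(2.0 * xi - Omega * t) * np.sinh(r)
                          - np.sin(2.0 * xi) * np.sinh(r))
```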
### Are squeezed gravitational waves produced in nature?
In the framework in which we are working, where GR is seen as a classical limit of an intermediate-energy effective quantum field theory of (self)-interacting gravitons, squeezed gravitational waves definitely exist _theoretically_, that is, they are allowed states in the Hilbert space of the theory.
The question however remains open on whether there are any realistic astrophysical sources that could produce GWs with a sizable (i.e. potentially measurable) amount of squeezing. While the aforementioned hypothesis on the squeezing of the relic gravitational background induced by inflation seems to be widely accepted, it is predicted to be too small to be observed at gravitational interferometers. Drawing on our experience on quantum optics, we can outline two basic conditions that we can expect to be met in order to have measurable squeezing in a physical process:
1. The process should involve states characterized by a high - macroscopic - occupancy number;
2. The resulting state should be capable of propagating (ideally) undisturbed from source to detector.
Both conditions are naturally met by the processes that produce the gravitational waves that we are able to observe. First of all, there is no doubt that mergers of black holes (or really any other sources of high-intensity GWs) involve macroscopic number of gravitons (assuming, of course, that gravitons do exist). Furthermore, gravity is naturally weak-interacting at low energy, which means a GW basically stops interacting as soon as it leaves its source, traveling the distance to the detector nearly undisturbed. Note that this contrasts sharply with the behaviour of squeezed sources in electromagnetism: in quantum optics laboratories, squeezed beams of light are commonly produced by using intense laser beams, which are prone to losing coherence because of the high probability of interaction with any medium present in the laboratory. Transporting the quantum state of a laser beam (e.g. a squeezed coherent state) over long distances from source to detector is thus challenging, and can only be achieved with great effort in a laboratory. Gravitational waves can be expected to be free of this problem.
However, even if merger events (or other sources of strong GWs) do satisfy the minimum requirements to be candidate producers of squeezed gravitational waves, it is a challenge to understand if they _actually_ produce such states. Answering this question appears to involve the theoretical treatment of quantum effects at strong GR regimes, which is beyond our current capacities. However, while we are not (yet) capable of providing a formal argument in favour of the production of squeezed GWs at mergers and similar events (but for some discussion on the topic, see [19]), we would still like to argue, more heuristically, that this possibility does deserve further investigation. Let us think once again of the analogy with quantum optics. Squeezed states of light are produced in the lab by making intense laser beams interact with anisotropic crystals. In fact, the mechanism that turns a coherent laser beam into a squeezed coherent laser beam involves the interaction of intense light with a highly non-linear optical medium. As electromagnetism is a linear theory at the classical level (nonlinear effects are only manifest in the quantum regime), producing squeezed light is in a sense an "exotic" process, in that it needs well-controlled laboratory conditions and is not found spontaneously in nature. Compare this with the case of gravity. Contrary to Maxwell's equations, Einstein's equations are already nonlinear at the classical level, and nonlinear effects affect phenomena taking place in strong regimes [32, 33]. This means that squeezing of macroscopic gravitational waves might well be a natural effect, provided the source is strong enough - which is definitely true in the case of mergers. Strong nonlinear astronomical sources, perhaps those already known to emit GWs, seem therefore reasonable candidates to investigate. A recent step in this direction has been proposed in [20] where nonlinear effects present in black hole's ringdown [32, 33, 34, 35, 36, 37] have been considered.
## 6 Conclusions
In this paper, we have shown explicitly how to make use of quantum optics in order to derive phenomenological results in quantum gravity in the weak gravity regime. We
focused on the treatment of the problem from the point of view of the equations of motion and applied the result to a model GW interferometer interacting with a few possible quantum states of gravity. We have examined various quantum states ranging from the basic vacuum state to the coherent state, and ultimately concluded with an evaluation of squeezed states. Among the ones we evaluated, _squeezed-coherent_ gravitational waves have proven to be the most promising candidates for providing potentially detectable quantum aspects of gravity. The findings of Sec. 5.1, however basic, together with the results reported in Sec. 5, show how a squeezed coherent GW could produce an effect on the _signal_ of a GW interferometer, and that such an effect has the potential of being of order 1. This indicates to us that further research on the topic - especially regarding the existence of possible sources - has promise and is worth pursuing.
|
2303.17519
|
Infinite Horizon Privacy in Networked Control Systems: Utility/Privacy
Tradeoffs and Design Tools
|
We address the problem of synthesizing distorting mechanisms that maximize
infinite horizon privacy for Networked Control Systems (NCSs). We consider
stochastic LTI systems where information about the system state is obtained
through noisy sensor measurements and transmitted to a (possibly adversarial)
remote station via unsecured/public communication networks to compute control
actions (a remote LQR controller). Because the network/station is
untrustworthy, adversaries might access sensor and control data and estimate
the system state. To mitigate this risk, we pass sensor and control data
through distorting (privacy-preserving) mechanisms before transmission and send
the distorted data through the communication network. These mechanisms consist
of a linear coordinate transformation and additive-dependent Gaussian vectors.
We formulate the synthesis of the distorting mechanisms as a convex program. In
this convex program, we minimize the infinite horizon mutual information (our
privacy metric) between the system state and its optimal estimate at the remote
station for a desired upper bound on the control performance degradation (LQR
cost) induced by the distortion mechanism.
|
Haleh Hayati, Nathan van de Wouw, Carlos Murguia
|
2023-03-30T16:40:03Z
|
http://arxiv.org/abs/2303.17519v2
|
# Infinite Horizon Privacy in Networked Control Systems:
###### Abstract
We address the problem of synthesizing distorting mechanisms that maximize infinite horizon privacy for Networked Control Systems (NCSs). We consider stochastic LTI systems where information about the system state is obtained through noisy sensor measurements and transmitted to a (possibly adversarial) remote station via unsecured/public communication networks to compute control actions (a remote LQR controller). Because the network/station is untrustworthy, adversaries might access sensor and control data and estimate the system state. To mitigate this risk, we pass sensor and control data through distorting (privacy-preserving) mechanisms before transmission and send the distorted data through the communication network. These mechanisms consist of a linear coordinate transformation and additive-dependent Gaussian vectors. We formulate the synthesis of the distorting mechanisms as a convex program. In this convex program, we minimize the infinite horizon mutual information (our privacy metric) between the system state and its optimal estimate at the remote station for a desired upper bound on the control performance degradation (LQR cost) induced by the distortion mechanism.
## I Introduction
In recent years, control systems have become increasingly distributed and networked. Networked Control Systems (NCSs) involve closing control loops over real-time communication networks. This allows controllers, sensors, and actuators to be connected through multipurpose networks, providing benefits such as increased system flexibility, ease of installation and maintenance, and decreased wiring and cost [1]. However, when estimation/control tasks in NCSs are performed by third parties, information sharing might result in private information leakage [2]-[5].
In NCSs, information about the plant state, say \(x\), is obtained through sensor measurements and then sent through communication networks to a remote station to perform computations, e.g., estimation or control tasks. Shared information is correlated with private variables that carry sensitive information, e.g., the state itself (as it can reveal private system trajectories like reactant levels and user behavior, or it could be used to launch state-dependent attacks [6]), and references (because they can reveal manufactured products specs, tracked trajectories, and visited locations). If communication networks and/or the remote station are untrustworthy, adversaries might access and estimate the system state. To avoid this, we randomize the disclosed data before transmission using additive-dependent Gaussian random vectors and transmit the distorted data over the network.
Using additive random noise is common practice to enforce privacy of sensitive data. In the context of privacy of databases, a popular approach is differential privacy [7], where random noise is added to the response of queries so that private information stored in the database cannot be inferred. Differential privacy has also been applied to various estimation and control problems [7, 8]. There are also techniques addressing privacy in dynamical systems from an information-theoretic perspective, see [9, 10, 11, 12]. In this line of work, privacy is characterized using information-theoretic metrics, e.g., mutual information, entropy, and Kullback-Leibler divergence. However, independently of the metric being used, if the data to be kept private follows continuous probability distributions, the problem of finding the optimal additive noise to maximize privacy is difficult to solve [10]. This issue has been addressed by assuming the data to be kept private is deterministic [10]. However, in a Cyber-Physical-Systems context, the inherent system dynamics and unavoidable system and sensor noise lead to stochastic non-stationary data, and thus, existing tools do not fit this problem setting.
It is crucial to note that data privacy fundamentally differs between static data, like databases, and dynamically correlated data, e.g., in feedback control systems. In networked control architectures, information flows bidirectionally between the remote station and the plant. The authors in [3] demonstrate the necessity of privacy masks for information flow directions by identifying the infinite horizon privacy consequences of bidirectional information flow in feedback control. To the best of the authors' knowledge, there are no privacy-preserving design tools offered for MIMO multi-dimensional feedback control systems that minimize infinite horizon information leakage while maintaining a desired closed-loop control performance. There are works addressing information-theoretic infinite-horizon privacy [3, 13] but for SISO scalar systems.
Motivated by these results, in this manuscript, we present an optimization-based framework for synthesizing privacy-preserving Gaussian mechanisms that maximize privacy but keep distortion on control performance bounded. The proposed privacy mechanism consists of a coordinate transformation and additive Gaussian vectors that are designed to hide (as much as possible) the private state of the plant [12]. We distort disclosed data in both information flow directions, the measurement data in the uplink direction that is transmitted from the plant to the remote station and the
control data in the downlink direction that is transmitted from the remote station to the plant. We show that using coordinate transformations in the privacy mechanism (in combination with additive Gaussian vectors) can effectively reduce information leakage significantly more than adding stochastic vectors only. Note that it is not desired to overly distort the control performance while minimizing the information leakage. Therefore, when designing the privacy mechanisms, we consider the trade-off between _privacy_ and _performance degradation_. As _performance metric_, we use the _LQR control cost_ of the closed-loop system when operating on distorted privacy-preserving data. We follow an information-theoretic approach to privacy. As _privacy metric_, we use the _mutual information_[14] between the system infinite state sequence \(x^{\infty}=(x_{1},...,x_{\infty})\) and its optimal estimate \(\hat{x}^{\infty}=(\hat{x}_{1},...,\hat{x}_{\infty})\) obtained by Kalman filtering given the infinite sequence of distorted disclosed data. Mutual information \(I(x^{\infty};\hat{x}^{\infty})\) between the two jointly distributed infinite-dimensional vectors, \(x^{\infty}\) and \(\hat{x}^{\infty}\), is a measure of the statistical dependence between them. We design the privacy mechanisms to minimize \(I(x^{\infty};\hat{x}^{\infty})\) for a desired maximum level of control performance degradation on the closed-loop infinite horizon LQR control cost. As we prove in this manuscript, we can cast the problem of finding sub-optimal additive random vectors covariance matrices and coordinate transformations as a constrained convex program (convex cost with LMI constraints). This is the first piece of work that provides privacy-preserving design tools for MIMO multidimensional feedback control systems to maximize infinite horizon privacy by optimally distorting disclosed data while maintaining prescribed control performance. Providing infinite-horizon privacy is important in the context of dynamical systems since adversaries can infer information about private data from disclosed data over time.
## II Problem Formulation
### _System Description_
We consider the networked control architecture shown in Fig. 1. The dynamics of the plant is described as follows:
\[\mathcal{P}:=\begin{cases}x_{k+1}=Ax_{k}+Bu_{k}+w_{k},\\ y_{k}=x_{k}+h_{k},\\ u_{k}=Ky_{k},\end{cases} \tag{1}\]
with time-index \(k\in\mathbb{N}\), state \(x_{k}\in\mathbb{R}^{n_{x}}\), measurable output \(y_{k}\in\mathbb{R}^{n_{y}}\), control input \(u_{k}\in\mathbb{R}^{n_{u}}\) with feedback gain \(K\), and matrices \((A,B,K)\) of appropriate dimensions, \(n_{x},n_{y},n_{u}\in\mathbb{N}\). The state and output disturbances \(w_{k}\) and \(h_{k}\) are multivariate i.i.d. Gaussian processes with zero mean and covariance matrices \(\Sigma^{w}>0\) and \(\Sigma^{h}>0\), respectively. The initial state \(x_{1}\) is assumed to be a Gaussian random vector with zero mean and covariance matrix \(\Sigma_{1}^{x}:=E[x_{1}x_{1}^{\top}]\), \(\Sigma_{1}^{x}>0\). Disturbances \(w_{k}\) and \(h_{k}\) and the initial condition \(x_{1}\) are mutually independent. We assume that matrices \((A,B,\Sigma_{1}^{x},\Sigma^{w},\Sigma^{h},K)\) are known, and \((A,B)\) is stabilizable.
We consider the setting where the local plant is controlled by a remote station. The user who owns the plant transmits \(y_{k}\) to the remote station through an unsecured/public communication network to compute control actions (a remote LQR controller). Then, the control signal \(u_{k}\) is sent back to the user through the network. To characterize control performance for some given positive definite matrices \(Q\) and \(R\), we introduce the associated infinite horizon LQR cost:
\[C_{\infty}(x,u):=\limsup_{N\rightarrow\infty}\frac{1}{N+1}\sum_{k=0}^{N} \mathbb{E}\left(x_{k}^{\top}Qx_{k}+u_{k}^{\top}Ru_{k}\right), \tag{2}\]
where \(\mathbb{E}(\cdot)\) denotes expectation.
For privacy reasons, a full disclosure of the state trajectory \(x_{k}\), \(k\in\mathbb{N}\), is not desired. We aim to prevent adversaries from estimating \(x_{k}\) accurately. To this end, the user randomizes measurement data \(y_{k}\) before disclosure, and requests the remote station to randomize control signals, \(u_{k}\), before transmission. By doing so, we protect against inference at the network and remote station. The idea is to distort \(y_{k}\) and \(u_{k}\) through random affine transformations of the form:
\[\mathcal{M}:=\begin{cases}\tilde{y}_{k}=Gy_{k}+v_{k},\\ \tilde{u}_{k}=u_{k}+z_{k},\end{cases} \tag{3}\]
where \(G\in\mathbb{R}^{n_{y}\times n_{y}}\) is a linear transformation, and \(v_{k}\) and \(z_{k}\) are zero mean i.i.d. Gaussian processes with covariance matrices \(\Sigma^{v}\) and \(\Sigma^{z}\), respectively. The distorted vectors \(\tilde{y}_{k}\) and \(\tilde{u}_{k}\) are transmitted over the network, see Fig. 1. It follows that the closed-loop dynamics when the privacy mechanism (3) is acting on the system is given by
\[\tilde{\mathcal{P}}:=\begin{cases}\tilde{x}_{k+1}=A\tilde{x}_{k}+B\tilde{u}_{k }+w_{k},\\ \tilde{y}_{k}=G\tilde{x}_{k}+Gh_{k}+v_{k},\\ \tilde{u}_{k}=KG\tilde{x}_{k}+KGh_{k}+Kv_{k}+z_{k}.\end{cases} \tag{4}\]
with distorted state \(\tilde{x}\in\mathbb{R}^{n_{x}}\). Here, we seek to synthesize \(G\), \(\Sigma^{v}\), and \(\Sigma^{z}\), to make estimating the infinite horizon state trajectory \(\tilde{x}_{k}\), \(k\in\mathbb{N}\), as "hard" as possible from the disclosed data, \((\tilde{y}_{k},\tilde{u}_{k})\), \(k\in\mathbb{N}\).
We assume the adversary uses a steady-state Kalman filter designed to estimate the state _in the absence of privacy mechanisms_. That is, we assume the adversary has prior knowledge of the system dynamics (matrices \(A,B,\Sigma_{1}^{x},\Sigma^{w},\Sigma^{h}\)) but does not have knowledge about the privacy mechanism (matrices \(G,\Sigma^{v},\Sigma^{z}\)). This creates an asymmetry we seek to exploit to increase privacy. The considered filter has the following structure:
\[\begin{cases}\hat{x}_{k|k-1}=A\hat{x}_{k-1}+Bu_{k-1},\\ \hat{x}_{k}=\hat{x}_{k|k-1}+L\left(\tilde{y}_{k}-\hat{x}_{k|k-1}\right),\end{cases} \tag{5}\]
with estimated state \(\hat{x}_{k}\in\mathbb{R}^{n_{x}}\) and gain \(L\in\mathbb{R}^{n_{x}\times n_{y}}\). The adversary designs the filter for the distortion-free system (1). Let \(\rho_{k}\) denote the estimation error _in the absence of the privacy distortions_\(\rho_{k}:=x_{k}-\hat{x}_{k}\). The observer gain \(L\) is designed to minimize the asymptotic covariance matrix \(\Sigma^{\rho}:=\lim_{k\rightarrow\infty}E\left(\rho_{k}\rho_{k}^{\top}\right)\)[15]. Because the system is observable (we have state measurements), \(\Sigma^{\rho}\) always exists.
Now let \(e_{k}\) denote the estimation error _in the presence
of privacy distortions_, i.e., \(e_{k}:=\tilde{x}_{k}-\hat{x}_{k}\). Given the distorted dynamics (4), the privacy mechanisms (3), and the estimator (5), the estimation error dynamics is governed by the following coupled difference equations:
\[\left\{\begin{aligned} \tilde{x}_{k+1}&=(A+ BKG)\tilde{x}_{k}+BK\tilde{v}_{k}+Bz_{k}+w_{k},\\ e_{k|k-1}&=Ae_{k-1}+Bz_{k-1}+w_{k-1},\\ e_{k}&=(I-L)e_{k|k-1}-L(G-I)\tilde{x}_{k}-L\tilde{v}_{k},\end{aligned}\right. \tag{6}\]
where \(\tilde{v}_{k}:=Gh_{k}+v_{k}\).
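To make the interplay between the plant (1), the distorting mechanism (3), the distorted loop (4), and the adversary's filter (5) concrete, the following minimal Python sketch simulates one closed-loop run. It is our own illustration: all numerical values (system matrices, feedback gain, noise covariances, mechanism parameters) are arbitrary placeholders rather than the reactor model of Section IV, and the mechanism parameters are fixed by hand instead of being produced by the synthesis procedure developed below.

```python
# Minimal closed-loop sketch (illustrative values only, not the paper's case study).
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n = 2
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.eye(n)
K = -np.diag([0.3, 0.2])                      # remote feedback gain (assumed given)
Sw, Sh = 0.01 * np.eye(n), 0.01 * np.eye(n)   # process / measurement noise covariances
Q, R = np.eye(n), np.eye(n)                   # LQR weights used to monitor the cost (2)/(8)

# Hand-picked privacy mechanism (3); in the paper these come from the convex program.
G = np.array([[1.2, 0.1], [0.0, 0.8]])
Sv, Sz = 0.05 * np.eye(n), 0.05 * np.eye(n)

# Adversary's steady-state Kalman gain (5), designed for the distortion-free plant (C = I).
P = solve_discrete_are(A.T, np.eye(n), Sw, Sh)        # predicted error covariance
L = P @ np.linalg.inv(P + Sh)

x, xhat, u_prev, cost = rng.normal(size=n), np.zeros(n), np.zeros(n), 0.0
N = 500
for k in range(N):
    xhat_pred = A @ xhat + B @ u_prev                 # filter prediction with previous control
    y = x + rng.multivariate_normal(np.zeros(n), Sh)
    y_tilde = G @ y + rng.multivariate_normal(np.zeros(n), Sv)   # uplink distortion
    xhat = xhat_pred + L @ (y_tilde - xhat_pred)      # adversary's state estimate
    u = K @ y_tilde                                   # control computed at the remote station
    u_tilde = u + rng.multivariate_normal(np.zeros(n), Sz)       # downlink distortion
    cost += x @ Q @ x + u_tilde @ R @ u_tilde         # running distorted LQR cost (8)
    x = A @ x + B @ u_tilde + rng.multivariate_normal(np.zeros(n), Sw)
    u_prev = u

print("average LQR cost:", cost / N, "| final estimation error norm:", np.linalg.norm(x - xhat))
```

The adversary's gain \(L\) is computed for the distortion-free model, reflecting the asymmetry discussed above.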
### _Problem Formulation_
The aim of our privacy scheme is to make the estimation of the infinite horizon state sequence, \(\tilde{x}^{\infty}:=(\tilde{x}_{1},\ldots,\tilde{x}_{\infty})\), from the disclosed distorted data, \(\tilde{y}^{\infty}:=(\tilde{y}_{1},\ldots,\tilde{y}_{\infty})\) and \(\tilde{u}^{\infty}:=(\tilde{u}_{1},\ldots,\tilde{u}_{\infty})\), as hard as possible without degrading the control performance excessively. Hence, when designing the distorting variables \((G,\Sigma^{v},\Sigma^{z})\), we need to consider the _trade-off between privacy and performance_.
As privacy metric, we use the mutual information rate \(I_{\infty}(\tilde{x};\hat{x})\)[14] between \(\tilde{x}^{\infty}\) and the infinite sequence of estimates \(\hat{x}^{\infty}:=(\hat{x}_{1},...,\hat{x}_{\infty})\) obtained by Kalman filtering:
\[I_{\infty}(\tilde{x};\hat{x}):=\limsup_{N\to\infty}\frac{1}{N+1}I(\tilde{x}^{N };\hat{x}^{N}), \tag{7}\]
where \(I(\tilde{x}^{N};\hat{x}^{N})\) denotes standard mutual information [14].
We use the LQR cost in (2) to quantify control performance _in the absence of privacy distortions_. To quantify the effect of the privacy mechanism (3) on the control performance, we introduce the associated distorted LQR control cost:
\[\tilde{C}_{\infty}(\tilde{x},\tilde{u}):=\limsup_{N\to\infty}\frac{1}{N+1} \sum_{k=0}^{N}\mathbb{E}\left(\tilde{x}_{k}^{\top}Q\tilde{x}_{k}+\tilde{u}_{k }^{\top}R\tilde{u}_{k}\right). \tag{8}\]
We aim to minimize \(I_{\infty}(\tilde{x};\hat{x})\) subject to a constraint on the LQR cost increase due to the privacy mechanism, \(\tilde{C}_{\infty}(\tilde{x},\tilde{u})-C_{\infty}(x,u)\leq\epsilon\), for a desired maximum control performance degradation level \(\epsilon\in\mathbb{R}^{+}\), using as synthesis variables the mechanism matrices \(G\), \(\Sigma^{v}\), and \(\Sigma^{z}\). In what follows, we present the problem we seek to address.
**Problem 1** Given the system dynamics (1), distortion-free control performance (2), distorted control performance (8), privacy mechanism (3), distorted dynamics (4), Kalman filter (5), and maximum control degradation level \(\epsilon>0\), find the privacy mechanism variables, \(G\), \(\Sigma^{v}\), and \(\Sigma^{z}\), solution of the following optimization problem:
\[\left\{\begin{aligned} &\min_{G,\Sigma^{v},\Sigma^{z}}I_{ \infty}(\tilde{x};\hat{x}),\\ &\text{s.t. }\tilde{C}_{\infty}(\tilde{x},\tilde{u})-C_{ \infty}(x,u)\leq\epsilon.\end{aligned}\right. \tag{9}\]
## III Privacy Mechanism Design
To solve Problem 1, we first need to write the cost function and constraint in terms of the design variables.
### _Cost Function: Formulation and Convexity_
Mutual information \(I\left(\tilde{x}^{N};\hat{x}^{N}\right)\) can be written in terms of uplink \(I\left(\tilde{x}^{N}\to\hat{x}^{N}\right)\) (plant to the remote station) and downlink \(I\left(\tilde{x}^{N}\leftarrow\hat{x}^{N}\right)\) (remote station to the plant) directed information flows [16]:
\[I\left(\tilde{x}^{N};\hat{x}^{N}\right)=I\left(\tilde{x}^{N}\to\hat{x}^{N} \right)+I\left(\tilde{x}^{N}\leftarrow\hat{x}^{N}\right). \tag{10}\]
Then, the mutual information rate can be written as
\[I_{\infty}(\tilde{x};\hat{x}):=\limsup_{N\to\infty}\frac{1}{N+1} \left(I\left(\tilde{x}^{N}\to\hat{x}^{N}\right)\right.\\ +\left.I\left(\tilde{x}^{N}\leftarrow\hat{x}^{N}\right)\right). \tag{11}\]
The decomposition of \(I\left(\tilde{x}^{N};\hat{x}^{N}\right)\) in terms of uplink and downlink directed information is essential in enabling us to express mutual information as a stage additive function of covariance matrices. The latter allows writing \(I_{\infty}(\tilde{x};\hat{x})\) in terms of the solution of Lyapunov equations/inequalities, which in turn enables a convex reformulation of cost and constraints. In Lemma 1, we write the resulting expression of \(I\left(\tilde{x}^{N};\hat{x}^{N}\right)\) in terms of the design variables. Then, \(I_{\infty}(\tilde{x};\hat{x})\) can be obtained by taking the limit in (11). Please refer to the proof of Lemma 1 for a step by step derivation of \(I\left(\tilde{x}^{N};\hat{x}^{N}\right)\).
**Lemma 1**: _Mutual information \(I\left(\tilde{x}^{N};\hat{x}^{N}\right)\) can be written in terms of \(G\), \(\Sigma^{v}\), and \(\Sigma^{z}\), as follows:_
\[I\left(\tilde{x}^{N};\hat{x}^{N}\right)=\sum_{k=1}^{N}\left( \frac{1}{2}\log\det\left(LG\Sigma_{k|k-1}^{e}G^{\top}L^{\top}+L\Sigma^{\tilde {v}}L^{\top}\right)\right.\\ -\frac{1}{2}\log\det\left(L\Sigma^{\tilde{v}}L^{\top}\right)-\frac {1}{2}\log\det\left(B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\\ +\left.\frac{1}{2}\log\det\left(BK\Sigma^{\tilde{v}}K^{\top}B^{ \top}+B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\right), \tag{12}\]
_with covariance matrices \(\Sigma_{k|k-1}^{e}:=\mathbb{E}(e_{k|k-1}e_{k|k-1}^{\top})\) and \(\Sigma^{\tilde{v}}:=G\Sigma^{h}G^{\top}+\Sigma^{v}\)._
_Proof_: See Appendix A.
Fig. 1: System configuration.
Note that \(\Sigma^{v}\) only appears in the expression for \(\Sigma^{\tilde{v}}\). Given \((G,\Sigma^{\tilde{v}})\), matrix \(\Sigma^{v}\) is fully determined and vice versa. That is, \((G,\Sigma^{\tilde{v}})\to(G,\Sigma^{v})\) is an invertible transformation. Therefore, we can pose both the cost and constraint of Problem 1 in terms of either \(\Sigma^{\tilde{v}}\) or \(\Sigma^{v}\). Casting the problem in terms of \(\Sigma^{\tilde{v}}\) allows us to write convex cost and constraint. Hereafter, we pose the problem in terms of \((G,\Sigma^{\tilde{v}})\). Once we have found optimal \((G,\Sigma^{\tilde{v}})\), we extract the optimal \(\Sigma^{v}\) as \(\Sigma^{v}=\Sigma^{\tilde{v}}-G\Sigma^{h}G^{\top}\). Note, however, that due to the negative term \(-G\Sigma^{h}G^{\top}\), the extracted \(\Sigma^{v}\) might be negative semidefinite, which is of course wrong as \(\Sigma^{v}\) is a covariance matrix. To avoid this, we enforce that the extracted \(\Sigma^{v}\) is always positive definite in the synthesis program by adding \(\Sigma^{\tilde{v}}-G\Sigma^{h}G^{\top}>\mathbf{0}\) as an extra constraint. This constraint can be equivalently written as the following linear inequality in \((G,\Sigma^{\tilde{v}})\) using Schur complement properties [17]:
\[\left[\begin{array}{cc}\Sigma^{\tilde{v}}&G\\ G^{\top}&(\Sigma^{h})^{-1}\end{array}\right]>\mathbf{0}. \tag{13}\]
We use inequality (13) later when we solve the complete optimization problem to enforce that the optimal \((G,\Sigma^{\tilde{v}})\) leads to a positive definite \(\Sigma^{v}\).
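As a small sanity check of this Schur-complement reformulation, the numerical sketch below (our own, with arbitrary illustrative matrices) verifies that the block matrix in (13) is positive definite exactly when the extracted \(\Sigma^{v}=\Sigma^{\tilde{v}}-G\Sigma^{h}G^{\top}\) is.

```python
# Illustrative check of (13): the LMI holds iff Sigma^v = Svt - G Sh G^T is positive definite.
import numpy as np

def lmi_13(Svt, G, Sh):
    # Block matrix of inequality (13).
    return np.block([[Svt, G], [G.T, np.linalg.inv(Sh)]])

G = np.array([[1.2, 0.1], [0.0, 0.8]])
Sh = 0.01 * np.eye(2)
Svt = 0.05 * np.eye(2)
Sv = Svt - G @ Sh @ G.T
print(np.all(np.linalg.eigvalsh(lmi_13(Svt, G, Sh)) > 0),   # LMI (13) positive definite?
      np.all(np.linalg.eigvalsh(Sv) > 0))                   # extracted Sigma^v positive definite?
```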
In Lemma 1, we have an expression of mutual information in terms of the design variables and the estimation error covariance \(\Sigma^{e}_{k|k-1}\). Considering the closed-loop dynamics (6) and defining the extended state \(\zeta_{k}:=\text{col}\left[e_{k|k-1},\tilde{x}_{k}\right]\), we have
\[\zeta_{k+1}=\left[\begin{array}{cc}A(I-L)&-AL(G-I)\\ \mathbf{0}&A+BKG\end{array}\right]\zeta_{k}+\left[\begin{array}{ccc}-AL&B&I\\ BK&B&I\end{array}\right]\left[\begin{array}{c}\tilde{v}_{k}\\ z_{k}\\ w_{k}\end{array}\right]. \tag{14}\]
Because \((\tilde{v}_{k},z_{k},w_{k})\) are all zero mean i.i.d. processes, the covariance of \(\zeta_{k}\), \(\Sigma^{\zeta}_{k}:=\mathbb{E}(\zeta_{k}\zeta^{\top}_{k})\), satisfies the following:
\[\Sigma^{\zeta}_{k+1}=\mathcal{A}\Sigma^{\zeta}_{k}\mathcal{A}^{\top}+\mathcal{B}, \tag{15}\]
where
\[\begin{cases}\mathcal{A}:=\left[\begin{array}{cc}A(I-L)&-AL(G-I)\\ \mathbf{0}&A+BKG\end{array}\right],\\ \mathcal{B}:=\left[\begin{array}{ccc}-AL&B&I\\ BK&B&I\end{array}\right]\left[\begin{array}{ccc}\Sigma^{\tilde{v}}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\Sigma^{z}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\Sigma^{w}\end{array}\right]\left[\begin{array}{ccc}-AL&B&I\\ BK&B&I\end{array}\right]^{\top}.\end{cases} \tag{16}\]
If \(\mathcal{A}\) is Schur stable (which is always the case for \(G=I\) by construction), the limit \(\Sigma^{\zeta}:=\lim_{k\rightarrow\infty}\Sigma^{\zeta}_{k}\), with \(\Sigma^{\zeta}_{k}\) solution of (15), exists and coincides with the unique positive definite solution of the Lyapunov equation:
\[\mathcal{A}\Sigma^{\zeta}\mathcal{A}^{T}-\Sigma^{\zeta}+\mathcal{B}=\mathbf{0}. \tag{17}\]
Moreover, because \(\zeta_{k}=\text{col}\left[e_{k|k-1},\tilde{x}_{k}\right]\), we have
\[\Sigma^{\tilde{x}} :=\lim_{k\rightarrow\infty}\Sigma^{\tilde{x}}_{k}=\begin{bmatrix} \mathbf{0}&I\end{bmatrix}\Sigma^{\zeta}\begin{bmatrix}\mathbf{0}&I\end{bmatrix} ^{\top}, \tag{18}\] \[\Sigma^{e} :=\lim_{k\rightarrow\infty}\Sigma^{e}_{k|k-1}=\begin{bmatrix}I& \mathbf{0}\end{bmatrix}\Sigma^{\zeta}\begin{bmatrix}I&\mathbf{0}\end{bmatrix} ^{\top}, \tag{19}\]
which allows writing the following corollary of Lemma 1 by taking the limit in (11).
**Corollary 1**: _The mutual information rate \(I_{\infty}\left(\tilde{x};\hat{x}\right)\) in (12) can be written in terms of \(G\), \(\Sigma^{\tilde{v}}\), and \(\Sigma^{z}\), as follows:_
\[I_{\infty}\left(\tilde{x};\hat{x}\right)=\frac{1}{2}\log\det \left(L\!G\Sigma^{e}G^{\top}L^{\top}+L\Sigma^{\tilde{v}}L^{\top}\right) \tag{20}\] \[-\frac{1}{2}\log\det\left(L\Sigma^{\tilde{v}}L^{\top}\right)- \frac{1}{2}\log\det\left(B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\] \[+\frac{1}{2}\log\det\left(BK\Sigma^{\tilde{v}}K^{\top}B^{\top}+B \Sigma^{z}B^{\top}+\Sigma^{w}\right),\]
_with \(\Sigma^{e}=\lim_{k\rightarrow\infty}\Sigma^{e}_{k|k-1}\) as defined in (19)._
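For a fixed candidate mechanism \((G,\Sigma^{v},\Sigma^{z})\), the rate in (20) can be evaluated numerically by solving the Lyapunov equation (17) and extracting \(\Sigma^{e}\) via (19). The helper below is our own sketch (natural logarithms, so the result is in nats) and can be called with the toy matrices from the simulation sketch above; it evaluates the leakage of a given mechanism and does not synthesize one.

```python
# Evaluate the mutual information rate (20) for a given mechanism (our own helper).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, block_diag

def mi_rate(A, B, K, L, G, Sw, Sh, Sv, Sz):
    n = A.shape[0]
    I = np.eye(n)
    Svt = G @ Sh @ G.T + Sv                                  # Sigma^{tilde v}
    calA = np.block([[A @ (I - L), -A @ L @ (G - I)],        # closed-loop matrix, Eq. (16)
                     [np.zeros((n, n)), A + B @ K @ G]])
    M = np.block([[-A @ L, B, I], [B @ K, B, I]])
    calB = M @ block_diag(Svt, Sz, Sw) @ M.T
    Sig_zeta = solve_discrete_lyapunov(calA, calB)           # solves X = calA X calA^T + calB, Eq. (17)
    Se = Sig_zeta[:n, :n]                                    # Sigma^e, Eq. (19)
    ld = lambda X: np.linalg.slogdet(X)[1]
    return 0.5 * (ld(L @ G @ Se @ G.T @ L.T + L @ Svt @ L.T) - ld(L @ Svt @ L.T)
                  + ld(B @ K @ Svt @ K.T @ B.T + B @ Sz @ B.T + Sw)
                  - ld(B @ Sz @ B.T + Sw))
```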
Note that the cost \(I_{\infty}\left(\tilde{x};\hat{x}\right)\) in (20) is non-convex in the design variables. The term \(L\!G\Sigma^{e}G^{\top}L^{\top}\) is quadratic in \(G\) and \(\Sigma^{e}\) depends on the solution of the Lyapunov equation (17), which is itself quadratic in \(G\). To tackle this, we derive a convex upper bound on the cost (20) and minimize this bound. We start with an upper bound, \(\Sigma\), on the solution \(\Sigma^{\zeta}\) of the Lyapunov equation (17). Having this \(\Sigma\) and using (19) and monotonicity of \(\log\det(\cdot)\) allow us to upper bound the first term of the cost in (20). In the following lemma, we propose a convex program to find \(\Sigma\).
**Lemma 2**: _An upper bound \(\Sigma\) on the solution \(\Sigma^{\zeta}\) of (17) can be found by solving the following convex program:_
\[\left\{\begin{array}{l}\min_{\Sigma,\Pi_{1},\Pi_{2}}\operatorname{trace}(\Sigma),\\ \text{s.t. }\left[\begin{array}{cc}\Sigma-\mathcal{B}&\mathcal{A}_{0}\Pi_{1}+\mathcal{A}_{1}\Pi_{2}\\ *&\Pi_{1}+\Pi_{1}^{\top}-\Sigma\end{array}\right]\geq\mathbf{0},\\ \Pi_{1}=\left[\begin{array}{cc}\Pi_{11}&\Pi_{12}\\ \mathbf{0}&\Pi_{13}\end{array}\right],\quad\Pi_{2}=\left[\begin{array}{cc}\mathbf{0}&\Pi_{21}\end{array}\right],\end{array}\right. \tag{21}\]
_where_
\[\mathcal{A}_{0}:=\left[\begin{array}{cc}A(I-L)&AL\\ \mathbf{0}&A\end{array}\right],\quad\mathcal{A}_{1}:=\left[\begin{array}{c}-AL\\ BK\end{array}\right]. \tag{22}\]
_Proof_: See Appendix B. \(\blacksquare\)
We defined new variables \(\Pi_{1}\) and \(\Pi_{2}\) to convexify the constraints in (21). Given \((\Pi_{1},\Pi_{21})\), matrix \(G\) can be extracted as \(G=\Pi_{21}\Pi_{13}^{-1}\) (see the proof of Lemma 2). Therefore, we can pose both cost and constraints in terms of either \(G\) or \(\Pi_{21}\). Casting the problem in terms of \(\Pi_{21}\) allows us to linearize some constraints. Hereafter, we pose the problem in terms of \((\Pi_{1},\Pi_{21})\). Once we have found optimal \((\Pi_{1},\Pi_{21})\), we extract the optimal \(G\) using \(\Pi_{21}=G\Pi_{13}\).
Lemma 2 allows casting the computation of an upper bound, \(\Sigma\), on the solution, \(\Sigma^{\zeta}\), of the Lyapunov equation (17) as the solution of an optimization problem. Matrix \(\Sigma\) obtained by solving (21) satisfies \(\Sigma\geq\Sigma^{\zeta}=\lim_{k\rightarrow\infty}\Sigma^{\zeta}_{k}\). Therefore, given \(\Sigma\), by (18)-(19), we also have the following upper bounds on \(\Sigma^{\tilde{x}}\) and \(\Sigma^{e}\)
\[\begin{cases}\Sigma^{\tilde{x}}=\lim_{k\rightarrow\infty}\Sigma^{\tilde{x}}_{k} \leq N_{\tilde{x}}\Sigma N^{\top}_{\tilde{x}},\\ \Sigma^{e}=\lim_{k\rightarrow\infty}\Sigma^{e}_{k|k-1}\leq N_{e}\Sigma N^{\top}_ {e},\\ N_{\tilde{x}}:=\begin{bmatrix}\mathbf{0}&I\end{bmatrix},N_{e}:=\begin{bmatrix}I& \mathbf{0}\end{bmatrix}.\end{cases} \tag{23}\]
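Lemma 2 turns the covariance bound into a semidefinite program. The CVXPY sketch below is our own minimal rendering of program (21) in isolation, under the simplifying assumption that the noise covariances \(\Sigma^{\tilde{v}}\) and \(\Sigma^{z}\) are held fixed (so \(\mathcal{B}\) is a constant and every constraint is linear); it is not the paper's complete synthesis program (29), which additionally optimizes these covariances together with the cost of Lemma 3 and the constraints (26) and (28).

```python
# CVXPY sketch of the Lyapunov-bound program (21) with fixed noise covariances (our own).
import cvxpy as cp
import numpy as np
from scipy.linalg import block_diag

def lyapunov_upper_bound(A, B, K, L, Svt, Sz, Sw):
    n = A.shape[0]
    I = np.eye(n)
    A0 = np.block([[A @ (I - L), A @ L], [np.zeros((n, n)), A]])   # Eq. (22)
    A1 = np.vstack([-A @ L, B @ K])                                 # Eq. (22)
    M = np.block([[-A @ L, B, I], [B @ K, B, I]])
    calB = M @ block_diag(Svt, Sz, Sw) @ M.T                        # Eq. (16), constant here

    Sig = cp.Variable((2 * n, 2 * n), symmetric=True)
    P11, P12, P13, P21 = (cp.Variable((n, n)) for _ in range(4))
    Pi1 = cp.bmat([[P11, P12], [np.zeros((n, n)), P13]])            # block structure in (21)
    Pi2 = cp.hstack([np.zeros((n, n)), P21])                        # Pi_2 = [0  Pi_21]
    offd = A0 @ Pi1 + A1 @ Pi2
    lmi = cp.bmat([[Sig - calB, offd], [offd.T, Pi1 + Pi1.T - Sig]])

    prob = cp.Problem(cp.Minimize(cp.trace(Sig)), [lmi >> 0])
    prob.solve()
    G = P21.value @ np.linalg.inv(P13.value)                        # G = Pi_21 Pi_13^{-1}
    return Sig.value, G
```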
In Corollary 1, the mutual information rate is written in terms of privacy mechanism variables and \(\Sigma^{e}\). Hence, given (23) and monotonicity of the determinant function, an upper
bound on \(I_{\infty}(\tilde{x};\hat{x})\) in terms of \(\Sigma\) can be written as follows:
\[\begin{split}I_{\infty}(\tilde{x};\hat{x})&\leq\frac{1}{2}\log\det\left(LGN_{e}\Sigma N_{e}^{\top}G^{\top}L^{\top}+L\Sigma^{\tilde{v}}L^{\top}\right)\\ &\quad-\frac{1}{2}\log\det\left(L\Sigma^{\tilde{v}}L^{\top}\right)-\frac{1}{2}\log\det\left(B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\\ &\quad+\frac{1}{2}\log\det\left(BK\Sigma^{\tilde{v}}K^{\top}B^{\top}+B\Sigma^{z}B^{\top}+\Sigma^{w}\right).\end{split} \tag{24}\]
So far, we have an upper bound (24) on the cost function in Problem 1 in terms of the solution \(\Sigma\) of program (21) and the mechanism parameters. However, (24) is still non-convex in \(G\) and \(\Sigma\). In Lemma 3, we pose the problem of minimizing the right-hand side of (24) as a convex program. This reformulation is achieved using Schur complement properties, an epigraph reformulation of the minimization problem, and the monotonicity of the \(\text{logdet}(\cdot)\) function. Moreover, as we will later need to combine the program in Lemma 2 with the convex reformulation of the bound in (24), we write, in Lemma 3, \(G\) in terms of \(\Pi_{2}\) and \(\Pi_{1}\) as we do in Lemma 2 (\(G=\Pi_{21}\Pi_{13}^{-1}\), see the discussion below Lemma 2). This is necessary as we have to use the same coordinates in the reformulation of cost and constraints to be able to later solve all together as a single optimization problem.
**Lemma 3**: _Consider the solution of the convex program:_
\[\left\{\begin{aligned}&\min_{\Pi_{13},\Pi_{21},\Pi_{3},\Pi_{4},\Sigma,\Sigma^{\tilde{v}},\Sigma^{z}}\left(-\frac{1}{2}\text{logdet}(\Pi_{3})-\frac{1}{2}\text{logdet}\left(\Pi_{4}\right)\right.\\ &\qquad\left.-\frac{1}{2}\text{logdet}(L\Sigma^{\tilde{v}}L^{\top})-\frac{1}{2}\text{logdet}\left(B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\right)\\ &\text{s.t. }\left[\begin{array}{cc}2I-\Pi_{3}-L\Sigma^{\tilde{v}}L^{\top}&L\Pi_{21}\\ *&\Pi_{13}+\Pi_{13}^{\top}-N_{e}\Sigma N_{e}^{\top}\end{array}\right]\geq\mathbf{0},\\ &\phantom{\text{s.t. }}2I-\Pi_{4}\geq BK\Sigma^{\tilde{v}}K^{\top}B^{\top}+B\Sigma^{z}B^{\top}+\Sigma^{w}.\end{aligned}\right. \tag{25}\]
_The resulting \(\Sigma\), \(\Sigma^{\tilde{v}}\), \(\Sigma^{z}\), and \(G=\Pi_{21}\Pi_{13}^{-1}\) minimize the upper bound on \(I_{\infty}(\tilde{x};\hat{x})\) in (24)._
_Proof_: See Appendix C.
By Lemma 1, Lemma 2, and Lemma 3, a minimal upper bound on the cost \(I_{\infty}(\tilde{x};\hat{x})\) can be achieved by solving the convex programs in (21) and (25). Then, if the constraints on positive definiteness of \(\Sigma^{v}\) (13) and control performance, \(\tilde{C}_{\infty}(\tilde{x},\tilde{u})-C_{\infty}(x,u)\leq\epsilon\), can be written as convex functions of the decision variables, we can find optimal distorting mechanisms efficiently using off-the-shelf optimization algorithms. Regarding (13), it can be verified (see Appendix D) that (13) can be written in terms of \((\Pi_{13},\Pi_{21})\), the new decision variables, instead of the original \(G\), as follows:
\[\left[\begin{array}{cc}\Sigma^{\tilde{v}}&\Pi_{21}\\ *&\Pi_{13}+\Pi_{13}^{\top}-\Sigma^{h}\end{array}\right]\geq\mathbf{0}. \tag{26}\]
We add (26) as a new constraint in the synthesis program. It remains to reformulate the control constraint.
### _Control Performance: Formulation and Convexity_
**Lemma 4**: _The constraint on the LQR control cost:_
\[\tilde{C}_{\infty}(\tilde{x},\tilde{u})-C_{\infty}(x,u)\leq\epsilon, \tag{27}\]
_can be formulated as the following set of LMIs:_
\[\left\{\begin{aligned} &\text{tr}\left(Q\Sigma^{\tilde{x}} \right)+\text{tr}\left(\Pi_{5}\right)\\ &\quad+\text{tr}\left(K^{\top}RK\Sigma^{\tilde{v}}+R\Sigma^{z} \right)\leq C_{\infty}(x,u)+\epsilon,\\ &\left[\begin{array}{cc}\Pi_{5}&R^{1/2}K\Pi_{21}\\ *&\Pi_{13}+\Pi_{13}^{\top}-\Sigma^{\tilde{x}}\end{array}\right]\geq\mathbf{0}, \end{aligned}\right. \tag{28}\]
_with new matrix variable \(\Pi_{5}\) to be designed._
_Proof_: See Appendix E.
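Once the stationary covariances are available (for instance from the Lyapunov solution discussed above, with \(\Sigma^{\tilde{x}}\) the lower-right block of \(\Sigma^{\zeta}\) per (18)), the distorted LQR cost and the degradation constraint (27) can be evaluated directly from the expectation identity used in the proof (Eq. (55)). The helpers below are our own sketch, not part of the paper's toolchain.

```python
# Evaluate the distorted LQR cost (8) from stationary covariances via Eq. (55) (our own helper).
import numpy as np

def distorted_lqr_cost(Q, R, K, G, Sx_tilde, Svt, Sz):
    # E[Delta] = tr((Q + G^T K^T R K G) Sigma^{x~}) + tr(K^T R K Sigma^{v~}) + tr(R Sigma^z)
    return (np.trace((Q + G.T @ K.T @ R @ K @ G) @ Sx_tilde)
            + np.trace(K.T @ R @ K @ Svt)
            + np.trace(R @ Sz))

def degradation_satisfied(c_distorted, c_nominal, eps):
    # Constraint (27): the increase in LQR cost stays below the allowed level eps.
    return c_distorted - c_nominal <= eps
```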
In Lemma 1 - Lemma 4, an upper bound on the cost function \(I_{\infty}(\tilde{x};\hat{x})\) and the distortion constraint \(\tilde{C}_{\infty}(\tilde{x},\tilde{u})-C_{\infty}(x,u)\leq\epsilon\) are written in terms of convex functions (programs) of the design variables. We have, however, two cost functions in Lemma 2 and Lemma 3. The latter leads to a multi-objective optimization problem that can be solved by scalarizing the costs, i.e., introducing a single objective that represents a compromise between both of them. To this aim, we introduce \(\alpha\in\mathbb{R}\), \(\alpha>0\), as a weighting parameter and define a new cost as the weighted sum of the original ones (see the cost in (29)). Since our goal is to achieve a minimal mutual information rate, because it characterizes information leakage, we seek the \(\alpha\) that minimizes \(I_{\infty}(\tilde{x};\hat{x})\) by performing a line search over \(\alpha\) subject to all constraints in Lemma 1 - Lemma 4. In what follows, we pose the complete nonlinear convex program to find a sub-optimal solution for Problem 1 (sub-optimal in the sense that Lemma 3 seeks to minimize an upper bound on the actual cost).
**Theorem 1**: _Consider the system dynamics (1), distortion-free control performance (2), distorted control performance
(8), _privacy mechanism_ (3), _distorted dynamics_ (4), _Kalman filter_ (5), _and maximum control degradation level \(\epsilon>0\), and matrices in_ (16), (22), _and_ (23). _For a fixed \(\alpha>0\), given the solution of the convex program in_ (29), _the mechanism variables \(G\), \(\Sigma^{v}\), and \(\Sigma^{z}\), that minimize the upper bound on \(I_{\infty}(\tilde{x};\hat{x})\) in_ (24) _subject to the control performance degradation constraint, \(\tilde{C}_{\infty}(\tilde{x},\tilde{u})-C_{\infty}(x,u)\leq\epsilon\), are given by \(\Sigma^{z}\), \(G=\Pi_{21}\Pi_{13}^{-1}\), and \(\Sigma^{v}=\Sigma^{\tilde{v}}-\Pi_{21}\Pi_{13}^{-1}\Sigma^{h}(\Pi_{21}\Pi_{13}^{-1})^{\top}\)._
_Proof:_ The expressions for the cost and constraints and convexity (linearity) of them follow from Lemma 1, Lemma 2, Lemma 3, Lemma 4, and (26). \(\blacksquare\)
## IV Illustrative case study
We illustrate the performance of our tools through a case study of a well-stirred chemical reactor with a heat exchanger. The reactor state, output, and controller are:
\[x_{k}=\begin{pmatrix}C_{0}&T_{0}&T_{w}&T_{m}\end{pmatrix}^{\top},\quad y_{k}=x_{k},\quad u_{k}=Ky_{k}.\]
where
\[\left\{\begin{array}{ll}C_{0}&:\text{Concentration of the chemical product},\\ T_{0}&:\text{Temperature of the product},\\ T_{w}&:\text{Temperature of the jacket water of the heat exchanger},\\ T_{m}&:\text{Coolant temperature}.\end{array}\right.\]
We use the discrete-time dynamics of the reactor introduced in [18] for the illustrative simulation study with matrices as given in (30). We implement the algorithm for two privacy mechanisms: first when the privacy mechanism is as in (3) and the second when the privacy mechanism does not include matrix transformation (\(G=I\)), to evaluate the effect of \(G\) in privacy mechanisms.
First, we show the effect of the control performance degradation level \(\epsilon\) on the (mutual information-based) privacy cost function. Fig. 2 depicts the evolution of the optimal cost \(I_{\infty}(\tilde{x};\hat{x})\) for increasing \(\epsilon\), both with and without the matrix transformation in the privacy mechanism (denoted \(G\) and \(G=I\), respectively). As expected, in both cases, the objective function decreases monotonically as the maximum allowed control performance degradation increases. Furthermore, given that the control cost without privacy distortion is \(C_{\infty}(x,u)=4.3615\), the figure illustrates that, in the case with the matrix transformation \(G\), the optimal infinite horizon information leakage \(I_{\infty}(\tilde{x};\hat{x})\) gets very close to zero for a very small control performance degradation level (\(\epsilon=0.07\)). So, in this case, we can minimize the information leakage without degrading the control performance excessively. Hence, the comparison between the information leakage in these two cases indicates that adding the matrix transformation to the privacy mechanism (3) improves privacy by decreasing the information leakage significantly.
Then, in Fig. 3, we depict the norm of the system state and its Kalman estimate with and without privacy distortion. As can be seen in this figure, the accuracy of state estimation based on distorted data \((\tilde{y}_{k},\tilde{u}_{k})\) with \(\epsilon=0.07\) is less than the estimation accuracy without privacy distortion (\(\epsilon=0\)). Therefore, we can prevent accurate estimation of the private state using the proposed privacy tools.
Finally, the effect of the optimal distortion mechanisms is illustrated in Figure 4, where we contrast actual and distorted measurable output for \(\epsilon=0.07\).
Fig. 4: Comparison between the first element of measurable output \(y_{k}^{1}\) and the first element of the distorted output \(\tilde{y}_{k}^{1}\).
Fig. 3: Comparison between the norm of system state and its Kalman estimate for \(\epsilon=0,0.07\).
Fig. 2: Evolution of the optimal cost function (information leakage) based on increasing \(\epsilon\) for with and without matrix transformation in the privacy mechanism.
## V Conclusions
In this paper, for a class of Networked Control Systems (NCSs), we have presented a detailed mathematical framework for synthesizing distorting mechanisms to minimize the infinite horizon information leakage induced by the use of public/unsecured communication networks. We have proposed a class of linear Gaussian distorting mechanisms to randomize sensor and control data before transmission to prevent adversaries from accurately estimating the system state. Furthermore, for the class of systems under study, we have fully characterized an information-theoretic metric (mutual information) to quantify the information between the system state and its optimal estimate given the distorted disclosed data at the remote station for a class of worst-case eavesdropping adversaries. Finally, given the maximum allowed level of control performance degradation (LQR cost), we have provided tools (in terms of convex programs) to design sub-optimal (in terms of maximizing privacy) distorting mechanisms. We have presented simulation results to illustrate the performance of our tools.
## VI Appendix
### _Proof of Lemma 1_
The uplink information flow \(I\left(\tilde{x}^{N}\rightarrow\hat{x}^{N}\right)\) is given by [19]:
\[I\left(\tilde{x}^{N}\rightarrow\hat{x}^{N}\right)=\sum_{k=0}^{N}I\left(\tilde {x}^{k};\hat{x}_{k}\mid\hat{x}^{k-1}\right). \tag{31}\]
Then, based on the chain rule in mutual information [14]:
\[\begin{split} I&\left(\tilde{x}^{N}\rightarrow\hat{x}^ {N}\right)\\ &=\sum_{k=1}^{N}[\underbrace{I\left(\tilde{x}^{k-1};\hat{x}_{k} \mid\hat{x}^{k-1},\tilde{x}_{k}\right)}_{=(A)}+\underbrace{I\left(\tilde{x}_{ k};\hat{x}_{k}\mid\hat{x}^{k-1}\right)}_{=(B)}].\end{split} \tag{32}\]
By substituting \(\tilde{y}_{k}\) in (4) into (5), we have \(\hat{x}_{k}\) in terms of \(\hat{x}_{k-1}\), \(\tilde{x}_{k}\), and noises as follows:
\[\hat{x}_{k}=(I-L)A\hat{x}_{k-1}+(I-L)Bu_{k-1}+LG\tilde{x}_{k}+L\tilde{v}_{k}. \tag{33}\]
Then, considering (33) and the fact that \(u_{k-1}=Ky_{k-1}\) is a deterministic function of \(\hat{x}^{k-1}\) (see (5)), we have:
\[(A)=I\left(\tilde{x}^{k-1};L\tilde{v}_{k}\mid\hat{x}^{k-1},\tilde{x}_{k}\right)=0. \tag{34}\]
Substituting (33) in (B) and using mutual information definition in terms of differential entropy [14], we have
\[\begin{split}(B)&=I\left(\tilde{x}_{k};(I-L)Bu_{k- 1}+LG\tilde{x}_{k}+L\tilde{v}_{k}\mid\hat{x}^{k-1}\right)\\ &=\underbrace{h\left(LG\tilde{x}_{k}+L\tilde{v}_{k}\mid\hat{x}^{ k-1}\right)}_{C}-\underbrace{h\left(LG\tilde{x}_{k}+L\tilde{v}_{k}\mid\hat{x}^{k-1}, \tilde{x}_{k}\right)}_{D}.\end{split} \tag{35}\]
Given the system dynamics (4) and the fact that the estimation error \(e_{k-1}\) is independent of the previous measurement (and of \(\hat{x}^{k-1}\) ), and by substituting \(e_{k|k-1}\) given in (6), (C) is simplified as follows:
\[\begin{split}(C)&=h\left(LG\left(A\tilde{x}_{k-1}+B \tilde{u}_{k-1}+w_{k-1}\right)+L\tilde{v}_{k}\mid\hat{x}^{k-1}\right)\\ &=h\left(LG\left(Ae_{k-1}+Bz_{k-1}+w_{k-1}\right)+L\tilde{v}_{k} \mid\hat{x}^{k-1}\right)\\ &=\frac{1}{2}\log\det\left(LG\Sigma^{e}_{k|k-1}G^{\top}L^{\top} +L\Sigma^{\tilde{v}}L^{\top}\right).\end{split} \tag{36}\]
Also, because \(\tilde{v}_{k}\) is i.i.d., (D) can be written as follows:
\[(D)=h\left(L\tilde{v}_{k}\mid\hat{x}^{k-1},\tilde{x}_{k}\right)=\frac{1}{2} \log\det\left(L\Sigma^{\tilde{v}}L^{\top}\right). \tag{37}\]
Therefore, substituting (34), (35), (36), and (37) into (32), the uplink directed information is calculated as follows:
\[\begin{split} I\left(\tilde{x}^{N}\rightarrow\hat{x}^{N} \right)&=\sum_{k=1}^{N}\left(\frac{1}{2}\log\det\left(LG\Sigma^{e }_{k|k-1}G^{\top}L^{\top}\right.\right.\\ &\left.\left.+L\Sigma^{\tilde{v}}L^{\top}\right)-\frac{1}{2} \log\det\left(L\Sigma^{\tilde{v}}L^{\top}\right)\right).\end{split} \tag{38}\]
Following the same procedure, the downlink directed information can be written as follows:
\[\begin{split} I&\left(0*\hat{x}^{N-1}\rightarrow\tilde{x}^{N}\right)=I\left(0;\tilde{x}_{1}\right)+\sum_{k=2}^{N}I\left(\hat{x}^{k-1};\tilde{x}_{k}\mid\tilde{x}^{k-1}\right)\\ &=\sum_{k=1}^{N}\left(h\left(\tilde{x}_{k}\mid\tilde{x}^{k-1}\right)-h\left(\tilde{x}_{k}\mid\tilde{x}^{k-1},\hat{x}^{k-1}\right)\right)\\ &=\sum_{k=1}^{N}\left(h\left(BK\tilde{v}_{k-1}+Bz_{k-1}+w_{k-1}\right)-h\left(Bz_{k-1}+w_{k-1}\right)\right)\\ &=\sum_{k=1}^{N}\left(\frac{1}{2}\log\det\left(BK\Sigma^{\tilde{v}}K^{\top}B^{\top}+B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\right.\\ &\qquad\left.-\frac{1}{2}\log\det\left(B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\right).\end{split} \tag{39}\]
Therefore, \(I\left(\tilde{x}^{N};\hat{x}^{N}\right)\) is calculated by the summation of uplink (38) and downlink (39) information flows.
### _Proof of Lemma 2_
First, we prove that an upper bound for the solution of \(E_{1}:=\mathcal{A}\Sigma^{\zeta}\mathcal{A}^{\top}-\Sigma^{\zeta}+\mathcal{B}=\mathbf{0}\) can be achieved by solving:
\[\left\{\begin{aligned} &\min_{\Sigma}\operatorname{trace}(\Sigma),\\ &\quad\text{s.t. }E_{2}:=\mathcal{A}\Sigma\mathcal{A}^{\top}- \Sigma+\mathcal{B}\leq\mathbf{0}.\end{aligned}\right. \tag{40}\]
From \(E_{2}\leq E_{1}\), it can be deduced that:
\[\mathcal{A}(\Sigma-\Sigma^{\zeta})\mathcal{A}^{\top}-(\Sigma-\Sigma^{\zeta}) \leq\mathbf{0}. \tag{41}\]
Then, from (41) and given that \(\mathcal{A}\) is Schur stable, we can conclude that \(\Sigma\geq\Sigma^{\zeta}\). Hence, minimizing \(\operatorname{trace}(\Sigma)\) with inequality \(E_{2}\leq\mathbf{0}\) as the constraint gives us an upper bound on \(\Sigma^{\zeta}\), which is the solution of the Lyapunov equation (17).
Using standard Schur complement properties [17], the non-linear inequality \(E_{2}\leq\mathbf{0}\) can be converted to:
\[\left[\begin{array}{cc}\Sigma-\mathcal{B}&\mathcal{A}\\ \mathcal{A}^{\top}&\Sigma^{-1}\end{array}\right]\geq\mathbf{0}. \tag{42}\]
We define an invertible matrix \(\Pi_{1}\) as a new design variable as in (21). It follows that a congruence transformation of (42) can be written as follows, that is positive definite since (42) is positive definite [20]:
\[\left[\begin{array}{cc}I&\mathbf{0}\\ \mathbf{0}&\Pi_{1}\end{array}\right]^{\top}\left[\begin{array}{cc}\Sigma- \mathcal{B}&\mathcal{A}\\ \mathcal{A}^{\top}&\Sigma^{-1}\end{array}\right]\left[\begin{array}{cc}I& \mathbf{0}\\ \mathbf{0}&\Pi_{1}\end{array}\right]\geq\mathbf{0}, \tag{43}\]
which is equivalent to
\[\left[\begin{array}{cc}\Sigma-\mathcal{B}&\mathcal{A}\Pi_{1}\\ (\mathcal{A}\Pi_{1})^{\top}&\Pi_{1}^{\top}\Sigma^{-1}\Pi_{1}\end{array}\right] \geq\mathbf{0}. \tag{44}\]
It can be proved that, for any matrix \(\bar{A}\) and symmetric positive definite matrix \(\bar{B}\), we have (Hint: \(\left(\bar{B}^{-1/2}\bar{A}-\bar{B}^{1/2}\right)^{\top}\left(\bar{B}^{-1/2}\bar{A}-\bar{B}^{1/2}\right)\geq\mathbf{0}\)):
\[\bar{A}^{\top}\bar{B}^{-1}\bar{A}\geq\bar{A}+\bar{A}^{\top}-\bar{B}. \tag{45}\]
Therefore, from (44) and (45), we can conclude that:
\[\left[\begin{array}{cc}\Sigma-\mathcal{B}&\mathcal{A}\Pi_{1}\\ (\mathcal{A}\Pi_{1})^{\top}&\Pi_{1}^{\top}+\Pi_{1}-\Sigma\end{array}\right] \geq\mathbf{0}. \tag{46}\]
Matrix \(\mathcal{A}\), given in (16), can be written as follows:
\[\mathcal{A}=\mathcal{A}_{0}+\mathcal{A}_{1}G\left[\begin{array}{cc}\mathbf{0 }&I\end{array}\right], \tag{47}\]
where \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) are defined in (22). By defining new design variable \(\Pi_{2}:=G\left[\begin{array}{cc}\mathbf{0}&I\end{array}\right]\Pi_{1}= \left[\begin{array}{cc}\mathbf{0}&G\Pi_{13}\end{array}\right]=\left[ \begin{array}{cc}\mathbf{0}&\Pi_{21}\end{array}\right]\) and substituting (47) into (46), (40) can be converted to (21), which is linear in design variables \(\Pi_{1}\), \(\Pi_{2}\), and \(\Sigma\). \(\blacksquare\)
### _Proof of Lemma 3_
Due to the monotonicity of the logarithm determinant function, minimizing the right-hand side of (24) is equivalent to solving the following optimization problem:
\[\left\{\begin{aligned}&\min_{\Sigma^{\tilde{v}},G,\Sigma^{z},\Sigma,\Pi_{3},\Pi_{4}}\;\frac{1}{2}\log\det\left(\Pi_{3}^{-1}\right)-\frac{1}{2}\log\det\left(L\Sigma^{\tilde{v}}L^{\top}\right)\\ &\qquad\qquad+\frac{1}{2}\log\det\left(\Pi_{4}^{-1}\right)-\frac{1}{2}\log\det\left(B\Sigma^{z}B^{\top}+\Sigma^{w}\right)\\ &\text{s.t. }\begin{cases}\Pi_{3}^{-1}\geq L\left(G\bar{\Sigma}^{e}G^{\top}+\Sigma^{\tilde{v}}\right)L^{\top},\\ \Pi_{4}^{-1}\geq BK\Sigma^{\tilde{v}}K^{\top}B^{\top}+B\Sigma^{z}B^{\top}+\Sigma^{w},\end{cases}\end{aligned}\right. \tag{48}\]
where \(\bar{\Sigma}^{e}:=N_{e}\Sigma N_{e}^{\top}\geq\Sigma^{e}\). From relation (45), we can conclude \(\Pi_{3}^{-1}\geq 2I-\Pi_{3}\) and \(\Pi_{4}^{-1}\geq 2I-\Pi_{4}\) which linearizes the second inequality term of (48). Then, the first inequality term of (48) is equivalent to its Schur complement as follows [17]:
\[\left[\begin{array}{cc}2I-\Pi_{3}-L\Sigma^{\tilde{v}}L^{\top}&LG\\ (LG)^{\top}&(\bar{\Sigma}^{e})^{-1}\end{array}\right]\geq\mathbf{0}. \tag{49}\]
A congruence transformation of (49) can be calculated as follows [20]:
\[\left[\begin{array}{cc}I&\mathbf{0}\\ \mathbf{0}&\Pi_{13}\end{array}\right]^{\top}\left[\begin{array}{cc}2I-\Pi_{3}-L\Sigma^{\tilde{v}}L^{\top}&LG\\ (LG)^{\top}&(\bar{\Sigma}^{e})^{-1}\end{array}\right]\left[\begin{array}{cc}I&\mathbf{0}\\ \mathbf{0}&\Pi_{13}\end{array}\right]\geq\mathbf{0}. \tag{50}\]
By relation (45), we have \(\Pi_{13}^{\top}(\bar{\Sigma}^{e})^{-1}\Pi_{13}\geq\Pi_{13}^{\top}+\Pi_{13}- \bar{\Sigma}^{e}\). Then, given \(G\Pi_{13}=\Pi_{21}\), (50) can be converted to:
\[\left[\begin{array}{cc}2I-\Pi_{3}-L\Sigma^{\tilde{v}}L^{\top}&L\Pi_{21}\\ *&\Pi_{13}+\Pi_{13}^{\top}-\bar{\Sigma}^{e}\end{array}\right]\geq\mathbf{0}. \tag{51}\]
Combining (48) and (51), an upper bound for the optimal value of \(I_{\infty}(\tilde{x};\hat{x})\) in (24) can be achieved by solving the convex program in (25). \(\blacksquare\)
### _Proof of (26)_
A congruence transformation of (13) can be written as follows [20]:
\[\left[\begin{array}{cc}I&\mathbf{0}\\ \mathbf{0}&\Pi_{13}\end{array}\right]^{\top}\left[\begin{array}{cc}\Sigma^{\tilde{v}}&G\\ G^{\top}&(\Sigma^{h})^{-1}\end{array}\right]\left[\begin{array}{cc}I&\mathbf{0}\\ \mathbf{0}&\Pi_{13}\end{array}\right]\geq\mathbf{0}, \tag{52}\]
which is equivalent to
\[\left[\begin{array}{cc}\Sigma^{\tilde{v}}&G\Pi_{13}\\ (G\Pi_{13})^{\top}&\Pi_{13}^{\top}(\Sigma^{h})^{-1}\Pi_{13}\end{array}\right]\geq\mathbf{0}. \tag{53}\]
By relation (45), we have \(\Pi_{13}^{\top}(\Sigma^{h})^{-1}\Pi_{13}\geq\Pi_{13}^{\top}+\Pi_{13}-\Sigma^{h}\). Then, given that \(G\Pi_{13}=\Pi_{21}\), equation (53) is converted to (26). \(\blacksquare\)
### _Proof of Lemma 4_
We define \(\Delta:=\tilde{x}_{k}^{\top}Q\tilde{x}_{k}+\tilde{u}_{k}^{\top}R\tilde{u}_{k}\). Then, given the distortion mechanism (3) and system dynamics (4), \(\Delta\) can be calculated as follows:
\[\Delta =\tilde{x}_{k}^{\top}\left(Q+G^{\top}K^{\top}RKG\right)\tilde{x }_{k}+\tilde{v}_{k}^{\top}K^{\top}RK\tilde{v}_{k} \tag{54}\] \[+\tilde{v}_{k}^{\top}K^{\top}RKGx_{k}+\tilde{x}_{k}^{\top}G^{\top }K^{\top}RK\tilde{v}_{k}\] \[+z_{k}^{\top}Rz_{k}+\tilde{x}_{k}^{\top}G^{\top}K^{\top}Rz_{k}+ \tilde{v}_{k}^{\top}K^{\top}Rz_{k}\] \[+z_{k}^{\top}RKG\tilde{x}_{k}+z_{k}^{\top}RK\tilde{v}_{k}.\]
The expectation of the quadratic form of any random vector \(p\) with mean \(\mu^{p}\) and covariance \(\Sigma^{p}\) is \(\mathbb{E}\left[p^{\top}Ap\right]=\text{tr}[A\Sigma^{p}]+(\mu^{p})^{\top}A\mu^{p}\) (see [21] for details). Then, in (54), since \(\tilde{x}_{k}\), \(\tilde{v}_{k}\), and \(z_{k}\) are independent and have zero mean, \(\mathbb{E}\left[\Delta\right]\) can be calculated as follows:
\[\mathbb{E}\left[\Delta\right] =\text{tr}\left(\left(Q+G^{\top}K^{\top}RKG\right)\Sigma_{k}^{ \tilde{x}}+\left(K^{\top}RK\right)\Sigma^{\tilde{v}}\right. \tag{55}\] \[+\left.R\Sigma^{z}\right).\]
Given the upper bound \(\bar{\Sigma}^{\tilde{x}}:=N_{\tilde{x}}\Sigma N_{\tilde{x}}^{\top}\geq\lim_{k\rightarrow\infty}\Sigma_{k}^{\tilde{x}}\) in (23) and the fact that trace is a linear mapping, for \(\tilde{C}_{\infty}(\tilde{x},\tilde{u})\) we have:
\[\tilde{C}_{\infty}(\tilde{x},\tilde{u})=\limsup_{N\rightarrow\infty}\frac{1}{N+1}\sum_{k=0}^{N}\mathbb{E}\left[\Delta\right] \tag{56}\] \[\leq\text{tr}\left(\left(Q+G^{\top}K^{\top}RKG\right)\bar{\Sigma}^{\tilde{x}}\right)+\text{tr}\left(K^{\top}RK\Sigma^{\tilde{v}}+R\Sigma^{z}\right).\]
Up to this point, we have written an upper bound for \(\tilde{C}_{\infty}(\tilde{x},\tilde{u})\) in terms of the design variables. Then, the constraint (27) can be enforced via:
\[\text{tr}\left(\left(Q+G^{\top}K^{\top}RKG\right)\bar{\Sigma}^{ \tilde{x}}\right) \tag{57}\] \[\qquad+\text{tr}\left(K^{\top}RK\Sigma^{\tilde{v}}+R\Sigma^{z} \right)\leq C_{\infty}(x,u)+\epsilon.\]
Since trace is a linear mapping, (57) can be converted to:
\[\text{tr}\left(Q\bar{\Sigma}^{\tilde{x}}\right)+\text{tr}\left( R^{1/2}KG\bar{\Sigma}^{\tilde{x}}G^{\top}K^{\top}(R^{1/2})^{\top}\right) \tag{58}\] \[\qquad+\text{tr}\left(K^{\top}RK\Sigma^{\tilde{v}}+R\Sigma^{z} \right)\leq C_{\infty}(x,u)+\epsilon.\]
However, due to the monotonicity of the trace function, the inequality (58) can be converted to the following set of inequalities:
\[\text{tr}\left(Q\bar{\Sigma}^{\tilde{x}}\right)+\text{tr}\left( \Pi_{5}\right) \tag{59a}\] \[+\text{tr}\left(K^{\top}RK\Sigma^{\tilde{v}}+R\Sigma^{z}\right) \leq C_{\infty}(x,u)+\epsilon,\] \[\Pi_{5}\geq R^{1/2}KG\bar{\Sigma}^{\tilde{x}}G^{\top}K^{\top}(R^ {1/2})^{\top}. \tag{59b}\]
The inequality (59a) is linear in the design variables, and (59b) is equivalent to its Schur complement as follows [17]:
\[\left[\begin{array}{cc}\Pi_{5}&R^{1/2}KG\\ *&(\bar{\Sigma}^{\tilde{x}})^{-1}\end{array}\right]\geq\mathbf{0}. \tag{60}\]
Then, a congruence transformation of (60) can be written as follows [20]:
\[\left[\begin{array}{cc}I&\mathbf{0}\\ \mathbf{0}&\Pi_{13}\end{array}\right]^{\top}\left[\begin{array}{cc}\Pi_{5}&R ^{1/2}KG\\ *&(\bar{\Sigma}^{\tilde{x}})^{-1}\end{array}\right]\left[\begin{array}{cc}I& \mathbf{0}\\ \mathbf{0}&\Pi_{13}\end{array}\right]\geq\mathbf{0}. \tag{61}\]
By relation (45), we have \(\Pi_{13}^{\top}(\bar{\Sigma}^{\tilde{x}})^{-1}\Pi_{13}\geq\Pi_{13}^{\top}+\Pi _{13}-\bar{\Sigma}^{\tilde{x}}\). Then, given that \(G\Pi_{13}=\Pi_{21}\), (61) is equivalent to the following inequality:
\[\left[\begin{array}{cc}\Pi_{5}&R^{1/2}K\Pi_{21}\\ *&\Pi_{13}+\Pi_{13}^{\top}-\bar{\Sigma}^{\tilde{x}}\end{array}\right]\geq \mathbf{0}. \tag{62}\]
From (59a) and (62), we can conclude that the constraint (27) can be formulated by the set of LMIs in (28).
|
2306.14816
|
Experiments with Detecting and Mitigating AI Deception
|
How to detect and mitigate deceptive AI systems is an open problem for the
field of safe and trustworthy AI. We analyse two algorithms for mitigating
deception: The first is based on the path-specific objectives framework where
paths in the game that incentivise deception are removed. The second is based
on shielding, i.e., monitoring for unsafe policies and replacing them with a
safe reference policy. We construct two simple games and evaluate our
algorithms empirically. We find that both methods ensure that our agent is not
deceptive, however, shielding tends to achieve higher reward.
|
Ismail Sahbane, Francis Rhys Ward, C Henrik Åslund
|
2023-06-26T16:22:13Z
|
http://arxiv.org/abs/2306.14816v1
|
# Experiments with Detecting and Mitigating AI Deception
###### Abstract
How to detect and mitigate deceptive AI systems is an open problem for the field of safe and trustworthy AI. We analyse two algorithms for mitigating deception: The first is based on the path-specific objectives framework where paths in the game that incentivise deception are removed. The second is based on shielding, i.e., monitoring for unsafe policies and replacing them with a safe reference policy. We construct two simple games and evaluate our algorithms empirically. We find that both methods ensure that our agent is not deceptive, however, shielding tends to achieve higher reward.
## 1 Introduction
Deception is a challenge for building safe and trustworthy AI [19]. Recent advances in reinforcement learning (RL) and language models (LMs) mean that we are increasingly living in a world containing highly capable, goal-directed _agents_[16]. Deception may be learned as an effective strategy for achieving goals in many environments, especially in multi-agent settings comprised of humans and AI agents [19].
Technical work on deception in AI systems, and how to avoid it, is limited. Deception has been defined within structural causal games (SCGs), which is a framework that applies causal graphs to game theory [19]. Given a causal graph, it is possible to ensure certain safety properties by removing paths in that causal graph [2]. Giving learning agents these kinds of _path-specific objectives_ (PSOs) is also applicable when deception is the property that is considered unsafe. These methods ensure that the agent will not deceive a human providing feedback, however, it will also ensure that the agent will not persuade, teach, or coordinate with the human (i.e., it will not try to influence the human in any way). Shielding is another class of methods for ensuring safety in learning agents [15, 7, 18, 13, 8, 9, 3]. A shield is a kind of monitor. In addition to monitoring, the shield replaces an unsafe action with a safe action if the verification returns that the policy does not satisfy the safety specification.
In this paper, we make three contributions: (1) We introduce a shielding algorithm to ensure that an agent does not deceive other agents. (2) We introduce two simple games to evaluate deception in agents. (3) We evaluate our algorithm in these environments against an algorithm based on PSO. This paper is organised as follows: In Section 2, we recapitulate the definition of deception in SCGs. In Section 3, we introduce our algorithm and compare it to the PSO algorithm. We then conclude in Section 4.
## 2 Defining and detecting deception
**Structural Causal Games (SCGs)** offer a representation of causality in games [11]. An SCG is a directed acyclic graph containing variables and causal edges between them. There are three types of variables: chance variables (\(X\)), decision variables (\(D\)) and utility variables (\(U\)). Along with the graph, an SCG defines the conditional probability distribution (CPD) over each (non-decision) variable, given its parents. Agents' policies choose the CPDs over their decision variables, and agents choose their policies to
maximise the expected sum of utility and we use the Nash equilibrium concept. At the beginning of the game a _setting_ is sampled from the prior over the game; given a setting and a policy profile, the value of any variable is uniquely determined. Kenton et al. [11] define _agents_ in SCGs as systems that would adapt their policy if their actions influenced the world in a different way. This is the relevant notion of agency, as we define belief and intent based on how the agent would adapt its behaviour to such changes.
We now introduce a simple signalling game, where an agent that can be weak or strong tries to avoid being attacked by another agent that wants to attack them only if they are weak. They can defend or retreat, and their decision is observed by the other agent.
**Example 1 (War game fig. 1):** A signaler S has type \(X\in\{strong,weak\}\). At the start of the game, \(S\) observes \(X\), but the target agent \(T\) does not. The agents have decisions \(D^{S}\in\{retreat,defend\}\) and \(D^{T}\in\{\neg attack,attack\}\). A weak \(S\) prefers to retreat whereas a strong \(S\) prefers to defend. \(T\) prefers to attack only if \(S\) is weak. Regardless of type, \(S\) does not want to be attacked (and cares more about being attacked than about their own action). \(X\) follows a \(Bernoulli(0.9)\) distribution so that \(S\) is strong with probability \(0.9\). \(U^{T}=1\) if \(T\) attacks a weak \(S\) or does not attack a strong \(S\), and \(0\) otherwise. \(S\) gains utility \(2\) for not getting attacked, and utility \(1\) for performing the action preferred by their type (e.g., utility \(1\) for retreating if they are weak).
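Because the game is tiny, its pure-strategy Nash equilibria can be enumerated by brute force. The sketch below is our own illustration using the payoffs of Example 1; among the equilibria it reports is the one discussed next, in which the signaller always defends and the target attacks only after observing a retreat.

```python
# Brute-force the pure-strategy Nash equilibria of the war game (Example 1); our own sketch.
from itertools import product

P_STRONG = 0.9
TYPES = ["strong", "weak"]
S_ACTIONS = ["defend", "retreat"]
T_ACTIONS = ["no_attack", "attack"]

def u_s(x, ds, dt):
    # S gains 2 for not being attacked and 1 for playing its type-preferred action.
    preferred = "defend" if x == "strong" else "retreat"
    return (2.0 if dt == "no_attack" else 0.0) + (1.0 if ds == preferred else 0.0)

def u_t(x, dt):
    # T gains 1 for attacking a weak S or sparing a strong S.
    return 1.0 if (dt == "attack") == (x == "weak") else 0.0

# S policies map the type to an action; T policies map the observed S action to an action.
S_POLICIES = list(product(S_ACTIONS, repeat=2))   # (action if strong, action if weak)
T_POLICIES = list(product(T_ACTIONS, repeat=2))   # (reply to defend, reply to retreat)

def expected(sp, tp):
    eu_s = eu_t = 0.0
    for x, p in zip(TYPES, (P_STRONG, 1.0 - P_STRONG)):
        ds = sp[TYPES.index(x)]
        dt = tp[S_ACTIONS.index(ds)]
        eu_s += p * u_s(x, ds, dt)
        eu_t += p * u_t(x, dt)
    return eu_s, eu_t

for sp, tp in product(S_POLICIES, T_POLICIES):
    eu_s, eu_t = expected(sp, tp)
    s_best = all(expected(alt, tp)[0] <= eu_s + 1e-9 for alt in S_POLICIES)
    t_best = all(expected(sp, alt)[1] <= eu_t + 1e-9 for alt in T_POLICIES)
    if s_best and t_best:
        print("Nash equilibrium:", dict(zip(TYPES, sp)), dict(zip(S_ACTIONS, tp)))
```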
_To deceive is to intentionally cause to have a false belief that is not believed to be true_[13]. Past work defines belief, intention, and deception in SCGs [18]; these definitions only refer to agent behaviour.
**Belief** Agents have beliefs over _propositions_\(\phi\), i.e., Boolean formula of variable assignments (e.g., \(\phi:X=x\wedge\neg Y=y\)). An agent _believes_ a proposition \(\phi\) if \(1\)) they act as though they observed \(\phi\) is true; \(2\)) they would have acted differently if they observed \(\phi\) was false. An agent has a _true/false belief_ if they believe \(\phi\) and \(\phi=true/false\).
**Example 1 (continued):** Since \(S\)'s probability of being weak is low, its optimal policy is to always defend, in order to signal a strong type. \(T\)'s best policy in this case is to attack if and only if \(S\) retreats. These two policies form a Nash equilibrium. When \(X=weak\), \(T\) believes the proposition \(\phi:X=Strong\), as \(1\)) if they had observed that \(X=Strong\), they would not have attacked, and \(2\)) if they had observed that \(X=Weak\), they would attack. Therefore, they respond to \(\phi\), and they act as if \(\phi=true\), so the two conditions for belief are met. When \(X=weak\), \(\phi=false\), so \(T\) has a false belief about \(\phi\).
**Intention** Previous work defines notions of intention, suitable for algorithms, in causal models [9, 3, 18]. Essentially, an agent _intentionally causes_ the outcomes which provide sufficient reason for it to choose its policy over an alternate policy. What _intent to cause_ means, intuitively, is that if the outcomes the agent wanted were guaranteed to happen anyway, they would not mind choosing an alternative policy [18].
**Example 1 (continued):** Under the Nash policy, \(S\) intends to cause \(T\) to not attack, w.r.t. the alternative (honest) policy (\(defend\) when \(X=strong\) and \(retreat\) when \(X=weak\)), because if \(T\)'s policy was fixed to \(\neg attack\), then \(S\) would choose the honest policy. Hence, \(S\) intends to cause \(D^{T}=\neg attack\).
**Deception** An agent \(S\) _deceives_ an agent \(T\) about a proposition \(\phi\) if \(1\)) \(S\) intentionally causes \(T\)'s decision \(D^{T}\); \(2\)) \(T\) has a false belief about \(\phi\); \(3\)) \(S\) does not believe \(\phi\).
**Example 1 (continued):** Under the Nash policies, \(S\) deceives \(T\) about their type when \(S\) is \(weak\). As seen above, \(T\) believes that \(X=Strong\), \(S\) intends for \(D^{T}\) to be \(\neg attack\), and \(S\) does not believe that \(X=strong\), so all the conditions for deception are met.
Figure 1: Ex.1 SCG graph. Chance variables are circular, decisions square, utilities diamond and the latter two are colour coded by their association with different agents. Solid edges represent causal dependence and dotted edges are information links.
## 3 Mitigating Deception
We perform experiments in two examples, optimising agents to play these two games with no mitigation, with PSO, and with shielding. We compare these methods on their optimality and deceptiveness.
**Path Specific Objective (PSO) [1]** prevents \(S\) from learning a deceptive policy by removing \(S\)'s ability to influence \(T\)'s decision during training. In SCGs, this corresponds to removing the path in the graph between \(D^{S}\) and \(D^{T}\). The PSO algorithm is shown in algorithm 1.
**Example 1 (continued):** In the war game, we remove the edge from \(D^{S}\) to \(D^{T}\). \(S\) is now only interested in the utility it gets directly from its decision, meaning it will learn the honest policy. Therefore, \(S\) can be trained with PSO, and it will play the game without being deceptive. However, this removes \(S\)'s ability to learn to influence \(T\) in any way, including positively. In the following example, the only strategy for achieving utility is to influence the other agent, and so the PSO agent does not learn anything.
**Example 2 (fig. 2):** The variable \(X\in\{0,1,2\}\) is sampled uniformly, and \(Y\in\{0,1\}\) follows a \(Bernoulli(0.1)\) distribution. \(S\) observes both \(X\) and \(Y\), while \(T\) does not. \(D^{T}\in\{0,1,2\}\) and \(T\)'s objective is to correctly bet on the value of \(X\). \(D^{S}\in\{0,1,2\}\), and \(S\) gets some utility when \(T\) correctly bets on \(X\), but when \(Y=1\), \(S\) can get more utility if \(D^{T}=(X+1)\mod 3\). \(T\) observes \(D^{S}\).
This example differs from the first one as \(S\)'s only way to get utility is to influence \(T\)'s decision. \(S\) can adopt two sensible policies, which are to always report \(X\) (the honest policy), or to, when \(Y=1\), report on \((X+1)\mod 3\) instead (a deceptive policy, \(S\) intentionally causes \(T\) to believe that \(X\) has the wrong value). Since \(Y\) is rarely 1, as before, \(T\)'s optimal strategy remains to follow \(S\)'s decision even if it is sometimes deceptive. If we use PSO with this game, \(S\) will have no way to influence its utility, and \(T\) will have no information on the value of \(X\). Therefore, any policies for \(S\) and \(T\) are optimal and the agents will not learn anything. We introduce the shielding algorithm to solve this problem by preventing deceptive policies from being learned, in a more fine-grained way than PSO.
**Shielding** uses a safety constraint, and checks that an agent's action or policy satisfies the constraint before letting it perform that action. In our case, we shield the whole policy rather than individual decisions, and we use the shield during training to prevent the agent learning deceptive policies. The shield used is the deception verification presented in Algorithm 3.
```
Require: An SCG \(\mathcal{M}=(\mathcal{G},\theta)\), graphical criterion \(\mathcal{C}\), policies \(\pi^{-i}\), natural distributions \(\mathcal{N}\).
Ensure: PSO-optimal policy \(\pi^{i}\)
1: Reduce \(\mathcal{G}\) to \(\mathcal{G}^{\prime}\) using \(\mathcal{C}\).
2: Impute policies \(\pi^{-i}\) and natural distributions from \(\mathcal{N}\) to those variables with fewer parents in \(\mathcal{G}^{\prime}\) to obtain \(\theta^{\prime}\).
3: Train an agent in \(\mathcal{M}^{\prime}=(\mathcal{G}^{\prime},\theta^{\prime})\) to obtain policy \(\pi^{i}\).
```
**Algorithm 1** Path-Specific Objectives
We built a simple environment to apply the above definitions and to investigate ways to train non-deceptive agents in SCGs. \(S\) (the potentially deceptive agent) is trained with a minimal version of RL, where \(S\) plays several games with random settings and policies. We assume that \(T\) (the target of deception) has a fixed Nash policy, because we work with games where the occasions for \(S\) to benefit from deception are rare, and the best policy for \(T\) remains to believe \(S\)'s signal, despite it being sometimes false. We implemented algorithm 3, which, given a policy and a reference policy, indicates whether the policy is deceptive w.r.t. the reference policy, by testing every possible setting. Hence algorithm 3 is complete and sound. We initialize a known-safe policy as the reference. As soon as a better-performing safe policy is found, we use this as the new reference.

Figure 2: Ex. 2 SCG Graph
```
Require: SCG \(\mathcal{M}=(\mathcal{G},\boldsymbol{\theta})\), policy profile \(\boldsymbol{\pi}\), reference policy \(\pi_{\text{ref}}\), agent \(S\), agent \(T\), proposition \(\phi\)
Ensure: boolean, true if \(S\) deceives \(T\) about \(\phi\) under \(\boldsymbol{\pi}\) w.r.t. \(\pi_{\text{ref}}\)
1: for \(s\) in settings do
2: initialise \(\mathcal{M}\) with \(s\) and \(\boldsymbol{\pi}\)
3: compute whether \(S\) intends to cause \(D^{T}\) under \(\mathcal{M}\), \(\boldsymbol{\pi}\) and \(\pi_{\text{ref}}\)
4: compute whether \(S\) and \(T\) believe \(\phi\) under \(\mathcal{M}\) and \(\boldsymbol{\pi}\)
5: compute whether \(\phi\) is true under \(\mathcal{M}\) and \(s\)
6: deceptive \(\gets S\) intends to cause \(D^{T}\) & \(T\) believes \(\phi\) & \(S\) does not believe \(\phi\) & \(\phi\) is false
7: if deceptive then return true
8: end for
9: if not deceptive then return false
```
**Algorithm 3** Deception Check
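To make the shielded training procedure concrete, the following Python sketch shows how Algorithm 3 can act as the shield inside the minimal RL loop described above. It is only an illustration: the `scg` interface (`enumerate_settings`, `initialise`, `intends_to_cause`, `believes`, `proposition_true`, `profile_with`) and the `sample_policy` and `utility` callables are hypothetical stand-ins for the SCG machinery and the learning step, not an existing library.

```python
def is_deceptive(scg, policy_profile, ref_policy, S, T, phi):
    """Algorithm 3: exhaustive check over all settings of whether S deceives T
    about phi under policy_profile, relative to the reference policy."""
    for setting in scg.enumerate_settings():
        scg.initialise(setting, policy_profile)
        intent   = scg.intends_to_cause(S, T, policy_profile, ref_policy)
        t_belief = scg.believes(T, phi)
        s_belief = scg.believes(S, phi)
        phi_true = scg.proposition_true(phi, setting)
        if intent and t_belief and (not s_belief) and (not phi_true):
            return True
    return False


def shielded_training(scg, S, T, phi, safe_policy, sample_policy, utility, n_iters=1000):
    """Minimal shielded loop: candidate policies flagged by the deception check
    are discarded; the best safe policy found so far becomes the new reference."""
    best, best_value = safe_policy, utility(scg, safe_policy)
    for _ in range(n_iters):
        candidate = sample_policy(best)                   # random exploration step
        profile = scg.profile_with(S, candidate)          # T keeps its fixed Nash policy
        if is_deceptive(scg, profile, best, S, T, phi):   # the shield
            continue
        value = utility(scg, candidate)
        if value > best_value:
            best, best_value = candidate, value           # new safe reference policy
    return best
```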
_Results_ are summarised in table 1. For Ex.1, both PSO and shielding learn the optimal non-deceptive policy, whereas when no mitigation is used the optimal (deceptive) policy is learned. For Ex.2, shielding learns the optimal non-deceptive policy, but the PSO-agent cannot learn anything, as in this example the only way for \(S\) to gain utility is to influence \(T\).
## 4 Conclusion
_Summary_ We introduce a novel shielding algorithm for mitigating deceptive learning agents. We show, in two toy environments, that our algorithm has advantages over previous methods.
_Limitations and future work_ The examples are simplistic, and the optimal policies are very easy to find analytically without doing any training. This work acts as a proof of concept for the idea of automatically detecting and preventing deception while training. Many simplifying assumptions are made, e.g. the fact that the games only have one time-step, or the assumption that one of the policies is fixed. In addition, the verification is exhaustive on the setting space. This works with the small domains of these examples but might become intractable for larger and more realistic problems, which could require Monte-Carlo sampling of the setting, or a latent representation of it. Furthermore, shielding requires an initial safe reference policy, and its convergence to good safe policies is unknown.
\begin{table}
\begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{ex. 1} & \multicolumn{2}{c}{ex. 2} \\ & deceptive & performance & deceptive & performance \\ \hline shielding **(our)** & **no** & optimal-honest & **no** & optimal-honest \\ PSO & **no** & optimal-honest & **no** & sub-optimal \\ no mitigation & yes & **optimal** & yes & **optimal** \\ \hline \end{tabular}
\end{table}
Table 1: Results for examples 1 and 2
|
2307.12196
|
Error propagation in an explicit and an implicit numerical method for
Volterra integro-differential equations
|
We study error propagation in both an explicit and an implicit method for
solving Volterra integro-differential equations. We determine the relationship
between local and global errors. We derive upper bounds for the global error,
and show that the global order for both methods is expected to be first-order.
A few numerical examples illustrate our results.
|
J. S. C. Prentice
|
2023-07-23T01:06:43Z
|
http://arxiv.org/abs/2307.12196v1
|
Error propagation in an explicit and an implicit numerical method for Volterra integro-differential equations
###### Abstract
We study error propagation in both an explicit and an implicit method for solving Volterra integro-differential equations. We determine the relationship between local and global errors. We derive upper bounds for the global error, and show that the global order for both methods is expected to be first-order. A few numerical examples illustrate our results.
## 1 Introduction
Recently, we presented explicit and implicit numerical methods for solving the Volterra integro-differential equation
\[y^{\left(n\right)}\left(x\right)=f\left(x,y\right)+\int\limits_{x_{0}}^{x}K \left(x,y\left(t\right),t\right)dt,\ \ \ \ x>x_{0}, \tag{1}\]
using numerous examples to demonstrate the performance of the methods, and also studying the stability of the methods [1][2]. In this paper, we investigate the propagation of numerical error in these methods. Not only is this an interesting study in its own right, it also allows us to learn about upper bounds on the global error, and the order of the error.
## 2 Notation and terminology
We deviate slightly from the notation used in our previous work: here, \(w\) denotes the approximate solution, and \(y\) denotes the true solution. The nodes are labelled as
\[x_{0}<x_{1}<x_{2}<\ldots<x_{i}<x_{i+1}<\ldots<x_{f}\]
and \(h\) is the uniform spacing between the nodes - the _stepsize_. We focus our attention on the case of \(n=1\) in (1). We note that \(f\) and \(K\) are assumed to be suitably smooth so as to yield a unique solution and, in particular, \(K\) is not singular anywhere on the interval of integration.
The explicit method is given by
\[w_{i+1} =w_{i}+hf\left(x_{i},w_{i}\right) \tag{2}\] \[\quad+\frac{h^{2}}{2}\left(\sum_{j=0}^{i}2K\left(x_{i},w_{j},x_{j }\right)-K\left(x_{i},w_{0},x_{0}\right)-K\left(x_{i},w_{i},x_{i}\right)\right)\] \[\equiv M_{E}\left(w_{i}\right),\]
where we have implicitly defined \(M_{E}\left(w_{i}\right).\)
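As a concrete illustration, here is a short NumPy sketch of the explicit method (2); the function, the parameter names, and the driver at the end (which uses the stiff test equation of Section 4.2 with the Figure 1 parameters) are our own illustrative choices rather than code from [1] or [2].

```python
import numpy as np

def volterra_explicit(f, K, x0, y0, h, n_steps):
    """Explicit method (2): Euler for the derivative plus a composite
    trapezium rule for the integral term of the Volterra IDE."""
    x = x0 + h * np.arange(n_steps + 1)
    w = np.empty(n_steps + 1)
    w[0] = y0
    for i in range(n_steps):
        # sum_{j=0}^{i} 2 K(x_i, w_j, x_j) - K(x_i, w_0, x_0) - K(x_i, w_i, x_i)
        quad = sum(2.0 * K(x[i], w[j], x[j]) for j in range(i + 1))
        quad -= K(x[i], w[0], x[0]) + K(x[i], w[i], x[i])
        w[i + 1] = w[i] + h * f(x[i], w[i]) + 0.5 * h**2 * quad
    return x, w

# Example: the test equation of Section 4.2 with lambda = -100, gamma = -200 and
# h = 5e-3 (the parameters of Figure 1), integrated here up to x = 2.
lam, gam = -100.0, -200.0
x, w = volterra_explicit(lambda xi, yi: lam * (yi - 1.0),
                         lambda xi, yj, xj: gam * yj,
                         x0=0.0, y0=2.0, h=5e-3, n_steps=400)
```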
The implicit method is given by
\[w_{i+1}= w_{i}+hf\left(x_{i+1},w_{i+1}\right)\] \[\quad+\frac{h^{2}}{2}\left(\sum_{j=0}^{i+1}2K\left(x_{i+1},w_{j}, x_{j}\right)-K\left(x_{i+1},w_{0},x_{0}\right)-K\left(x_{i+1},w_{i+1},x_{i+1} \right)\right)\] \[\equiv M_{I}\left(w_{i}\right),\]
where we have implicitly defined \(M_{I}\left(w_{i}\right).\)
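Because \(w_{i+1}\) appears on both sides (in \(f\) and in the final trapezium weight), each implicit step requires solving a scalar nonlinear equation. A minimal sketch follows; the use of SciPy's `fsolve` as the root-finder is our own choice for illustration (any standard scalar solver would do) and is not prescribed by the method itself.

```python
import numpy as np
from scipy.optimize import fsolve

def volterra_implicit(f, K, x0, y0, h, n_steps):
    """Implicit method: at each step, solve for w_{i+1}, which enters both
    f(x_{i+1}, w_{i+1}) and the last weight of the trapezium sum."""
    x = x0 + h * np.arange(n_steps + 1)
    w = np.empty(n_steps + 1)
    w[0] = y0
    for i in range(n_steps):
        xi1 = x[i + 1]
        # part of the trapezium sum involving only the known values w_0, ..., w_i
        known = sum(2.0 * K(xi1, w[j], x[j]) for j in range(i + 1)) - K(xi1, w[0], x[0])
        def residual(z):
            return (w[i] + h * f(xi1, z)
                    + 0.5 * h**2 * (known + K(xi1, z, xi1)) - z)
        w[i + 1] = fsolve(residual, w[i])[0]    # previous value as the initial guess
    return x, w
```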
We also define \(M_{E}\left(y_{i}\right)\) and \(M_{I}\left(y_{i}\right)\) as follows:
\[y_{i+1} =y_{i}+hf\left(x_{i},y_{i}\right)\] \[\quad+\frac{h^{2}}{2}\left(\sum_{j=0}^{i}2K\left(x_{i},y_{j},x_{j }\right)-K\left(x_{i},y_{0},x_{0}\right)-K\left(x_{i},y_{i},x_{i}\right)\right)\] \[\equiv M_{E}\left(y_{i}\right).\]
\[y_{i+1}= y_{i}+hf\left(x_{i+1},y_{i+1}\right)\] \[\quad+\frac{h^{2}}{2}\left(\sum_{j=0}^{i+1}2K\left(x_{i+1},y_{j}, x_{j}\right)-K\left(x_{i+1},y_{0},x_{0}\right)-K\left(x_{i+1},y_{i+1},x_{i+1} \right)\right)\] \[\equiv M_{I}\left(y_{i}\right).\]
The _global_ error at \(x_{i+1}\) is defined as
\[\Delta_{i+1} =w_{i+1}-y_{i+1}=M_{E}\left(w_{i}\right)-y_{i+1}\quad\text{(explicit method)}\] \[\Delta_{i+1} =w_{i+1}-y_{i+1}=M_{I}\left(w_{i}\right)-y_{i+1}\quad\text{(implicit method)}\]
and the _local_ error at \(x_{i+1}\) is defined as
\[\varepsilon_{i+1} =M_{E}\left(y_{i}\right)-y_{i+1}\quad\text{(explicit method)}\] \[\varepsilon_{i+1} =M_{I}\left(y_{i}\right)-y_{i+1}\quad\text{(implicit method)}\]
The local error has the form
\[\varepsilon_{i+1}=\varepsilon_{i+1}^{D}+\varepsilon_{i+1}^{Q}=O\left(h^{2}\right)\]
where \(\varepsilon_{i+1}^{D}\) is the error associated with the Euler approximation to the derivative in the IDE, and \(\varepsilon_{i+1}^{Q}\) is the error associated with the composite Trapezium approximation to the integral in the IDE. These are \(O\left(h\right),\) at worst, but on multiplication by \(h,\) as required by the structure of the methods, these errors acquire, at worst, an \(O\left(h^{2}\right)\) character. The precise form of these errors will not concern us here; it is enough for our purposes to simply accept that \(\varepsilon_{i+1}=O\left(h^{2}\right).\) Nevertheless, we discuss this matter to some extent in the Appendix.
## 3 Analysis - explicit case
We consider the explicit case first, and do so in detail. The parameters \(\xi\) and \(\eta\) denote values appropriate for the various Taylor residual terms that arise.
### Error propagation
At \(x_{1}\) we have
\[w_{1} =\ w_{0}+hf\left(x_{0},w_{0}\right)\] \[\Rightarrow\Delta_{1}+y_{1} =\Delta_{0}+y_{0}+hf\left(x_{0},\Delta_{0}+y_{0}\right)\] \[=\Delta_{0}+y_{0}+hf\left(x_{0},y_{0}\right)+\Delta_{0}hf_{y} \left(x_{0},\xi_{0}\right)\] \[\Rightarrow\Delta_{1} =\left[y_{0}+hf\left(x_{0},y_{0}\right)-y_{1}\right]+\Delta_{0} \left(1+hf_{y}\left(x_{0},\xi_{0}\right)\right)\] \[=\varepsilon_{1}+\Delta_{0}\left(1+hf_{y}\left(x_{0},\xi_{0} \right)\right).\]
At \(x_{2}\) we have
\[w_{2} = w_{1}+hf\left(x_{1},w_{1}\right)+\frac{h^{2}}{2}K\left(x_{0},w_{ 0},x_{0}\right)+\frac{h^{2}}{2}K\left(x_{1},w_{1},x_{1}\right)\] \[\Rightarrow\Delta_{2}+y_{2} = \Delta_{1}+y_{1}+hf\left(x_{1},\Delta_{1}+y_{1}\right)+\frac{h^{ 2}}{2}K\left(x_{0},\Delta_{0}+y_{0},x_{0}\right)+\frac{h^{2}}{2}K\left(x_{1}, \Delta_{1}+y_{1},x_{1}\right)\] \[= \Delta_{1}+y_{1}+hf\left(x_{1},y_{1}\right)+\Delta_{1}hf_{y} \left(x_{1},\xi_{1}\right)\] \[+\frac{h^{2}}{2}\left(\begin{array}{c}K\left(x_{0},y_{0},x_{0} \right)+\Delta_{0}K_{y}\left(x_{0},\eta_{0},x_{0}\right)\dots\\ \dots+K\left(x_{1},y_{1},x_{1}\right)+\Delta_{1}K_{y}\left(x_{1},\eta_{1},x_{ 1}\right)\end{array}\right)\] \[\Rightarrow\Delta_{2} = \left[y_{1}+hf\left(x_{1},y_{1}\right)+\frac{h^{2}}{2}\left(K \left(x_{0},y_{0},x_{0}\right)+K\left(x_{1},y_{1},x_{1}\right)\right)-y_{2}\right]\] \[+\Delta_{1}\left(1+hf_{y}\left(x_{1},\xi_{1}\right)+\frac{h^{2} }{2}K_{y}\left(x_{1},\eta_{1},x_{1}\right)\right)+\Delta_{0}\frac{h^{2}}{2}K_ {y}\left(x_{0},\eta_{0},x_{0}\right)\] \[= \left[M_{E}\left(y_{1}\right)-y_{2}\right]+\Delta_{1}\left(1+hf_ {y}\left(x_{1},\xi_{1}\right)+\frac{h^{2}}{2}K_{y}\left(x_{1},\eta_{1},x_{1} \right)\right)+\Delta_{0}\frac{h^{2}}{2}K_{y}\left(x_{0},\eta_{0},x_{0}\right)\] \[= \varepsilon_{2}+\Delta_{1}\left(1+hf_{y}\left(x_{1},\xi_{1} \right)+\frac{h^{2}}{2}K_{y}\left(x_{1},\eta_{1},x_{1}\right)\right)+\sum_{j=0 }^{0}\Delta_{j}\frac{h^{2}}{2}K_{y}\left(x_{j},\eta_{j},x_{j}\right),\]
and at \(x_{3}\) we have
\[w_{3}= \,w_{2}+hf\left(x_{2},w_{2}\right)+\frac{h^{2}}{2}K\left(x_{0},w_{0}, x_{0}\right)+h^{2}K\left(x_{1},w_{1},x_{1}\right)+\frac{h^{2}}{2}K\left(x_{2},w_{2},x_{2}\right)\] \[\Rightarrow\Delta_{3}+y_{3}= \,\Delta_{2}+y_{2}+hf\left(x_{2},\Delta_{2}+y_{2}\right)+\frac{h^ {2}}{2}K\left(x_{0},\Delta_{0}+y_{0},x_{0}\right)+h^{2}K\left(x_{1},\Delta_{1 }+y_{1},x_{1}\right)\] \[+\frac{h^{2}}{2}K\left(x_{2},\Delta_{2}+y_{2},x_{2}\right)\] \[= \,\Delta_{2}+y_{2}+hf\left(x_{2},y_{2}\right)+\Delta_{2}hf_{y} \left(x_{2},\xi_{2}\right)\] \[+\frac{h^{2}}{2}\left(\begin{array}{c}K\left(x_{0},y_{0},x_{0} \right)+\Delta_{0}K_{y}\left(x_{0},\eta_{0},x_{0}\right)+2K\left(x_{1},y_{1}, x_{1}\right)\ldots\\ \ldots+2\Delta_{1}K_{y}\left(x_{1},\eta_{1},x_{1}\right)+K\left(x_{2},y_{2},x_ {2}\right)+\Delta_{2}K_{y}\left(x_{2},\eta_{2},x_{2}\right)\end{array}\right)\]
\[\Rightarrow\Delta_{3}= \,\left[y_{2}+hf\left(x_{2},y_{2}\right)+\frac{h^{2}}{2}\left(K \left(x_{0},y_{0},x_{0}\right)+2K\left(x_{1},y_{1},x_{1}\right)+K\left(x_{2}, y_{2},x_{2}\right)\right)-y_{3}\right]\] \[+\Delta_{2}\left(1+hf_{y}\left(x_{2},\xi_{2}\right)+\frac{h^{2}} {2}K_{y}\left(x_{2},\eta_{2},x_{2}\right)\right)+\Delta_{1}h^{2}K\left(x_{1}, \eta_{1},x_{1}\right)+\Delta_{0}\frac{h^{2}}{2}K_{y}\left(x_{0},\eta_{0},x_{0}\right)\] \[= \,\left[M_{E}\left(y_{2}\right)-y_{3}\right]+\Delta_{2}\left(1+ hf_{y}\left(x_{2},\xi_{2}\right)+\frac{h^{2}}{2}K_{y}\left(x_{2},\eta_{2},x_{2} \right)\right)+\Delta_{1}h^{2}K\left(x_{1},\eta_{1},x_{1}\right)\] \[+\Delta_{0}\frac{h^{2}}{2}K_{y}\left(x_{0},\eta_{0},x_{0}\right)\] \[= \,\varepsilon_{3}+\Delta_{2}\left(1+hf_{y}\left(x_{2},\xi_{2} \right)+\frac{h^{2}}{2}K_{y}\left(x_{2},\eta_{2},x_{2}\right)\right)+\sum_{j=0 }^{1}\Delta_{j}\frac{h^{2}}{2}K_{y}\left(x_{j},\eta_{j},x_{j}\right)\] \[+\sum_{j=1}^{1}\Delta_{j}\frac{h^{2}}{2}K_{y}\left(x_{j},\eta_{j},x_{j}\right).\]
In general, for \(i>1\), we have
\[\Delta_{i+1}= \,\varepsilon_{i+1}+\Delta_{i}\left(1+hf_{y}\left(x_{i},\xi_{i} \right)+\frac{h^{2}}{2}K_{y}\left(x_{i},\eta_{i},x_{i}\right)\right)\] \[+\frac{h^{2}}{2}\left(\sum_{j=0}^{i-1}\Delta_{j}K_{y}\left(x_{j}, \eta_{j},x_{j}\right)+\sum_{j=1}^{i-1}\Delta_{j}K_{y}\left(x_{j},\eta_{j},x_{j} \right)\right)\] \[\Rightarrow\Delta_{i+1}= \,\widetilde{\varepsilon}_{i+1}+\widetilde{\alpha}_{i}\Delta_{i}, \tag{4}\]
where
\[\widetilde{\varepsilon}_{i+1} \equiv\varepsilon_{i+1}+\frac{h^{2}}{2}\left(\sum_{j=0}^{i-1} \Delta_{j}K_{y}\left(x_{j},\eta_{j},x_{j}\right)+\sum_{j=1}^{i-1}\Delta_{j}K_{y }\left(x_{j},\eta_{j},x_{j}\right)\right)\] \[=\varepsilon_{i+1}+\frac{h^{2}}{2}\left(\sum_{j=0}^{i-1}2\Delta_{ j}K_{y}\left(x_{j},\eta_{j},x_{j}\right)-\frac{\Delta_{0}}{2}K_{y}\left(x_{0}, \eta_{0},x_{0}\right)\right)\] \[=\varepsilon_{i+1}+h^{2}\sum_{j=1}^{i-1}\Delta_{j}K_{y}\left(x_{ j},\eta_{j},x_{j}\right)\ \ \text{if}\ \Delta_{0}=0.\]
and
\[\widetilde{\alpha}_{i}\equiv 1+hf_{y}\left(x_{i},\xi_{i}\right)+\frac{h^{2}}{2 }K_{y}\left(x_{i},\eta_{i},x_{i}\right)=1+h\left(f_{y}\left(x_{i},\xi_{i} \right)+\frac{h}{2}K_{y}\left(x_{i},\eta_{i},x_{i}\right)\right).\]
Note that \(\widetilde{\varepsilon}_{i+1}\) is not a local error, but it is convenient to combine the terms in this way, as we shall soon see. Also, we can write
\[\widetilde{\varepsilon}_{i+1} =\left(C_{i+1}^{1}+\sum_{j=0}^{i-1}\Delta_{j}K_{y}\left(x_{j}, \eta_{j},x_{j}\right)\right)h^{2}\] \[\equiv\left(C_{i+1}^{1}+C_{i+1}^{2}\right)h^{2}\] \[=\widetilde{C}_{i+1}h^{2}\]
wherein the coefficients \(C_{i+1}^{1},C_{i+1}^{2}\) and \(\widetilde{C}_{i+1}\) have been implicitly defined. Equation (4) is the defining expression for the propagation of error in the explicit method.
In the remainder of this paper, we will assume \(\Delta_{0}=0.\)
### Upper bounds
Assume \(f_{y}+\frac{h}{2}K_{y}>0.\) With
\[\widetilde{\varepsilon}_{\max} \equiv\max_{\left[x_{0},x_{f}\right]}\left|\widetilde{\varepsilon }_{i}\right|=\max_{\left[x_{0},x_{f}\right]}\left|\widetilde{C}_{i}\right|h^ {2}\equiv\widetilde{C}h^{2}\] \[\widetilde{\alpha} \equiv 1+\max_{\left[x_{0},x_{f}\right]}\left(hf_{y}+\frac{h^{2}}{2}K _{y}\right)\equiv 1+hL\] \[\Rightarrow L =\max_{\left[x_{0},x_{f}\right]}\left(f_{y}+\frac{h}{2}K_{y}\right)\]
we find
\[|\Delta_{i+1}| \leqslant\widetilde{\varepsilon}_{\max}\left(1+\widetilde{\alpha}+ \widetilde{\alpha}^{2}+\ldots+\widetilde{\alpha}^{i}\right)\] \[=\widetilde{\varepsilon}_{\max}\left(\frac{\widetilde{\alpha}^{i+1 }-1}{\widetilde{\alpha}-1}\right)\] \[=\frac{\widetilde{\varepsilon}_{\max}}{hL}\left(\left(1+hL \right)^{i+1}-1\right)\] \[=\frac{\widetilde{\varepsilon}_{\max}}{hL}\left(\left(1+\frac{ \left(i+1\right)hL}{i+1}\right)^{i+1}-1\right)\] \[=\frac{\widetilde{C}h^{2}}{hL}\left(\left(1+\frac{\left(x_{i+1}- x_{0}\right)L}{i+1}\right)^{i+1}-1\right)\] \[\approx\frac{\widetilde{C}h}{L}\left(e^{\left(x_{i+1}-x_{0} \right)L}-1\right)\ \ \text{for large }i. \tag{5}\]
If \(f_{y}+\frac{h}{2}K_{y}<0\) we define \(L\equiv-\max_{\left[x_{0},x_{f}\right]}\left|f_{y}+\frac{h}{2}K_{y}\right|.\) We can then choose \(h\) so that \(\widetilde{\alpha}=1+hL>0\), and we then find
\[|\Delta_{i+1}|\lesssim\left|\frac{\widetilde{C}h}{L}\left(e^{\left(x_{i+1}-x_ {0}\right)L}-1\right)\right|\approx\left|-\frac{\widetilde{C}}{L}\right|h\ \text{ if }f_{y}+\frac{h}{2}K_{y}\ll 0. \tag{6}\]
If \(f_{y}+\frac{h}{2}K_{y}=0\) we have \(\widetilde{\alpha}=1\) and so
\[|\Delta_{i+1}| \leqslant\widetilde{\varepsilon}_{\max}(\underbrace{1+1+1+\ldots+1}_{i+1\text{ times}})\] \[=\widetilde{C}h\left(i+1\right)h\] \[=\widetilde{C}\left(x_{i+1}-x_{0}\right)h.\]
We see that all of these bounds exhibit a first-order \(\left(O\left(h\right)\right)\) character.
### Order
Assume \(h\) is sufficiently small so that \(\widetilde{\alpha}_{i}\approx 1\); then
\[\Delta_{1} =\widetilde{\varepsilon}_{1}+\widetilde{\alpha}_{0}\Delta_{0}= \widetilde{\varepsilon}_{1}=\left(C_{1}^{1}+\Delta_{0}K_{y}\left(x_{0},\eta_{0},x_{0}\right)\right)h^{2}=C_{1}^{1}h^{2}\] \[\Delta_{2} =\widetilde{\varepsilon}_{2}+\widetilde{\alpha}_{1}\Delta_{1} \approx\left(C_{2}^{1}+\Delta_{0}K_{y}\left(x_{0},\eta_{0},x_{0}\right)+ \Delta_{1}K_{y}\left(x_{1},\eta_{1},x_{1}\right)\right)h^{2}+\Delta_{1}\] \[=\left(C_{2}^{1}+C_{1}^{1}h^{2}K_{y}\left(x_{1},\eta_{1},x_{1} \right)\right)h^{2}+C_{1}^{1}h^{2}\approx\left(C_{2}^{1}+C_{1}^{1}\right)h^{2}\] \[=\left(\frac{C_{2}^{1}+C_{1}^{1}}{2}\right)2h^{2}=\left(\frac{C_ {2}^{1}+C_{1}^{1}}{2}\right)\left(2h\right)h\] \[\Delta_{3} =\left(\frac{C_{3}^{1}+C_{2}^{1}+C_{1}^{1}}{3}\right)3h^{2}= \left(\frac{C_{3}^{1}+C_{2}^{1}+C_{1}^{1}}{3}\right)\left(3h\right)h\] \[\quad\vdots\] \[\Delta_{i+1} =\left(\frac{C_{i+1}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{i+1}\right) \left(i+1\right)h^{2}=\left(\frac{C_{i+1}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{i+1 }\right)\left(\left(i+1\right)h\right)h\]
Now, let \(x_{d}>x_{0}\) and choose \(h_{1}\) so that
\[x_{d}=x_{0}+m_{1}h_{1}.\]
Hence,
\[\Delta_{m_{1}} =\left(\frac{C_{m_{1}}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{m_{1}} \right)\left(m_{1}h_{1}\right)h_{1}\] \[=\left(\frac{C_{m_{1}}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{m_{1}} \right)\left(x_{d}-x_{0}\right)h_{1}.\]
Now, choose \(m_{2}\neq m_{1}\) and \(h_{2}\) such that
\[x_{d}=x_{0}+m_{2}h_{2}.\]
Note that
\[m_{1}h_{1}=m_{2}h_{2}\Rightarrow h_{2}=\left(\frac{m_{1}}{m_{2}}\right)h_{1}.\]
Hence,
\[\Delta_{m_{2}} =\left(\frac{C_{m_{2}}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{m_{2}} \right)\left(m_{2}h_{2}\right)h_{2}\] \[=\left(\frac{C_{m_{2}}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{m_{2}} \right)\left(x_{d}-x_{0}\right)h_{2}\] \[=\left(\frac{C_{m_{2}}^{1}+\ldots+C_{2}^{1}+C_{1}^{1}}{m_{2}} \right)\left(x_{d}-x_{0}\right)\left(\frac{m_{1}}{m_{2}}\right)h_{1}\] \[\approx\Delta_{m_{1}}\left(\frac{m_{1}}{m_{2}}\right)\]
so that the global error at \(x_{d}\) scales in the same way as the stepsize, i.e. the global error is first-order. This aligns with the \(O\left(h\right)\) nature of the upper bounds considered earlier. Note that this implies that the explicit method is _convergent_ (\(\Delta\to 0\) as \(h\to 0\)).
## 4 Analysis - implicit case
### Error propagation
For the implicit method we have
\[w_{1}=\ w_{0}+hf\left(x_{1},w_{1}\right)+\frac{h^{2}}{2}\left(K\left(x_{1},w_{0},x_{0}\right)+K\left(x_{1},w_{1},x_{1}\right)\right)\]
which gives (with \(\Delta_{0}=0\))
\[\Delta_{1}+y_{1} =\ y_{0}+hf\left(x_{1},\Delta_{1}+y_{1}\right)+\frac{h^{2}}{2} \left(K\left(x_{1},y_{0},x_{0}\right)+K\left(x_{1},\Delta_{1}+y_{1},x_{1} \right)\right)\] \[\Rightarrow\Delta_{1} =\frac{\left[y_{0}+hf\left(x_{1},y_{1}\right)+\frac{h^{2}}{2} \left(K\left(x_{1},y_{0},x_{0}\right)+K\left(x_{1},y_{1},x_{1}\right)\right)- y_{1}\right]}{1-hf_{y}\left(x_{1},\xi_{1}\right)-\frac{h^{2}}{2}K_{y}\left(x_{1}, \eta_{1},x_{1}\right)}\] \[\Rightarrow\Delta_{1} =\widetilde{\widetilde{\alpha}}_{1}\left(M_{I}\left(y_{0}\right) -y_{1}\right)\] \[=\widetilde{\widetilde{\alpha}}_{1}\varepsilon_{1},\]
wherein we have implicitly defined \(\widetilde{\widetilde{\alpha}}_{1}\).
At \(x_{2}\) we find
\[\Delta_{2}=\frac{\varepsilon_{2}+\Delta_{1}\left(1+h^{2}K_{y}\left(x_{1}, \eta_{1},x_{1}\right)\right)}{1-hf_{y}\left(x_{2},\xi_{2}\right)-\frac{h^{2}} {2}K_{y}\left(x_{2},\eta_{2},x_{2}\right)}\]
and at \(x_{3}\) we find
\[\Delta_{3}=\frac{\varepsilon_{3}+\Delta_{2}\left(1+h^{2}K_{y}\left(x_{2}, \eta_{2},x_{2}\right)\right)+\Delta_{1}h^{2}K_{y}\left(x_{1},\eta_{1},x_{1} \right)}{1-hf_{y}\left(x_{3},\xi_{3}\right)-\frac{h^{2}}{2}K_{y}\left(x_{3}, \eta_{3},x_{3}\right)}.\]
In general (for \(i>1\)), we have
\[\Delta_{i+1} =\frac{\varepsilon_{i+1}+\sum\limits_{j=1}^{i-1}\Delta_{j}h^{2}K _{y}\left(x_{j},\eta_{j},x_{j}\right)+\Delta_{i}\left(1+h^{2}K_{y}\left(x_{i}, \eta_{i},x_{i}\right)\right)}{1-hf_{y}\left(x_{i+1},\xi_{i+1}\right)-\frac{h^ {2}}{2}K_{y}\left(x_{i+1},\eta_{i+1},x_{i+1}\right)}\] \[\Rightarrow\Delta_{i+1} =\widetilde{\widetilde{\varepsilon}}_{i+1}+\widetilde{\widetilde{ \alpha}}_{i}\Delta_{i}, \tag{7}\]
where
\[\widetilde{\widetilde{\varepsilon}}_{i+1} \equiv\frac{\varepsilon_{i+1}+\sum\limits_{j=1}^{i-1}\Delta_{j}h^ {2}K_{y}\left(x_{j},\eta_{j},x_{j}\right)}{1-hf_{y}\left(x_{i+1},\xi_{i+1} \right)-\frac{h^{2}}{2}K_{y}\left(x_{i+1},\eta_{i+1},x_{i+1}\right)}\] \[\widetilde{\widetilde{\alpha}}_{i} \equiv\frac{1+h^{2}K_{y}\left(x_{i},\eta_{i},x_{i}\right)}{1-hf_{y }\left(x_{i+1},\xi_{i+1}\right)-\frac{h^{2}}{2}K_{y}\left(x_{i+1},\eta_{i+1},x_ {i+1}\right)}.\]
### Upper bound
We are most likely to use the implicit method when the IDE is stiff (both \(f_{y}<0\) and \(K_{y}<0\)). Hence, it is instructive to apply (7) to the test equation [2]
\[y^{\prime}\left(x\right) =\lambda\left(y\left(x\right)-1\right)+\gamma\int\limits_{0}^{x}y \left(t\right)dt \tag{8}\] \[y\left(0\right) =2,\ \lambda<0,\ \gamma<0\]
(where \(f=\lambda\left(y\left(x\right)-1\right)\) and \(K=\gamma y\left(t\right)\)), with solution
\[y\left(x\right) =e^{m_{1}x}+e^{m_{2}x}\] \[m_{1} =\frac{\lambda-\sqrt{\lambda^{2}+4\gamma}}{2},\ \ m_{2}=\frac{ \lambda+\sqrt{\lambda^{2}+4\gamma}}{2}\]
when \(m_{1}\) and \(m_{2}\) are real \(\left(\lambda^{2}+4\gamma\geqslant 0\right)\), and
\[y\left(x\right)=2e^{\frac{\lambda x}{2}}\cos\left(\frac{\sqrt{\left|\lambda^{ 2}+4\gamma\right|}}{2}x\right)\]
when \(m_{1}\) and \(m_{2}\) are complex \(\left(\lambda^{2}+4\gamma<0\right)\).
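For reference, a small helper (ours, not taken from the paper) that evaluates this closed-form solution; combined with the explicit-method sketch of Section 2 it gives the global errors \(\Delta_{i}=w_{i}-y_{i}\) used in the figures of Section 6.

```python
import numpy as np

def test_equation_exact(x, lam, gam):
    """Closed-form solution of the test equation (8) with y(0) = 2."""
    disc = lam**2 + 4.0 * gam
    if disc >= 0.0:                               # real roots m1, m2
        m1 = 0.5 * (lam - np.sqrt(disc))
        m2 = 0.5 * (lam + np.sqrt(disc))
        return np.exp(m1 * x) + np.exp(m2 * x)
    return 2.0 * np.exp(0.5 * lam * x) * np.cos(0.5 * np.sqrt(-disc) * x)

# Global error of the explicit method for the Figure 1 parameters,
# reusing volterra_explicit from the sketch in Section 2:
lam, gam, h = -100.0, -200.0, 5e-3
x, w = volterra_explicit(lambda xi, yi: lam * (yi - 1.0),
                         lambda xi, yj, xj: gam * yj, 0.0, 2.0, h, 400)
delta = w - test_equation_exact(x, lam, gam)      # Delta_i = w_i - y_i
```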
With \(Z\equiv h\lambda\) and \(W\equiv h^{2}\gamma\), we define \(L\) by
\[1+hL \equiv\frac{1+h^{2}K_{y}}{1-hf_{y}-\frac{h^{2}}{2}K_{y}}=\frac{1 +W}{1-Z-\frac{W}{2}}\] \[\Rightarrow L =\frac{Z+\frac{3W}{2}}{h\left(1-Z-\frac{W}{2}\right)}.\]
Since \(Z\) and \(W\) are both negative, \(L\) is negative, too. Also
\[L=\frac{Z+\frac{3W}{2}}{h\left(1-Z-\frac{W}{2}\right)} =\frac{h\lambda+\frac{3h^{2}\gamma}{2}}{h\left(1-h\lambda-\frac{h^ {2}\gamma}{2}\right)}\] \[=\frac{\lambda+\frac{3h\gamma}{2}}{\left(1-h\lambda-\frac{h^{2} \gamma}{2}\right)}.\]
With
\[\widetilde{\widetilde{\varepsilon}}_{\max} \equiv\max_{\left[x_{0},x_{f}\right]}\left|\widetilde{\widetilde {\varepsilon}}_{i}\right|\equiv\max_{\left[x_{0},x_{f}\right]}\left| \widetilde{\widetilde{C}}_{i}\right|h^{2}\equiv\widetilde{\widetilde{C}}h^{2}\] \[\widetilde{\widetilde{\alpha}} \equiv 1+hL\]
we find
\[|\Delta_{i+1}| \leqslant\widetilde{\widetilde{\varepsilon}}_{\max}\left|1+ \widetilde{\widetilde{\alpha}}+\widetilde{\widetilde{\alpha}}^{2}+\ldots+ \widetilde{\widetilde{\alpha}}^{i}\right|\] \[=\widetilde{\widetilde{\varepsilon}}_{\max}\left|\frac{ \widetilde{\widetilde{\alpha}}^{i+1}-1}{\widetilde{\widetilde{\alpha}}-1}\right|\] \[=\frac{\widetilde{\widetilde{\varepsilon}}_{\max}}{h\left|L \right|}\left|\left(1+hL\right)^{i+1}-1\right|\] \[=\frac{\widetilde{\widetilde{\varepsilon}}_{\max}}{h\left|L \right|}\left|\left(1+\frac{\left(i+1\right)hL}{i+1}\right)^{i+1}-1\right|\] \[=\frac{\widetilde{\widetilde{C}}h^{2}}{h\left|L\right|}\left| \left(1+\frac{\left(x_{i+1}-x_{0}\right)L}{i+1}\right)^{i+1}-1\right|\] \[\approx\left|\frac{\widetilde{\widetilde{C}}h}{L}\left(e^{\left(x _{i+1}-x_{0}\right)L}-1\right)\right|\ \text{ for large }i.\]
Since \(L<0\), we note that
\[\left|\frac{\widetilde{\widetilde{C}}h}{L}\left(e^{\left(x_{i+1}-x_{0}\right) L}-1\right)\right|\rightarrow\left|-\frac{\widetilde{\widetilde{C}}h}{L}\right|\]
if \(L\ll 0\), and/or if \(x_{i+1}-x_{0}\) becomes large (similar to the case considered in (6)).
### Order
To analyze the order of the implicit method, we assume that \(h\) is small enough so that
\[\widetilde{\widetilde{\varepsilon}}_{i+1} \approx\varepsilon_{i+1}+\sum_{j=1}^{i-1}\Delta_{j}h^{2}K_{y} \left(x_{j},\eta_{j},x_{j}\right)\] \[\widetilde{\widetilde{\alpha}}_{i} \approx 1.\]
Similar reasoning to the explicit case can now be used to find that the implicit method is expected to be first-order. Furthermore, this implies that the implicit method is convergent.
## 5 Comments
For the explicit method, given that \(\Delta_{1}=\varepsilon_{1}=\varepsilon_{1}^{D}+\varepsilon_{1}^{Q}\), and given that all subsequent global errors are written in terms of local errors and prior global errors, we have that \(\Delta_{i}\) is a function of local errors \(\varepsilon^{D}\) and \(\varepsilon^{Q}\), Jacobians \(f_{y}\) and \(K_{y}\) and the stepsize \(h\). For example, we find
\[\Delta_{4}=\varepsilon_{4}+\widetilde{\alpha}_{3}\varepsilon_{3}+\widetilde{ \alpha}_{3}\widetilde{\alpha}_{2}\varepsilon_{2}+\widetilde{\alpha}_{3} \widetilde{\alpha}_{2}\widetilde{\alpha}_{1}\varepsilon_{1}+h^{2}\left( \varepsilon_{1}K_{y}^{1}+\left(\varepsilon_{2}+\widetilde{\alpha}_{1} \varepsilon_{1}\right)K_{y}^{2}\right)\]
where
\[\widetilde{\alpha}_{k} =1+hf_{y}\left(x_{k},\xi_{k}\right)+\frac{h^{2}}{2}K_{y}\left(x_{k}, \eta_{k},x_{k}\right)\] \[K_{y}^{k} \equiv K_{y}\left(x_{k},\eta_{k},x_{k}\right).\]
Similar expressions obtain for \(\Delta_{5}\),\(\Delta_{6}\) and so on, and also for the case of the implicit method. It is interesting to note that, if the global \(\Delta_{i}\) error is known and the Jacobians can be reliably estimated (such as for the test equation), then the local errors \(\varepsilon_{i}\) (for the explicit method) can be estimated via the sequence
\[\varepsilon_{1} =\Delta_{1},\] \[\varepsilon_{2} =\Delta_{2}-\widetilde{\alpha}_{1}\Delta_{1}, \tag{9}\] \[\varepsilon_{3} =\Delta_{3}-\widetilde{\alpha}_{2}\Delta_{2}-h^{2}\Delta_{1}K_{y }^{1},\] \[\varepsilon_{4} =\Delta_{4}-\widetilde{\alpha}_{3}\Delta_{3}-h^{2}\Delta_{1}K_{y }^{1}-h^{2}\Delta_{2}K_{y}^{2}\]
and so on. A similar sequence can be found for the implicit method.
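A brief sketch of this recovery for the explicit method applied to the test equation, where the Jacobians are constant (\(f_{y}=\lambda\), \(K_{y}=\gamma\)); the array layout and names are ours, and `delta[0]` is taken to be \(\Delta_{0}=0\).

```python
import numpy as np

def recover_local_errors(delta, h, lam, gam):
    """Invert the propagation relation (4) to estimate the local errors from
    known global errors, following the sequence (9), for constant Jacobians."""
    alpha = 1.0 + h * lam + 0.5 * h**2 * gam      # alpha_tilde_k, identical for all k
    n = len(delta)                                # delta[0] = Delta_0 = 0
    eps = np.zeros(n)
    eps[1] = delta[1]                             # eps_1 = Delta_1
    for i in range(1, n - 1):
        memory = h**2 * gam * np.sum(delta[1:i])  # h^2 * sum_{j=1}^{i-1} Delta_j K_y
        eps[i + 1] = delta[i + 1] - alpha * delta[i] - memory
    return eps
```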
## 6 Numerical examples
A few simple examples, using the test equation, will serve to illustrate some of the aspects of our analysis.
1. **Figure 1**. Here, we solve (8) with \(\lambda=-100,\gamma=-200\) and \(h=5\times 10^{-3}\) using the explicit method. The stepsize is small enough to ensure a stable solution. We show \(\left|\Delta_{i}\right|\) (the solid red line, labelled E), and the quantity \(\left|\frac{\widetilde{C}_{i}h}{L}\right|\) determined using (5), i.e. \[\left|\frac{\widetilde{C}_{i}h}{L}\right|=\frac{\left|\Delta_{i}\right|}{ \left|e^{\left(x_{i+1}-x_{0}\right)L}-1\right|}.\] We indicate \(\left|\frac{\widetilde{C}_{i}h}{L}\right|\) with the blue dots (labelled C) which appear to be superimposed on the curve for \(\left|\Delta_{i}\right|.\) This is due to the fact that \(L=f_{y}+\frac{h}{2}K_{y}=\lambda+\frac{h}{2}\gamma\ll 0,\) and so \(\left|e^{\left(x_{i+1}-x_{0}\right)L}-1\right|\approx 1.\) From this curve we estimate \(\max\left|\frac{\widetilde{C}_{i}h}{L}\right|=0.0041,\) and we plot \(0.0041\left|e^{\left(x_{i+1}-x_{0}\right)L}-1\right|\) as the upper bound (labelled U) on \(\left|\Delta_{i}\right|.\)
2. **Figure 2**. We solve (8) with \(\lambda=-100,\gamma=-200\) and \(h=5\times 10^{-2}\) using the explicit method. The stepsize is _not_ small enough to ensure a stable solution. The labelling follows that of Figure 1. We estimate \(\max\left|\frac{\widetilde{C}_{i}h}{L}\right|=1.9\times 10^{61}.\) As before, \(L\ll 0\Rightarrow\left|e^{\left(x_{i+1}-x_{0}\right)L}-1\right|\approx 1,\) so that the curve for \(\left|\frac{\widetilde{C}_{i}h}{L}\right|\) is superimposed on the curve for \(\left|\Delta_{i}\right|.\)
3. **Figure 3**. We use the explicit method with \(\lambda=1,\gamma=2\) and \(h=5\times 10^{-3}.\) We do not have \(\left|e^{\left(x_{i+1}-x_{0}\right)L}-1\right|\approx 1,\) and so curve C is different to curve E. We estimate \(\max\left|\frac{\widetilde{C}_{i}h}{L}\right|=2.5\times 10^{-4},\) yielding the upper bound U.
4. **Figure 4**. Here, we solve the test equation with \(\lambda=-1,\gamma=-2\) and \(h=5\times 10^{-3}\) using the implicit method. We show the signed global error \(\Delta_{i},\) and \(\frac{\widetilde{C}_{i}h}{L}\). We see that \(\Delta_{i}=0\) when \(\frac{\widetilde{C}_{i}h}{L}=0,\) as we would expect. We estimate \(\max\left|\frac{\widetilde{C}_{i}h}{L}\right|=1.14\times 10^{-8},\) yielding the upper and lower bounds (U and -U). The oscillatory character of the error is due to the oscillatory nature of the solution.
5. **Figure 5**. We solve the test equation with \(\lambda=-1,\gamma=-2\) and \(h=5\times 10^{-3}\) using the explicit method. The upper plot shows the global error \(\Delta_{i},\) and the lower plot shows the local error \(\varepsilon_{i},\) determined using (9).
## 7 Conclusion
We have investigated error propagation in an explicit and implicit method for solving integro-differential equations of the Volterra type. We have derived upper bounds for the global error, and shown that the global order for both methods is expected to be first-order. With respect to (1), we have considered the case \(n=1\). For \(n>1,\) we would need to solve a system of IDEs, and future work would center around error propagation in such systems - and in systems of IDEs, in general.
|
2308.08993
|
Unveiling the Spatiotemporal Evolution of Liquid-Lens Coalescence:
Self-Similarity, Vortex Quadrupoles, and Turbulence in a Three-Phase Fluid
System
|
We demonstrate that the three-phase Cahn-Hilliard-Navier-Stokes (CHNS3)
system provides a natural theoretical framework for studying liquid-lens
coalescence, which has been investigated in recent experiments. Our extensive
direct numerical simulations (DNSs) of lens coalescence, in the two and three
dimensional (2D and 3D) CHNS3, uncover the rich spatiotemporal evolution of the
fluid velocity $\bf u$ and vorticity $\omega$, the concentration fields $c_1,
\, c_2,$ and $c_3$ of the three liquids, and a generalized Laplace pressure
$P^G_\mathcal{L}$, which we define in terms of these concentrations via a
Poisson equation. We find, in agreement with experiments, that as the lenses
coalesce, their neck height $h(t) \sim t^{\alpha_v}$, with $\alpha_v \simeq 1$
in the viscous regime, and $h(t) \sim t^{\alpha_i}$, with $\alpha_i \simeq 2/3$
in the inertial regime. We obtain the crossover from the viscous to the
inertial regimes as a function of the Ohnesorge number $Oh$, a dimensionless
combination of viscous stresses and inertial and surface tension forces. We
show that a vortex quadrupole, which straddles the neck of the merging lenses,
and $P^G_\mathcal{L}$ play crucial roles in distinguishing between the viscous-
and inertial-regime growths of the merging lenses. In the inertial regime we
find signatures of turbulence, which we quantify via kinetic-energy and
concentration spectra. Finally, we examine the merger of asymmetric lenses, in
which the initial stages of coalescence occur along the circular parts of the
lens interfaces; in this case, we obtain power-law forms for the $h(t)$ with
inertial-regime exponents that lie between their droplet-coalescence and
lens-merger counterparts.
|
Nadia Bihari Padhan, Rahul Pandit
|
2023-08-17T14:03:12Z
|
http://arxiv.org/abs/2308.08993v2
|
Unveiling the Spatiotemporal Evolution of Liquid-Lens Coalescence: Self-Similarity, Vortex Quadrupoles, and Turbulence in a Three-Phase Fluid System
###### Abstract
The coalescence of liquid lenses is an important problem at the intersection of fluid dynamics and statistical physics, particularly in the context of complex multi-phase flows. We demonstrate that the three-phase Cahn-Hilliard-Navier-Stokes (CHNS3) system provides a natural theoretical framework for studying liquid-lens coalescence, which has been investigated in recent experiments. Our extensive direct numerical simulations (DNSs) of lens coalescence, in the two and three dimensional (2D and 3D) CHNS3, uncover the rich spatiotemporal evolution of the fluid velocity \(\mathbf{u}\) and vorticity \(\mathbf{\omega}\), the concentration fields \(c_{1}\), \(c_{2}\), and \(c_{3}\) of the three liquids, and an excess pressure \(P_{\mathcal{L}}^{G}\), which we define in terms of these concentrations via a Poisson equation. We find, in agreement with experiments, that as the lenses coalesce, their neck height \(h(t)\sim t^{\alpha_{v}}\), with \(\alpha_{v}\simeq 1\) in the viscous regime, and \(h(t)\sim t^{\alpha_{i}}\), with \(\alpha_{i}\simeq 2/3\) in the inertial regime. We obtain the crossover from the viscous to the inertial regimes as a function of the Ohnesorge number \(Oh\), a dimensionless combination of viscous stresses and inertial and surface tension forces. We show that a vortex quadrupole, which straddles the neck of the merging lenses, and \(P_{\mathcal{L}}^{G}\) play crucial roles in distinguishing between the viscous- and inertial-regime growths of the merging lenses. In the inertial regime we find signatures of turbulence, which we quantify via kinetic-energy and concentration spectra. Finally, we examine the merger of asymmetric lenses, in which the initial stages of coalescence occur along the circular parts of the lens interfaces; in this case, we obtain power-law forms for the \(h(t)\) with inertial-regime exponents that lie between their droplet-coalescence and lens-merger counterparts.
## I Introduction
Coalescence - of droplets, in general, and liquid lenses, in particular - is a fundamental problem in the fluid dynamics and statistical physics of multi-phase flows [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Such droplet merging is of direct relevance in engineering applications, such as ink-jet printers [23; 24], and atmospheric physics, e.g., the merger of rain drops in a cloud [25; 26; 27]. When two droplets coalesce, a bridge forms and its height \(h\) grows with the time \(t\). Experiments [2; 3; 4; 5; 6], theory, and numerical simulations [7; 8; 9; 10; 11] show that, in the early stage of coalescence of two, initially static, spherical droplets, there is self-similar growth with \(h(t)\sim t\) and \(h(t)\sim t^{1/2}\) in the viscous and inertial regimes, respectively [4; 5; 28]. Three-phase fluid systems, can exhibit the coalescence of two liquid lenses, as we show schematically in Fig. (1); recent experiments have shown that, for such a lens merger [29], \(h(t)\sim t^{1}\) and \(h(t)\sim t^{2/3}\) in the viscous and inertial regimes, respectively. We show that the three-phase Cahn-Hilliard-Navier-Stokes (CHNS3), which couples the fluid velocity \(\mathbf{u}\) with the concentration fields \(c_{1}\), \(c_{2}\), and \(c_{3}\), which distinguish between the three coexisting phases that form the lens, provides a natural theoretical framework for the study of liquid-lens coalescence in the viscous and inertial regimes and in the crossover region from the former to the latter. Our direct numerical simulations (DNSs), in both two and three dimensions (2D and 3D), for the coalescence of two nearby, initially static, liquid lenses in this CHNS3 system uncover the complete spatiotemporal evolution of \(\mathbf{u}\), \(c_{1}\), \(c_{2}\), and \(c_{3}\) during lens mergers. In addition, we obtain a variety of new and interesting results that we summarize qualitatively below. We find, in agreement with experiments, that \(h(t)\sim t^{\alpha_{v}}\), with \(\alpha_{v}\simeq 1\) in the viscous regime, which is followed by a region in which the growth of \(h(t)\) with \(t\) is less steep, and, finally, \(h(t)\sim t^{\alpha_{i}}\), with \(\alpha_{i}\simeq 2/3\) in the inertial regime; we obtain the crossover from the viscous to the inertial regimes as a function of the Ohnesorge number \(Oh\), a dimensionless ratio of viscous stresses to the inertial and surface tension forces [30; 31; 30] [\(Oh\equiv\nu[\rho/(\sigma R_{0})]^{1/2}\), where \(\rho\), \(\nu\), \(\sigma\) and \(R_{0}\) are, respectively, the density, viscosity, surface tension, and initial droplet's radius.] We use the top view of the merger of biconvex lenses in 3D [see the planar section in Fig. (1) (b)] to define the neck width \(w(t)\) and show that \(w(t)\sim t^{\alpha_{v}}\) and \(w(t)\sim t^{\alpha_{i}}\)
in viscous and inertial regions, respectively. From the spatiotemporal evolution of \(\mathbf{u},\,c_{1},\,c_{2}\), and the vorticity \(\mathbf{\omega}=\nabla\times\mathbf{u}\), we demonstrate the crucial role played by a vortex quadrupole that straddles the neck of the merging lenses: the spatial extent of this quadrupole grows with this neck, uniformly in the viscous regime but with distortions in the inertial case, where we see signatures of turbulence, which we quantify by obtaining kinetic-energy and concentration spectra. Such turbulence, during the coalescence of lenses, has not previously been observed in either experimental or numerical studies. We show that the gradient of an excess pressure \(P_{\mathcal{L}}^{G}\) is also of vital importance in the merger of liquid lenses, just as it is in the coalescence of droplets [1; 21]. Finally, we examine the merger of two asymmetrical, but identical, liquid lenses, whose top parts are more curved than their lower ones. For this asymmetrical case, we exhibit how this proceeds via the coalescence of the upper concave arcs, which is similar to its counterpart for circular droplets, so the growth exponent for \(h(t)\) lies in between its lens- and droplet-merger values. To the best of the authors' knowledge, the geometric dependence of such coalescence phenomena has not been previously documented in the scientific literature.

Figure 1: Schematic diagrams illustrating liquid-lens coalescence: (a) 2D or in 3D (a planar section containing the principal axes of the coalescing lenticular biconvex lenses) and (b) top view in 3D (a planar section perpendicular to the principal axes of the coalescing lenticular biconvex lenses).
The remaining part of this paper is organized as follows. In Section II, we define the CHNS3 partial differential equations (PDEs) and the numerical methods we use to solve these PDEs. Section III is devoted to a presentation of our results. We end with concluding remarks in Section IV. Section V is an Appendix that contains additional figures.
## II Model and numerical methods
We define the CHNS3 model in Subsection II.1, discuss the details of our direct numerical simulations (DNSs) in Subsection II.2, and describe the preparation of the lens-merger initial conditions in Subsection II.3.
### Three-phase Cahn-Hilliard-Navier-Stokes model
Phase-field or Cahn-Hilliard models have been used extensively to study multi-phase fluid flows [32; 33; 34; 35; 36]; in particular, they have been employed to study droplet coalescence in binary-fluid mixtures [37; 38; 39; 10]. We show that the following ternary-phase-field (CHNS3) model [40; 41; 42], for three immiscible fluids, provides a natural framework for investigations of liquid-lens coalescence; this model uses the variational free-energy functional, in the domain \(\Omega\):
\[\mathcal{F}(\{c_{i},\nabla c_{i}\})=\int\limits_{\Omega}\!d\Omega\left[\frac{ 12}{\epsilon}F(\{c_{i}\})+\frac{3\epsilon}{8}\sum_{i=1}^{3}\gamma_{i}(\nabla c _{i})^{2}\right]\,, \tag{1}\]
where the concentration fields \(c_{i}(i=1,2,3)\) are conserved order parameters that satisfy the constraint \(\sum_{i=1}^{3}c_{i}=1\), \(\epsilon\) is the thickness of the interface, the variational bulk free energy \(F(\{c_{i}\})=\sum_{i=1}^{3}\gamma_{i}c_{i}^{2}(1-c_{i})^{2}\), and the gradient terms give the surface-tension penalties for interfaces, with \(\sigma_{ij}=(\gamma_{i}+\gamma_{j})/2\) the bare surface (or interfacial) tension for the interface between the phases \(i\) and \(j\); the equilibrium values of \(c_{i}\) follow from the global minimum (or minima) of \(F(\{c_{i}\})\). The equilibrium chemical potential of the fluid \(i\) is \(\mu_{i}\equiv\delta\mathcal{F}/\delta c_{i}+\beta(\{c_{i}\})\), with \(\beta(\{c_{i}\})\) the Lagrange multiplier that ensures \(\sum_{i=1}^{3}c_{i}=1\), whence we get [40]
\[\mu_{i} = -\frac{3}{4}\epsilon\gamma_{i}\nabla^{2}c_{i}+\frac{12}{\epsilon }[\gamma_{i}c_{i}(1-c_{i})(1-2c_{i}) \tag{2}\] \[- \frac{6\gamma_{1}\gamma_{2}\gamma_{3}(c_{1}c_{2}c_{3})}{\gamma_{ 1}\gamma_{2}+\gamma_{1}\gamma_{3}+\gamma_{2}\gamma_{3}}]\,;\]
we _do not_ use the summation convention over repeated indices here. The mean fluid velocity \(\mathbf{u}\) advects the fields \(c_{i}(i=1,2,3)\), which affect the flow, in turn, so that we get [40] coupled CHNS-type equations for \(\mathbf{u}\) and \(c_{1}\) and \(c_{2}\) [\(c_{3}\) follows from the constraint \(\sum_{i=1}^{3}c_{i}=1\)]. We consider low-Mach-number flows, hence we use incompressible fluids. In 2D it is convenient to use the vorticity-stream-function formulation for the incompressible Navier-Stokes equation to obtain
\[\partial_{t}\omega+(\mathbf{u}\cdot\nabla)\omega = \nu\nabla^{2}\omega+\nabla\times\left(\sum_{i=1}^{3}\mu_{i}\nabla c _{i}\right)\,, \tag{3}\] \[\partial_{t}c_{j}+(\mathbf{u}.\nabla)c_{j} = \frac{M}{\gamma_{j}}\nabla^{2}\mu_{j},\;\;j=1\;\text{or}\;2\,, \tag{4}\]
where we assume, for simplicity, that all the fluids have the same density \(\rho=1\), kinematic viscosity \(\nu\), and mobility \(M\), and that \(\sigma_{12}=\sigma_{23}=\sigma_{13}\equiv\sigma\). In 3D we use
\[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u} = \nu\nabla^{2}\mathbf{u}-\nabla P+\left(\sum_{i=1}^{3}\mu_{i}\nabla c_{i}\right)\,, \tag{5}\] \[\nabla\cdot\mathbf{u} = 0\,, \tag{6}\] \[\partial_{t}c_{j}+(\mathbf{u}\cdot\nabla)c_{j} = \frac{M}{\gamma_{j}}\nabla^{2}\mu_{j},\;\;j=1\;\text{or}\;2\,, \tag{7}\]
where \(P\) is the pressure. The terms with \(\sum_{i=1}^{3}\mu_{i}\nabla c_{i}\) yield the stress on the fluid because of the fields \(c_{i}\). In addition to the velocity and concentration fields it is instructive to define and evaluate the following:
A. The excess pressure \(P_{\mathcal{L}}^{G}\):
\[\nabla^{2}P_{\mathcal{L}}^{G}=\nabla\cdot\left(\sum_{i=1}^{3}\mu_{i}\nabla c_{ i}\right)\,. \tag{8}\]
In equilibrium (i.e., no fluid flow) and in the limit of a zero-thickness interface, \(P_{\mathcal{L}}^{G}\) reduces to the conventional Laplace pressure [40; 43], which is inversely related to the radius of curvature of the interface.
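As an illustration of Eq. (8), the following NumPy sketch obtains \(P_{\mathcal{L}}^{G}\) from given concentration and chemical-potential fields with a Fourier Poisson solve on the \(2\pi\)-periodic box; it is a minimal stand-alone example, not an excerpt from the full pseudospectral CHNS3 code.

```python
import numpy as np

def excess_pressure_2d(mu, c, L=2.0 * np.pi):
    """Solve Laplacian(P) = div( sum_i mu_i grad c_i ), Eq. (8), on a 2D
    periodic box of side L; mu and c are lists of the three N x N fields."""
    N = c[0].shape[0]
    k = (2.0 * np.pi / L) * np.fft.fftfreq(N) * N          # wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    rhs = np.zeros((N, N))
    for mu_i, c_i in zip(mu, c):
        c_hat = np.fft.fft2(c_i)
        dcdx = np.real(np.fft.ifft2(1j * kx * c_hat))       # spectral gradients of c_i
        dcdy = np.real(np.fft.ifft2(1j * ky * c_hat))
        rhs += np.real(np.fft.ifft2(1j * kx * np.fft.fft2(mu_i * dcdx)
                                    + 1j * ky * np.fft.fft2(mu_i * dcdy)))
    p_hat = np.fft.fft2(rhs)
    k2[0, 0] = 1.0                                          # avoid division by zero
    p_hat = -p_hat / k2
    p_hat[0, 0] = 0.0                                       # fix the zero mode (zero-mean P)
    return np.real(np.fft.ifft2(p_hat))
```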
B. At time \(t\), the energy and concentration spectra, the integral scale, and the Reynolds number are, respectively, [38; 44; 45; 32]
\[E(k,t) = \frac{1}{2}\sum_{k-1/2<k^{\prime}<k+1/2}[\hat{\mathbf{u}}(\mathbf{k}^ {\prime},t)\cdot\hat{\mathbf{u}}(-\mathbf{k}^{\prime},t)]\ ;\] \[S_{1}(k,t) = \sum_{k-1/2<k^{\prime}<k+1/2}|\hat{c}_{1}(\mathbf{k}^{\prime},t)|^ {2}\,,\] \[S_{2}(k,t) = \sum_{k-1/2<k^{\prime}<k+1/2}|\hat{c}_{2}(\mathbf{k}^{\prime},t)|^ {2}\,,\] \[L_{I}(t) = 2\pi\frac{\sum_{k}k^{-1}E(k,t)}{\sum_{k}E(k,t)}\,,\] \[Re(t) = \frac{U_{rms}(t)L_{I}(t)}{\nu}\, \tag{9}\]
where \(U_{rms}(t)=\left[\sum_{k}E(k,t)\right]^{1/2}\) is the root-mean-square velocity of the fluid; \(\hat{\mathbf{u}}(\mathbf{k}^{\prime},t)\) and \(\hat{c}_{i}(\mathbf{k}^{\prime},t)\) are, respectively, the spatial discrete Fourier transforms (DFT) of \(\mathbf{u}(\mathbf{x},t)\) and \(c_{i}(\mathbf{x},t)\); and \(k\) and \(k^{\prime}\) are the moduli of the wave vectors \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\).
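A minimal NumPy sketch of these diagnostics for a 2D velocity field is given below; the shell binning and the DFT normalisation (\(\hat{\mathbf{u}}=\mathrm{fft2}(\mathbf{u})/N^{2}\)) are illustrative conventions, and applying the same shell average to \(|\hat{c}_{i}|^{2}\) gives the concentration spectra \(S_{i}(k,t)\).

```python
import numpy as np

def shell_average_2d(spectral_density):
    """Bin a 2D array of |f_hat(k)|^2 values into the shells k-1/2 < |k'| < k+1/2."""
    N = spectral_density.shape[0]
    k = np.fft.fftfreq(N) * N                     # integer wavenumbers on the 2*pi box
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2)
    shells = np.arange(1, N // 2)
    spectrum = np.array([spectral_density[np.abs(k_mag - s) < 0.5].sum() for s in shells])
    return shells, spectrum

def flow_diagnostics_2d(ux, uy, nu):
    """Energy spectrum E(k), integral scale L_I, and Reynolds number Re of Eq. (9)."""
    N = ux.shape[0]
    ux_hat, uy_hat = np.fft.fft2(ux) / N**2, np.fft.fft2(uy) / N**2
    k, E_k = shell_average_2d(0.5 * (np.abs(ux_hat)**2 + np.abs(uy_hat)**2))
    L_I = 2.0 * np.pi * np.sum(E_k / k) / np.sum(E_k)
    U_rms = np.sqrt(np.sum(E_k))
    return k, E_k, L_I, U_rms * L_I / nu
```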
### Numerical Methods
We carry out Fourier-pseudospectral DNSs [38; 46] of the Eqs. (3)-(4) and Eqs. (5)-(7) in square (\(N^{2}\) collocation points) and cubical (\(N^{3}\) collocation points) domains, respectively, with sides \(L=2\pi\), and periodic boundary conditions in all spatial directions. To eliminate aliasing errors, because of the cubic nonlinearity, we use the \(1/2\)-dealiasing scheme [47] at each time step, before we compute the nonlinear terms in physical space. For time integration, we employ the semi-implicit exponential-time-difference ETDRK2 method [48]. In the CHNS3 model, the fluid velocity and the concentrations \(c_{i}\) change smoothly at fluid interfaces, so we do not have to implement boundary conditions at sharp interfaces. To resolve the interface, we take three grid points in the interface region and we choose \(M\simeq\epsilon^{2}\), so that our phase-field description can approach the sharp-interface limit [49; 50; 51]. The Cahn number \(Cn\equiv\epsilon/L\), a non-dimensional measure of the interface width, \(R_{0}/L\), the non-dimensional initial radius of curvature of the lens, and the dimensionless Ohnesorge number \(Oh\equiv\nu[\rho/(\sigma R_{0})]^{1/2}\) are given in Table 1 along with the numbers of collocation points and other parameters for our DNS runs in 2D and 3D.
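For illustration, one possible form of the dealiasing mask is sketched below: for a cubic nonlinearity evaluated pseudospectrally, products are formed in physical space and modes above roughly \(N/4\) are then zeroed. The exact cutoff convention of the \(1/2\)-rule used in our solver may differ slightly, so this should be read as an assumption rather than a specification.

```python
import numpy as np

def dealias_mask_2d(N):
    """1/2-rule mask for a cubic nonlinearity on an N x N grid: keep modes with
    |k_x|, |k_y| < N/4 and zero the rest after forming products in physical space."""
    k = np.fft.fftfreq(N) * N
    keep = np.abs(k) < N / 4.0
    return np.logical_and.outer(keep, keep)

# usage sketch: nonlinear_hat = dealias_mask_2d(N) * np.fft.fft2(c**3)
```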
Despite the global conservation of the phase-field variable, drops spontaneously undergo shrinking while experiencing shifts from their expected bulk phase values, and these alterations are proportionate to the interfacial thickness [52]. The Cahn numbers we used in all our simulations are very small, for the given computer resolutions; this allows us to preserve the mass conservation of lenses and droplets, to three-decimal-place accuracy. We illustrate area preservation in Fig. 9 (see the Appendix) for \(Oh=0.025\) (run 2D-R1), where we plot the ratio \(A(t)/A_{0}\), with \(A(t)\) the area of the lenses at time \(t\) and \(A_{0}\) their area at the initial time \(t=0\).
Figure 2: Pseudocolor plots of \(c_{2}-c_{1}\) showing the three co-existing phases and the interfaces between them for (a) the initial condition given in Eq.( 10) and (b) the final equilibrium configuration. (c) Plot of the temporal evolution of the kinetic energy \(e(t)=\sum_{k}E(k,t)\) during lens formation for \(Oh=0.09\). The energy is normalized with the viscous scale velocity \(u_{\nu}=\sigma/(\rho\nu)\). [We obtain the final equilibrium configuration shown in (b) after the kinetic energy reaches to zero.]
### Initial conditions
To prepare the lens-merger initial condition in 2D for a symmetric and neutrally buoyant lens we start our DNSs with the following configuration for a single circular droplet of fluid 1, with radius \(R_{0}\) and centre \((\pi,\pi)\), placed at the interface between fluids 2 and 3:
\[c_{1}(x,y,0) = \frac{1}{2}\left[1-\tanh\left(\frac{\sqrt{(x-\pi)^{2}+(y-\pi)^{2} }-R_{0}}{2\sqrt{2}\epsilon}\right)\right];\] \[c_{2}(x,y,0) = \frac{1}{2}\left[1-\tanh\left(\frac{y-\pi}{2\sqrt{2}\epsilon} \right)\right]-c_{1}(x,y,0). \tag{10}\]
The initial and equilibrium configurations are similar in 3D. As time evolves in our DNSs, the initial droplet relaxes to its equilibrium-lens (biconvex-lens in 3D) shape as shown in Fig. 2 for 2D, with the angle \(\theta=120^{\circ}\) [Fig. 1 (a)], because we choose \(\sigma_{12}=\sigma_{23}=\sigma_{13}\equiv\sigma\). We then place two such static lenses (biconvex lenses) close to each other and set the velocity field to zero everywhere. The initial distance between the proximate edges of the two lenses is greater than the grid spacing \(dx\) and less than the interface width \(\epsilon\).
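A minimal sketch of the initial state of Eq. (10) on an \(N\times N\) grid follows (the single droplet that is subsequently relaxed to an equilibrium lens); the default values of \(N\), \(R_{0}\), and \(\epsilon\) are illustrative placeholders, not the run parameters of Table 1.

```python
import numpy as np

def droplet_on_interface(N=512, R0=0.6, eps=0.02, L=2.0 * np.pi):
    """Eq. (10): a circular droplet of fluid 1 with radius R0, centred at
    (pi, pi), sitting on the flat interface between fluids 2 and 3."""
    x = np.linspace(0.0, L, N, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.sqrt((X - np.pi)**2 + (Y - np.pi)**2)
    width = 2.0 * np.sqrt(2.0) * eps
    c1 = 0.5 * (1.0 - np.tanh((r - R0) / width))
    c2 = 0.5 * (1.0 - np.tanh((Y - np.pi) / width)) - c1
    c3 = 1.0 - c1 - c2                      # from the constraint sum_i c_i = 1
    return c1, c2, c3
```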
## III Results
We illustrate the fascinating spatiotemporal evolution of liquid-lens coalescence by representative pseudocolor plots [Figs. 3 (a) (multimedia view), (b) (multimedia view), (d) (multimedia view), (e) (multimedia view), (g), (h)], from our DNS studies of the symmetrical mergers of two liquid lenses in 2D and of two lenticular biconvex lenses in 3D. In particular, Figs. 3 (a) (multimedia view) and (b) (multimedia view) show, for the viscous and inertial regimes, respectively, pseudocolor plots of \(\mathbf{\omega}\), with overlaid velocity vectors and the magenta \(c_{1}=0.5\) contour, which is a convenient indicator of the lens interface in 2D. In Figs. 3 (d) (multimedia view) and (e) (multimedia view), we show results from our DNS of lens mergers in 3D; we use a green isosurface of \(c_{1}\) and an overlaid brown isosurface of \(|\mathbf{\omega}|\); we present \(z=\pi\) planar sections of the \(c_{1}\) isosurface (black curve) and of \(|\mathbf{\omega}|\) (pseudocolor plots) in Figs. 3 (g) and (h). We see from these figures and videos that initially static lenses, which are placed close to each other, gradually coalesce by forming a bridge, whose neck height \(h(t)\) [and, in 3D, the width \(w(t)\) also] increases with the time \(t\). This lens coalescence depends on the Ohnesorge number \(Oh\). We find, in agreement with experiments [29], that liquid-lens coalescence is influenced principally by viscous stresses, at high values of \(Oh\) (high \(\nu\)), but by inertial forces, at low values of \(Oh\) (low \(\nu\)), with surface tension forces being the dominant driving factor. We carry out a systematic study of the \(Oh\) dependence of this coalescence process.
In Figs. 3 (c), (f), (i), we quantify the remarkable difference between the growth of \(h(t)\) in the viscous and inertial regimes. In both 2D and 3D, our DNSs yield \(h(t)/l_{\nu}\sim(t/t_{\nu})^{\alpha_{v}}\) and \(h(t)/l_{\nu}\sim(t/t_{\nu})^{\alpha_{i}}\) with distinctly different viscous- and inertial-range exponents \(\alpha_{v}\simeq 1\) and \(\alpha_{i}\simeq 2/3\), respectively; here, \(l_{\nu}=\rho\nu^{2}/\sigma\) and \(t_{\nu}=\rho^{2}\nu^{3}/\sigma^{2}\) are the viscous length and time scales[3]. Our results are in consonance with recent experiments on the coalescence of liquid lenses [29]. Figures 3 (c) and (f) demonstrate clearly that, if we plot the scaled neck height \(h(t)/l_{\nu}\) versus the scaled time \(t/t_{\nu}\), then the curves for different values of \(Oh\) collapse, to a significant degree, onto a single curve, whose asymptotes are the viscous- and inertial-range scaling forms mentioned above; these asymptotes are separated by a broad crossover region. Within the accuracy of our measurements (and those in experiments) the scaling exponents \(\alpha_{v}\) and \(\alpha_{i}\) are universal insofar as they _do not depend on \(Oh\)_ and the linear size and the spatial dimension of the symmetrical lenses [see Fig. 7 in the Appendix]. Furthermore, as we show in Fig. 3 (i), in 3D the scaled width \(w(t)/l_{\nu}\) also shows the collapse, for different values of \(Oh\), and the same scaling forms as \(h(t)/l_{\nu}\).
Figure 3: **2D DNSs:** Pseudocolor plots of \(\mathbf{\omega}\) with overlaid velocity vectors for the coalescence of lenses in (a) the viscous regime (multimedia view) [from run 2D-R6] and (b) the inertial regime (multimedia view) [from run 2D-R1]; the \(c_{1}=0.5\) contour (magenta line) indicates the lens interface. The field is normalized with its absolute value for ease of visualization. **3D DNSs:** Isosurface plots of \(c_{1}\) (green) and \(|\mathbf{\omega}|\) (brown) for (d) the viscous regime (multimedia view) [from run 2D-P3] and (e) the inertial regime (multimedia view) [from run 2D-P1]. **3D DNSs (top view)** Pseudocolor plots of \(\mathbf{\omega}(x,y,z=\pi)\) overlaid with the \(c_{1}=0.5\) contour line (black line) for (g) the viscous regime [from run 2D-P3] and (h) the inertial regime [from run 2D-P1]. Plots of the scaled neck height \(h(t)/l_{\nu}\) versus the scaled time \(t/t_{\nu}\) for different Ohnesorge numbers \(Oh\) for (c) 2D lenses (Runs 2D-R1 to 2D-R8, 2D-S1 to 2D-S2) and (f) 3D lenses (Runs 3D-P1 to 3D-P4). (i) Plots of the scaled neck width \(w(t)/l_{\nu}\) versus the scaled time \(t/t_{\nu}\) for different values of \(Oh\) for the above 3D lenses (top view). The time and length axes are scaled by the corresponding viscous time and length scales. The plots show a clear crossover from the viscous regime, with exponent \(\alpha_{v}\simeq 1\), to the inertial regime, with exponent \(\alpha_{i}\simeq 2/3\). In 3D, we measure \(h(t)\) and \(w(t)\) in the \(z\) and \(y\) directions, respectively.
We find that, in viscous-regime coalescence, neck growth is guided by the large gradient of \(P_{\mathcal{L}}^{G}\) [see Figs. 4(a) and (c) for 2D and the top view for 3D, respectively]. In contrast, in the inertial regime, the gradient of \(P_{\mathcal{L}}^{G}\) [Eq. 8], in the region of the neck, is smaller than it is in the viscous case [see Figs. 4(b) and (d) for 2D and the top view for 3D, respectively]. This leads to faster neck growth in the viscous case than in the inertial one, with \(\alpha_{v}\simeq 1>\alpha_{i}\simeq 2/3\).
The following heuristic dimensional argument [53] suggests why the exponents \(\alpha_{v}\) and \(\alpha_{i}\) are different from each other. On dimensional grounds, \(\nabla P_{\mathcal{L}}^{G}\sim P_{\mathcal{L}}^{G}/h(t)\). The velocity of growth of the neck height is \(\dot{h}(t)\). In the viscous regime \(\nu\nabla^{2}\mathbf{u}\sim\nu\dot{h}(t)/h^{2}\); if we balance this by \(\nabla P_{\mathcal{L}}^{G}\sim P_{\mathcal{L}}^{G}/h(t)\) and note that \(P_{\mathcal{L}}^{G}\sim\sigma/h\), we obtain \(\nu\dot{h}(t)\sim\sigma\), whence \(h(t)\sim t\) and \(\alpha_{v}=1\). If we equate the inertial term \(\mathbf{u}\cdot\mathbf{\nabla}\mathbf{u}\) with \(\dot{h}^{2}/h\), the balance with \(\nabla P_{\mathcal{L}}^{G}\sim P_{\mathcal{L}}^{G}/h(t)\) yields \(h(t)\sim t^{2/3}\), i.e., \(\alpha_{i}=2/3\). The exponent \(\alpha_{i}=2/3\), for inertial-range liquid-lens coalescence, is distinct from its counterpart in the coalescence of spherical droplets, where \(\alpha_{i}=1/2\) [see, e.g., Refs. [2;
Figure 4: Pseudocolor plots of the excess pressure \(P_{\mathcal{L}}^{G}\): For 2D in (a) viscous and (b) inertial regimes. In 3D top view of \(P_{\mathcal{L}}^{G}\) in (c) viscous and (d) inertial regimes. (e) Plots versus time \(t\) of the ratio of the horizontal width \(Q(t)\) of the vortex-quadrupole and the bridge height \(h(t)\) (see the top-right schematic figure), for different Ohnesorge numbers \(Oh\), showing decay and growth with time (see text) in viscous and inertial regimes.
Figure 5: Time evolution [at \(t/t_{\nu}=160\) (\(\equiv t^{*}\)) (red line), \(10t^{*}\) (blue line), \(50t^{*}\) (green line), and \(300t^{*}\) (magenta line)] of the inertial-regime kinetic-energy spectra \(E(k,t)\) for (a) run 2D-T1 in 2D and (b) run 3D-P1 in 3D; the insets on the top right show the Reynolds number \(Re(t)\). (c) In the inertial regime, plots of \(L_{I}(t)/l_{\nu}\) versus \(t/t_{\nu}\) collapse significantly and indicate power-law scaling with \(L_{I}(t)/l_{\nu}\sim[t/t_{\nu}]^{\alpha_{L}}\), with the scaling exponent \(\alpha_{L}\simeq 2/3\).
5, 7, 8, 9, 10, 11, 39]]. This indicates that the geometry of the coalescing droplets plays a major role in the coalescence process, as has been noted in recent experiments [29].
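For clarity, and restoring the density \(\rho\) that is absorbed into the pressure in the estimates above (this is our restatement of the heuristic argument, not an additional result), the two balances can be written as

\[\nu\,\frac{\dot{h}}{h^{2}}\;\sim\;\frac{\nabla P_{\mathcal{L}}^{G}}{\rho}\;\sim\;\frac{\sigma}{\rho h^{2}}\;\Rightarrow\;\dot{h}\sim\frac{\sigma}{\rho\nu}\;\Rightarrow\;h(t)\sim\frac{\sigma}{\rho\nu}\,t\,,\qquad\alpha_{v}=1,\]

in the viscous regime, and

\[\frac{\dot{h}^{2}}{h}\;\sim\;\frac{\sigma}{\rho h^{2}}\;\Rightarrow\;h^{1/2}\,\dot{h}\sim\Big(\frac{\sigma}{\rho}\Big)^{1/2}\;\Rightarrow\;h(t)\sim\Big(\frac{\sigma}{\rho}\Big)^{1/3}t^{2/3}\,,\qquad\alpha_{i}=\tfrac{2}{3},\]

in the inertial regime; the scaled forms \(h/l_{\nu}\sim t/t_{\nu}\) and \(h/l_{\nu}\sim(t/t_{\nu})^{2/3}\) then follow from \(l_{\nu}=\rho\nu^{2}/\sigma\) and \(t_{\nu}=\rho^{2}\nu^{3}/\sigma^{2}\).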
The superimposition of the vorticity and velocity fields, which we present in Figs. 3(a), (b), (d), (e), (g), and (h), shows clearly that, in the viscous regime, a vortex quadrupole is present in the region of the neck of the lens. In the inertial case, this quadrupole stretches out with some subsidiary small vortices, and the neck of the lens also stretches out; the presence of numerous vortices, spread over the interface, is indicative of turbulence [44, 45], whose properties we explore below.
We investigate the spreading of the vortex quadrupole and its distortion into a pair of dipoles by computing the ratio \(Q(t)/h(t)\), where \(Q(t)\) is the distance between the vortex and anti-vortex cores [see the top inset of Fig.4(e)]. In the log-log plots of Fig. 4 (e), we show how \(Q(t)/h(t)\) varies with time \(t\) for different values of \(Oh\). In the viscous regime, \(Q(t)/h(t)\) decreases as \(t\) increases; by contrast, in the inertial regime, \(Q(t)/h(t)\) increases with \(t\). At the highest (lowest) value of \(Oh\) that we consider, this decrease (increase) is characterized by a power-law exponent \(\simeq-1/4\) (\(\simeq+1/4\)); for intermediate values of \(Oh\), the ratio \(Q(t)/h(t)\) first decreases and then increases as \(t\) progresses.
Significant turbulence is generated during liquid-lens coalescence in the inertial regime. We quantify this turbulence by considering the temporal evolution of the energy spectrum \(E(k,t)\), which yields the energy distribution across different wave numbers \(k\), the integral length scale \(L_{I}(t)\), which is the typical length scale of energy-containing eddies, and the Reynolds number \(Re(t)\) that characterizes the degree of turbulence [see Eq. 9].
In Fig. 5 (a) [2D run T1] and Fig. 5 (b) [3D run P1] we present log-log plots of \(E(k,t)\) versus \(k\) for several representative times \(t\); the insets in the top-right corners show the growth of \(Re(t)\) with \(t\). From these figures we see that the energy is spread over at least two decades of \(k\); this is a clear signature of lens-merger-induced turbulence. The arrows in Figs. 5(a) and (b) indicate the direction of time evolution of the energy spectra during coalescence, suggesting inverse cascades of energy in both 2D and 3D. [The concentration spectra \(S_{1}(k,t)\) and \(S_{2}(k,t)\) are also spread over at least two decades of \(k\) because of this turbulence (see Fig. 8 in the Appendix), but their dependence on \(t\) is weaker than that of \(E(k,t)\).] The time evolution of the scaled integral length scale \(L_{I}(t)/l_{\nu}\), shown in Fig. 5 (c), indicates power-law scaling with \(L_{I}(t)/l_{\nu}\sim[t/t_{\nu}]^{\alpha_{L}}\), with the scaling exponent \(\alpha_{L}\simeq 2/3\) [this is like the neck-growth exponent shown in Figs. 3(c), (f), and (i)].
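As a practical illustration of these diagnostics, the following sketch computes a shell-summed \(E(k)\), an integral length scale, and a Reynolds number from a 2D periodic velocity field. It uses standard conventions and is our illustration only; the exact normalizations of the paper's Eq. 9 may differ.

```python
import numpy as np

def turbulence_diagnostics(ux, uy, nu, L=2*np.pi):
    """Shell-summed kinetic-energy spectrum E(k), integral scale L_I, and
    Reynolds number Re = u_rms * L_I / nu for a 2D periodic velocity field."""
    n = ux.shape[0]
    uxh = np.fft.fft2(ux) / n**2
    uyh = np.fft.fft2(uy) / n**2
    e2d = 0.5 * (np.abs(uxh)**2 + np.abs(uyh)**2)      # energy per Fourier mode

    kgrid = 2*np.pi * np.fft.fftfreq(n, d=L/n)          # angular wavenumbers
    KX, KY = np.meshgrid(kgrid, kgrid, indexing="ij")
    kmag = np.sqrt(KX**2 + KY**2)

    k = np.arange(1, n//2)                               # shell centres
    E = np.array([e2d[(kmag >= kk - 0.5) & (kmag < kk + 0.5)].sum() for kk in k])

    L_I = 2*np.pi * np.sum(E / k) / np.sum(E)            # up to an O(1) convention
    u_rms = np.sqrt(2.0 * np.sum(E))
    Re = u_rms * L_I / nu
    return k, E, L_I, Re

# Example call on synthetic data, just to show the interface:
rng = np.random.default_rng(0)
ux, uy = rng.standard_normal((2, 128, 128))
k, E, L_I, Re = turbulence_diagnostics(ux, uy, nu=1e-3)
```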
In Fig. 6 we compare log-log plots of \(h(t)/l_{\nu}\) versus \(t/t_{\nu}\) for the mergers of (A) symmetric lenses [see Fig. 6A (multimedia view)], (B) asymmetric lenses [Fig. 6B (multimedia view)], and (C) circular droplets [Fig. 6C (multimedia view)]. [See the pseudocolor plots of \(c_{2}-c_{1}\) in the insets. Symmetric lenses, as illustrated in Fig. 6(A), exhibit top-down symmetry; in contrast, asymmetric lenses, as depicted in Fig. 6(B), do not display this symmetry.] These plots demonstrate the geometry dependence of the power-law-growth exponent \(\alpha_{i}\) in the inertial regime. Specifically, we find: for the coalescence of symmetric lenses [run 2D-R1] \(\alpha_{i}\simeq 2/3\) (green line); this value shows a smooth crossover to \(\alpha_{i}\simeq 1/2\) (blue line) for the coalescence of circular droplets [from run 2D-K2]; the coalescence of asymmetric lenses [from run 2D-K1] shows a crossover from \(\alpha_{i}\simeq 2/3\) to \(\alpha_{i}\simeq 1/2\) (red line).
## IV Conclusion and discussions
We have shown that the three-phase Cahn-Hilliard-Navier-Stokes (CHNS3) system provides a natural theoretical framework for the study of liquid-lens coalescence in the viscous and inertial regimes and in the crossover region from the former to the latter. By carrying out extensive DNSs, we have shown, in agreement with experiments, that (a) \(h(t)\sim t^{\alpha_{v}}\), with \(\alpha_{v}\simeq 1\), in the viscous regime; (b) in the crossover region the growth of \(h(t)\) with \(t\) is less steep; and (c) \(h(t)\sim t^{\alpha_{i}}\), with \(\alpha_{i}\simeq 2/3\), in the inertial regime. Our study of the viscous, crossover, and inertial regimes as a function of \(Oh\) and \(R_{0}\) has demonstrated that these exponents are universal and do not depend on the sizes of the merging lenses. From the top
Figure 6: Pseudocolor plots of \(c_{2}-c_{1}\) (see insets) illustrating the mergers of (A) symmetric lenses (multimedia view), (B) asymmetric lenses (multimedia view), and (C) circular droplets (multimedia view) (in a two-phase system with, say, \(c_{1}=0\)). Log-log plots of \(h(t)/l_{\nu}\) versus \(t/t_{\nu}\) for the mergers in (A), (B), and (C), illustrating the geometry dependence of the power-law-growth exponent \(\alpha_{i}\) in the inertial regime. For the coalescence of symmetric lenses [from run 2D-R1] \(\alpha_{i}\simeq 2/3\) (green line); this value shows a smooth crossover to \(\alpha_{i}\simeq 1/2\) (blue line) for the coalescence of circular droplets [from run 2D-K2]; the coalescence of asymmetric lenses [from run 2D-K1] shows a crossover from \(\alpha_{i}\simeq 2/3\) to \(\alpha_{i}\simeq 1/2\) (red line).
view of the merger of biconvex lenses in 3D [Fig. 1 (b)], we have shown that \(w(t)\sim t^{\alpha_{v}}\) and \(w(t)\sim t^{\alpha_{i}}\) in the viscous and inertial regimes, respectively. By monitoring the spatiotemporal evolution of \(\mathbf{u}\), \(c_{1}\), \(c_{2}\), and \(\mathbf{\omega}\), we have uncovered the crucial role played by a vortex quadrupole in this merger; and we have characterized the growth and distortion of this quadrupole. In the inertial case, we have unveiled signatures of lens-merger-induced turbulence, which we have quantified via the spectra \(E(k,t)\), \(S_{1}(k,t)\), and \(S_{2}(k,t)\), and via \(L_{I}(t)\) and \(Re(t)\). We have shown that the gradient of \(P_{\mathcal{L}}^{G}\) is of importance in lens mergers, just as it is in the coalescence of droplets [1; 21]. Our examination of the merger of two asymmetrical lenses has elucidated how this proceeds via the coalescence of the upper concave arcs, so the growth exponent \(\alpha_{i}\) lies between its lens- and droplet-merger values. We hope that our detailed study of the spatiotemporal evolution of the concentration and velocity fields during liquid-lens mergers will lead to experimental investigations of this evolution and of lens-merger-induced turbulence.
We note that, in many experiments [53; 54; 55; 56], liquid-lens coalescence is studied for sessile droplets on solid substrates. It is possible to study the spatiotemporal evolution of such coalescence by combining our CHNS framework with a volume-penalization scheme, as we will show elsewhere.
As we were preparing our paper for publication, we became aware of a recently published paper [57] that carries out a Lattice-Boltzmann study of symmetric liquid-lens mergers in 2D and 3D; that study obtains results similar to those summarised in our Fig. 3.
###### Acknowledgements.
We thank Jaya Kumar Alageshan and Nairita Pal for valuable discussions; the Science and Engineering Research Board (SERB) and the National Supercomputing Mission (NSM), Grant No. DST/NSM/R&D_HPC_Applications/2021/34, India, for support; and the Supercomputer Education and Research Centre (IISc) for computational resources.
## Data and code availability
Data from this study and the computer scripts can be obtained from the authors upon reasonable request.
## Conflicts of interest
No conflicts of interests, financial or otherwise, are declared by the authors.
## Author contributions
NBP and RP planned the research and analysed the numerical data; NBP carried out the calculations and prepared the tables, figures, and the draft of the manuscript; NBP and RP then revised the manuscript in detail and approved the final version.
## V Appendix
The concentration spectra \(S_{1}(k,t)\) and \(S_{2}(k,t)\) are also spread over at least two decades of \(k\) because of lens-merger-induced turbulence [see Fig. 8], but their dependence on \(t\) is weaker than that of \(E(k,t)\).
|
2310.02992
|
Kosmos-G: Generating Images in Context with Multimodal Large Language
Models
|
Recent advancements in subject-driven image generation have made significant
strides. However, current methods still fall short in diverse application
scenarios, as they require test-time tuning and cannot accept interleaved
multi-image and text input. These limitations keep them far from the ultimate
goal of "image as a foreign language in image generation." This paper presents
Kosmos-G, a model that leverages the advanced multimodal perception
capabilities of Multimodal Large Language Models (MLLMs) to tackle the
aforementioned challenge. Our approach aligns the output space of MLLM with
CLIP using the textual modality as an anchor and performs compositional
instruction tuning on curated data. Kosmos-G demonstrates an impressive
capability of zero-shot subject-driven generation with interleaved multi-image
and text input. Notably, the score distillation instruction tuning requires no
modifications to the image decoder. This allows for a seamless substitution of
CLIP and effortless integration with a myriad of U-Net techniques ranging from
fine-grained controls to personalized image decoder variants. We posit Kosmos-G
as an initial attempt towards the goal of "image as a foreign language in image
generation." The code can be found at https://aka.ms/Kosmos-G
|
Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, Furu Wei
|
2023-10-04T17:28:44Z
|
http://arxiv.org/abs/2310.02992v3
|
# Kosmos-G: Generating Images in Context with Multimodal Large Language Models
###### Abstract
Recent advancements in text-to-image (T2I) and vision-language-to-image (VL2I) generation have made significant strides. However, the generation from generalized vision-language inputs, especially involving multiple images, remains under-explored. This paper presents Kosmos-G, a model that leverages the advanced perception capabilities of Multimodal Large Language Models (MLLMs) to tackle the aforementioned challenge. Our approach aligns the output space of MLLM with CLIP using the textual modality as an anchor and performs compositional instruction tuning on curated data. Kosmos-G demonstrates a unique capability of zero-shot multi-entity subject-driven generation. Notably, the score distillation instruction tuning requires no modifications to the image decoder. This allows for a seamless substitution of CLIP and effortless integration with a myriad of U-Net techniques ranging from fine-grained controls to personalized image decoder variants. We posit Kosmos-G as an initial attempt towards the goal of "image as a foreign language in image generation." The code can be found at [https://aka.ms/Kosmos-G](https://aka.ms/Kosmos-G).
## 1 Introduction
Recent advancements in text-to-image (T2I) generation, particularly with diffusion models, have shown remarkable progress in producing highly photorealistic, accurate, and varied images from textual descriptions. Building on this success, numerous studies have delved into more sophisticated vision-language-to-image (VL2I) generation techniques. Methods such as DreamBooth [11] and SuTI [12] emphasize subject-driven generation, where they use both subject images and textual descriptions as inputs to render the subject in a newly described context. On the other hand, image editing models like InstructPix2Pix [1] accept original images and editing instructions to produce modified images as outputs. However, how to generate images from generalized vision-language inputs remains under-explored.
Many studies have been undertaken to accomplish this objective. Notably, Re-Imagen [13], Prompt Diffusion [14], and SuTI [12] inject image features into the U-Net of diffusion models. These models integrate images and textual guidance to address specific VL2I tasks. Specifically, Re-Imagen focuses on retrieve-augmented image generation, Prompt Diffusion emphasizes subject-driven generation, and SuTI specializes in in-context generation. However, such injection methods segregate the guidance for text and images, thereby limiting the effectiveness of joint modeling between the two modalities. Additionally, this approach is challenging to extend to scenarios involving multiple entities.
Multimodal Large Language Models (MLLMs) [15, 16, 17, 18, 19, 20, 21, 22] have significantly expanded the capabilities of language models, allowing them to process diverse modalities such as images. This multimodal perception empowers LLMs to undertake tasks previously deemed impossible, including document intelligence and understanding graphical user interfaces. Recent research has utilized MLLMs for Vision-Language-to-Image (VL2I) tasks. This approach presents several advantages: 1) It capitalizes on the inherent vision-language alignment within the MLLM. 2) The MLLM architecture naturally supports interleaved vision-language input, accommodating multiple images. One of the pioneering works in this domain is M-VADER [23], which achieves semantic alignment between the MLLM and the diffusion image decoder by training on image-caption pairs. GILL [14], Emu [26], and DreamLLM [18] focus on interleaved vision-language generation. They effectively align the output space of the MLLM with the diffusion image decoder through CLIP supervision or pre-training on multimodal corpora. However, this alignment predominantly remains at the semantic level, meaning these methods may not be good at detailed, subject-driven image generation. BLIP-Diffusion [11] learns object representations by synthesizing images through the composition of subjects with random backgrounds. This approach effectively endows it with a zero-shot, subject-driven text-to-image generation capability. However, the specific design of its input template and training data restricts its scalability to multiple entities.
To support generalized vision-language inputs across multiple entities, we present Kosmos-G, which leverages the multimodal perception of MLLMs and follows an "align before instruct" training recipe. Specifically, we start
from the multimodal language modeling stage, leading to the Kosmos-1 [11] MLLM. It envisions language models as a universal task layer, perceiving free-form interleaved vision-language inputs and consolidating various task predictions into textual formats. Given the aligned vision-language representation, we then use the language modality as an anchor and align the output space of the MLLM with the CLIP text encoder. Finally, we perform instruction tuning on the curated data. Kosmos-G accepts captions as input, where each entity is followed by its segmented image. The model is trained to faithfully reproduce all entities, render the text content, and follow the instructions. In this process, the frozen pre-trained diffusion image decoder serves as a score metric. We distill the learned data distribution to pass the differentiable gradient to the MLLM. This enables Kosmos-G to harness rich features from the image encoder to generate images faithfully reproducing the contents across various contexts (see Figure 1).
Benefiting from general-purpose pre-training, Kosmos-G approaches the objective of "image as a foreign language in image generation." This means Kosmos-G can capture novel concepts from input images and guide personalized creations in a zero-shot setting. Notably, Kosmos-G also stands as the first model to master zero-shot multi-entity subject-driven generation. Owing to the score distillation instruction tuning, Kosmos-G does not need to modify any parameters of the image decoder, i.e., the diffusion U-Net and VAEs. This makes it possible for us to seamlessly substitute CLIP with Kosmos-G in any image generation system. As a result, a plethora of applications can be unlocked in conjunction with U-Net techniques, ranging from fine-grained controls like ControlNet [13] to personalized or stylized image decoder variants like the many community-contributed LoRA [14] checkpoints.
Overall, we propose Kosmos-G as an initial attempt towards the objective of "image as a foreign language in image generation." We summarize our main contributions as follows:
1. We align the output space of the MLLM with CLIP using the text modality as an anchor, efficiently leveraging the multimodal perception of MLLMs for image generation.
2. We propose a compositional instruction tuning task, leading to amazing zero-shot multi-entity subject-driven generation capability.
3. Score distillation instruction tuning allows Kosmos-G to seamlessly interface with a spectrum of U-Net techniques, indicating broad applicability and potential for integration into various frameworks.
Figure 2: Kosmos-G comprises an MLLM for multimodal perception, coupled with an AlignerNet that bridges the MLLM to the diffusion U-Net image decoder. Kosmos-G can pass the fine concept-level guidance from interleaved input to image decoder, and offer a seamless alternative to CLIP. Orange denotes the trainable modules; Blue denotes the frozen ones.
## 2 Kosmos-G: Image as a Foreign Language in Image Generation
As shown in Figure 2, Kosmos-G is a model that can perceive general modalities, follow instructions, and generate image conditions. Specifically, the backbone of the Kosmos-G MLLM is a Transformer-based causal language model, serving as a general-purpose interface to multimodal input. We train Kosmos-G following an "align before instruct" manner; the entire training pipeline can be divided into three stages:
1. **Multimodal Language Modeling**: We pre-train the MLLM from scratch on multimodal corpora, including monomodal data, cross-modal paired data, and interleaved multimodal data with language modeling loss following Kosmos-1.
2. **Image Decoder Aligning**: We use the U-Net [14] of Stable Diffusion v1.5 [15] as our image decoder. We train an AlignerNet on textual data only to align the output space of Kosmos-G with the U-Net's input space through CLIP supervision. Here, the language acts as the anchoring modality, ensuring that image input is also compatible with the image decoder.
3. **Instruction Tuning**: We further fine-tune Kosmos-G through a compositional generation task on curated data, with the differentiable gradient passed from the frozen U-Net.
In Stage 1, only the MLLM is trained. In Stage 2, AlignerNet is trained with MLLM frozen. During Stage 3, both AlignerNet and MLLM are jointly trained. The image decoder remains frozen throughout all stages.
### Multimodal Language Modeling
Following Kosmos-1, Kosmos-G perceives general modalities in a unified way. To achieve this, we represent the input format as a single sequence using special tokens. Specifically, we use the tokens <s> and </s> to denote start- and end-of-sequence. We also incorporate <image> and </image> tokens to indicate the start and end of any embedded image representations within the sequence.
Our methodology involves encoding both text tokens and images into vectors, which are then fed into the decoder. For text tokens, we use a lookup table to map them into embeddings. To handle the input images, we employ a vision Transformer [16] as the embedding module. Furthermore, Resampler [17] is used as an attentive pooling mechanism to reduce the number of image embeddings. After obtaining the embeddings of an input sequence, we feed them into the Transformer-based decoder. The left-to-right causal decoder processes the sequence in an auto-regressive manner. A \(\mathrm{softmax}\) classifier on the Transformer is used to assign probabilities to each token in the vocabulary.
Kosmos-G is first trained using the next-token prediction task. The training objective is to maximize the log-likelihood of tokens in examples. It's important to note that the training loss only takes into account discrete tokens, specifically text tokens. The MLLM component has 24 layers with 2,048 hidden dimensions, 8,192 FFN intermediate size, and 32 attention heads. For faster convergence, the image representation is obtained from a pre-trained CLIP ViT-L/14 model with 1,024 feature dimensions. The images are preprocessed into 224\(\times\)224 resolution during training. We freeze the parameters of the CLIP model except for the last layer during training. The total number of parameters of the MLLM is about 1.6B.
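To make the interleaved input format concrete, the following sketch (our illustration; the stand-in embedding functions, and the choice of 4 pooled embeddings per image, are assumptions rather than the released Kosmos-G code) shows how a caption with embedded images could be turned into one mixed sequence of text-token embeddings and image embeddings for the causal decoder.

```python
import numpy as np

D = 2048                                    # MLLM hidden size quoted in the paper

def embed_text(tokens):
    # Stand-in for the learned token-embedding lookup table.
    return np.zeros((len(tokens), D))

def embed_image(image):
    # Stand-in for CLIP ViT-L/14 features pooled by the Resampler
    # into a small, fixed number of image embeddings (4 is an assumption).
    return np.zeros((4, D))

def build_sequence(segments):
    """segments: list of ("text", str) or ("image", obj) pieces, in order."""
    parts = [embed_text(["<s>"])]
    for kind, payload in segments:
        if kind == "text":
            parts.append(embed_text(payload.split()))
        else:                               # wrap image embeddings in special tokens
            parts.append(embed_text(["<image>"]))
            parts.append(embed_image(payload))
            parts.append(embed_text(["</image>"]))
    parts.append(embed_text(["</s>"]))
    return np.concatenate(parts, axis=0)    # one sequence for the causal decoder

seq = build_sequence([("text", "A cat"), ("image", None),
                      ("text", "sleeping in the garden")])
print(seq.shape)                            # (14, 2048) for this example
```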
### Image Decoder Aligning
After undertaking multimodal language modeling, we have successfully aligned vision and language perception within the MLLM. To make Kosmos-G capable of image generation, we incorporate diffusion models [23] as our image decoder. Specifically, we adopt the widely accepted Stable Diffusion v1.5 [15]. It's important to note that we only replace the CLIP text encoder [15] with the multimodal Kosmos-G, without making any modifications to the U-Net architecture or weights. This setup allows Kosmos-G to effectively collaborate with techniques applied to the U-Net, like ControlNet [18] and various community LoRA [19] variants. In this section, we will provide brief preliminaries of latent diffusion models, and then delve into the process of aligning the output space of Kosmos-G with the image decoder after the aforementioned replacement.
**Preliminaries of Latent Diffusion Models.** Diffusion models define a Markov chain of the forward diffusion process \(q\), adding Gaussian noise samples to the initial real data \(\mathbf{z}_{0}\sim q(\mathbf{z})\) over \(T\) steps. Here, \(\mathbf{z}\) denotes latent representations rather than pixel values. The efficient, low-dimensional latent space is approximately perceptually equivalent to the high-dimensional RGB space, while the redundant, semantically meaningless information present in the pixel domain is eliminated. Perceptual compression models (i.e., VQ-VAE), consisting of \(\mathcal{E}\) and \(\mathcal{D}\), encode the real data into the latent space and back, such that \(\mathcal{D}(\mathcal{E}(\mathbf{x}))\approx\mathbf{x}\). Latent diffusion models use latent representations \(\mathbf{z}=\mathcal{E}(\mathbf{x})\) instead of working directly with pixel values during the diffusion process. The final output can be decoded back to pixel space via \(\mathcal{D}(\mathbf{z})\). The separate, mild perceptual-compression stage only eliminates imperceptible details, leading to competitive generation results at a much lower cost. The forward process \(q(\mathbf{z}_{t}|\mathbf{z}_{t-1})\) at each time step \(t\) can be expressed as follows:
\[\begin{split} q(\mathbf{z}_{t}|\mathbf{z}_{t-1})& =\mathcal{N}(\mathbf{z}_{t};\sqrt{1-\beta_{t}}\mathbf{z}_{t-1}, \beta_{t}\mathbf{I})\\ q(\mathbf{z}_{1:T}|\mathbf{z}_{0})&=\prod_{t=1}^ {T}q(\mathbf{z}_{t}|\mathbf{z}_{t-1})\end{split} \tag{1}\]
in which \(\beta_{t}\in(0,1)\) denotes the step size. Note \(\beta_{t-1}<\beta_{t}\).
Diffusion models learn a U-Net [14] denoted as \(\boldsymbol{\epsilon}_{\theta}\) to reverse the forward diffusion process, constructing desired data samples from the noise. Let \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\). We can reparameterize the denoising process \(p(\mathbf{z}_{t-1}|\mathbf{z}_{t})\) also as a Gaussian distribution. This distribution can be estimated by \(\boldsymbol{\epsilon}_{\theta}\) and takes the following form:
\[\begin{split} p_{\theta}(\mathbf{z}_{t-1}|\mathbf{z}_{t})& =\mathcal{N}(\mathbf{z}_{t-1};\boldsymbol{\mu}_{\theta}(\mathbf{z }_{t},t),\boldsymbol{\Sigma}_{\theta}(\mathbf{z}_{t},t))\\ \text{with}\quad\boldsymbol{\mu}_{\theta}(\mathbf{z}_{t},t)& =\frac{1}{\sqrt{\alpha_{t}}}(\mathbf{z}_{t}-\frac{\beta_{t}}{ \sqrt{1-\bar{\alpha_{t}}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{z}_{t},t)) \end{split} \tag{2}\]
The learning objective of diffusion models is to approximate the mean \(\boldsymbol{\mu}_{\theta}(\mathbf{z}_{t},t)\) in the reverse diffusion process. To achieve this, we can utilize the variational lower bound (ELBO) [13] to minimize the negative log-likelihood of \(p_{\theta}(\mathbf{z}_{0})\)[12]. The simplified objective can be expressed as a denoising objective:
\[\mathcal{L}_{diff}=\mathbb{E}_{\mathbf{z}_{0},\boldsymbol{\epsilon}\sim \mathcal{N}(0,1),t}\Big{[}\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{ \theta}(\mathbf{z}_{t},t)\|^{2}\Big{]} \tag{3}\]
During inference, [10] proposes to use classifier-free guidance to obtain more relevant generation results.
\[\hat{\boldsymbol{\epsilon}}=w\cdot\boldsymbol{\epsilon}_{\theta}(\mathbf{z}_{ t},\varphi,t)-(w-1)\cdot\boldsymbol{\epsilon}_{\theta}(\mathbf{z}_{t},t) \tag{4}\]
where \(w\) is guidance scale, \(\varphi\) denotes the condition.
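A minimal sketch of the simplified denoising objective (Eq. 3) and classifier-free guidance (Eq. 4) is given below; the toy `eps_theta` stands in for the U-Net noise predictor \(\boldsymbol{\epsilon}_{\theta}\), and the schedule values are illustrative assumptions, not the Stable Diffusion implementation.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)                  # noise schedule beta_t (assumed)
alphas_bar = np.cumprod(1.0 - betas)                # \bar{alpha}_t

def q_sample(z0, t, eps):
    """Forward process: z_t = sqrt(abar_t) z_0 + sqrt(1 - abar_t) eps."""
    return np.sqrt(alphas_bar[t]) * z0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def eps_theta(z_t, t, cond=None):
    """Toy stand-in for the U-Net noise predictor, conditioned on `cond`."""
    return z_t * 0.0                                # placeholder prediction

def denoising_loss(z0, cond, rng):
    """Simplified objective of Eq. (3)."""
    t = rng.integers(0, T)
    eps = rng.standard_normal(z0.shape)
    z_t = q_sample(z0, t, eps)
    return np.mean((eps - eps_theta(z_t, t, cond)) ** 2)

def cfg_eps(z_t, t, cond, w=7.5):
    """Classifier-free guidance of Eq. (4)."""
    return w * eps_theta(z_t, t, cond) - (w - 1.0) * eps_theta(z_t, t, None)

rng = np.random.default_rng(0)
z0 = rng.standard_normal((4, 64, 64))               # a latent, not raw pixels
print(denoising_loss(z0, cond=None, rng=rng))
```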
**Align Output Space with Diffusion Models.** Upon replacing the previous CLIP text encoder with Kosmos-G, the main focus is to address the misalignment between Kosmos-G and the image decoder. We discovered that simply fine-tuning Kosmos-G using the gradient passed from the image decoder results in both trivial alignment and compromised image quality.
Inspired by [11], we propose the AlignerNet, consisting of an encoder \(\mathcal{M}\) and a decoder \(\mathcal{N}\), to learn the alignment between the Kosmos-G source space \(\mathbf{S}\) and the CLIP-text-encoder target space \(\mathbf{T}\). Given a single text-only caption \(\mathbf{C}\), the Kosmos-G source encoder and the CLIP text target encoder encode the caption into embeddings denoted as \(\mathbf{s}\in\mathbb{R}^{l_{s}\times d_{s}}\) and \(\mathbf{t}\in\mathbb{R}^{l_{t}\times d_{t}}\), respectively. Here, \(l\) and \(d\) denote the length of the features and the embedding dimensions.
As shown in Figure 2(a), we employ the encoder \(\mathcal{M}\) to minimize the distance between the text source embedding and the target embedding, aiming for a close approximation \(\mathcal{M}(\mathbf{s})\approx\mathbf{t}\) through:
\[\mathcal{L}_{mse}=\mathbb{E}_{\mathbf{s}\sim\mathbf{S},\,\mathbf{t}\sim\mathbf{T}}\Big[\|\mathbf{t}-\mathcal{M}(\mathbf{s})\|_{2}^{2}\Big] \tag{5}\]
To mitigate the reduction in feature discrimination, we also employ a decoder \(\mathcal{N}\) to reconstruct the source embedding \(\mathcal{N}(\mathcal{M}(\mathbf{s}))\approx\mathbf{s}\) through:
\[\mathcal{L}_{rec}=\mathbb{E}_{\mathbf{s}\sim\mathbf{S}}\Big[\|\mathbf{s}-\mathcal{N}(\mathcal{M}(\mathbf{s}))\|_{2}^{2}\Big] \tag{6}\]
Different from [QYX\({}^{+}\)23], Kosmos-G is a vision-language multimodal encoder. The language modality serves as an anchor throughout the process, aligning the entire Kosmos-G space with the image decoder input space, thus also achieving semantic alignment for the image embeddings.
To efficiently process lengthy sequences consisting of multiple images and minimize memory usage, Kosmos-G encodes the interleaved vision-language input sequence into variable-length embeddings. However, the use of variable length embeddings makes the MLP-based GlueNet [QYX\({}^{+}\)23] unsuitable for learning alignment. To address this, we employ a Transformer-based architecture in AlignerNet, enabling it to effectively align the source and target spaces with mismatched sequence lengths and embedding dimensions.
As shown in Figure 2(b), both \(\mathcal{M}\) and \(\mathcal{N}\) share a similar architecture design, consisting of a Transformer encoder and a Transformer decoder. The Transformer encoder and decoder in both models comprise 12 layers, with an input dimension \(d=768\) and a hidden dimension of 3072. This configuration results in approximately 225M parameters in total. In the cross attention module of Transformer decoder, we use variable length learned latent queries \(\mathbf{Q}_{\mathcal{M}}\in\mathbb{R}^{l_{t}\times d}\) in \(\mathcal{M}\) and \(\mathbf{Q}_{\mathcal{N}}\in\mathbb{R}^{l_{s}\times d}\) in \(\mathcal{N}\) to match sequence length.
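Below is a compact sketch of how such an aligner and its two losses (Eqs. 5 and 6) could be written. It mirrors the description above (a Transformer encoder followed by learned latent queries in a Transformer decoder), but the layer count, the assumed Kosmos-G output length of 64, and the use of standard `torch.nn` modules are our assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Aligner(nn.Module):
    """Maps a variable-length source sequence to a fixed-length output sequence
    via a Transformer encoder plus learned latent queries (cross-attention)."""
    def __init__(self, d_model=768, n_layers=2, n_queries=77):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8,
                                               dim_feedforward=3072, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8,
                                               dim_feedforward=3072, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))

    def forward(self, x):                       # x: (batch, src_len, d_model)
        memory = self.encoder(x)
        q = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        return self.decoder(q, memory)          # (batch, n_queries, d_model)

# M maps MLLM outputs toward the CLIP space; N reconstructs the MLLM outputs.
M = Aligner(n_queries=77)     # l_t: CLIP text-encoder sequence length
N = Aligner(n_queries=64)     # l_s: assumed Kosmos-G output sequence length

s = torch.randn(2, 64, 768)   # toy source embeddings from the MLLM
t = torch.randn(2, 77, 768)   # toy target embeddings from the CLIP text encoder

loss = F.mse_loss(M(s), t) + F.mse_loss(N(M(s)), s)   # Eq. (5) + Eq. (6)
loss.backward()
```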
### Instruction Tuning
After achieving a semantic alignment between Kosmos-G and the image decoder, our model can successfully generate images following interleaved vision-language guidance. However, the multimodal language modeling and text-only alignment stages only preserve the semantic consistency between the input and the output; Kosmos-G still cannot leverage the rich features extracted from the image encoder to generate images that faithfully reproduce the contents in various contexts.
To pursue our objective of "image as a foreign language in image generation," we curate interleaved vision-language data and use the diffusion loss in Equation 3 to further fine-tune Kosmos-G. Specifically, we propose a compositional generation task in which we input captions containing entities, with each of them followed by their corresponding images, like "<s> _A cat_ <image> image embedding of the cat </image> _and a dog_ <image> image embedding of the dog </image> _sleeping in the garden_ <image> image embedding of the garden </image> </s>". Our model is trained to generate images following the input instruction.
To construct the requisite data, we first caption the image, then extract the entities from the caption, and obtain the segmentation results from the image itself. A detailed introduction of the entire pipeline can be found in Section 3.1. Additionally, we leverage the data constructed by [BHE23] for InstructPix2Pix to improve Kosmos-G's image editing capability. This data is structured as: "<s>
Figure 3: Overview of alignment.
_caption_ <image> embedding of the original image </image> _edit instruction_ </s>". We also mix some text-to-image data to preserve the language alignment already achieved.
Our goal is to leverage MLLMs to model image distributions through direct latent space sampling. In this setup, the pre-trained frozen Stable Diffusion U-Net serves as a score metric, distilling the learned data distribution. This strategy is similar to Score Distillation Sampling [14]. From the perspective of score distillation, the KL divergence between Kosmos-G and the score function is equivalently minimized for distilling learned probability density in the image decoder. This enables Kosmos-G to leverage rich features from the image encoder to generate an image faithfully reproducing the contents across various contexts.
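The instruction-tuning step can be pictured as the following training loop (a schematic sketch under our assumptions: the linear layers stand in for the Kosmos-G MLLM plus AlignerNet and for the frozen SD v1.5 U-Net, and the shapes are toy values, not the actual training code), in which the frozen U-Net supplies the denoising loss while only the MLLM side receives gradients.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the trainable MLLM + AlignerNet and the frozen score network.
mllm_with_aligner = nn.Linear(768, 768)                    # trainable conditioning path
frozen_unet = nn.Linear(768, 768).requires_grad_(False)    # frozen image decoder

def unet_eps(z_t, t, cond):
    # Toy noise prediction that depends on the conditioning, so the gradient
    # flows back into mllm_with_aligner while the U-Net itself stays frozen.
    return frozen_unet(z_t) + cond.mean(dim=1, keepdim=True).expand_as(z_t)

opt = torch.optim.AdamW(mllm_with_aligner.parameters(), lr=1e-3)

seq_emb = torch.randn(2, 16, 768)        # interleaved vision-language input embeddings
z0 = torch.randn(2, 16, 768)             # latent of the target image (toy shape)

abar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, 1000), dim=0)
t = torch.randint(0, 1000, (1,)).item()
eps = torch.randn_like(z0)
z_t = abar[t].sqrt() * z0 + (1 - abar[t]).sqrt() * eps     # forward-noised latent

cond = mllm_with_aligner(seq_emb)                           # conditioning from Kosmos-G
loss = ((eps - unet_eps(z_t, t, cond)) ** 2).mean()         # frozen U-Net as score metric
loss.backward()                                             # gradients reach only the MLLM side
opt.step(); opt.zero_grad()
```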
## 3 Model Training
### Multimodal Training Data
The multimodal language modeling stage in Section 2.1 uses the same setting as Kosmos-1 [13], where the models are trained on web-scale multimodal corpora consisting of text corpora, image-caption pairs, and interleaved data of images and texts. For the image decoder aligning stage in Section 2.2, we only use the captions from the image-caption pairs. For the instruction tuning stage in Section 2.3, we use constructed data from the Open Images V7 dataset [15], the image-caption pairs, as well as the image editing data from InstructPix2Pix [1].
**Captions.** The image-caption pairs are sourced from multiple datasets, including English LAION-2B [20], LAION-400M [21], COYO-700M [16], and Conceptual Captions [22, 23]. English LAION-2B, LAION-400M, and COYO-700M are collected from Common Crawl web data by extracting images and the corresponding alt-texts. Conceptual Captions are also derived from web pages.
**Constructed Data.** We use approximately 9M images from the Open Images V7 dataset [15] to construct our compositional generation instruction tuning data. As illustrated in Figure 4, we begin by generating captions with BLIP-2-OPT-6.7b [12]. Subsequently, we employ the LLM MPT-7B-Instruct [14] to extract entities from the captions. The original image, along with the text of each entity, is then input into the text-prompted segmentation model CLIPSeg [1] to derive the corresponding image of each entity.
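The data-construction pipeline can be summarized by the following sketch; `blip2_caption`, `extract_entities`, and `clipseg_segment` are hypothetical wrapper functions for the models named above (BLIP-2, MPT-7B-Instruct, CLIPSeg), not real library calls, and the example strings are placeholders.

```python
def blip2_caption(image):
    """Hypothetical wrapper around BLIP-2-OPT-6.7b captioning."""
    return "a cat and a dog sleeping in the garden"

def extract_entities(caption):
    """Hypothetical wrapper around MPT-7B-Instruct entity extraction."""
    return ["a cat", "a dog", "the garden"]

def clipseg_segment(image, text):
    """Hypothetical wrapper around CLIPSeg text-prompted segmentation."""
    return {"entity": text, "segment": None}    # would return a masked crop

def build_example(image):
    """One compositional instruction-tuning example: the caption interleaved with
    the segmented image of each entity, with the original image as the target."""
    caption = blip2_caption(image)
    segments = [clipseg_segment(image, e) for e in extract_entities(caption)]
    return {"caption": caption, "entity_segments": segments, "target": image}

print(build_example(image=None)["caption"])
```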
### Training Setup
Our implementation is based on the TorchScale [17] library, which is designed for large-scale model training. Following Kosmos-1 [13], we also use Magneto [17], a Transformer variant, as the backbone architecture of our MLLM and AlignerNet. The whole training process took around four days with 256 NVIDIA V100 GPUs, i.e., one day for image decoder aligning, and three days for instruction tuning. In the instruction tuning stage, we use a blend of constructed data, InstructPix2Pix data, and caption data in a ratio of 2:2:1. For constructed data, to enhance input robustness, we randomly drop the texts of entities with a probability of 0.5 and also maintain the background of the segmented entities with a 0.5 probability.
Figure 4: Overview of our data construction pipeline for compositional generation instruction tuning.
**Multimodal Language Modeling.** We use a batch size of 1.2 million tokens, which is broken down as follows: 0.5 million tokens sourced from text corpora, 0.5 million tokens derived from image-caption pairs, and 0.2 million tokens from interleaved data sets. The MLLM is trained for 300,000 steps, corresponding to about 360 billion tokens in total. We adopt the AdamW optimizer with \(\beta=(0.9,0.98)\). Furthermore, we configure the weight decay at 0.01 and the dropout rate at 0.1. The learning rate is set to escalate to 2e-4 during the initial 375 warm-up steps and decay linearly to 0 for the rest of the training steps. For optimization stability, we use the initialization scheme of Magneto. We use SentencePiece [14] to tokenize the text. We preprocess the data in the "full-sentence" format [15], where each input sequence is populated with complete sentences sampled consecutively from one or multiple documents.
**Image Decoder Aligning.** The AlignerNet undergoes training using a batch size of 3,584 sentences for 300,000 steps, with a maximum learning rate of 1e-3. This equates to approximately 1 billion sentences overall. The remaining configurations remain consistent with the previous stage.
**Instruction Tuning.** The MLLM and AlignerNet are jointly trained with a batch size of 1,024 images, totaling approximately 200 million images over 200,000 steps. The learning rate peaks at 1e-3. The rest of the settings are the same as in the previous stage.
## 4 Evaluation
### Main Qualitative Results
As shown in Figure 5, Kosmos-G delivers impressive zero-shot generation results across diverse settings, yielding meaningful and coherent outputs even for highly customized subjects. The visual samples showcase generative capabilities in re-contextualization, stylization, modification, and accessory incorporation. Notably, multi-entity VL2I is very challenging even for fine-tuning methods like DreamBooth [16]. Owing to the novel compositional generation instruction tuning, Kosmos-G is the first model capable of achieving this in a zero-shot setting.
Figure 5: Zero-shot image generation examples with multimodal prompts.
### Quantitative Results
We do quantitative evaluations of Kosmos-G on DreamBench [11] for single-entity subject-driven generation and MS-COCO [12] for text-to-image generation.
The DreamBench dataset contains 30 subjects and features 25 prompt templates, resulting in 750 unique prompts covering skills like re-contextualization, modification, accessorization, etc. We follow prior work and generate 4 images for each prompt, forming 3,000 images for a comprehensive evaluation. We follow DreamBooth in adopting DINO and CLIP-I to evaluate the subject fidelity, and CLIP-T to evaluate the text fidelity. We use a classifier-free guidance scale of 7.5 and 100 DPM-Solver [11] inference steps for sampling. As shown in Table 1, zero-shot Kosmos-G outperforms Textual Inversion and Re-Imagen and exhibits marginally better performance than DreamBooth and BLIP-Diffusion with only a single image input. Furthermore, our results are also comparable with SuTI, without requiring expensive apprenticeship learning supervision. Since Kosmos-G accepts only a single image as input, we select a clear image from the 4-7 provided images for each subject to avoid occlusion. We slightly modify the prompt template to ensure better alignment with the instruction tuning data. The images and prompt used can be found in Appendix A.
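For reference, these subject- and text-fidelity metrics reduce to average cosine similarities between embeddings; a schematic version is shown below, with `dino_embed`, `clip_image_embed`, and `clip_text_embed` as hypothetical feature extractors, since the exact evaluation code is not reproduced here.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dino_embed(image):        # hypothetical DINO ViT feature extractor
    return np.ones(384)

def clip_image_embed(image):  # hypothetical CLIP image encoder
    return np.ones(512)

def clip_text_embed(text):    # hypothetical CLIP text encoder
    return np.ones(512)

def dreambench_scores(generated, references, prompt):
    """DINO / CLIP-I: similarity of a generated image to the real subject images;
    CLIP-T: similarity of the generated image to the text prompt."""
    dino = np.mean([cosine(dino_embed(generated), dino_embed(r)) for r in references])
    clip_i = np.mean([cosine(clip_image_embed(generated), clip_image_embed(r))
                      for r in references])
    clip_t = cosine(clip_image_embed(generated), clip_text_embed(prompt))
    return dino, clip_i, clip_t

print(dreambench_scores(None, [None, None], "a cat in the garden"))
```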
For the text-to-image generation, we generate images using 30,000 randomly sampled captions from the MS-COCO (2014) validation set. We use a classifier-free guidance scale of 3.0 and 250 DDIM [12] inference steps for sampling. As shown in Table 1, Kosmos-G surpasses other CLIP-aligned VL2I models, delivering the best alignment results.
### Ablation Studies
We conduct ablation studies to find out the importance of the image decoder aligning and instruction tuning. Table 2 demonstrates that direct end-to-end fine-tuning fails to generate meaningful images. Incorporating AlignerNet and CLIP supervision, however, results in outcomes very close to the original SD v1.5. We also compared the generation results from Kosmos-G before instruction tuning and the standard SD v1.5 against our final model. As illustrated in Figure 6, without instruction tuning, Kosmos-G can only generate contents semantically aligned with the vision-language input. SD baseline also remains at the semantic level and fails to faithfully reproduce the entities in the generated images.
### Applications
As highlighted in Section 2.3, Kosmos-G can seamlessly replace CLIP in any image generation system. This remarkable property unlocks a myriad of brand-new applications that have never been possible before. We demonstrate its integration with ControlNet [13] and LoRA
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Methods** & **DINO\(\uparrow\)** & **CLIP-I\(\uparrow\)** & **CLIP-T\(\uparrow\)** \\ \hline Real Images (Oracle) & 0.774 & 0.885 & - \\ \hline \multicolumn{4}{c}{_Fine-Tuning_} \\ \hline Textual Inversion [1] & 0.569 & 0.780 & 0.255 \\ DreamBooth [11] & 0.668 & 0.803 & 0.305 \\ BLIP-Diffusion [11] & 0.670 & 0.805 & 0.302 \\ \hline \multicolumn{4}{c}{_Test Time Tuning Free_} \\ \hline Re-Imagen\({}^{\star}\)[1] & 0.600 & 0.740 & 0.270 \\ SuTI [11] & 0.741 & 0.819 & 0.304 \\ BLIP-Diffusion\({}^{\star}\)[11] & 0.594 & 0.779 & 0.300 \\ Kosmos-G\({}^{\star}\) (single image input) & 0.694 & 0.847 & 0.287 \\ \hline \hline \end{tabular}
\begin{tabular}{l c} \hline \hline
**Methods** & **FID\(\downarrow\)** \\ \hline \multicolumn{2}{c}{_T2I Models_} \\ \hline GLIDE [12] & 12.24 \\ Make-A-Scene [1] & 11.84 \\ DALL-E 2 [12] & 10.39 \\ SD v1.5\({}^{\dagger}\)[11] & 9.34 \\ Imagen-3.4B [2] & 7.27 \\ \hline \hline \multicolumn{2}{c}{_CLIP-Aligned VL2I Models_} \\ \hline GLIL-8B [13] & 12.20 \\ Emu-14B [13] & 11.66 \\ Kosmos-G-1.9B & 10.99 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Left**: Quantitative comparisons on DreamBench. \({}^{\star}\) denotes zero-shot methods. **Right**: Zero-shot FID comparisons on MS-COCO. \({}^{\dagger}\) indicates results evaluated by us under same settings and seed with Kosmos-G.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Methods** & **FID\(\downarrow\)** \\ \hline SD v1.5 & 9.34 \\ \hline E2E Fine-Tuning & Failed \\
12-Layers AlignerNet & 9.89 \\
24-Layers AlignerNet & 9.55 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study results for image decoder aligning on MS-COCO.
variants [11] in Figure 7. Kosmos-G works perfectly with these techniques. Building on the CLIP space, we believe our model will push forward the transition from text-conditioned generation toward vision-language generation, paving the way for numerous novel applications.
## 5 Conclusion
We propose Kosmos-G, a model capable of high-fidelity zero-shot image generation from generalized vision-language input that spans multiple images. Our approach hinges on a unique "align before instruct" pre-training strategy. Kosmos-G demonstrates competitive single-entity subject-driven image generation and text-to-image capability; it also stands as the first model to extend zero-shot subject-driven image generation to multi-entity scenarios. Furthermore, Kosmos-G allows seamless replacement of CLIP, unlocking various new applications in conjunction with other U-Net techniques such as ControlNet and LoRA. In general, we present Kosmos-G as a preliminary effort toward the goal of "image as a foreign language in image generation."
|
2307.10959
|
Vector Fields and Flows on Subcartesian Spaces
|
This paper is part of a series of papers on differential geometry of
$C^\infty$-ringed spaces. In this paper, we study vector fields and their flows
on a class of singular spaces. Our class includes arbitrary subspaces of
manifolds, as well as symplectic and contact quotients by actions of compact
Lie groups. We show that derivations of the $C^\infty$-ring of global smooth
functions integrate to smooth flows.
|
Yael Karshon, Eugene Lerman
|
2023-07-20T15:31:32Z
|
http://arxiv.org/abs/2307.10959v2
|
# Vector fields and flows on subcartesian spaces
###### Abstract.
This paper is part of a series of papers on differential geometry of \(C^{\infty}\)-ringed spaces. In this paper, we study vector fields and their flows on a class of singular spaces. Our class includes arbitrary subspaces of manifolds, as well as symplectic and contact quotients by actions of compact Lie groups. We show that vector fields (defined as derivations of the \(C^{\infty}\)-ring of global smooth functions) integrate to smooth flows.
###### Contents
* 1 Introduction
* 2 Differential spaces
* 3 Derivations and their flows
* A \(\mathbb{R}\)-algebra and \(C^{\infty}\)-ring derivations of differential structures
## 1. Introduction
This paper is one in a series of papers on differential geometry of \(C^{\infty}\)-ringed spaces. Two other papers in the series are [9] and [10].
Singular spaces, that is, spaces that are not manifolds, arise naturally in differential geometry and in its applications to physics and engineering. There are many approaches to differential geometry on singular spaces, and there is a vast literature which we will not attempt to survey. In this paper we use differential spaces in the sense of Sikorski [15] as our model of singular spaces.
Sniatycki's book [18] contains a number of geometric tools that apply to differential spaces. Sniatycki is particularly interested in stratified spaces that arise through symplectic reduction (see [17]); he provides a new perspective by viewing these spaces as differential spaces. In this paper, we respond to, and elaborate on, Sniatycki's treatment of vector fields and flows on differential spaces.
The main result of this paper is Theorem 3.18, which roughly says the following:
**Theorem.** Let \(M\) be a differential space embeddable in some Euclidean space \(\mathbb{R}^{N}\) and \(v\) a vector field on \(M\). Assemble the maximal integral curves of \(v\) into a flow \(\Phi:\mathcal{W}\to M\), where \(\mathcal{W}\) is a subset \(M\times\mathbb{R}\). Then the flow \(\Phi:\mathcal{W}\to M\) is smooth.
Thanks to an analogue of the Whitney embedding theorem for differential spaces, embeddability in a Euclidean space is a fairly mild assumption on a differential space that is _locally_ embeddable in a Euclidean space, i.e., on a subcartesian space (see Definition 2.39). See [1], [11] or [5] for various versions of the Whitney embedding theorem for subcartesian spaces. On the other hand, if a differential space is not subcartesian then the flow of a vector field may not exist at all -- see Example 2 in Section 32.12 of Kriegl and Michor's book [7, p. 330].
Theorem 3.18 relies on the existence and uniqueness of maximal integral curves. A few years ago Sniatycki gave a proof of existence and uniqueness of integral curves of vector fields on arbitrary
subcartesian differential spaces (see Theorem 3.2.1 of his book [18]). In a later paper [3], Cushman and Sniatycki have a similar theorem, Theorem 5.3, and they say that "Theorem 5.3 replaces [5, theorem 3.2.1], which is incorrect" (their [5] is our [18]). However, it seems to us that there is nothing wrong with Sniatycki's Theorem 3.2.1, certainly not with its statement. To make sure, we provide a self-contained proof of existence and uniqueness of integral curves, under the mild assumptions that imply embeddability -- see Corollary 3.28.
In a later paper [6], we remove the mild assumptions that imply embeddability. These assumptions are indeed mild: "reasonable" subcartesian spaces are embeddable. And removing these assumptions has a price; the proof becomes more involved: for embeddable spaces, we can rely on the integration of vector fields on open subsets of Euclidean spaces; for not-necessarily-embeddable spaces, we need to imitate the proof of integration of vector fields on manifolds.
### Organization of the paper
In Section 2 we recall the definition and some properties of differential spaces. This material is standard. One novelty is that we explicitly mention \(C^{\infty}\)-rings. In Section 3 we prove the existence and uniqueness of integral curves of derivations on embeddable subcartesian spaces and use this result to prove the main theorem of the paper. In Appendix A we provide a proof of a special case of a theorem of Yamashita [19, Theorem 3.1]. Namely we prove that any \(\mathbb{R}\)-algebra derivation of a point-determined \(C^{\infty}\)-ring is automatically a \(C^{\infty}\)-ring derivation. This fact is used in our proof of the existence and uniqueness of integral curves of derivations.
### Acknowledgements
We thank Jordan Watts and Rui Fernandes for their help. Y.K.'s research is partly funded by the Natural Science and Engineering Research Council of Canada and by the United States - Israel Binational Science Foundation. E.L.'s research is partially supported by the Air Force Office of Scientific Research under award number FA9550-23-1-0337.
### Assumptions
Throughout the paper, "manifold" means "smooth (i.e., \(C^{\infty}\)) manifold". All manifolds are assumed to be second countable and Hausdorff.
## 2. Differential spaces
In this section we recall the definition and some properties of differential spaces in the sense of Sikorski. It will be convenient to recall the notion of a \(C^{\infty}\)-ring first. The definition below is not standard, but it is easier to understand on the first pass. It is equivalent to Lawvere's original definition; see [4].
**Definition 2.1**.: A \(C^{\infty}\)-ring is a set \(\mathscr{C}\), equipped with operations
\[g_{\mathscr{C}}:\mathscr{C}^{m}\to\mathscr{C}\]
for all \(m\in\mathbb{Z}_{\geq 0}\) and all \(g\in C^{\infty}(\mathbb{R}^{m})\), such that the following holds.
* For all \(n,m\in\mathbb{Z}_{\geq 0}\), all \(f_{1},\dots,f_{m}\in C^{\infty}(\mathbb{R}^{n})\) and \(g\in C^{\infty}(\mathbb{R}^{m})\), (2.2) \[(g\circ(f_{1},\dots,f_{m}))_{\mathscr{C}}(c_{1},\dots,c_{n})=g_{\mathscr{C}}( (f_{1})_{\mathscr{C}}(c_{1},\dots,c_{n}),\dots,(f_{m})_{\mathscr{C}}(c_{1}, \dots,c_{n}))\] for all \((c_{1},\dots,c_{n})\in\mathscr{C}^{n}\).
* For every \(m>0\) and for every coordinate function \(x_{j}:\mathbb{R}^{m}\to\mathbb{R}\), \(1\leq j\leq m\), (2.3) \[(x_{j})_{\mathscr{C}}(c_{1},\dots,c_{m})=c_{j}.\]
If \(m=0\) then \(\mathscr{C}^{0}\) is a singleton \(\{*\}\). Similarly \(C^{\infty}(\mathbb{R}^{0})\simeq C^{\infty}(0)\simeq\mathbb{R}\). Thus \(0\)-ary operations on \(\mathscr{C}\) are maps \(g_{\mathscr{C}}:\{*\}\to\mathscr{C}\), one for every \(g\in\mathbb{R}\). Since any map \(h:\{*\}\to\mathscr{C}\) can be identified with
\(h(*)\in\mathscr{C}\), we identify the \(0\)-ary operation corresponding to \(g\in\mathbb{R}\) with an element of \(\mathscr{C}\), which we denote by \(g_{\mathscr{C}}\).
**Example 2.4**.: Let \(M\) be a \(C^{\infty}\)-manifold and \(C^{\infty}(M)\) the set of smooth (real-valued) functions. Then \(C^{\infty}(M)\), equipped with the usual composition operations
\[g_{C^{\infty}(M)}(a_{1},\dots,a_{m}):=g\circ(a_{1},\dots,a_{m}),\]
is a \(C^{\infty}\) ring.
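To see the composition operations in a computable form, here is a small illustrative sketch of our own (with sympy as an assumed dependency) that models elements of \(C^{\infty}(\mathbb{R})\) as symbolic expressions and checks axioms (2.2) and (2.3) on one instance; it is not part of the paper's development.

```python
# Illustrative sketch: C^infty(R) as a C^infty-ring, with the operation attached
# to g in C^infty(R^m) given by composition (substitution of elements for u_i).
import sympy as sp

x = sp.Symbol("x")
u1, u2 = sp.symbols("u1 u2")

def op(g, *elems):
    """g_C(a_1, ..., a_m) := g o (a_1, ..., a_m)."""
    us = sp.symbols(f"u1:{len(elems) + 1}")
    return g.subs(dict(zip(us, elems)), simultaneous=True)

a1, a2 = sp.sin(x), sp.exp(x)      # two elements of C^infty(R)
g = u1 * u2                        # g in C^infty(R^2)
f1, f2 = u1 + u2, u1 - u2          # f1, f2 in C^infty(R^2)

# Axiom (2.2): (g o (f1, f2))_C(a1, a2) == g_C(f1_C(a1, a2), f2_C(a1, a2)).
lhs = op(g.subs({u1: f1, u2: f2}, simultaneous=True), a1, a2)
rhs = op(g, op(f1, a1, a2), op(f2, a1, a2))
assert sp.simplify(lhs - rhs) == 0

# Axiom (2.3): coordinate functions act as projections.
assert op(u2, a1, a2) == a2
```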
**Example 2.5**.: Let \(M\) be a topological space and \(C^{0}(M)\) the set of continuous real-valued functions. Then \(C^{0}(M)\), equipped with the usual composition operations
\[g_{C^{0}(M)}(a_{1},\dots,a_{m}):=g\circ(a_{1},\dots,a_{m}),\]
is also a \(C^{\infty}\) ring.
**Definition 2.6**.: A nonempty subset \(\mathscr{C}\) of a \(C^{\infty}\)-ring \(\mathscr{A}\) is a \(C^{\infty}\)-subring if \(\mathscr{C}\) is closed under the operations of \(\mathscr{A}\).
**Example 2.7**.: If \(M\) is a manifold then \(C^{\infty}(M)\) is a \(C^{\infty}\)-subring of \(C^{0}(M)\).
We also need to recall the notion of an initial topology.
**Definition 2.8**.: Let \(X\) be a set and \(\mathscr{F}\) a set of maps from \(X\) to various topological spaces. The smallest topology on \(X\) making all functions in \(\mathscr{F}\) continuous is called initial.
In particular a collection of real-valued functions \(\mathscr{F}\) on a set \(X\) uniquely defines an initial topology on \(X\) (we give the real line \(\mathbb{R}\) the standard topology, of course).
Next we define differential spaces in the sense of Sikorski. The definition below agrees with the one in [18]. Some papers define differential spaces as ringed spaces; see [14] for example.
**Definition 2.9**.: A differential space (in the sense of Sikorski) is a pair \((M,\mathscr{F})\), where \(M\) is a topological space and \(\mathscr{F}\) is a (nonempty) set of real-valued functions on \(M\), subject to the following three conditions:
1. The topology on \(M\) is the smallest topology making every function in \(\mathscr{F}\) continuous, i.e., it is the initial topology defined by the set \(\mathscr{F}\).
2. For any nonnegative integer \(m\), any smooth function \(g\in C^{\infty}(\mathbb{R}^{m})\), and any \(m\)-tuple \(f_{1},\dots,f_{m}\in\mathscr{F}\), the composite \(g\circ(f_{1},\dots,f_{m})\) is in \(\mathscr{F}\).
3. Let \(g:M\to\mathbb{R}\) be a function. Suppose that for each point \(p\) of \(M\) there exist a neighborhood \(U\) of \(p\) and a function \(a\in\mathscr{F}\) such that \(g|_{U}=a|_{U}\). Then the function \(g\) is in \(\mathscr{F}\).
We refer to \(\mathscr{F}\) as a differential structure on \(M\).
**Remark 2.10**.:
* We think of the set of functions \(\mathscr{F}\) on a differential space \((M,\mathscr{F})\) as "smooth functions by fiat." (Also, see Remark 2.19.)
* We may refer to a differential space \((M,\mathscr{F})\) simply as \(M\).
* Condition (2.9.ii) says that \(\mathscr{F}\) is a \(C^{\infty}\)-ring with the operations \(g_{\mathscr{F}}:\mathscr{F}^{m}\to\mathscr{F}\) given by composition: \[g_{\mathscr{F}}(f_{1},\dots,f_{m}):=g\circ(f_{1},\dots,f_{m}).\] Note that since \(C^{\infty}(\mathbb{R}^{1})\) includes constant functions, (2.9.ii) implies that all constant functions are in \(\mathscr{F}\). Recall that \(0\)-ary operations on a \(C^{\infty}\)-ring are indexed by constants \(g\in C^{\infty}(\mathbb{R}^{0})\simeq\mathbb{R}\). Given \(g\in C^{\infty}(\mathbb{R}^{0})\) we define the operation \(g_{\mathscr{F}}:\mathscr{F}^{0}=\{*\}\to\mathscr{F}\) by
setting \(g_{\mathscr{F}}(*)\) to be the constant function on \(M\) taking the value \(g\) everywhere. We know that such a constant function has to be in \(\mathscr{F}\).
**Remark 2.11**.: In the literature, the term "differential space" is used for a variety of mathematical objects, some of which are related to Sikorski's differential spaces, and some that are not related at all.
**Example 2.12**.: Let \(M\) be a manifold (second countable and Hausdorff). Then the pair \((M,C^{\infty}(M))\), where \(C^{\infty}(M)\) is the set of \(C^{\infty}\) functions, is a differential space in the sense of Definition 2.9. The main point to check is that the topology on \(M\) coincides with the smallest topology making all the functions in \(C^{\infty}(M)\) continuous. This follows from the existence of bump functions on manifolds and from Lemma 2.17 below. Alternatively, it follows from a theorem of Whitney by which any closed subset of a manifold \(M\) is the zero set of a smooth function. See, for example, [8, Theorem 2.29].
**Definition 2.13**.: Given a manifold \(M\) we refer to the \(C^{\infty}\)-ring \(C^{\infty}(M)\) of smooth functions on \(M\) as the standard differential structure.
**Example 2.14**.: Let \(M\) be a manifold. Then the set \(C^{0}(M)\) of _continuous_ function on \(M\) is also a differential structure. Unless \(M\) is discrete, the \(C^{\infty}\)-ring \(C^{0}(M)\) is bigger than \(C^{\infty}(M)\).
**Definition 2.15**.: Let \((M,\mathscr{T})\) be a topological space, \(C\subset M\) a closed set and \(x\in M\smallsetminus C\) a point. A bump function (relative to \(C\) and \(x\)) is a continuous function \(\rho:M\to[0,1]\) so that \((\operatorname{supp}\rho)\cap C=\varnothing\) and \(\rho\) is identically \(1\) on a neighborhood of \(x\).
**Definition 2.16**.: Let \((M,\mathscr{T})\) be a topological space and \(\mathscr{F}\subseteq C^{0}(M,\mathbb{R})\) a collection of continuous real-valued functions on \(M\). The topology \(\mathscr{T}\) on \(M\) is \(\mathscr{F}\)-regular iff for any closed subset \(C\) of \(M\) and any point \(x\in M\smallsetminus C\) there is a bump function \(\rho\in\mathscr{F}\) with \(\operatorname{supp}\rho\subset M\smallsetminus C\) and \(\rho\) identically \(1\) on a neighborhood of \(x\).
**Lemma 2.17**.: Let \((M,\mathscr{T})\) be a topological space and \(\mathscr{F}\subset C^{0}(M,\mathbb{R})\) a \(C^{\infty}\)-subring. Then \(\mathscr{T}\) is the smallest topology making all the functions in \(\mathscr{F}\) continuous if and only if the topology \(\mathscr{T}\) is \(\mathscr{F}\)-regular.
Proof.: Let \(\mathscr{T}_{\mathscr{F}}\) denote the smallest topology making all the functions in \(\mathscr{F}\) continuous. The set
\[\mathscr{S}:=\{f^{-1}(I)\mid f\in\mathscr{F},\ I\text{ is an open interval }\}\]
is a sub-basis for \(\mathscr{T}_{\mathscr{F}}\). Since all the functions in \(\mathscr{F}\) are continuous with respect to \(\mathscr{T}\), \(\mathscr{T}_{\mathscr{F}}\subseteq\mathscr{T}\). Therefore it is enough to argue that \(\mathscr{T}\subseteq\mathscr{T}_{\mathscr{F}}\) if and only if \(\mathscr{T}\) is \(\mathscr{F}\)-regular.
\((\Rightarrow)\) Suppose \(\mathscr{T}\subseteq\mathscr{T}_{\mathscr{F}}\). Let \(C\subset M\) be \(\mathscr{T}\)-closed and \(x\) a point in \(M\) which is not in \(C\). Then \(M\setminus C\) is \(\mathscr{T}\)-open. Since \(\mathscr{T}\subseteq\mathscr{T}_{\mathscr{F}}\) by assumption, \(M\setminus C\) is in \(\mathscr{T}_{\mathscr{F}}\). Then there exist functions \(h_{1},\ldots,h_{k}\in\mathscr{F}\) and open intervals \(I_{1},\ldots,I_{k}\) such that \(x\in\cap_{i=1}^{k}h_{i}^{-1}(I_{i})\subset M\setminus C\). There is a \(C^{\infty}\) function \(\rho:\mathbb{R}^{k}\to[0,1]\) with \(\operatorname{supp}\rho\subset I_{1}\times\ldots\times I_{k}\) and the property that \(\rho=1\) on a neighborhood of \((h_{1}(x),\ldots,h_{k}(x))\) in \(\mathbb{R}^{k}\). Then \(\tau:=\rho\circ(h_{1},\ldots,h_{k})\) is in \(\mathscr{F}\), since \(\mathscr{F}\) is a \(C^{\infty}\)-subring of \(C^{0}(M)\). The function \(\tau\) is a desired bump function.
\((\Leftarrow)\) Suppose the topology \(\mathscr{T}\) is \(\mathscr{F}\)-regular. Let \(U\in\mathscr{T}\) be an open set. Then \(C=M\setminus U\) is closed. Since \(\mathscr{T}\) is \(\mathscr{F}\)-regular, for any point \(x\in U\) there is a bump function \(\rho_{x}\in\mathscr{F}\) with \(\operatorname{supp}\rho_{x}\subset U\) and \(\rho_{x}\) is identically \(1\) in a neighborhood of \(x\). Then \(\rho_{x}^{-1}((0,\infty))\subset U\) and \(\rho_{x}^{-1}((0,\infty))\in\mathscr{T}_{\mathscr{F}}\). It follows that
\[U=\bigcup_{x\in U}\rho_{x}^{-1}((0,\infty))\in\mathscr{T}_{\mathscr{F}}.\]
Since \(U\) is an arbitrary element of \(\mathscr{T}\), \(\mathscr{T}\subseteq\mathscr{T}_{\mathscr{F}}\).
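For concreteness, one standard construction of the smooth cutoff used in the \((\Rightarrow)\) direction runs as follows (a routine remark, recorded here for convenience). Starting from \(\theta(t)=e^{-1/t}\) for \(t>0\) and \(\theta(t)=0\) for \(t\leq 0\), set, for \(a<b\),

\[\beta_{a,b}(t)=\frac{\theta(b-t)}{\theta(b-t)+\theta(t-a)},\]

a smooth function with values in \([0,1]\) which equals \(1\) for \(t\leq a\) and \(0\) for \(t\geq b\). If \(p_{i}:=h_{i}(x)\) lies in the open interval \(I_{i}\), choose \(\varepsilon_{i}>0\) with \([p_{i}-2\varepsilon_{i},p_{i}+2\varepsilon_{i}]\subset I_{i}\) and put \(\psi_{i}(t)=\beta_{\varepsilon_{i}^{2},4\varepsilon_{i}^{2}}\big{(}(t-p_{i})^{2}\big{)}\). Then \(\rho(y_{1},\dots,y_{k})=\prod_{i=1}^{k}\psi_{i}(y_{i})\) is smooth, takes values in \([0,1]\), is identically \(1\) on a neighborhood of \((p_{1},\dots,p_{k})\), and satisfies \(\operatorname{supp}\rho\subset I_{1}\times\dots\times I_{k}\).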
**Definition 2.18**.: A smooth map from a differential space \((M,\mathscr{F}_{M})\) to a differential space \((N,\mathscr{F}_{N})\) is a function \(\varphi:M\to N\) such that for every \(f\in\mathscr{F}_{N}\) the composite \(f\circ\varphi\) is in \(\mathscr{F}_{M}\).
**Remark 2.19**.: Given a differential space \((M,\mathscr{F})\), the set \(\mathscr{F}\) coincides with the set of smooth maps \((M,\mathscr{F})\to(\mathbb{R},C^{\infty}(\mathbb{R}))\).
**Remark 2.20**.: A map between two manifolds is smooth in the usual sense if and only if it is a smooth map between the corresponding differential spaces (when both manifolds are given the standard differential structures).
**Remark 2.21**.: It is easy to see that the composite of two smooth maps between differential spaces is again smooth. It's even easier to see that the identity map on a differential space is smooth. Consequently differential spaces form a category.
**Definition 2.22**.: A smooth map between two differential spaces is a diffeomorphism if it is invertible and the inverse is smooth.
Equivalently a smooth map is a diffeomorphism iff it is an isomorphism in the category of differential spaces.
**Remark 2.23**.: Every smooth map of differential spaces is continuous; this follows from (2.9.i).
**Remark 2.24**.: Any differential structure \(\mathscr{F}\) is an \(\mathbb{R}\)-subalgebra of \(C^{0}(M)\): for any \(f_{1},f_{2}\in\mathscr{F}\), \(\lambda,\mu\in\mathbb{R}\)
\[\lambda f_{1}+\mu f_{2} =g\circ(f_{1},f_{2})\quad\text{ where }g(x,y):=\lambda x+\mu y\in C^{\infty}(\mathbb{R}^{2}),\] \[f_{1}f_{2} =h\circ(f_{1},f_{2})\quad\text{ where }h(x,y):=xy\in C^{\infty}( \mathbb{R}^{2}).\]
**Remark 2.25**.: Any \(C^{\infty}\)-ring is an \(\mathbb{R}\)-algebra (more precisely: has an underlying \(\mathbb{R}\)-algebra structure). The binary operations \(+\) and \(\cdot\) come from the functions \(h(x,y)=x+y\) and \(g(x,y)=xy\) respectively. The scalars come from the \(0\)-ary operations.
We will not notationally distinguish between a \(C^{\infty}\)-ring and the corresponding (underlying) \(\mathbb{R}\)-algebra.
**Definition 2.26**.: A differential structure \(\mathscr{F}\) on a set \(M\) is generated by a subset \(A\subseteq\mathscr{F}\) if \(\mathscr{F}\) is the smallest differential structure containing the set \(A\). That is, if \(\mathscr{G}\) is a differential structure on \(M\) containing \(A\), then \(\mathscr{F}\subseteq\mathscr{G}\).
**Lemma 2.27**.: Given a collection \(A\) of real-valued functions on a set \(M\) there is a differential structure \(\mathscr{F}\) on \(M\) generated by \(A\). The initial topology for \(\mathscr{F}\) is the initial topology for the set \(A\).
Proof.: See [18, Theorem 2.1.7].
**Notation 2.28**.: We write \(\mathscr{F}=\langle A\rangle\) if the differential structure \(\mathscr{F}\) is generated by the set \(A\).
**Definition 2.29**.: Let \((M,\mathscr{F})\) be a differential space and \(N\subseteq M\) a subset. The subspace differential structure \(\mathscr{F}_{N}\) on \(N\), also known as the induced differential structure, is the differential structure on \(N\) generated by the set \(A\) of restrictions to \(N\) of the functions in \(\mathscr{F}\):
\[A=\{g:N\to\mathbb{R}\mid g=f|_{N}\text{ for some }f\in\mathscr{F}\}.\]
**Definition 2.30**.: A smooth map \(f:(M,\mathscr{F}_{M})\to(N,\mathscr{F}_{N})\) between two differential spaces is an embedding if \(f\) is injective and the induced map \(f:(M,\mathscr{F}_{M})\to(f(M),\langle\mathscr{F}_{N}|_{f(M)}\rangle)\) from \(M\) to its image (with the subspace differential structure) is a diffeomorphism.
**Lemma 2.31**.: Let \((M,\mathscr{F})\) be a differential space and \((N,\mathscr{F}_{N})\) a subset of \(M\) with the induced/subspace differential structure. Then the smallest topology on \(N\) making all the functions of \(\mathscr{F}_{N}\) continuous agrees with the subspace topology on \(N\) coming from the inclusion \(i:N\hookrightarrow M\).
Proof.: The initial topology for the set \(\mathscr{F}|_{N}\) of generators of \(\mathscr{F}_{N}\) is the subspace topology. Consequently the initial topology for \(\mathscr{F}_{N}=\langle\mathscr{F}|_{N}\rangle\) is also the subspace topology (cf. Lemma 2.27).
**Remark 2.32**.: The subspace differential structure \(\mathscr{F}_{N}\) can be given a fairly explicit description:
\[\mathscr{F}_{N}=\{f:N\to\mathbb{R}\mid\text{there is a collection of sets }\{U_{i}\}_{i\in I},\text{ open in M, with }\bigcup_{i}U_{i}\supset N\]
\[\text{ and a collection }\{g_{i}\}_{i\in I}\subseteq\mathscr{F}\text{ such that }f|_{N\cap U_{i}}=g_{i}|_{N\cap U_{i}}\text{ for all indices }i\}.\]
**Remark 2.33**.: Let \((M,\mathscr{F})\) be a differential space and \((N,\mathscr{F}_{N})\) a subset of \(M\) with the induced/subspace differential structure. Then the inclusion map \(i:N\hookrightarrow M\) is smooth since for any \(f\in\mathscr{F}\), \(f\circ i=f|_{N}\in\mathscr{F}_{N}\) by definition of \(\mathscr{F}_{N}\).
The subspace differential structure \(\mathscr{F}_{N}\) is the _smallest_ differential structure on \(N\) making the inclusion \(i:N\to M\) smooth. This is because any differential structure \(\mathscr{G}\) on \(N\) making \(i:(N,\mathscr{G})\to(M,\mathscr{F})\) smooth must contain the set \(\mathscr{F}|_{N}\).
**Lemma 2.34**.: Let \((M,\mathscr{F})\) be a differential space and \((N,\mathscr{F}_{N})\) a subset of \(M\) with the induced/subspace differential structure. For any differential space \((Y,\mathscr{G})\) and for any smooth map \(\varphi:Y\to M\) that factors through the inclusion \(i:N\to M\) (i.e., \(\varphi(Y)\subset N\)), the map \(\varphi:(Y,\mathscr{G})\to(N,\mathscr{F}_{N})\) is smooth.
Proof.: We need to show that \(\varphi^{*}h\equiv h\circ\varphi\in\mathscr{G}\) for any \(h\in\mathscr{F}_{N}\). For any \(f\in\mathscr{F}\),
\[\mathscr{G}\ni f\circ\varphi=f\circ i\circ\varphi=\varphi^{*}(f|_{N}).\]
Consequently \(\varphi^{*}(\mathscr{F}|_{N})\subseteq\mathscr{G}\). Since \(\mathscr{F}|_{N}\) generates \(\mathscr{F}_{N}\) and since \(\mathscr{G}\) is a differential structure, we must have \(\varphi^{*}\mathscr{F}_{N}\subseteq\mathscr{G}\) as well.
**Corollary 2.35**.: Let \((M,\mathscr{F})\) be a differential space and \(K\subseteq N\subseteq M\) subsets. Then
\[\langle\mathscr{F}|_{K}\rangle=\langle\langle\mathscr{F}|_{N}\rangle|_{K}\rangle,\]
that is, the differential structure on \(K\) induced by the inclusion \(K\hookrightarrow M\) agrees with the differential structure on \(K\) successively induced by the pair of the inclusions \(K\hookrightarrow N\hookrightarrow M\).
Proof.: Since for any \(f\in\mathscr{F}\), \(f|_{K}=(f|_{N})|_{K}\in\langle\mathscr{F}|_{N}\rangle|_{K}\), we have \(\mathscr{F}|_{K}\subseteq\langle\langle\mathscr{F}|_{N}\rangle|_{K}\rangle\). Therefore \(\langle\mathscr{F}|_{K}\rangle\subseteq\langle\langle\mathscr{F}|_{N}\rangle|_{ K}\rangle\).
On the other hand the inclusion \(K\hookrightarrow M\) factors through the inclusion \(K\hookrightarrow N\). Hence by Lemma 2.34, the map \(j:(K,\langle\mathscr{F}|_{K}\rangle)\hookrightarrow(N,\langle\mathscr{F}|_{N}\rangle)\) is smooth. But the image of \(j\) lands in \(K\). Hence the identity map \(\operatorname{id}:(K,\langle\mathscr{F}|_{K}\rangle)\to(K,\langle\langle \mathscr{F}|_{N}\rangle|_{K}\rangle)\) is smooth. Consequently \(\langle\langle\mathscr{F}|_{N}\rangle|_{K}\rangle=\operatorname{id}^{*} \langle\langle\mathscr{F}|_{N}\rangle|_{K}\rangle\subseteq\langle\mathscr{F}|_{ K}\rangle\).
In the case where the differential space \((M,\mathscr{F})\) is a manifold and \(N\) is a subset of \(M\) the subspace differential structure \(\mathscr{F}_{N}\) has a simple description:
**Lemma 2.36**.: Let \(M\) be a manifold and \(N\) a subset of \(M\). Then \(f:N\to\mathbb{R}\) is in \(C^{\infty}(M)_{N}:=\langle C^{\infty}(M)|_{N}\rangle\) (the subspace differential structure on \(N\)) if and only if there is an open neighborhood \(U\) of \(N\) in \(M\) and a smooth function \(g:U\to\mathbb{R}\) such that \(f=g|_{N}\). Moreover, if \(N\) is closed in \(M\), we may take \(U=M\).
**Remark 2.37**.: Let \(M\) be a manifold and \(U\subset M\) an open subset. Then \(C^{\infty}(U)=\langle C^{\infty}(M)|_{U}\rangle\). This follows from the existence of bump functions.
Proof of Lemma 2.36.: Let \(U\subset M\) be an open set with \(N\subset U\), and let \(g\in C^{\infty}(U)\).
By Remark 2.37 and Corollary 2.35, for any open set \(U\subset M\) with \(N\subset U\) and any \(g\in C^{\infty}(U)\), the restriction \(g|_{N}\) is in \(C^{\infty}(M)_{N}\).
Conversely suppose \(f\in C^{\infty}(M)_{N}\). Then there is a collection of open sets \(\{U_{i}\}_{i\in I}\) with \(N\subset\bigcup_{i}U_{i}\) and \(\{g_{i}\}_{i\in I}\subset C^{\infty}(M)\) so that \(f|_{U_{i}\cap N}=g_{i}|_{U_{i}\cap N}\) for all \(i\). Let \(U=\bigcup_{i}U_{i}\). There is a partition of unity \(\{\rho_{i}\}_{i\in I}\) on \(U\) subordinate to the cover \(\{U_{i}\}_{i\in I}\). Consider \(g:=\sum\rho_{i}g_{i}\in C^{\infty}(U)\). Then \(g|_{N}=f\).
If \(N\) is closed, then \(\{U_{i}\}_{i\in I}\cup\{M\smallsetminus N\}\) is an open cover of \(M\). Choose a partition of unity \(\{\rho_{i}\}_{i\in I}\cup\{\rho_{0}\}\) subordinate to this cover of \(M\) (with \(\operatorname{supp}\rho_{0}\subset M\smallsetminus N\)) and again set \(g:=\sum_{i\in I}\rho_{i}g_{i}\). Then \(g\) is a smooth function on all of \(M\), and \(g|_{N}=f\).
**Remark 2.38**.: Lemma 2.36 holds in greater generality. The proof does not really use the fact that \(M\) is a manifold; it only needs the existence of partitions of unity. These do exist for second countable, Hausdorff, locally compact differential spaces; see [18].
**Definition 2.39**.: A differential space \((M,\mathscr{F})\) is subcartesian iff it is locally isomorphic to a subset of a Euclidean (a.k.a. Cartesian) space: for every point \(p\in M\) there is an open neighborhood \(U\) of \(p\) in \(M\) and an embedding \(\varphi:(U,\mathscr{F}_{U})\to(\mathbb{R}^{n},C^{\infty}(\mathbb{R}^{n}))\) (\(n\) depends on the point \(p\)).
**Products.** The domain of a flow of a vector field on a manifold \(M\) is a subset of the product \(M\times\mathbb{R}\). Therefore in order to define and understand flows of derivations on differential spaces we need to understand finite products in the category of differential spaces.
Given two differential spaces \((M_{1},\mathscr{F}_{1})\) and \((M_{2},\mathscr{F}_{2})\) there are many differential structures on their product \(M_{1}\times M_{2}\) so that the projections \(\pi_{i}:M_{1}\times M_{2}\to M_{i}\), \(i=1,2\) are smooth. The smallest such structure is the one generated by the set \(\pi_{1}^{*}\mathscr{F}_{1}\cup\pi_{2}^{*}\mathscr{F}_{2}\). We denote this structure by \(\mathscr{F}_{\text{prod}}\). That is,
\[\mathscr{F}_{\text{prod}}:=\langle\pi_{1}^{*}\mathscr{F}_{1}\cup\pi_{2}^{*}\mathscr{F}_{2}\rangle.\]
Since the initial topology for \(\pi_{1}^{*}\mathscr{F}_{1}\cup\pi_{2}^{*}\mathscr{F}_{2}\) is the product topology, the initial topology for \(\mathscr{F}_{\text{prod}}\) is also the product topology (cf. Lemma 2.27).
We next check that \((M_{1}\times M_{2},\mathscr{F}_{\text{prod}})\) together with the projections \(\pi_{1},\pi_{2}\) has the universal properties of the product in the category of differential spaces. Note that the projections \(\pi_{1},\pi_{2}\) are smooth.
**Lemma 2.40**.: Let \((M_{1},\mathscr{F}_{1})\), \((M_{2},\mathscr{F}_{2})\) be two differential spaces, \((Y,\mathscr{G})\) another differential space, \(\varphi_{i}:Y\to M_{i}\), \(i=1,2\) two smooth maps. Then there exists a unique smooth map \(\varphi:(Y,\mathscr{G})\to(M_{1}\times M_{2},\mathscr{F}_{\text{prod}})\) with \(\pi_{i}\circ\varphi=\varphi_{i}\), \(i=1,2\).
Proof.: Clearly there is a unique map of _sets_\(\varphi:Y\to M_{1}\times M_{2}\) with \(\pi_{i}\circ\varphi=\varphi_{i}\), \(i=1,2\). Moreover since \(\varphi_{i}\)\((i=1,2)\) are smooth
\[\mathscr{G}\supseteq\varphi_{i}^{*}\mathscr{F}_{i}=\varphi^{*}(\pi_{i}^{*}\mathscr{F}_{i}).\]
Therefore \(\varphi^{*}(\pi_{1}^{*}\mathscr{F}_{1}\cup\pi_{2}^{*}\mathscr{F}_{2})\subseteq\mathscr{G}\). Since \(\mathscr{F}_{\text{prod}}=\langle\pi_{1}^{*}\mathscr{F}_{1}\cup\pi_{2}^{*} \mathscr{F}_{2}\rangle\), \(\varphi^{*}\mathscr{F}_{\text{prod}}\subseteq\mathscr{G}\) as well. Thus \(\varphi\) is smooth.
**Remark 2.41**.: Let \(M_{1}\), \(M_{2}\) be two manifolds with the usual differential structures (i.e., \(C^{\infty}(M_{1})\) and \(C^{\infty}(M_{2})\)). Then \(C^{\infty}(M_{1}\times M_{2})\) is the product differential structure on \(M_{1}\times M_{2}\).
We end the section by proving that taking products commutes with taking subspaces.
**Lemma 2.42**.: Let \((M_{1},\mathscr{F}_{1})\), \((M_{2},\mathscr{F}_{2})\) be two differential spaces, \(N_{1}\subseteq M_{1}\), \(N_{2}\subseteq M_{2}\) two subspaces, \(\mathscr{G}_{1},\mathscr{G}_{2}\) the subspace differential structures on \(N_{1}\), \(N_{2}\) respectively. Then the product differential structure \(\mathscr{G}_{\text{prod}}\) on \(N_{1}\times N_{2}\) is the subspace differential structure \((\mathscr{F}_{\text{prod}})_{N_{1}\times N_{2}}\).
Proof.: It is enough to check that \((N_{1}\times N_{2},(\mathscr{F}_{\text{prod}})_{N_{1}\times N_{2}})\) together with the projections
\[\mathsf{pr}_{i}:(N_{1}\times N_{2},(\mathscr{F}_{\text{prod}})_{N_{1}\times N_{2 }})\to(N_{i},\mathscr{G}_{i}),\]
\(i=1,2\), has the universal properties of the product (in the category of differential spaces).
We first argue that the projections \(\mathsf{pr}_{1},\mathsf{pr}_{2}\) are smooth. The projections
\[\pi_{i}:(M_{1}\times M_{2},\mathscr{F}_{\text{prod}})\to(M_{i},\mathscr{F}_{i})\]
are smooth by definition of the product differential structure \(\mathscr{F}_{\text{prod}}\). Hence their restrictions \(\pi_{i}|_{N_{1}\times N_{2}}:N_{1}\times N_{2}\to M_{i}\) are smooth as well. Since \(\pi_{i}(N_{1}\times N_{2})\subseteq N_{i}\), the maps \(\mathsf{pr}_{i}=\pi_{i}|_{N_{1}\times N_{2}}:(N_{1}\times N_{2},(\mathscr{F}_{ \text{prod}})_{N_{1}\times N_{2}})\to(N_{i},\mathscr{G}_{i})\) are also smooth (by Lemma 2.34).
Now let \((Y,\mathscr{A})\) be a differential space and \(\varphi_{i}:Y\to N_{i}\), \(i=1,2\) be a pair of smooth maps. Since the inclusions \(j_{i}:N_{i}\to M_{i}\) are smooth, the composites \(j_{i}\circ\varphi_{i}:Y\to M_{i}\), \(i=1,2\), are smooth. By the universal property of the product there is a unique smooth map \(\varphi:(Y,\mathscr{A})\to(M_{1}\times M_{2},\mathscr{F}_{\text{prod}})\) so that \(\pi_{i}\circ\varphi=j_{i}\circ\varphi_{i}\). Consequently \((\pi_{i}\circ\varphi)(Y)\subseteq N_{i}\). Hence \(\varphi(Y)\subseteq N_{1}\times N_{2}\). Since \(N_{1}\times N_{2}\) is a subspace of \((M_{1}\times M_{2},\mathscr{F}_{\text{prod}})\) the map \(\varphi:(Y,\mathscr{A})\to(N_{1}\times N_{2},(\mathscr{F}_{\text{prod}})_{N_{ 1}\times N_{2}})\) is smooth (Lemma 2.34). Therefore \((N_{1}\times N_{2},(\mathscr{F}_{\text{prod}})_{N_{1}\times N_{2}})\) together with the projections \(\mathsf{pr}_{1},\mathsf{pr}_{2}\) is the product of \((N_{1},\mathscr{G}_{1})\) and \((N_{2},\mathscr{G}_{2})\). We conclude that \((\mathscr{F}_{\text{prod}})_{N_{1}\times N_{2}}=\mathscr{G}_{\text{prod}}\).
## 3. Derivations and their flows
A vector field \(v\) on a manifold \(M\) can be defined as a derivation \(v:C^{\infty}(M)\to C^{\infty}(M)\) of the \(\mathbb{R}\)-algebra of smooth functions on \(M\): \(v\) is \(\mathbb{R}\)-linear and for any two functions \(f,g\in C^{\infty}(M)\) the product rule holds:
\[v(fg)=v(f)g+fv(g). \tag{3.1}\]
One then proves that the chain rule also holds: for any \(n\geq 1\), any \(g\in C^{\infty}(\mathbb{R}^{n})\) and any \(f_{1},\dots,f_{n}\in C^{\infty}(M)\)
\[v(g\circ(f_{1},\dots,f_{n}))=\sum_{i=1}^{n}\left((\partial_{i}g)\circ(f_{1}, \dots,f_{n})\right)\cdot v(f_{i}). \tag{3.2}\]
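As a quick consistency check (not needed later, but perhaps clarifying): taking \(n=2\) and \(g(x_{1},x_{2})=x_{1}x_{2}\) in (3.2) recovers the product rule (3.1), since

\[v(f_{1}f_{2})=v(g\circ(f_{1},f_{2}))=\big{(}(\partial_{1}g)\circ(f_{1},f_{2})\big{)}\cdot v(f_{1})+\big{(}(\partial_{2}g)\circ(f_{1},f_{2})\big{)}\cdot v(f_{2})=f_{2}\,v(f_{1})+f_{1}\,v(f_{2}),\]

while for \(n=1\) the formula reads \(v(g\circ f)=(g^{\prime}\circ f)\cdot v(f)\), the familiar one-variable chain rule.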
Note that (3.2), appropriately interpreted, makes sense for any \(C^{\infty}\)-ring, since any \(C^{\infty}\)-ring is an \(\mathbb{R}\)-algebra (cf. Remark 2.25) and therefore carries the addition and multiplication operations. Namely, we have the following definition.
**Definition 3.3**.: Let \(\mathscr{A}\) be a \(C^{\infty}\)-ring. A \(C^{\infty}\)-ring derivation of \(\mathscr{A}\) is a function \(v:\mathscr{A}\to\mathscr{A}\) so that for any \(n\geq 1\), any \(g\in C^{\infty}(\mathbb{R}^{n})\) and any \(f_{1},\dots,f_{n}\in\mathscr{A}\)
\[v(g_{\mathscr{A}}(f_{1},\dots,f_{n}))=\sum_{i=1}^{n}\left((\partial_{i}g)_{ \mathscr{A}}(f_{1},\dots,f_{n})\right)\cdot v(f_{i}). \tag{3.4}\]
It turns out, thanks to a theorem of Yamashita [19, Theorem 3.1], that any \(\mathbb{R}\)-algebra derivation of a jet-determined \(C^{\infty}\)-ring is automatically a \(C^{\infty}\)-ring derivation. We will not explain what "jet-determined" means, since this will take us too far afield. Suffice it to say that all \(C^{\infty}\)-rings arising as differential structures are jet-determined \(C^{\infty}\)-rings. In fact, there is a class of \(C^{\infty}\)-rings that includes differential structures (namely the point-determined \(C^{\infty}\)-rings; see Definition A.2) for which Yamashita's theorem has a short proof. We present the proof in Appendix A. _From now on when talking about derivations of differential structures we will not distinguish between \(\mathbb{R}\)-algebra derivations and \(C^{\infty}\)-ring derivations since they are one and the same._
**Remark 3.5**.: Given a differential space \((M,\mathscr{F})\) we view a derivation \(v:\mathscr{F}\to\mathscr{F}\) as the correct analogue of a vector field on \((M,\mathscr{F})\). Thus "vector fields" in the title of the paper are derivations of differential structures. See also Remark 3.16.
We now define integral curves of a derivation.
**Definition 3.6**.: An interval is a connected subset of the real line \(\mathbb{R}\).
**Remark 3.7**.:
* By Definition 3.6 a single point is an interval.
* The induced differential structure on an interval \(I\subset\mathbb{R}\) is the set of smooth functions \(C^{\infty}(I)\) on \(I\) (note that \(C^{\infty}(I)\) makes sense in all cases: \(I\) is open, closed, half-closed or a single point).
* Unless the interval \(I\) is a singleton, there is a canonical derivation \(\frac{d}{dx}:C^{\infty}(I)\to C^{\infty}(I)\) since we can differentiate smooth functions on an interval.
**Definition 3.8**.: Let \(v:\mathscr{F}\to\mathscr{F}\) be a derivation on a differential space \((M,\mathscr{F})\). An integral curve \(\gamma\) of \(v\) is either a map \(\gamma:\{*\}\to M\) from a \(1\)-point interval or a smooth map \(\gamma:(J,C^{\infty}(J))\to(M,\mathscr{F})\) from an interval \(J\subset\mathbb{R}\) so that
\[\frac{d}{dx}(f\circ\gamma)=v(f)\circ\gamma \tag{3.9}\]
for all functions \(f\in\mathscr{F}\).
The curve \(\gamma\) starts at a point \(p\in M\) if \(0\in J\) and \(\gamma(0)=p\).
**Remark 3.10**.: We tacitly assume that all integral curves contain zero in their domain of definition. Thus any integral curve \(\gamma\) of a derivation starts at \(\gamma(0)\).
**Definition 3.11**.: An integral curve \(\gamma:I\to M\) of a derivation \(v\) on a differential space \((M,\mathscr{F})\) is maximal if for any other integral curve \(\tau:K\to M\) of \(v\) with \(\tau(0)=\gamma(0)\) we have \(K\subseteq I\) and \(\gamma|_{K}=\tau\).
**Remark 3.12**.: Note that maximal integral curves are necessarily unique.
The following examples are meant to illustrate two points:
1. curves that only exist for time \(0\) should be allowed as integral curves of derivations and
2. we should not require that the domain of an integral curve is an open interval.
**Example 3.13**.: Consider the derivation \(v=\frac{d}{dx}\) on the interval \([0,1]\). Then \(\gamma:[-1/2,1/2]\to[0,1]\), \(\gamma(t)=t+1/2\) is an integral curve of \(v\). The curve \(\gamma\) is a maximal integral curve of \(v\) and its domain is a closed interval.
**Example 3.14**.: Let \(M\) be the standard closed disk in \(\mathbb{R}^{2}\): \(M=\{(x,y)\mid x^{2}+y^{2}\leq 1\}\). Then \(M\) is a manifold with boundary and a differential subspace of \(\mathbb{R}^{2}\) (the two spaces of smooth functions are the same!). Consider the vector field \(v=\frac{\partial}{\partial x}\) on \(M\). The curve \(\gamma:\{0\}\to M\), \(\gamma(0)=(0,1)\) is an integral curve of \(v\); it only exists for zero time. The derivation \(v\) does have a flow in the sense of Definition 3.17 below. The flow is
\[\Phi:U\equiv\{((x,y),t)\in\mathbb{R}^{2}\times\mathbb{R}\mid x^{2}+y^{2}\leq 1,(x+t)^{2}+y^{2}\leq 1\}\to M,\qquad\Phi((x,y),t)=(x+t,y).\]
Note that while \(M\) is a manifold with boundary, the flow domain \(U\) is not a manifold with boundary nor a manifold with corners. The domain \(U\) is a differential space, and the flow \(\Phi\) is smooth since it is the restriction to \(U\) of the smooth map \(\Psi:\mathbb{R}^{3}\to\mathbb{R}^{2}\), \(\Psi((x,y),t)=(x+t,y)\).
**Remark 3.15**.: Sniatycki in [18, Definition 3.2.2] defines a derivation \(X\) on a differential space \((M,\mathscr{F})\) to be a vector field only if for every point \(x\in M\) there is an open neighborhood \(U\) of \(x\) and \(\varepsilon>0\) so that for every \(y\in U\) there is an integral curve of \(X\) starting at \(y\) which is defined for all \(t\in(-\varepsilon,\varepsilon)\). Note that by _this_ definition a vector field \(X\) on a manifold with boundary is a vector field in Sniatycki's sense if and only if \(X\) is tangent to the boundary.
**Remark 3.16**.: We were tempted to call all derivations on differential spaces "vector fields." In the end we decided against it to avoid the clash with Sniatycki's terminology.
**Definition 3.17**.: Let \(v:\mathscr{F}\to\mathscr{F}\) be a derivation on a differential space \((M,\mathscr{F})\). A flow of \(v\) is a smooth map \(\Phi:W\to M\) from a subspace \(W\) of \(M\times\mathbb{R}\) with \(M\times\{0\}\subset W\) such that for all \(x\in M\)
* \(\Phi(x,0)=x\);
* The set \(I_{x}:=\{t\in\mathbb{R}\mid(x,t)\in W\}\) is connected;
* The map \(\Phi(x,\cdot):I_{x}\to M\) is a maximal integral curve for \(v\) (see Definition 3.11).
We are now in position to state the main result of the paper.
**Theorem 3.18**.: Let \((M,\mathscr{F})\) be a differential space which is diffeomorphic to a subset of some \(\mathbb{R}^{n}\), and \(v:\mathscr{F}\to\mathscr{F}\) a derivation. Then \(v\) has a unique flow (Definition 3.17).
**Remark 3.19**.: The conditions of Theorem 3.18 are not as restrictive as they may seem at first glance, since there is a version of the Whitney embedding theorem for subcartesian spaces [1, 5, 11].1
Footnote 1: To precisely state the embedding theorem of Breuer, Marshall, Kowalczyk and Motreanu we need to recall the definition of the structural dimension of a subcartesian space. It proceeds as follows: given a subcartesian space \(M\) its structural dimension at a point \(x\in M\) is the smallest integer \(n_{x}\) so that a neighborhood of \(x\) can be embedded in \(\mathbb{R}^{n_{x}}\). The structural dimension of a subcartesian space \(M\) is the supremum of the set of structural dimensions of points of \(M\). The embedding theorem (see, for example, Theorem 2.2 of [1]) then says:
**Theorem 3.20**.: Any second countable subcartesian space \(M\) of finite structural dimension can be embedded in some Euclidean space.
**Remark 3.21**.: If \(M\) is a subset of \(\mathbb{R}^{n}\) (with the subset differential structure) then \(M\) is subcartesian and second countable, and its structural dimension is bounded above by \(n\). So the conditions of Theorem 3.20 are necessary.
A subset \(M\) of \(\mathbb{R}^{n}\) need not be locally compact. Note that the conditions of Theorem 3.20 do not require local compactness.
**Remark 3.22**.: The disjoint union \(\bigsqcup_{n\geq 0}\mathbb{R}^{n}\) is an example of a subcartesian space that is not embeddable in \(\mathbb{R}^{N}\) for any \(N\): its structural dimension is infinite.
Hence, since \(v\) is a \(C^{\infty}\)-ring derivation,
\[v(h|_{M})=v(k_{\mathscr{F}}(x_{1}|_{M},\dots x_{n}|_{M}))=\sum_{i=1}^{n}(\partial _{i}k)|_{M}\cdot v(x_{i}|_{M})=\sum_{i=1}^{n}(\partial_{i}h)|_{M}\cdot v(x_{i}|_ {M})\]
and (3.26) holds for such a function \(h\).
Otherwise by the Localization Theorem [12] there exist functions \(k,\ell\in C^{\infty}(\mathbb{R}^{n})\) with \(\ell|_{W}\) invertible in \(C^{\infty}(W)\) so that \(h=\frac{k|_{W}}{\ell|_{W}}\). Then \(h|_{M}=k|_{M}(\ell|_{M})^{-1}\) and therefore,
\[v(h|_{M}) =v(k|_{M}(\ell|_{M})^{-1})=v(k|_{M})\cdot(\ell|_{M})^{-1}-k|_{M}(\ell|_{M})^{-2}v(\ell|_{M})\qquad\text{(by Lemma \ref{lem:2.2})}\] \[=\sum_{i}\left((\partial_{i}k)|_{M}(\ell|_{M})^{-1}-k|_{M}(\ell|_{M})^{-2}(\partial_{i}\ell)|_{M}\right)\cdot v(x_{i}|_{M})\quad\text{ (since \eqref{eq:2.2} holds for $k|_{M}$ and $\ell|_{M}$)}\] \[=\sum_{i}\left.\partial_{i}\left(\frac{k|_{W}}{\ell|_{W}}\right)\right|_{M}\cdot v(x_{i}|_{M})\] \[=\sum_{i=1}^{n}(\partial_{i}h)|_{M}\cdot v(x_{i}|_{M}).\]
**Lemma 3.27**.: Let \(M\subset\mathbb{R}^{n}\) be a subset, \(\mathscr{F}\) the induced differential structure on \(M\) and \(v:\mathscr{F}\to\mathscr{F}\) a derivation. For any point \(p\in M\) there exists a unique maximal integral curve \(\gamma:I\to M\) of \(v\) with \(\gamma(0)=p\).
Proof.: By definition of the induced differential structure \(\mathscr{F}\) on \(M\) the restrictions \(x_{i}|_{M}\), \(1\leq i\leq n\), are in \(\mathscr{F}\). Here as before \(x_{1},\dots,x_{n}:\mathbb{R}^{n}\to\mathbb{R}\) are the standard coordinate functions. Then the functions \(v(x_{i}|_{M})\) are also in \(\mathscr{F}\), so there are open neighborhoods \(U_{i}\) of \(M\) in \(\mathbb{R}^{n}\) and \(b_{i}\in C^{\infty}(U_{i})\) with \(b_{i}|_{M}=v(x_{i}|_{M})\) (cf. Lemma 2.36). Let \(U=\bigcap_{1\leq i\leq n}U_{i}\), \(V:=\sum_{i=1}^{n}b_{i}\frac{\partial}{\partial x_{i}}\). Then \(V\) is a vector field on \(U\). Let \(\tilde{\gamma}:J\to\mathbb{R}^{n}\) be the unique maximal integral curve of the vector field \(V\) with \(\tilde{\gamma}(0)=p\). Let \(I\) denote the connected component of the set \((\tilde{\gamma})^{-1}(M)\) that contains \(0\). We now argue that \(\gamma:=\tilde{\gamma}|_{I}\) is the desired maximal integral curve of the derivation \(v\). Note that since the image of \(\gamma\) lands in \(M\), the map \(\gamma\) is smooth as a map from \((I,C^{\infty}(I))\) into the differential subspace \((M,\mathscr{F})\) (see Lemma 2.34).
If \(I\) is the singleton \(\{0\}\), there is nothing to prove. So suppose \(I\neq\{0\}\). Given \(f\in\mathscr{F}\) there is an open neighborhood \(W\) of \(M\) in \(\mathbb{R}^{n}\) and a smooth function \(h\in C^{\infty}(W)\) with \(f=h|_{M}\) (Lemma 2.36). By replacing \(W\) with \(W\cap U\) if necessary we may assume that \(W\subset U\). Note that for any \(t\in I\), \(\gamma(t)=\tilde{\gamma}(t)\). We now compute:
\[\frac{d}{dt}(f\circ\gamma)\left(t\right) =\frac{d}{dt}(h\circ\tilde{\gamma})\left(t\right)\] \[=V(h)\left(\gamma(t)\right)\qquad\text{(since $\tilde{\gamma}$ is an integral curve of $V$)}\] \[=\sum_{i}(\partial_{i}h)\left(\tilde{\gamma}(t)\right)\cdot b_{i} (\tilde{\gamma}(t))\qquad\text{( by definition of $V$)}\] \[=\sum_{i}(\partial_{i}h)\left(\gamma(t)\right)\cdot v(x_{i}|_{M}) (\gamma(t))\qquad\text{(by definition of $b_{i}$'s)}\] \[=v(h|_{M})\left(\gamma(t)\right)\qquad\text{(by \eqref{eq:2.2})}\] \[=v(f)\left(\gamma(t)\right).\]
Since \(f\in\mathscr{F}\) is arbitrary, the curve \(\gamma\) is an integral curve of the derivation \(v\).
We now argue that \(\gamma\) is a _maximal_ integral curve of \(v\). Let \(\sigma:K\to M\) be another integral curve of \(v\) with \(\sigma(0)=p\). We first check that \(\sigma\) is an integral curve of the vector field \(V\) on \(W\). Note
that since the inclusion \(M\hookrightarrow W\) is smooth, \(\sigma:K\to W\) is smooth. Consider \(h\in C^{\infty}(W)\). Then for any \(t\in K\)
\[\frac{d}{dt}(h\circ\sigma)\left(t\right) =\frac{d}{dt}((h|_{M})\circ\sigma)\left(t\right)\] \[=v(h|_{M})(\sigma(t))\qquad\text{ (since $\sigma$ is an integral curve of $v$)}\] \[=\sum_{i}(\partial_{i}h)|_{M}\left(\sigma(t)\right)\cdot v(x_{i} |_{M})(\sigma(t))\qquad\text{(by (\ref{eq:2.2}) )}\] \[=\sum_{i}(\partial_{i}h)\left(\sigma(t)\right)\cdot b_{i}(\sigma( t))\qquad\text{(by definition of $b_{i}$'s)}\] \[=V(h)\left(\sigma(t)\right).\]
Hence \(\sigma\) is an integral curve of the vector field \(V\) as claimed.
Since \(\tilde{\gamma}\) is the maximal integral curve of \(V\), \(\sigma=\tilde{\gamma}|_{K}\) and \(K\subset\tilde{\gamma}^{-1}(M)\). Since \(0\in K\) and \(K\) is connected, and since \(I\) is the connected component of \(0\) in \(\tilde{\gamma}^{-1}(M)\), \(K\subset I\). It follows that \(\sigma=(\tilde{\gamma}|_{I})|_{K}=\gamma|_{K}\) and therefore \(\gamma=\tilde{\gamma}|_{I}\) is the maximal integral curve of the derivation \(v\).
We record two corollaries.
**Corollary 3.28**.: Let \((M,\mathscr{F})\) be a second countable subcartesian space of bounded dimension (so that the assumptions of the Whitney embedding theorem for subcartesian spaces apply, see Remark 3.19 and the footnote). Then for any derivation \(v:\mathscr{F}\to\mathscr{F}\) and for any point \(p\in M\) there is a unique maximal integral curve \(\gamma_{p}\) of \(v\) with \(\gamma_{p}(0)=p\).
The second corollary is really the corollary of the _proof_ of Lemma 3.27.
**Corollary 3.29**.: Let \(M\subset\mathbb{R}^{n}\) be a subset, \(\mathscr{F}\) the induced differential structure on \(M\) and \(v:\mathscr{F}\to\mathscr{F}\) a derivation. There exists an open neighborhood \(U\) of \(M\) in \(\mathbb{R}^{n}\) and a vector field \(V\) on \(U\) so that for any \(p\in M\) the maximal integral curve \(\gamma_{p}:I_{p}\to M\) of \(v\) with \(\gamma_{p}(0)=p\) is of the form \(\tilde{\gamma}_{p}|_{I_{p}}\) for the maximal integral curve \(\tilde{\gamma}_{p}:J_{p}\to U\) of \(V\) with \(\tilde{\gamma}_{p}(0)=p\).
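Corollary 3.29 also suggests a concrete numerical recipe: extend the derivation to an honest vector field \(V=\sum_{i}b_{i}\,\partial/\partial x_{i}\) on a neighborhood of \(M\), integrate \(V\) with an ODE solver, and keep only the connected piece of the trajectory through \(0\) that stays in \(M\). The sketch below is our own illustration of this recipe for the closed unit disk of Example 3.14 with \(v\) induced by \(\partial/\partial x\); the ambient extension, the membership test, and the time grid are assumptions made for the example.

```python
# Numerical illustration of the "extend, integrate, restrict" recipe behind
# Lemma 3.27 / Corollary 3.29, for M = closed unit disk in R^2.
import numpy as np
from scipy.integrate import solve_ivp

def V(t, y):
    # Ambient extension of the derivation: the constant vector field d/dx.
    return np.array([1.0, 0.0])

def in_M(y, tol=1e-9):
    # Membership test for the closed unit disk.
    return y[0] ** 2 + y[1] ** 2 <= 1.0 + tol

def integral_curve(p, t_max=3.0, n=601):
    """Approximate the integral curve of v through p as the connected piece of
    the ambient flow that contains time 0 and stays in M (forward time only)."""
    ts = np.linspace(0.0, t_max, n)
    sol = solve_ivp(V, (0.0, t_max), np.asarray(p, dtype=float),
                    t_eval=ts, rtol=1e-9, atol=1e-12)
    inside = [in_M(y) for y in sol.y.T]
    k = 0
    while k < len(ts) and inside[k]:
        k += 1
    return ts[:k], sol.y[:, :k]

ts, ys = integral_curve(p=(-1.0, 0.0))
print(f"the curve leaves the disk near t = {ts[-1]:.3f}")   # roughly 2.0
```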
Proof of Theorem 3.18.: It is no loss of generality to assume that \(M\subset\mathbb{R}^{n}\) and that the differential structure \(\mathscr{F}\) on \(M\) is the subspace differential structure: \(\mathscr{F}=\langle C^{\infty}(\mathbb{R}^{n})|_{M}\rangle\). By Corollary 3.29 there is an open neighborhood \(U\) of \(M\) in \(\mathbb{R}^{n}\) and a vector field \(V\) on \(U\) so that for any \(p\in M\) the maximal integral curve \(\gamma_{p}:I_{p}\to M\) of \(v\) with \(\gamma_{p}(0)=p\) is of the form \(\tilde{\gamma}_{p}|_{I_{p}}\) for the maximal integral curve \(\tilde{\gamma}_{p}:J_{p}\to U\) of \(V\). Let
\[W=\bigcup_{p\in M}\{p\}\times I_{p}\subset M\times\mathbb{R}\subset U\times \mathbb{R}.\]
Note that by definition \(M\times\{0\}\subset W\). Define the map \(\Phi:W\to M\) by
\[\Phi(p,t)=\gamma_{p}(t)\]
for all \((p,t)\in W\). Then \(\Phi\) is a flow of \(v\) modulo the issue of smoothness which we now address.
The vector field \(V\) on \(U\) has the flow
\[\Psi:\widetilde{W}\to U,\qquad\Psi(x,t):=\tilde{\gamma}_{x}(t),\]
where, as above, \(\tilde{\gamma}_{x}:J_{x}\to U\) is the maximal integral curve of \(V\) with \(\tilde{\gamma}_{x}(0)=x\) and
\[\widetilde{W}:=\bigcup_{x\in U}\{x\}\times J_{x}.\]
For any point \(p\in M\)
\[\Psi|_{\{p\}\times I_{p}}=\Phi|_{\{p\}\times I_{p}}\]
(since \(\tilde{\gamma}_{p}|_{I_{p}}=\gamma_{p}\)). Therefore
\[\Phi=\Psi|_{W},\]
hence smooth with respect to the differential structure \(\langle C^{\infty}(U\times\mathbb{R})|_{W}\rangle\) on \(W\) induced by the inclusion \(W\hookrightarrow U\times\mathbb{R}\).
It remains to show that \(\langle C^{\infty}(U\times\mathbb{R})|_{W}\rangle=\langle\mathscr{F}_{\text{ prod}}|_{W}\rangle\) where \(\mathscr{F}_{\text{prod}}\) is the product differential structure on \(M\times\mathbb{R}\). Note that \(\mathscr{F}_{\text{prod}}\) depends only on the differential structures on \(M\) and \(\mathbb{R}\).
By Lemma 2.42, \(\mathscr{F}_{\text{prod}}=\langle C^{\infty}(U\times\mathbb{R})|_{M\times \mathbb{R}}\rangle\). By Corollary 2.35, \(\langle\langle C^{\infty}(U\times\mathbb{R})|_{M\times\mathbb{R}}\rangle|_{W} \rangle=\langle C^{\infty}(U\times\mathbb{R})|_{W}\rangle\). Therefore \(\langle\mathscr{F}_{\text{prod}}|_{W}\rangle=\langle C^{\infty}(U\times \mathbb{R})|_{W}\rangle\).
**Example 3.30**.: Let \((M,\omega)\) be a symplectic manifold with a Hamiltonian action of a compact Lie group \(G\) and let \(\mu:M\to\mathfrak{g}^{*}\) denote the corresponding equivariant moment map. Assume that the action of \(G\) on \(M\) has only finitely many orbit types (this is the case, for example, when \(M\) is the cotangent bundle of a compact manifold or when \(M\) itself is compact). Recall that the symplectic quotient at \(0\in\mathfrak{g}^{*}\) is defined to be the subquotient
\[M_{0}:=\mu^{-1}(0)/G.\]
Let \(\pi:\mu^{-1}(0)\to M_{0}\) denote the quotient map. The symplectic quotient \(M_{0}\) can be given the structure of a differential space. Namely let \(C^{\infty}(M)^{G}\) denote the space of \(G\)-invariant functions. It is easily seen to be a \(C^{\infty}\)-subring of \(C^{\infty}(M)\). We define
\[\mathscr{F}:=\{f:M_{0}\to\mathbb{R}\mid f\circ\pi=\tilde{f}|_{\mu^{-1}(0)}\text { for some }\tilde{f}\in C^{\infty}(M)^{G}\}.\]
This idea goes back to the work of Cushman [2]. It is not hard to check that \(\mathscr{F}\) is a differential structure on \(M_{0}\). For instance this follows from the existence of the desired bump functions. See also [18].
By [17, Example 6.6] the differential space \((M_{0},\mathscr{F})\) is embeddable. Consequently any derivation of \(\mathscr{F}\) has a unique smooth flow.
## Appendix A \(\mathbb{R}\)-algebra and \(C^{\infty}\)-ring derivations of differential structures
The goal of this appendix is to prove that for any point-determined \(C^{\infty}\)-ring \(\mathscr{C}\) (see Definition A.2) any \(\mathbb{R}\)-algebra derivation \(v:\mathscr{C}\to\mathscr{C}\) is automatically a \(C^{\infty}\)-ring derivation. We start by defining \(\mathbb{R}\)-points of \(C^{\infty}\)-rings.
**Definition A.1**.: An \(\mathbb{R}\)-point of a \(C^{\infty}\)-ring \(\mathscr{C}\) is a nonzero homomorphism \(\varphi:\mathscr{C}\to\mathbb{R}\) of \(C^{\infty}\)-rings.
**Definition A.2**.: A \(C^{\infty}\)-ring \(\mathscr{C}\) is point-determined if \(\mathbb{R}\)-points separate elements of the ring. That is, for any \(a\in\mathscr{C}\), \(a\neq 0\), there is an \(\mathbb{R}\)-point \(\varphi:\mathscr{C}\to\mathbb{R}\) with \(\varphi(a)\neq 0\).
**Example A.3**.: Let \((M,\mathscr{F})\) be a differential space and \(x\in M\) a point. Then the evaluation map
\[ev_{x}:\mathscr{F}\to\mathbb{R},\qquad ev_{x}(f):=f(x)\]
is an \(\mathbb{R}\)-point of \(\mathscr{F}\). The \(C^{\infty}\)-ring \(\mathscr{F}\) is point-determined since for any nonzero function \(f\in\mathscr{F}\) there is a point \(x\in M\) with \(0\neq f(x)=ev_{x}(f)\).
We next recall Hadamard's lemma.
**Lemma A.4** (Hadamard's lemma).: For any smooth function \(f:\mathbb{R}^{n}\to\mathbb{R}\) there exist (non-unique) smooth functions \(g_{1},\dots,g_{n}\in C^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) such that
\[f(x)-f(y)=\sum_{i=1}^{n}(x_{i}-y_{i})g_{i}(x,y)\]
for any pair of points \(x,y\in\mathbb{R}^{n}\).
Moreover, for any \(n\)-tuple of functions \(h_{1},\ldots,h_{n}\in C^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n})\) with the property that \(f(x)-f(y)=\sum_{i=1}^{n}(x_{i}-y_{i})h_{i}(x,y)\) for all \((x,y)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) we have
\[h_{i}(b,b)=\left(\partial_{i}f\right)(b)\]
for all \(b\in\mathbb{R}^{n}\).
Proof.: \[f(x)-f(y)= \int_{0}^{1}\frac{d}{dt}\,f(tx+(1-t)y)\,dt\] \[= \int_{0}^{1}\sum_{i=1}^{n}\partial_{i}f(tx+(1-t)y)(x_{i}-y_{i})\,dt\] \[= \sum_{i=1}^{n}(x_{i}-y_{i})\int_{0}^{1}\partial_{i}f(tx+(1-t)y)\,dt.\]
Define
\[g_{i}(x,y)=\int_{0}^{1}\partial_{i}f(tx+(1-t)y)\,dt.\]
This proves existence of the desired functions \(g_{1},\ldots,g_{n}\). To prove the second part of the lemma, note that
\[\left(\partial_{i}f\right)(b)=\lim_{s\to 0}\frac{1}{s}(f(b+se_{i})-f(b)),\]
where \(e_{i}\) is the \(i^{th}\) standard basis vector. Therefore if \(f(x)-f(y)=\sum_{i=1}^{n}(x_{i}-y_{i})h_{i}(x,y)\), then
\[\left(\partial_{i}f\right)(b)=\lim_{s\to 0}\frac{1}{s}\sum_{j=1}^{n}((b+se_{i}) _{j}-b_{j})h_{j}(b+se_{i},b)=\lim_{s\to 0}\frac{1}{s}\,sh_{i}(b+se_{i},b)=h_{i} (b,b)\]
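A worked example, added for illustration: for \(n=2\) and \(f(x_{1},x_{2})=x_{1}x_{2}\),

\[f(x)-f(y)=x_{1}x_{2}-y_{1}y_{2}=(x_{1}-y_{1})\,x_{2}+(x_{2}-y_{2})\,y_{1}=(x_{1}-y_{1})\,y_{2}+(x_{2}-y_{2})\,x_{1},\]

so both \((g_{1},g_{2})=(x_{2},y_{1})\) and \((g_{1},g_{2})=(y_{2},x_{1})\) satisfy the first part of the lemma; the functions \(g_{i}\) are indeed non-unique. On the diagonal, however, both choices give \(g_{1}(b,b)=b_{2}=(\partial_{1}f)(b)\) and \(g_{2}(b,b)=b_{1}=(\partial_{2}f)(b)\), as the second part asserts.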
We are now in position to prove the main result of the appendix.
**Theorem A.5**.: Let \(\mathscr{A}\) be a point-determined \(C^{\infty}\) ring and \(v:\mathscr{A}\to\mathscr{A}\) an \(\mathbb{R}\)-algebra derivation. Then \(v\) is a \(C^{\infty}\)-derivation.
Proof.: Recall that if \(\mathscr{A}\) is a unital \(\mathbb{R}\)-algebra and \(v:\mathscr{A}\to\mathscr{A}\) is an \(\mathbb{R}\)-algebra derivation then \(v(1_{\mathscr{A}})=0_{\mathscr{A}}\) since \(v(1_{\mathscr{A}})=v(1_{\mathscr{A}}^{2})=1_{\mathscr{A}}v(1_{\mathscr{A}})+ v(1_{\mathscr{A}})1_{\mathscr{A}}=v(1_{\mathscr{A}})+v(1_{\mathscr{A}})\).
Let \(h\in C^{\infty}(\mathbb{R}^{k})\) be a smooth function and \(a_{1},\ldots,a_{k}\in\mathscr{A}\). Let \(x:\mathscr{A}\to\mathbb{R}\) be an \(\mathbb{R}\)-point. Then \(b=(b_{1},\ldots,b_{k}):=(x(a_{1}),\ldots,x(a_{k}))\) is a point in \(\mathbb{R}^{k}\). By Hadamard's lemma (Lemma A.4), there are smooth functions \(g_{1},\ldots,g_{k}\in C^{\infty}(\mathbb{R}^{2k})\) such that
\[h(y)=h(b)+\sum_{j=1}^{k}(y_{j}-b_{j})g_{j}(y,b)\]
for all \(y\in\mathbb{R}^{k}\), and \(g_{j}(b,b)=\partial_{j}h(b)\). Let \(\hat{g}_{j}(y):=g_{j}(y,b)\). Then, for any \((a_{1},\ldots,a_{k})\in\mathscr{A}^{k}\),
\[h_{\mathscr{A}}(a_{1},\ldots,a_{k})=h(b)_{\mathscr{A}}+\sum_{j=1}^{k}(a_{j}-(b _{j})_{\mathscr{A}})\cdot\,(\hat{g}_{j})_{\mathscr{A}}(a_{1},\ldots,a_{k}).\]
Applying the algebraic derivation \(v\) to both sides and using the fact that \(v\) applied to a scalar is zero, we get
\[v(h_{\mathscr{A}}(a_{1},\ldots,a_{k}))=\sum_{j=1}^{k}v(a_{j})\,(\hat{g}_{j})_{\mathscr{A}}(a_{1},\ldots,a_{k})+\sum_{j=1}^{k}(a_{j}-(b_{j})_{\mathscr{A}})\,v\big{(}(\hat{g}_{j})_{\mathscr{A}}(a_{1},\ldots,a_{k})\big{)}.\]
Now we apply the \(\mathbb{R}\)-point \(x\) to both sides and use the fact that \(x(a_{j}-(b_{j})_{\mathscr{A}})=x(a_{j})-b_{j}=0\) for all \(j\). We get
\[x\big{(}v(h_{\mathscr{A}}(a_{1},\dots,a_{k}))\big{)}=\sum_{j=1}^{k}x(v(a_{j}))\cdot x\big{(}(\hat{g}_{j})_{\mathscr{A}}(a_{1},\dots,a_{k})\big{)}.\]
Finally, note that for each \(j\),
\[x\big{(}(\hat{g}_{j})_{\mathscr{A}}(a_{1},\dots,a_{k})\big{)} =(\hat{g}_{j})_{C^{\infty}(R)}\big{(}x(a_{1}),\dots,x(a_{k})\big{)} \quad\text{(since $x$ is a homomorphism of $C^{\infty}$-rings)}\] \[=g_{j}(b_{1},\dots,b_{k},b_{1},\dots,b_{k})\] \[=(\partial_{j}h)(b_{1},\dots,b_{k})=(\partial_{j}h)(x(a_{1}), \dots,x(a_{k}))\] \[=x\big{(}(\partial_{j}h)_{\mathscr{A}}(a_{1},\dots,a_{k})\big{)} \qquad\text{(since $x$ is a homomorphism of $C^{\infty}$-rings)}.\]
Therefore
\[x\big{(}v(h_{\mathscr{A}}(a_{1},\dots,a_{k}))\big{)}=\sum_{j=1}^{k}x\big{(}v( a_{j})\big{)}x\big{(}(\partial_{j}h)_{\mathscr{A}}(a_{1},\dots,a_{k})\big{)}=x \big{(}\sum_{j=1}^{k}(\partial_{j}h)_{\mathscr{A}}(a_{1},\dots,a_{k})\,v(a_{ j})\big{)}.\]
Since \(\mathscr{A}\) is point determined and since the \(\mathbb{R}\)-point \(x\) is arbitrary,
\[v\big{(}h_{\mathscr{A}}(a_{1},\dots,a_{k})\big{)}=\sum_{j=1}^{k}(\partial_{j} h)_{\mathscr{A}}(a_{1},\dots,a_{k})\,v(a_{j}),\]
i.e., \(v\) is a \(C^{\infty}\)-ring derivation.
|
2310.13647
|
Open-Loop Control Co-Design of Semisubmersible Floating Offshore Wind
Turbines using Linear Parameter-Varying Models
|
This paper discusses a framework to design elements of the plant and control
systems for floating offshore wind turbines in an integrated manner using
linear parameter-varying models. Multiple linearized models derived from
aeroelastic simulation software in different operating regions characterized by
the incoming wind speed are combined to construct an approximate low-fidelity
model of the system. The combined model is then used to generate open-loop,
optimal control trajectories as part of a nested control co-design strategy
that explores the system's power production and stability using the platform
pitch tilt as a proxy in the context of crucial plant and control design
decisions. The radial distance between the central and outer columns and the
diameter of the outer columns of the semisubmersible platform are the plant
design variables. The platform stability and power production are studied for
different plant design decisions. The effect of plant decisions on subsequent
power production and stability response of the floating wind turbine is
quantified in terms of the levelized cost of energy. The results show that the
inner-loop constraints and the plant design decisions affect the turbine's
power and, subsequently, the cost of the system.
|
Athul Krishna Sundarrajan, Yong Hoon Lee, James T Allison, Daniel Zalkind, Daniel Herber
|
2023-10-20T16:47:56Z
|
http://arxiv.org/abs/2310.13647v1
|
Open-Loop Control Co-Design of Semisubmersible Floating Offshore Wind Turbines Using Linear Parameter-Varying Models
###### Abstract
_This paper discusses a framework to design elements of the plant and control systems for floating offshore wind turbines in an integrated manner using linear parameter-varying models. Multiple linearized models derived from aeroelastic simulation software in different operating regions characterized by the incoming wind speed are combined to construct an approximate low-fidelity model of the system. The combined model is then used to generate open-loop, optimal control trajectories as part of a nested control co-design strategy that explores the system's power production and stability using the platform pitch tilt as a proxy in the context of crucial plant and control design decisions. The radial distance between the central and outer columns and the diameter of the outer columns of the semisubmersible platform are the plant design variables. The platform stability and power production are studied for different plant design decisions. The effect of plant decisions on subsequent power production and stability response of the floating wind turbine is quantified in terms of the levelized cost of energy. The results show that the inner-loop constraints and the plant design decisions affect the turbine's power and, subsequently, the cost of the system._
floating offshore wind turbines; semisubmersible platforms; linear parameter-varying models; control co-design; optimal control; levelized cost of energy
## 1 Introduction
The design of floating offshore wind turbines (FOWTs) has often followed a sequential pattern, where the physical plant parameters are designed first, and a controller is then developed for a particular plant [1, 2, 3, 4]. However, in FOWTs, there are strong interactions between the structural and environmental dynamics and the controller. Unfortunately, a sequential design process can produce conservative designs because it does not account for this coupling [5, 6]. Optimizing both the physical plant and the controller simultaneously enables rapid identification of stable, system-level optimal results. This integrated design approach has been studied extensively under the term control co-design (CCD) [1, 7, 8, 9, 10, 11]. Recently, the importance of these integrated design approaches for energy system design has been recognized by domain experts. References [5, 12, 13, 14, 15] have explored the application of integrated design to offshore wind turbines. Integrated design approaches have also found applications in design of mixed renewable/nonrenewable power generation systems [16, 17].
The primary design goal of any wind-based energy system is to capture as much power from the incoming wind while minimizing the structure's dynamic loads. However, the overarching balance between increasing the annual energy production while minimizing the systems' building and operating costs is essential to producing economical energy solutions. These goals are captured by the levelized cost of energy (LCOE) metric [18]:
\[\text{LCOE}=\frac{\text{Total Lifetime Cost}}{\text{Total Lifetime Energy Output}} \tag{1}\]
The total lifetime costs of the FOWT system are a combination of the initial capital cost needed to build the system and the operation and maintenance costs over its lifetime. The capital costs are often directly linked to some of the plant design decisions [19, 20]. The maintenance costs and the total lifetime energy output are dependent on how the system operates and, consequently, depend on the environment and how it is controlled [21, 22]. Recent studies have shown that advanced control strategies for offshore wind applications can increase the power extracted from the turbine and minimize the levelized cost [23, 24]. Most conventional LCOE estimates have not incorporated detailed dynamic assessments nor the impact of novel control strategies. In the case of highly coupled, highly constraint-sensitive systems, such as FOWTs, such considerations are imperative because of the many challenges making these systems economically viable [1]. Additionally, overlooking the impacts of control decisions on optimal physical design is one of the pitfalls of sequential design approaches.
### _Plant Design for Floating Offshore Wind Turbines_
The plant design for a FOWT involves design decisions for several individual subsystems with considerations of stability, cost, and energy production. The primary elements of a FOWT are the rotor, drivetrain, nacelle, tower, and support structure and are labeled in Fig. 1. Stability of the FOWT about its natural equilibrium is required in all manner of wind, wave, and current excitations that the system might experience [25]. Reference [26] provides information about the current standard industry requirements of an FOWT.
An increase in the power production capacity of an FOWT increases turbine inertial and structural loads [13, 27]. In addition to this concern, the turbine must also withstand the forces and motions induced by the stochastic offshore environment [28, 29, 30]. The design of the substructure is thus a critical aspect of FOWT design. Different substructure designs have been proposed based on ballast, buoyancy, and mooring stability concepts [31, 32, 33]. The focus of this study will be the semisubmersible platform technology, which has been shown to have potential benefits over other alternatives in terms of stability, transportation, and ease of assembly [31, 34]. In this study, the plant variables under consideration are the distance between the central column and the outer columns, also called column spacing (\(c_{s}\)), and the diameter of the outer columns (\(c_{d}\)), as they directly affect the geometry and cost of the platform:
\[\mathbf{x}_{p}=\left[c_{s}\ c_{d}\right]^{T} \tag{2}\]
Generally, increasing the size of the support structure will make the FOWT more stable, but this would also raise the capital and other associated costs. Therefore, it is essential to optimize the system for cost while ensuring stability [35]. The effect of other variables, such as ballast volume and mooring parameters, could also be explored. As the development cycle progresses, additional practical considerations may also be incorporated into the plant design, like assembly costs and procedures, maintenance costs, and ease of transportation.
### _Control Design for Floating Offshore Wind Turbines_
The control system for an FOWT is instrumental in achieving the design goals stated in the previous sections. The power generated by an FOWT and the physical loads on its structure are heavily dependent on the loading conditions induced by the wind, waves, and currents. Operating the system in such a way so that it can remain stable while producing maximal power is the primary goal of the FOWT control system. Similar to the control of land-based wind turbines, the control strategy selected depends heavily on the system's input excitations because these inputs produce the dynamical responses we seek to optimize.
The primary mode of control for any wind turbine depends heavily on the wind, so specific operating regions are often defined based on the wind speed [36, 37]. Typically, there are three wind speed-based regions of interest, visualized in Fig. 2. At lower, below-rated wind speeds, the goal is to vary the generator torque so that the generator speed tracks the optimum power coefficient. In the above-rated wind speed region (Reg. 3), the turbine is designed to operate at its maximum power level. In between these regions, there is a transition behavior, and, above the cut-out wind speed, the system is shut down because there can be permanent structural damage.
Figure 1: Floating offshore wind turbine. _Illustration courtesy of NREL_.
The two primary control inputs for wind turbines are the pitch angle of the turbine blades (commonly called blade pitch) and the torque produced by the generator. In below-rated wind speeds, varying the generator torque is the primary mode of control of the turbine [12, 38]. Above rated wind speeds, the generator torque is held constant and the blade pitch is varied to regulate the generator speed and power to their rated values.
### Modeling Considerations
It is often necessary to conduct early-stage design studies to understand the desired fundamental system properties and behaviors that inform critical decisions that need to be made as the system of interest is realized. The use of high-fidelity modeling tools and methods in early-stage design studies is not always needed to achieve the desired design insights and can be prohibitive due to their complexity and computational expense [39]. In the context of optimization-based studies, depending on the parameterization of the given turbine and platform model, the resulting design space could be broad and complex [40, 41].
To facilitate these design and control (both closed- and open-loop) studies, it is common to develop reduced or lower-order models that capture just the system's essential physics. Results from these reduced-order models are validated against the results from high-fidelity tools to understand their veracity in studying the system's behavior. In some cases, these models are then linearized around predetermined set-point values in distinct operating regions. These linearized models are then either used to understand the system dynamics and design controllers in these operating regions or to develop frequency domain models that enable faster model evaluation [42, 43, 5, 44]. Some recent platform design studies have utilized these linearized models and optimization-based approaches to identify the optimal design [43, 5, 44].
However, there are some challenges in developing these lower-order models. For example, it can be complicated because this process requires extensive subject knowledge of FOWTs and the associated physics/engineering disciplines. Additionally, the lower-order models are developed to study a specific aspect of the system's behavior (e.g., the floating structure response, controller response, and aerodynamic wake). As such, the results from these models cannot be easily generalized to obtain system-level insights. The highly coupled nature of an FOWT can create further complications in modeling the system accurately [42, 45, 46, 5].
One way to mitigate these challenges is by using linearized models obtained directly from high-fidelity modeling tools (e.g., computational fluid dynamics, blade element momentum theory) [47, 48]. These models are obtained by linearizing the nonlinear system around specific operating points, often stationary points where the system exhibits static behavior. A linear time-invariant state-space dynamic model about the static operating point (\(\mathbf{\xi}_{o},\mathbf{u}_{o}\)) typically has the following form:
\[\frac{d\mathbf{\xi}_{\mathbf{\Delta}}(t)}{dt} =\mathbf{A}_{o}\mathbf{\xi}_{\mathbf{\Delta}}(t)+\mathbf{B}_{o}\mathbf{u}_{\mathbf{\Delta}}(t) \tag{3a}\] \[\mathbf{y}(t) =\mathbf{C}_{o}\mathbf{\xi}_{\mathbf{\Delta}}(t)+\mathbf{D}_{o}\mathbf{u}_{\mathbf{\Delta}}(t)+\mathbf{g}_{o} \tag{3b}\]
where \(t\) is time, \(\mathbf{\xi}_{\mathbf{\Delta}}(t)\) are the relative states related to the original states \(\mathbf{\xi}\) with \(\mathbf{\xi}(t)=\mathbf{\xi}_{\mathbf{\Delta}}(t)+\mathbf{\xi}_{o}\), \(\mathbf{u}_{\mathbf{\Delta}}(t)\) are the relative inputs related to the original inputs \(\mathbf{u}\) with \(\mathbf{u}(t)=\mathbf{u}_{\mathbf{\Delta}}(t)+\mathbf{u}_{o}\), \(\mathbf{y}(t)\) are the outputs, and the matrices \((\mathbf{A}_{o},\mathbf{B}_{o},\mathbf{C}_{o},\mathbf{D}_{o},\mathbf{g}_{o})\) are associated with the linearization process.
A significant drawback with any kind of linearized model is that its accuracy in capturing the system's dynamic response diminishes quickly as the system's behavior moves away from the initial operating point. Thus, it becomes difficult to work with many diverse design load cases where the wind speed continuously varies. Some studies that have used linearized models have leveraged them in gain scheduling approaches to account for nonlinearities. However, this approach does not guarantee stability and performance for all possible values of the wind speed [49].
In this work, we will discuss the use of linear parameter-varying (LPV) models to help overcome the drawbacks of distinct linear models [49, 50]. These LPV models show good accuracy when capturing the original nonlinear dynamics and can be used to generate open-loop optimal control trajectories. LPV models have also been used to investigate various closed-loop control solutions for wind turbines [49, 51, 52]. However, these studies have not explored the use of continuous LPV models to approximate the nonlinear system response or its efficient application in early-stage design studies with open-loop optimal control.
Figure 2: Stationary operating points for IEA-15-MW turbine.
### Integrated Design with Control Co-Design
CCD is an integrated design paradigm that enables simultaneous design optimization of the plant and control systems [53, 54, 55, 10]. The CCD approach provides a rigorous framework that can naturally handle the coupling between the plant and control drivers present in FOWTs. A common mathematically equivalent way to decompose a CCD problem is with the nested formulation as a bilevel optimization [53, 54]. The coordination approach defines a first-level, outer-loop problem that optimizes the plant design with information on the best possible performance from the second-level, inner-loop problem that optimizes the dynamics and control for a given plant design (and is sometimes called the control subproblem). In other words, the outer loop generates candidate plant designs, denoted by \(\mathbf{x}_{p}^{\dagger}\); this candidate is then passed to the inner loop. The inner loop then produces an optimal control solution, \(\mathbf{u}\), and system dynamic states, \(\mathbf{\xi}\), for this candidate plant design.
There are certain advantages to using the nested CCD approach (many are discussed in [54]), especially for problems where the inner loop is a linear-quadratic dynamic optimization (LQDO) problem. LQDO problems are characterized by quadratic objectives, linear dynamic systems, general linear constraints, and open-loop control [39, 54]. Such problems can be solved efficiently and accurately using quadratic programming methods [56]. Additionally, nested CCD is often necessary when black-box models of the dynamics are used (as will be the case in this work) [57, 8]. In this article, we demonstrate the use of the nested CCD method in the design of FOWT with the primary goal of minimizing the LCOE. Factors such as power generation and the dynamic stability of the system are incorporated as inner-loop objectives and constraints, respectively.
### Open-Loop vs. Closed-Loop Control
As is true in many domains, various closed-loop control strategies have been used for wind turbine control. While these strategies are providing many practical control solutions, their use in early-stage design studies can limit exploration because a control architecture must be assumed, potentially limiting our understanding of various trade-offs that can inform better wind turbine designs [58]. Since open-loop optimal control (OLOC) does not assume a particular control architecture, it can help identify the maximum achievable performance limits and provide critical insights into the optimal system dynamics and controller behavior in early-stage design studies [54].
In the study of many controlled dynamic systems, simulation or shooting-based approaches have been used where a simulation is performed given a controller (either of the closed-loop or open-loop variety), and its result (e.g., power generated) is used to assess the performance of the proposed control strategy. While implementing a shooting approach is relatively straightforward, there are several challenges when combined with OLOC, such as various efficiency and convergence issues [59, 60]. Therefore, we use the direct-transcription (DT) method to solve the OLOC problem, which discretizes the states and controls, resulting in a large, sparse optimization problem [59, 60, 7].
### Use of OpenFAST and WEIS Models
Wind Energy with Integrated Servo-Control (WEIS) is an open-source project developed by the National Renewable Energy Laboratory (NREL) and partners that allows users to perform CCD of FOWT systems [61, 2]. The WEIS toolbox is built on OpenFAST [62], another open-source toolbox developed by NREL, that generates a full-system dynamic response of FOWTs under wind, wave, and current excitations. The OpenFAST tool is built on independent modules that capture the important physical phenomena of the different FOWT subsystems and couplings between them. There are different modules to capture the effects of aerodynamics, hydrodynamics, servodynamics, and mooring dynamics. A variety of plant design decisions can be explored within these tools as well [2]. The Wind-Plant Integrated System & Engineering Model (WISDEM®), also part of WEIS, is used to compute the platform geometry, mass, and cost.
In this work, the dynamic models of FOWTs will be generated using the linearization capabilities of the WEIS/OpenFAST tools, with the original nonlinear dynamics simulation capabilities being used for validation of the results. A detailed discussion regarding the linearization capabilities of OpenFAST and the entire tool can be found in [47, 48, 61, 62]. Wind speed is used to select the state and control operating points for this linearized model.
The remainder of the paper is organized as follows: Sections 2 and 3 define LPV modeling theory and validate the specific LPV models used in this work, respectively. Section 4 formulates the CCD problem using the LPV dynamic model. Section 5 presents the results of several studies conducted to better understand the impact of control and plant decisions on the LCOE objective. Section 6 summarizes the results and provides future steps for this work.
## 2 Linear Parameter-Varying Models
As mentioned in Sec. 1.3, linearized models like the one defined in Eq. (3) can accurately describe the system's behavior for small perturbations about the operating point from which they were derived. For the design and optimization activities of an FOWT system, it is essential to understand the system behavior over multiple input excitations. While there are additional drivers for modeling variations, the primary one in wind energy
systems, including FOWTs, is the wind speed in the direction of the turbine-blade system. Under different wind conditions, the stationary operating points for the FOWT system vary greatly, as do the matrices defining the dynamic model in Eq. (3). Therefore, we will consider models dependent on this important parameter, which will be useful in OLOC CCD studies.
### Linear Parameter-Varying Model Derivation
LPV models are a special case of linear time-varying (LTV) systems where the system matrices are continuous and are a function of a set of parameters [50, 52]. Here, we will consider the single parameter case where the parameter \(w\) indicates the current wind speed value. Now consider the following nonlinear parameter-dependent model \(\Sigma\):
\[\Sigma=\left\{\begin{aligned} \frac{d\mathbf{\xi}}{dt}& =\mathbf{f}(\mathbf{\xi},\mathbf{u},w)\\ \mathbf{y}&=\mathbf{g}(\mathbf{\xi},\mathbf{u},w)\end{aligned}\right. \tag{4}\]
Our goal is to linearize this model about the \(w\)-varying operating point functions \((\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w))\) where stationary or steady-state models characterize their values:
\[\mathbf{f}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w)=\mathbf{0},\ \forall w\in[w_{\min},w_{ \max}] \tag{5}\]
where \(w_{\min}\) is the minimum parameter value considered, and \(w_{\max}\) is the maximum parameter value considered.
Now the relationship between the linearization states and the original states depends on the parameter \(w\):
\[\mathbf{\xi}(t)=\mathbf{\xi}_{\Delta}(t)+\mathbf{\xi}_{o}(w),\quad\mathbf{u}(t)=\mathbf{u}_{ \Delta}(t)+\mathbf{u}_{o}(w) \tag{6}\]
Assuming that \(w\) is time varying, the time derivative relationship of the states is:
\[\frac{d\mathbf{\xi}}{dt} =\frac{d\mathbf{\xi}_{\Delta}}{dt}+\frac{d}{dt}\mathbf{\xi}_{o}(w(t)) \tag{7a}\] \[=\frac{d\mathbf{\xi}_{\Delta}}{dt}+\frac{\partial\mathbf{\xi}_{o}}{\partial w}\frac{dw}{dt} \tag{7b}\]
Now we use the following notation for the derivatives of the nonlinear model:
\[\mathbf{A}(w) \coloneqq\mathbf{J}_{\mathbf{\xi}}^{f}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w), \ \mathbf{B}(w)\coloneqq\mathbf{J}_{\mathbf{u}}^{f}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w)\] \[\mathbf{C}(w) \coloneqq\mathbf{J}_{\mathbf{\xi}}^{g}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w), \ \mathbf{D}(w)\coloneqq\mathbf{J}_{\mathbf{u}}^{g}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w)\]
where \(\mathbf{J}_{\mathbf{x}}^{f}\) denotes the Jacobian of \(\mathbf{f}\) with respect to \(\mathbf{x}\), and the values of these functions are dependent on the operating points and are denoted as:
\[\mathbf{f}(w)\coloneqq\mathbf{f}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w),\ \mathbf{g}(w)\coloneqq\mathbf{g}(\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w),w)\]
With this derivative relationship in Eq. (7) and the notation above, the nonlinear system \(\Sigma\) in Eq. (4) is linearized about \((\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w))\) yielding the following LPV system:
\[\Sigma_{w}=\left\{\begin{aligned} \frac{d\mathbf{\xi}_{\Delta}}{dt}&=\mathbf{f}(w)+\mathbf{A}(w)\mathbf{\xi}_{\Delta}+\mathbf{B}(w)\mathbf{u}_{\Delta}-\frac{\partial\mathbf{\xi}_{o}(w)}{\partial w}\frac{dw}{dt}\\ \mathbf{y}&=\mathbf{g}(w)+\mathbf{C}(w)\mathbf{\xi}_{\Delta}+\mathbf{D}(w)\mathbf{u}_{\Delta}\end{aligned}\right. \tag{9}\]
If only a single time-invariant value of the parameter denoted \(w_{o}\) is considered, then we have the following system:
\[\frac{d\mathbf{\xi}_{\Delta}}{dt} =\mathbf{A}(w_{o})\mathbf{\xi}_{\Delta}+\mathbf{B}(w_{o})\mathbf{u}_{\Delta}-\underbrace{\frac{\partial\mathbf{\xi}_{o}(w_{o})}{\partial w}\frac{dw}{dt}}_{=\,\mathbf{0}} \tag{10a}\] \[\mathbf{y} =\mathbf{g}(w_{o})+\mathbf{C}(w_{o})\mathbf{\xi}_{\Delta}+\mathbf{D}(w_{o})\mathbf{u}_{\Delta} \tag{10b}\]
where the last term in Eq. (10a) vanishes because \(dw/dt=0\),
which gives us:
\[\Sigma_{o}=\left\{\begin{aligned} \frac{d\mathbf{\xi}_{\Delta}}{dt}& =\mathbf{A}(w_{o})\mathbf{\xi}_{\Delta}+\mathbf{B}(w_{o})\mathbf{u}_{\Delta}\\ \mathbf{y}&=\mathbf{g}(w_{o})+\mathbf{C}(w_{o})\mathbf{\xi}_{\Delta}+ \mathbf{D}(w_{o})\mathbf{u}_{\Delta}\end{aligned}\right. \tag{11}\]
which is the same LTI system defined in Eq. (3) for a single operating point characterized by the parameter \(w_{o}\).
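To make the preceding derivation concrete, the following minimal sketch (not from the paper or the WEIS code base) integrates the LPV dynamics of Eq. (9) with forward Euler, including the \(-(\partial\mathbf{\xi}_{o}/\partial w)\,dw/dt\) correction term. The functions \(A(w)\), \(B(w)\), and \(\xi_{o}(w)\) are synthetic placeholders standing in for interpolants of the kind constructed in Sec. 2.2.

```python
"""Minimal sketch (not from the paper's code): forward-Euler integration of the LPV
system of Eq. (9). The parameter-dependent pieces A(w), B(w), xi_o(w) are synthetic
placeholders standing in for interpolants of the kind constructed in Sec. 2.2."""
import numpy as np

A = lambda w: np.array([[0.0, 1.0], [-(1.0 + 0.1 * w), -0.5]])   # placeholder A(w)
B = lambda w: np.array([[0.0], [0.2 + 0.01 * w]])                # placeholder B(w)
xi_o = lambda w: np.array([0.05 * w, 0.0])                       # placeholder operating-point schedule
dxi_o_dw = lambda w, h=1e-4: (xi_o(w + h) - xi_o(w - h)) / (2 * h)

def simulate_lpv(w_traj, u_delta_traj, dt):
    """Integrate d(xi_delta)/dt = A(w) xi_delta + B(w) u_delta - (d xi_o/dw) dw/dt."""
    xi_delta = np.zeros(2)                                       # start at the operating point
    full_states = [xi_delta + xi_o(w_traj[0])]
    for k in range(len(w_traj) - 1):
        w, dwdt = w_traj[k], (w_traj[k + 1] - w_traj[k]) / dt
        rhs = A(w) @ xi_delta + B(w) @ u_delta_traj[k] - dxi_o_dw(w) * dwdt
        xi_delta = xi_delta + dt * rhs
        full_states.append(xi_delta + xi_o(w_traj[k + 1]))       # recover full states via Eq. (6)
    return np.array(full_states)

t = np.arange(0.0, 50.0, 0.01)
w_traj = 12.0 + 3.0 * np.sin(0.1 * t)                            # slowly varying wind parameter
states = simulate_lpv(w_traj, np.zeros((len(t), 1)), dt=0.01)
print(states.shape)
```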
### Construction Using Multiple Linearized Models
The system \(\Sigma_{w}\) with continuous dependence on the parameter \(w\) generally will not be directly available because linearized models are often realized through numerical methods for specific operating points (i.e., \(\Sigma_{o}\)). Therefore, it may be necessary to construct \(\Sigma_{w}\) from a finite strategic set of \(\Sigma_{o}\) models. To accomplish this goal, the matrix entries of \(\Sigma_{w}\) are determined by element-wise matrix interpolation from a given set of linearized models, denoted \(\mathbf{\Omega}=[\Sigma_{o1},\Sigma_{o2},\cdots,\Sigma_{on}]\), each created using the parameter values \(\mathbf{W}=[w_{1},w_{2},\cdots,w_{n}]\). The selected interpolation scheme was piecewise cubic Hermite interpolating polynomials. Derivatives of the polynomial interpolating function are directly computed when needed.
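A minimal sketch of this element-wise construction, assuming SciPy's PchipInterpolator as the piecewise cubic Hermite interpolant and synthetic two-state matrices in place of the OpenFAST-derived ones:

```python
"""Minimal sketch (not the WEIS/OpenFAST code): element-wise PCHIP interpolation of a
finite set of linearized models over the wind-speed grid W, as described above.
The two-state matrices below are synthetic placeholders for the OpenFAST-derived ones."""
import numpy as np
from scipy.interpolate import PchipInterpolator

W = np.linspace(3.0, 25.0, 23)                                   # parameter samples [m/s]
A_set = np.stack([np.array([[0.0, 1.0], [-(1 + 0.1 * w), -0.5]]) for w in W])
xi_o_set = np.stack([np.array([0.05 * w, 0.0]) for w in W])      # stationary states at each w

def make_interpolant(W, M_set):
    """Return a callable w -> M(w); only nonzero entries are interpolated (property P2)."""
    nz = np.any(M_set != 0.0, axis=0)                            # shared sparsity pattern
    interps = {idx: PchipInterpolator(W, M_set[(slice(None),) + idx])
               for idx in zip(*np.nonzero(nz))}
    def M(w):
        out = np.zeros(M_set.shape[1:])
        for idx, f in interps.items():
            out[idx] = f(w)
        return out
    return M

A_of_w = make_interpolant(W, A_set)
xi_o_of_w = make_interpolant(W, xi_o_set)
# Derivative of the interpolating polynomial, needed for the d(xi_o)/dw term in Eq. (9)
dxi0_dw = PchipInterpolator(W, xi_o_set[:, 0]).derivative()
print(A_of_w(12.4), xi_o_of_w(12.4), float(dxi0_dw(12.4)))
```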
There are several properties to consider to ensure such an interpolation scheme has a reasonable chance of meaningfully capturing the nonlinear dynamics, including:
* The structure of the states, inputs, and outputs are unchanging for all considered \(\Sigma_{o}\).
* The sparsity patterns (nonzero entries in the system matrices) are generally similar between analogous matrices.
* The stationary condition in Eq. (5) holds for the given interpolation scheme and \(\mathbf{W}\) (i.e., \((\mathbf{\xi}_{o}(w),\mathbf{u}_{o}(w))\)) can be found through interpolation such that the condition holds.
* The element-wise relationships between different matrices can be reasonably interpolated using a selected \(\mathbf{W}\); however, this is hard to quantify because errors in these coefficients might not result in large errors in the key outputs.
* At various validation points not in \(\mathbf{W}\), the error between the actual linearized system at \(w_{o}\) and the interpolated system \(\Sigma_{w}\), quantified by the \(H_{\infty}\) norm, is below a tolerance \(\epsilon\): \[\left\|\mathbf{G}_{o}(s)-\mathbf{G}_{w}(s)\right\|_{H_{\infty}}\leq\epsilon\] (12) where \(\mathbf{G}_{o}(s)\) and \(\mathbf{G}_{w}(s)\) are the transfer function matrices for \(\Sigma_{o}\) and \(\Sigma_{w}\), respectively. This error metric better captures the input/output error between the interpolated and original systems.
* Time-domain simulations between the nonlinear \(\Sigma\) and LPV \(\Sigma_{w}\) should be similar.
At this time, the selection of \(\mathbf{W}\) was informed by expert intuition and figures such as Fig. 2 that characterize the different regions of operation and their transition points. Future work will consider automated sampling strategies that try to optimally sample points for constructing an accurate LPV model using the condition in Eq. (12).
## 3 LPV Model Validation for IEA-15 MW Turbine
The International Energy Agency (IEA) 15-MW offshore wind turbine is a reference turbine model jointly developed by NREL and Danish Technical University (DTU) [63, 12], visualized in Fig. 1. The turbine is supported by a floating semisubmersible platform and a chain catenary mooring system. The details of the support structure are available in [64]. This is the system under consideration in this work.
There are five states, namely the platform pitch \(\Theta_{p}\), the first time derivative of platform pitch \(\dot{\Theta}_{p}\), the tower-top fore-aft displacement \(\delta_{T}\), the first time derivative of the tower-top fore-aft displacement \(\dot{\delta}_{T}\), and the generator speed \(\omega_{g}\). The order of the states is as follows:
\[\mathbf{\xi}(t)=\begin{bmatrix}\Theta_{p}&\dot{\Theta}_{p}&\delta_{T}&\dot{\delta}_{T}&\omega_{g}\end{bmatrix}^{T} \tag{13}\]
In the rest of this article, we restrict our focus to two key states, namely the generator speed \(\omega_{g}\) and platform pitch \(\Theta_{p}\), but all are included in the LPV models. In its current form, the model is excited by wind inputs only; wave and current disturbances are not considered. Correspondingly, the total inputs to the system are the wind speed \(w\), the generator torque \(\tau_{g}\), and the blade pitch \(\beta\):
\[\mathbf{u}(t)=\begin{bmatrix}\tau_{g}&\beta\end{bmatrix}^{T} \tag{14}\]
For the considered system, OpenFAST can provide accurate simulations of the system's nonlinear dynamics (i.e., the outputs of \(\Sigma\)). However, due to the concerns expressed in previous sections, an LPV model is considered a less computationally expensive and structured alternative to these expensive simulations. The natural choice for the parameter needed to construct the LPV model \(\Sigma_{w}\) is the wind speed. The operating region of a wind turbine is between the cut-in wind speed (\(w_{\text{min}}=3\) [m/s] in this study) and the cut-out wind speed (\(w_{\text{max}}=25\) [m/s]). To understand the accuracy of the LPV modeling approach for this system, several validation studies were carried out.
### State-Space Model Comparisons
With a selected \(\mathbf{W}\) (23 distinct wind speeds), the set of linearized state-space models \(\Sigma_{o}\) at each of the wind speed values is obtained. To construct the continuous \(\Sigma_{w}\) using \(\mathbf{W}\) and \(\Sigma_{o}\), direct element-wise interpolation of the matrices \((\mathbf{A}_{o},\mathbf{B}_{o},\mathbf{C}_{o},\mathbf{D}_{o})\) was used. To reduce the interpolation costs, matrix sparsity patterns were considered. Only entries with nonzero values were interpolated (and the sparsity pattern remained similar (P2)).
To understand the predictive accuracy of this approach and check if these models satisfy (P4), the following test is carried out. Every alternate point in \(\mathbf{W}\) was chosen as training data for the interpolation procedure, and the values in between were selected as validation points. This allows us to assess if the interpolation approach can predict matrix properties by comparing to the validation systems\({}^{1}\). In Fig. 3(a), several key \(\mathbf{\xi}_{o}(w)\) and \(\mathbf{u}_{o}(w)\) values are shown, and there is good agreement between the interpolated LPV system and the validation points, even in the transition region. In Fig. 3(b), one of the eigenvalues of \(\mathbf{A}(w)\) that changes with the wind is shown. Again, the eigenvalues generally are well predicted, with the largest errors in the transition region. Finally, the normalized nonzero entries of \(\mathbf{B}(w)\) are shown in Fig. 3(c). There are some validation points with high errors in the transition region but good agreement in the other regions.
Footnote 1: All points in \(\mathbf{W}\) are used in the studies in Sec. 5.
### Frequency-Domain Verification
The transfer function matrix of the interpolated linear models was studied to understand better if the input/output relationship is accurately predicted and compute the error in Eq. (12) in (P5). Here, we consider the four relationships between the two key states (\(\omega_{g}\) and \(\Theta_{p}\)) and the inputs \(\mathbf{u}\). The results for the input/output combination with the highest error (\(\omega_{g}\) and \(\beta\)) are shown in Fig. 4.
The \(H_{\infty}\) norm error between the training and validation systems and the interpolated systems is shown in Fig. 4(a). The errors at the training points are near zero, as expected using interpolation. However, the systems derived from the transition region (8-12 [m/s]) have the highest error compared to the other regions. This figure shows how advanced sampling strategies could be used to better sample from regions of high error. Additionally, the transfer functions between \(\beta\) and \(\omega_{g}\) are shown in Figs. 4(b) and 4(c) with a close prediction and largest \(H_{\infty}\) error, respectively.
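The \(H_{\infty}\) error metric of Eq. (12) can be approximated by a frequency sweep of the error system's largest singular value. The sketch below (not the paper's implementation) does this with NumPy only, using placeholder two-state systems rather than the actual \(\Sigma_{o}\) and \(\Sigma_{w}\) matrices.

```python
"""Minimal sketch (not the paper's implementation): approximate the H-infinity error of
Eq. (12) by sweeping the maximum singular value of the frequency response of the
difference system. The 2x2 systems below are placeholders for an actual linearization
Sigma_o and the interpolated Sigma_w evaluated at the same wind speed."""
import numpy as np

def freq_response(A, B, C, D, omegas):
    """Stack G(j*omega) = C (j*omega*I - A)^(-1) B + D over a frequency grid."""
    n = A.shape[0]
    return np.stack([C @ np.linalg.solve(1j * om * np.eye(n) - A, B) + D for om in omegas])

def hinf_error(sys_true, sys_interp, omegas):
    """Peak (over frequency) largest singular value of the error transfer matrix."""
    G_err = freq_response(*sys_true, omegas) - freq_response(*sys_interp, omegas)
    return max(np.linalg.svd(G, compute_uv=False)[0] for G in G_err)

A_o = np.array([[0.0, 1.0], [-2.20, -0.5]]); B_o = np.array([[0.0], [0.32]])   # "actual" model
A_w = np.array([[0.0, 1.0], [-2.25, -0.5]]); B_w = np.array([[0.0], [0.31]])   # interpolated model
C, D = np.eye(2), np.zeros((2, 1))

omegas = np.logspace(-3, 2, 400)                                 # frequency grid [rad/s]
print("approx. H-infinity error:", hinf_error((A_o, B_o, C, D), (A_w, B_w, C, D), omegas))
```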
### Time-Domain Verification
The final comparisons were based on (P6) using OpenFAST to determine the nonlinear response of \(\Sigma\). Using the same input trajectory, three different models (\(\Sigma\), \(\Sigma_{w}\), and \(\Sigma_{o}\) using the average wind speed \(w_{\text{avg}}\)) are simulated, then the resulting state trajectories are compared. A step-like wind input is considered for this study and is shown in Fig. 5(a) (and the nonzero trajectories for \(\tau_{g}\) and \(\beta\) are not shown).
From the results, we see that \(\Sigma_{w}\) captures the nonlinear response from OpenFAST more accurately than \(\Sigma_{o}\) using \(w_{\text{avg}}\). For this study, \(w_{\text{avg}}=12.8\) [m/s]. Early in the simulation, when the wind speed value is significantly different from \(w_{\text{avg}}\), we see that the \(\Sigma_{o}\) using \(w_{\text{avg}}\) produces inaccurate results for \(\Theta_{p}\) in Fig. 5(b) and \(\omega_{g}\) in Fig. 5(c). Using all the different comparisons, it was concluded that the LPV model \(\Sigma_{w}\) can, with reasonable accuracy, capture the dynamics of the considered FOWT.
### Interpolation Based on Plant Variables
The model \(\Sigma_{w}\) just presented was obtained using a particular instance of the system's plant design, denoted by \(\mathbf{x}_{p}\) in Eq. (2). However, we also want to consider the design impacts of the plant variables over the full range of their allowable values. For such an investigation, a complete set of linear models \(\mathbf{\Sigma}_{w}\), corresponding to multiple plant designs, are obtained. Because only two plant variables are considered in the study, a full-factorial grid was constructed. A regular-grid interpolation scheme is then used to interpolate the individual elements of \(\Sigma_{w}\) over the entire range of the plant variables. From (P2) the sparsity information can be used to construct the interpolation scheme for the nonzero elements in the linear system matrices, making the process more efficient. The samples were generated between bounds \(\mathbf{L}_{p}=[36,6]^{T}\) [m] and \(\mathbf{U}_{p}=[78,24]^{T}\) [m] considered for \(c_{s}\) and \(c_{d}\) dimensions, respectively. The column spacing dimension was sampled for \(n_{cs}=7\) different values, while the column diameter was sampled at \(n_{cd}=7\) different values, yielding a total of \(n=49\) samples. The nominal platform specifications are available at [64].
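A minimal sketch of the regular-grid interpolation over the plant variables, using SciPy's RegularGridInterpolator on a synthetic matrix entry (in the study, each grid point would hold an OpenFAST-derived linearization rather than the placeholder values used here):

```python
"""Minimal sketch (placeholder data): regular-grid interpolation of linear-model entries
over the two plant variables (column spacing c_s, column diameter c_d), mirroring the
full-factorial scheme described above."""
import numpy as np
from scipy.interpolate import RegularGridInterpolator

cs_grid = np.linspace(36.0, 78.0, 7)                  # column spacing samples [m]
cd_grid = np.linspace(6.0, 24.0, 7)                   # column diameter samples [m]

# Synthetic scalar entry of A(x_p, w = 12 m/s) on the 7x7 full-factorial grid
entry = np.array([[-(2.0 + 0.01 * cs + 0.02 * cd) for cd in cd_grid] for cs in cs_grid])

interp = RegularGridInterpolator((cs_grid, cd_grid), entry)
x_p = np.array([51.75, 12.50])                        # nominal plant design
print("interpolated entry at x_p:", interp(x_p))
```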
Similar tests to those outlined in Secs. 3.1-3.2 were carried out to check the predictive accuracy of the interpolation scheme based on \(\mathbf{x}_{p}\). For the state-space model comparison, the interpolation scheme was set up for the linear models with the highest \(H_{\infty}\) error from Fig. 4(c) at \(w=12\) [m/s]. The corresponding state matrix \(\mathbf{A}(12)\) and key states and control operating points from \(\{\mathbf{\xi}_{o}(12),\mathbf{u}_{o}(12)\}\) were interpolated individually for both column spacing (\(c_{s}\)) and column diameter (\(c_{d}\)) dimensions as shown in Fig. 6. The frequency domain validation outlined for interpolation based on \(\mathbf{W}\) was carried out for interpolation based on \(\mathbf{x}_{p}\). The \(H_{\infty}\) norm error for 25 different plant variable samples was evaluated between the interpolated and actual models at \(w=12\) [m/s], and the average error was found to be \(\sim 10^{-5}\) [dB] for all 25 samples. Therefore, we conclude that interpolation based on \(\mathbf{x}_{p}\) is generally well-behaved, potentially more so than the wind speed dimension. Since the \(H_{\infty}\) error was so low, these results are not shown graphically.
Figure 4: Transfer function-based comparisons using the validation wind speed values for the IEA 15-MW wind turbine.
Figure 3: Select stationary points, eigenvalues, and input matrix entries for \(\Sigma_{w}\) for the IEA 15-MW wind turbine.
## 4 Control Co-design Problem Formulation
This section describes the nested CCD problem constructed using the LPV models from Sec. 2 to study the impact of various stability constraints on the LCOE for the considered single-device FOWT.
### Outer-Loop Plant Design Problem Formulation
The outer-loop plant optimization problem in the nested CCD approach employed here is centered around the LCOE calculation in Eq. (1). In this calculation, the total lifetime cost is estimated as:
\[C_{\text{capital}}(\mathbf{x}_{p}) =C_{\text{turbine}}(\mathbf{x}_{p})+C_{\text{bos}}(\mathbf{x}_{p}) \tag{15a}\] \[C_{n} =r_{\text{fc}}C_{\text{capital}}(\mathbf{x}_{p})+C_{\text{opex}} \tag{15b}\]
where \(C_{\text{turbine}}(\mathbf{x}_{p})\) and \(C_{\text{bos}}(\mathbf{x}_{p})\) are the turbine cost and the balance of system cost, both of which depend on the plant design. \(C_{\text{opex}}\) is the annual operating cost, and \(r_{\text{fc}}\) is the fixed charge rate, which, as used in this study, captures the amortization of \(C_{\text{capital}}\) in Eq. (15a) across the project lifetime. More details about \(r_{\text{fc}}\) can be found in [65, 66]. For this study, we used the cost and scaling models and LCOE equation discussed in detail in [67, 68, 69, 70, 71, 72].
The total energy generated in a year is determined as:
\[E=\text{AEP}=\int_{\mathcal{W}_{o}}P^{*}(\mathbf{w}(t,W_{o}),\mathbf{x}_{p})\mathbf{f}_{ \mathcal{W}_{o}}(W_{o})\text{d}W_{o} \tag{16}\]
where \(\mathcal{W}_{o}\) is the entire operating region, \(\mathbf{w}(\cdot)\) is a given load case with an average wind speed of \(W_{o}\), \(P^{*}(\cdot)\) is the average power produced for a given plant design and design load case (DLC), and \(\mathbf{f}_{\mathcal{W}_{o}}\) is the Weibull probability density function that describes the wind speed distribution. Eleven wind profiles from the IEA-specified DLCs with the normal turbulence model in [73] (i.e., 'DLC 1.1') with average wind speed values between 3 and 25 [m/s] are used to approximate the distribution \(\mathbf{f}_{\mathcal{W}_{o}}\).
Finally, the annual energy production (AEP) is calculated as:
\[E_{n}=(1-f_{wi})E \tag{17}\]
where \(0\leq f_{wi}\leq 1\) is the wake loss factor. Both \(C_{n}\) and \(E_{n}\) are normalized with respect to the machine rating, which is 15 [MW]. This operation does not change the value of the LCOE, but it changes the units of \(C_{n}\) to [$/MW] and \(E_{n}\) to [h]. Therefore, LCOE = \(C_{n}\)/\(E_{n}\), and the complete outer-loop optimization problem is:
Figure 5: Model validation simulations between nonlinear \(\Sigma\), LPV \(\Sigma_{w}\), and LTI \(\Sigma_{o}\), using \(w_{\text{avg}}\) models.
Figure 6: Interpolation of select stationary points and eigenvalues for \(\Sigma_{o}\) with \(w=12\) [m/s] based on \(\mathbf{x}_{p}\).
\[\min_{\mathbf{x}_{p}} \text{LCOE}(\mathbf{x}_{p})\] (18a) subject to: \[\mathbf{L}_{p}\leq\mathbf{x}_{p}\leq\mathbf{U}_{p} \tag{18b}\]
where only simple upper and lower bounds on the plant variables are considered at this time (although more complex plant-only constraints can be readily incorporated). Note that for a fixed plant \(\mathbf{x}_{p}^{\dagger}\), the solution for each \(\tilde{P}^{*}(\mathbf{x}_{p}^{\dagger},\mathbf{w})\) can be determined through independent minimization problems. Therefore, the control subproblems can be solved in parallel.
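For illustration, the sketch below assembles AEP and LCOE from per-DLC average powers following Eqs. (15)-(18); all numerical values (Weibull parameters, costs, wake-loss factor, per-DLC powers) are placeholders, not the values used in the study.

```python
"""Minimal sketch (illustrative numbers only): assemble AEP and LCOE from per-DLC
average power following Eqs. (15)-(18). Weibull parameters, costs, wake-loss factor,
and per-DLC powers below are placeholders, not the values used in the study."""
import numpy as np
from scipy.stats import weibull_min

rating_MW = 15.0
W_dlc = np.linspace(3.0, 25.0, 11)                    # average wind speeds of the 11 DLCs [m/s]
P_avg_MW = np.minimum(0.09 * W_dlc**3, rating_MW)     # placeholder per-DLC average power P*(w, x_p)

# Approximate the Weibull pdf f_W with discrete weights at the DLC wind speeds
pdf = weibull_min.pdf(W_dlc, c=2.0, scale=10.0)       # assumed shape/scale parameters
weights = pdf / pdf.sum()

E = float(np.sum(weights * P_avg_MW)) * 8760.0        # expected power -> MWh per year, cf. Eq. (16)
f_wi = 0.05                                           # assumed wake loss factor
E_n = (1.0 - f_wi) * E / rating_MW                    # normalized AEP [h], Eq. (17)

C_capital = 5.0e7                                     # placeholder turbine + BOS cost [$]
r_fc, C_opex = 0.056, 1.5e6                           # placeholder fixed charge rate, annual opex [$]
C_n = (r_fc * C_capital + C_opex) / rating_MW         # normalized annual cost [$/MW], Eq. (15b)

print("LCOE [$ / MWh] ~", C_n / E_n)
```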
### Control Subproblem for a Specific Design Load Case
The control subproblem's goal is to understand the impact of the control decisions on system response, power production, and ultimately the LCOE design objective. An open-loop optimal control problem is constructed to maximize the power produced for a given operational scenario or DLC. The optimization formulation is presented using the original notation for states and controls \((\mathbf{\xi},\mathbf{u})\), but the linear time-varying transformation in Eq. (6) based on the wind-dependent operating point is applied so that \((\mathbf{\xi}_{\Delta},\mathbf{u}_{\Delta})\) are the states and controls for this subproblem.
The energy produced by the turbine is:
\[\int_{0}^{t_{f}}P(t)\text{d}t=\int_{0}^{t_{f}}\eta_{g}\tau_{g}(t)\omega_{g}(t) \text{d}t \tag{19}\]
where \(\eta_{g}\) is the generator efficiency. Note, the control term \(\tau_{g}\) appears linearly in the objective term Eq. (19). The presence of linear control terms in the objective function with linear dynamics can give rise to singular arcs [59] as the control trajectory cannot be uniquely determined. To help mitigate this issue, a quadratic penalty term is introduced in the objective term:
\[\Pi_{c}(t)=\mathbf{u}^{T}\begin{bmatrix}10^{-16}&0\\ 0&10\end{bmatrix}\mathbf{u} \tag{20}\]
where values in this penalty matrix were identified according to the method discussed in [74]. In addition to this, a penalty is added to limit the fluctuation of the platform pitch:
\[\Pi_{p}(t)=\Theta_{p}^{2} \tag{21}\]
The linear dynamic constraints included using \(\Sigma_{w}\) from Eq. (9) with plant dependence are:
\[\frac{d\mathbf{\xi}_{\Delta}}{dt}=\mathbf{A}(\mathbf{x}_{p},w)\mathbf{\xi}_{\Delta}+\mathbf{B}( \mathbf{x}_{p},w)\mathbf{u}_{\Delta}-\frac{\partial\mathbf{\xi}_{o}(\mathbf{x}_{p},w)}{ \partial w}\frac{dw}{dt} \tag{22}\]
and the initial state values correspond to the state operating points for \(w(0)\):
\[\mathbf{\xi}(0)=\mathbf{\xi}_{o}(w(0)),\text{ or equivalently }\mathbf{\xi}_{\Delta}(0)=\mathbf{0} \tag{23}\]
To protect the generator components from excess electrical loads and the nacelle from the dynamic loads, an upper bound for generator speed \(\omega_{g}\) is set restricting the speed to the rated speed of the turbine:
\[0\leq\omega_{g}(t)\leq\omega_{g,\max} \tag{24}\]
As a proxy for the stability and safety of the FOWT system, an upper bound on the platform pitch tilt \(\Theta_{p}\) is included:
\[\Theta_{p}(t)\leq\Theta_{p,\max} \tag{25}\]
Maximum and minimum value constraints are placed on the controls blade pitch \(\beta\) and the generator torque \(\tau_{g}\), according to the values prescribed in [12]:
\[0\leq\tau_{g}(t)\leq\tau_{g,\max} \tag{26a}\] \[0\leq\beta(t)\leq\beta_{\max} \tag{26b}\]
Using the model for outputs from Eq. (9), we include additional output constraints on tower base fore-aft shear force and tower base side-to-side moment, respectively:
\[F_{s}\leq F_{s,\max} \tag{27a}\] \[M_{s}\leq M_{s,\max} \tag{27b}\]
The complete control subproblem formulation is presented in Problem (28), and solved with weight \(k=10^{-8}\) to normalize the objective function value to be approximately unity magnitude:
\[\min_{\mathbf{u}_{\Delta},\mathbf{\xi}_{\Delta}} \int_{0}^{t_{f}}\left(-kP(t)+\Pi_{c}(t)+\Pi_{p}(t)\right)\text{d}t\] (28a) subject to: \[\text{Eqs. (22)--(27)} \tag{28b}\]
the average input wind speed shown in Fig. 7. Extrapolation is used to find the values of the LPV model \(\Sigma_{w}\) outside the 3-25 m/s range, because the models are readily predictable in these regions.
The LQDO problems of the form in Problem (28) are solved using DTQP, an open-source MATLAB-based toolbox using the DT method and quadratic programming [75, 9]. Each problem was discretized using 2,500 equidistant mesh points, with an observed relative objective function error bound of approximately \(10^{-4}\).
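As a rough illustration of what such a transcription looks like, the sketch below sets up a much smaller version of Problem (28) with forward-Euler collocation and solves it with cvxpy rather than DTQP. The matrices, operating points, weights, and bounds are placeholders, and the bilinear power term is linearized about the operating point so that the transcribed problem remains a convex QP.

```python
"""Minimal sketch (not the paper's DTQP implementation): a small direct-transcription
version of the inner-loop problem (28) using forward-Euler collocation and cvxpy.
All matrices, operating points, bounds, and weights are illustrative placeholders
(torque is expressed in MN*m for conditioning), and the bilinear power term is
linearized about the operating point so the problem stays a convex QP."""
import numpy as np
import cvxpy as cp

N, dt = 200, 0.25                                     # coarse time mesh for illustration
A = np.array([[-0.05, 0.01], [0.0, -0.10]])           # placeholder A(w), states [Theta_p, omega_g]
B = np.array([[0.0, -0.40], [0.02, -0.80]])           # placeholder B(w), controls [tau_g, beta]
xi_o = np.array([0.03, 0.75])                         # operating point [rad, rad/s]
u_o = np.array([19.0, 0.10])                          # operating point [MN*m, rad]

xi = cp.Variable((N, 2))                              # xi_delta trajectory
u = cp.Variable((N, 2))                               # u_delta trajectory

eta_g, k = 0.95, 1e-2                                 # efficiency; placeholder objective weight
R = np.diag([1e-4, 10.0])                             # control penalty, cf. Eq. (20) (placeholder weights)
obj = 0
for i in range(N):
    # Power linearized about (xi_o, u_o): P ~ eta_g*(tau_g,o * omega_g_delta + omega_g,o * tau_g_delta)
    P_lin = eta_g * (u_o[0] * xi[i, 1] + xi_o[1] * u[i, 0])
    obj += dt * (-k * P_lin + cp.quad_form(u[i] + u_o, R) + cp.square(xi[i, 0] + xi_o[0]))

cons = [xi[0] == 0]                                   # Eq. (23): start at the operating point
for i in range(N - 1):
    cons += [xi[i + 1] == xi[i] + dt * (A @ xi[i] + B @ u[i])]       # Eq. (22), dw/dt = 0 here
cons += [xi[:, 1] + xi_o[1] >= 0, xi[:, 1] + xi_o[1] <= 0.7850,      # generator speed, Eq. (24)
         xi[:, 0] + xi_o[0] <= np.deg2rad(4.0),                      # platform pitch, Eq. (25)
         u[:, 0] + u_o[0] >= 0, u[:, 0] + u_o[0] <= 22.0,            # torque bounds, Eq. (26a)
         u[:, 1] + u_o[1] >= 0, u[:, 1] + u_o[1] <= np.deg2rad(30)]  # blade pitch, Eq. (26b)

cp.Problem(cp.Minimize(obj), cons).solve()
print("approximate optimal objective:", obj.value)
```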
A sensitivity approach was used to explore how the plant design decisions impact the system's cost and performance. Although a hybrid-optimization scheme could be used to identify the single optimal design as shown in [54], a sensitivity study was utilized to better understand the different trade-offs. To understand the impact of plant variables on the system stability, power production, and, subsequently, the LCOE, several constraint bounds on the platform pitch tilt \(\Theta_{p}\) were explored. More specifically, an exhaustive sensitivity study was conducted where \(\Theta_{p}\) was constrained to five different values between \(3^{\circ}\) and \(7^{\circ}\). A \(60\times 60\) grid was used to sample the plant design space. Although no wave/current forces are included as disturbances at this time, these different constraint values on \(\Theta_{p}\) will roughly indicate performance in more dynamic wave and current conditions.
### Notes on the Computational Time
A desktop workstation with an AMD 3970X CPU, 128-GB DDR4 2,666-MHz RAM, Matlab 2021b update 2, and Windows 10 build 17763.1790 was used to obtain all the linear models and perform the different CCD studies. The linear models were obtained using the WEIS toolkit available at [61]. Approximately 90 hours are required to obtain the complete set of linear models discussed in Sec. 3.4, the most computationally expensive operation in this study. Once the linear models corresponding to the full-factorial scheme are available, 0.8 seconds are required to construct and evaluate the surrogate model. The average solution time for constructing and solving a single inner-loop subproblem shown in Eq. (28) is 0.74 seconds, which includes determining physically accurate trajectories with respect to the linear model. The average time for solving the different subproblems for all 11 load cases shown in Fig. 7 in parallel is 8.2 seconds. The computational cost to obtain the results for a single value of \(\Theta_{p,\max}\) was, on average, 8.2 hours. Overall, there were \(3,600\times 11\times 5=198,000\) inner-loop control subproblems solved for different values of plant variables \(\mathbf{x}_{p}\), wind case, and \(\Theta_{p,\max}\).
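The overall sweep structure (outer loop over plant designs and pitch bounds, with the 11 DLC subproblems solved in parallel) can be summarized by the following sketch, where solve_subproblem is a hypothetical placeholder for the inner-loop LQDO solve of Problem (28) and the grids are reduced from the study's \(60\times 60\) sampling.

```python
"""Minimal sketch (not the study's code): nested sweep over plant designs and pitch
bounds with the 11 DLC subproblems solved in parallel. solve_subproblem is a
hypothetical placeholder for the inner-loop LQDO solve; grids are reduced from 60x60."""
import itertools
import numpy as np
from multiprocessing import Pool

def solve_subproblem(args):
    (c_s, c_d), theta_max, w_avg = args
    # Placeholder for the LQDO solve; returns a fake average power [MW]
    return min(15.0, 0.05 * w_avg**3) * (1.0 - 0.5 * np.exp(-(c_s + c_d) / 30.0)) * min(1.0, theta_max / 6.0)

cs_vals = np.linspace(36.0, 78.0, 5)            # column spacing samples [m] (study: 60)
cd_vals = np.linspace(6.0, 24.0, 5)             # column diameter samples [m] (study: 60)
theta_vals = np.linspace(3.0, 7.0, 5)           # platform pitch bounds [deg]
dlcs = np.linspace(3.0, 25.0, 11)               # average wind speeds of the 11 DLCs [m/s]

if __name__ == "__main__":
    results = {}
    with Pool() as pool:
        for c_s, c_d, th in itertools.product(cs_vals, cd_vals, theta_vals):
            jobs = [((c_s, c_d), th, w) for w in dlcs]
            results[(c_s, c_d, th)] = pool.map(solve_subproblem, jobs)   # 11 DLCs in parallel
    print("inner-loop subproblems solved:", len(results) * len(dlcs))
```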
All the studies discussed in this paper are formulated and solved using Matlab. However, the code to run the inner-loop studies using the LPV models is also available in Python and is published as part of the WEIS tool [61]. The code for inner-loop studies mentioned in the previous sections is available in Matlab at [75].
### Results for a Single-Control Subproblem
Figure 8 summarizes the optimal control results for one of the 198,000 problems with nominal plant dimensions (\(\mathbf{x}_{p,\mathrm{nominal}}=[51.75,12.50]\)), load case 7, maximum generator speed value of \(\omega_{\mathrm{g,max,1}}=0.7850\) [rad/s], and \(\Theta_{p}\leq 4^{\circ}\). The optimal trajectories for the generator speed and platform pitch are shown in Fig. 8(a). We see that the constraint \(\Theta_{p}\leq 4^{\circ}\) and others in Table 1 are satisfied. Load case 7 is in the rated region, so we might expect the blade pitch to be the primary mode of control and the generator torque to be held roughly constant [36]. As shown in Fig. 8(b), these trends are reflected in the optimal control results. In addition to these, from Fig. 8(c), we see that the constraints placed on the tower base fore-aft shear force in Eq. (27) are satisfied. The constraint placed on the tower base side-to-side moment is also satisfied, but it is not shown. To satisfy the platform pitch constraints, we see that the generator speed does need to decrease when the pitch constraint becomes active. Consequently, from Figs. 8(a) and 8(c), we can see how the generator power is affected by the pitch constraint because it is a function of the generator speed.
Figure 7: Input wind profiles from DLC 1.1 based on the average wind speed for the trajectory.
To better understand the optimal control results in other operating regions, Fig. 9 was constructed to show the behavior of a system with nominal plant values \(\mathbf{x}_{p,\text{nominal}}\) and the pitch constraint \(\Theta_{p}\leq 6^{\circ}\). The constraint \(\omega_{g,\text{max},1}\) was relaxed by 20% to be \(\omega_{g,\text{max},2}=0.9424\) [rad/s] to explore solutions that can generate more power while satisfying the constraints. In Figs. 9(b) and 9(c), the results generally follow the expected trends when compared to the operating point schedule from Fig. 2. Overall, the optimization-based approach seems to favor larger torque and generator speed values to maximize power production. As a consequence of relaxing the maximum generator speed from \(\omega_{g,\text{max},1}\) to \(\omega_{g,\text{max},2}\), we see that the optimizer favors lower blade pitch values in the rated region. The results from the load cases in the below-rated and transition regions are encouraging, as a combination of torque and pitch control is utilized. In some regions, the pitch control is active, while torque is held constant and vice versa. Therefore, the optimizer identifies results for all regions in agreement with traditional wind turbine controls. Overall, these results, in combination with the model validation in Sec. 3, demonstrate the validity of the considered LPV models in FOWT open-loop control studies.
### _Average Output Power vs. Plant Design Space_
In Fig. 10, the trends in the average power \(\bar{P}^{*}(\mathbf{x}_{p})\) for load case 7 are shown for three of the five tested values of \(\Theta_{p,\text{max}}\). The primary method used to control the platform pitch is the blade pitch \(\beta\), but \(\beta\) is also tightly coupled to the generator speed. To satisfy smaller, more challenging values of \(\Theta_{p,\text{max}}\), the optimal control solution has higher values of blade pitch, sacrificing generator speed. Thus, for these more challenging constraint values, the power produced is lower on average. Additionally, the platform design has a significant effect on the average power production. Larger values of column spacing \(c_{s}\) and column diameter \(c_{d}\) yield platforms that satisfy the pitch constraints with little to no compromise on power generation. In comparison, designs with smaller values of \(c_{s}\) and \(c_{d}\) must sacrifice power generation in some regions. In addition to these trends for the average output power, we briefly looked at how, for the same DLC, the optimal trajectories of \(\tau_{g}\) and \(\beta\) change with \(\mathbf{x}_{p}\). The mean value of \(\tau_{g}\) does not change as \(\mathbf{x}_{p}\) changes, as the optimizer seeks to maximize the power generated. This trend holds for all three wind speed regions. However, for the same DLC, the mean value of \(\beta\) is higher for designs with lower values of \(c_{s}\) and \(c_{d}\). The mean value is disproportionately higher in the transition region as a higher control effort is needed to satisfy the constraint on \(\Theta_{p}\) for these designs.
Figure 8: Optimal control results with nominal plant dimension \(\mathbf{x}_{p,\text{nominal}}\), case 7, \(\omega_{g,\text{max},1}\), and \(\Theta_{p}\leq 4^{\circ}\).
Figure 10: Average power for case 7 with a mean wind speed of 14 [m/s] vs. plant design space for different platform pitch (\(\Theta_{p}\)) values.
Figure 11: AEP vs. plant design space for different platform pitch (\(\Theta_{p}\)) values.
Figure 12: LCOE vs. plant design space for different platform pitch (\(\Theta_{p}\)) values.
For some combinations of platform pitch constraints and plant designs considered in this study, the inner-loop optimizer returns an infeasible result. These infeasible cases happen primarily for designs with lower values of \(c_{s}\) and \(c_{d}\) and load cases in the transition region, because the system tends to have higher values of platform pitch in this region. In addition to the cases from the transition region, some load cases in the rated region fail for these plant designs. For these cases that fail in the rated region, the upper limit on the control blade pitch considered in these studies is insufficient for the optimizer to find feasible solutions.
### _LCOE vs. Plant Design Space_
Combining the average power produced for each load case using the scheme in Eq. (16), we can determine the total energy output. In addition, utilizing the total cost model mentioned in Sec. 4.1, the system LCOE can be estimated. As mentioned previously, some values of the constraints are infeasible, and the infeasible results are included with zero generated energy. The summarized AEP and LCOE results are shown in Figs. 11 and 12, respectively. The Weibull distribution used in the AEP calculation in Eq. (16) (and shown in Fig. (a)a) weights the power produced by the wind cases in the below-rated and transition regions higher than the power produced in the rated region. Therefore, the transition region will be critical to reducing LCOE, and designs with fewer infeasible cases here would be strongly preferred.
From these results, we see that the optimal value for LCOE depends on the platform pitch constraint. The capital cost increases monotonically as \(\mathbf{x}_{p}\) increases, with a minimum cost of \(4,740.7\) [$/kW] at \(\mathbf{L}_{p}=[36,6]^{T}\), and a maximum of \(5,407.2\) [$/kW] at \(\mathbf{U}_{p}=[78,24]^{T}\), as shown in Fig. 13. Similarly, the AEP increases as \(\mathbf{x}_{p}\) increases.
For the IEA 15-MW reference turbine described in [64, 12], \(\Theta_{p}\) was constrained to \(6^{\circ}\) using the nominal platform dimensions. From Fig. 12(c), we see there is a region that balances the capital cost and power production and, consequently, has lower LCOE values. While keeping the other plant parameters constant, the design with the lowest LCOE of \(86.27\) [$/MWh] can be obtained using a platform with \(\mathbf{x}_{p,\text{opt},6^{\circ}}=[36.0,20.9]^{T}\). Additionally, the lowest LCOE values across all constraints can be found in the neighborhood of this point. For comparison, the LCOE value for the nominal platform with dimensions \(\mathbf{x}_{p,\text{nominal},6^{\circ}}\) evaluated using this approach is \(89.30\) [$/MWh].
To explore the sensitivity of the optimization result to variations in the cost model, we consider a variability of \(\pm 20\%\) of the capital cost for both \(c_{s}\) and \(c_{d}\). Assuming the capital costs of \(c_{s}\) and \(c_{d}\) are independent, the capital cost from Eq. (15a) can be represented as:
\[C_{\text{capital}}(\mathbf{x}_{p})=C_{s}(c_{s})+C_{d}(c_{d}) \tag{29}\]
The variations in the cost can be represented through a scaling factor \(\mathbf{F}\) as:
\[C_{\text{capital}}(\mathbf{x}_{p})=\mathbf{F}^{T}\begin{bmatrix}C_{s}(c_{s})\\ C_{d}(c_{d})\end{bmatrix} \tag{30}\]
with \(\mathbf{F}=[1,1]^{T}\) for Eq. (29). Figure 14 shows how the LCOE subspace varies at the four extremities of this uncertainty set. The optimal LCOE values for these four cases are:
1. LCOE = 84.79 [$/MWh] at \(\mathbf{x}_{p}=[36.0,23.6]^{T}\) for \(\mathbf{F}=[0.8,0.8]^{T}\), shown in Fig. 14(a).
2. LCOE = 86.53 [$/MWh] at \(\mathbf{x}_{p}=[37.4,23.3]^{T}\) for \(\mathbf{F}=[1.2,0.8]^{T}\), shown in Fig. 14(b).
3. LCOE = 85.97 [$/MWh] at \(\mathbf{x}_{p}=[36.0,20.9]^{T}\) for \(\mathbf{F}=[0.8,1.2]^{T}\), shown in Fig. 14(c).
4. LCOE = 87.71 [$/MWh] at \(\mathbf{x}_{p}=[37.4,20.9]^{T}\) for \(\mathbf{F}=[1.2,1.2]^{T}\), shown in Fig. 14(d).
Since the AEP does not vary with the cost model, the optimal point is still in the neighborhood of \(\mathbf{x}_{p,\text{opt},6^{\circ}}\), shown in Fig. 12(c), as this is the region with maximum AEP and minimum cost. Because of this, the optimal value of \(c_{s}\) does not change much. However, the optimal design is more sensitive towards changes in the capital cost associated with \(c_{d}\). By reducing the cost of \(c_{d}\), the optimum design has a higher value of \(c_{d}\) as shown in Figs. 14(a) and 14(b).
The results presented in this study are subject to modeling assumptions, optimal control operation, and the lack of safety factors, but they can still help guide the final design. Additionally, the hydrodynamic and hydrostatic stability of the different platforms has not been evaluated in this study, along with other DLCs that are meant to test the turbine under fatigue and extreme loading conditions. These investigations will also limit the bounds on the plant design variables and impact the final design.
## 6 Conclusion
In this work, we discussed the use of LPV models for CCD of FOWTs. FOWTs are described by highly complex and nonlinear high-fidelity models. Unfortunately, these models are often too costly to use in early-stage system design and evaluation. Using linearized models based on these nonlinear systems is a popular method to offset the computational costs. Here, we describe a class of LPV models that realize more accurate predictions of a system's dynamic behavior over a large range of operating points and are shown to be useful for early-stage CCD studies of FOWTs.
Figure 13: Capital cost vs. plant design space.
The specific FOWT system considered was the IEA 15-MW reference turbine [12] on a semisubmersible platform [64]. The LPV models based on the wind speed parameter showed good general agreement in both nonlinear simulation comparisons and general optimal control trends. The primary study investigated the system's pitching motion as a proxy of its dynamic stability, power production, and, ultimately, the LCOE. The plant decisions in this study were the distance between the central and outer columns of the platform, along with the diameter of the outer columns, and the results indicated that a system with lower column spacing and higher column diameter values has optimal LCOE values. The optimal platform design obtained through the proposed approach can satisfy the platform pitch constraints while providing a lower LCOE value. However, several additional factors should be investigated before making a specific recommendation.
It remains for future work to incorporate more detailed and sophisticated outer-loop plant design optimization, including the impact of plant decisions, such as tower hub height, blade length, and the mooring system, on the platform stability and power production in the context of the LCOE. More scalable and efficient strategies for sampling and interpolation must be explored to support the expanded plant model. Leveraging the LPV model structure for uncertainty propagation in the time domain would support future CCD studies that directly incorporate uncertainties and reliability constraints. The performance and trade-offs of the LPV approach presented in this article should be compared with approaches based on nonlinear derivative function surrogate models [57]. Additionally, we hope to study the effect of wave and current excitations. Finally, to address the realizability of the open-loop optimal control solutions, work is needed to realize robust, implementable control systems, which may be informed by the optimal operation identified in this study [2, 58].
## Acknowledgment
This work was authored in part by the National Renewable Energy Laboratory, operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of Energy (DOE) under Contract No. DE-AC36-08GO28308. Funding provided by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy Wind Energy Technologies Office. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. The U.S. Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this work, or allow others to do so, for U.S. Government purposes. The authors would like to thank Alan Wright and Garrett Barter from NREL, Saeed Azad from CSU, and John Jasa from NASA Glenn Research Center for their feedback and suggestions.
|
2303.04203
|
Toward a Geometric Theory of Manifold Untangling
|
It has been hypothesized that the ventral stream processing for object
recognition is based on a mechanism called cortically local subspace
untangling. A mathematical abstraction of object recognition by the visual
cortex is how to untangle the manifolds associated with different object
category. Such a manifold untangling problem is closely related to the
celebrated kernel trick in metric space. In this paper, we conjecture that
there is a more general solution to manifold untangling in the topological
space without artificially defining any distance metric. Geometrically, we can
either $embed$ a manifold in a higher dimensional space to promote selectivity
or $flatten$ a manifold to promote tolerance. General strategies of both global
manifold embedding and local manifold flattening are presented and connected
with existing work on the untangling of image, audio, and language data. We
also discuss the implications of untangling the manifold into motor control and
internal representations.
|
Xin Li, Shuo Wang
|
2023-03-07T19:47:01Z
|
http://arxiv.org/abs/2303.04203v1
|
# Toward a Geometric Theory of Manifold Untangling
###### Abstract
It has been hypothesized that the ventral stream processing for object recognition is based on a mechanism called cortically local subspace untangling. A mathematical abstraction of object recognition by the visual cortex is how to untangle the manifolds associated with different object categories. Such a manifold untangling problem is closely related to the celebrated kernel trick in metric space. In this paper, we conjecture that there is a more general solution to manifold untangling in the topological space without artificially defining any distance metric. Geometrically, we can either \(embed\) a manifold in a higher dimensional space to promote selectivity or \(flatten\) a manifold to promote tolerance. General strategies of both global manifold embedding and local manifold flattening are presented and connected with existing work on the untangling of image, audio, and language data. We also discuss the implications of manifold untangling for motor control and internal representations.
Machine Learning, Manifold Untangling
## 1 Introduction
Is dimensionality a curse or a blessing? The term "curse of dimensionality" was coined by Richard Bellman when studying dynamical programming in the 1960s (Bellman, 1966). It refers to various phenomena that arise from the analysis and organization of data in high-dimensional spaces. Specifically, all objects tend to become sparse and dissimilar in many ways as the dimensionality increases, which prevents common data organization strategies from being efficient. To overcome such a curse of dimensionality, various nonlinear dimensionality reduction techniques such as IsoMAP (Tenenbaum et al., 2000) and locally linear embedding (LLE) (Roweis & Saul, 2000) have been developed to reveal the low-dimensional structure embedded in high-dimensional observation data.
The blessing of dimensionality (Donoho et al., 2000) is a more counter-intuitive concept. To illustrate this concept, we start by considering the classical toy example of the XOR decision for the linear perceptron (Rosenblatt, 1958). There is no linear classifier in 2D that can separate the two classes of the XOR problem. However, with an additional dimension \(z=x\oplus y\), it is straightforward to linearly separate the two classes in a 3D space \((x,y,z)\) (e.g., the hyperplane \(z=\frac{1}{2}\) will do). Another example, the so-called two-circle data, is shown in Fig. 1. Again, there exists no linear classifier that can separate red from blue in 2D, while linear separability can be easily satisfied in 3D by taking the third, redundant dimension \(r=\sqrt{x^{2}+y^{2}}\) into account.
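The lift behind Fig. 1 can be verified directly; the following sketch (arbitrary radii and sample sizes, purely illustrative) fits a linear classifier before and after appending the redundant coordinate \(r\).

```python
# Minimal sketch: two concentric "circle" classes are not linearly separable
# in 2D, but become linearly separable after appending r = sqrt(x^2 + y^2).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def ring(radius, n=200, noise=0.05):
    theta = rng.uniform(0, 2 * np.pi, n)
    r = radius + noise * rng.standard_normal(n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

X = np.vstack([ring(1.0), ring(2.0)])   # inner ring (class 0), outer ring (class 1)
y = np.repeat([0, 1], 200)

acc_2d = LogisticRegression(max_iter=1000).fit(X, y).score(X, y)
X3 = np.column_stack([X, np.linalg.norm(X, axis=1)])   # append the redundant dimension r
acc_3d = LogisticRegression(max_iter=1000).fit(X3, y).score(X3, y)
print(f"linear accuracy in 2D: {acc_2d:.2f}, in 3D with r appended: {acc_3d:.2f}")
```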
We note that the issue of dimensionality is often tangled with that of linearity. For example, the kernel trick (Scholkopf, 2000) in support vector machines (SVM), which allows linear learning algorithms to learn a nonlinear function or decision boundary, can be interpreted as a special class of techniques exploiting the blessing of dimensionality. In face verification (Chen et al., 2013), linear feature dimensions as large as 100K have been reported to improve performance due to the blessing of dimensionality. More recently, the class of convolutional neural networks, equipped with nonlinear rectified linear units (ReLU), has shown excellent performance in various vision tasks from image classification to object recognition. Between nonlinearity and dimensionality, which plays a more fundamental role?
In this paper, we advocate for the blessing of dimensionality from a manifold untangling perspective. The problem of manifold untangling (a.k.a. disentanglement (Brahma et al., 2015)) can be formulated as an extension of the manifold embedding and knotting problem (Skopenkov, 2008) in differential topology. Originating from Whitney's original
Figure 1: Mapping 2D data points \((x,y)\) to 3D \((x,y,r),r=\sqrt{x^{2}+y^{2}}\) facilitates the task of linear separability.
work in the 1930s (Whitney, 1936), blessing-of-dimensionality-related results include the embedding of an \(n\)-manifold in \(R^{2n}\) and unknotting in \(R^{2n+1}\) (Wu, 2008). These classical results in the theory of differential topology inspire us to tackle the problem of manifold untangling by iteratively constructing over-parameterized direct-fit models (Hasson et al., 2020) in a higher-dimensional space.
The main contributions of this paper are summarized below.
* Manifold untangling without a distance metric. In topological space, we show how to improve the manifold capacity by a unified untangling approach.
* Two general strategies for untangling manifolds: global embedding vs. local flattening. We show how embedding and flattening jointly improve manifold capacity by promoting selectivity and tolerance.
* Model-agnostic for multimodal data. We apply the theory of manifold untangling to several recent works on multiview image recognition, invariant audio recognition, and perceptual video straightening.
* Biological connection with the hypothesis of cortically local subspace untangling in ventral stream processing and trajectory untangling in motor control.
## 2 Manifold Untangling: What and Why?
### Problem Formulation
The problem of manifold untangling originated from the modeling of ventral stream processing in neuroscience (DiCarlo and Cox, 2007). In explaining how object recognition works, a major challenge is the form of high-dimensional visual representations. An object manifold (e.g., the image projected onto the retina) is characterized by the variations of its pose, position, and size, which can be mathematically abstracted as a low-dimensional curved surface inside the retinal image space. It follows that different objects, such as varying face identities, correspond to different manifolds. The term "object manifold" specifically refers to low-dimensional subspaces underlying population activities embedded in high-dimensional neural state space according to (Chung and Abbott, 2021). The manifolds embedded in ambient neural state space (called neural population geometry in (Chung and Abbott, 2021)) include both sensory/motor and cognitive brain regions. To illustrate the problem of manifold untangling more vividly, we use an analogy with tangled shoelaces, as shown in Fig. 2. The task of object recognition is analogous to untangling these shoelaces, but in a higher-dimensional space of visual representations. In the literature, manifold untangling (a.k.a. disentanglement (Brahma et al., 2015)) has also been studied for other data modalities, such as image (Cohen et al., 2020), speech (Stephenson et al., 2019), video (Henaff et al., 2019), and language (Mamou et al., 2020).
### Motivation: Avoiding Distance Metric
One of the long-standing open problems in manifold discovery is how to calculate the geodesic distance between two points on a manifold. Unlike the Euclidean distance, the geodesic distance is intrinsically tangled with the low-dimensional locally curved geometry of the manifold. Without knowledge of local geometry, calculating the geodesic distance or building a Mercer kernel becomes a chicken-and-egg problem like manifold learning (Ma and Fu, 2012). Can one solve the problem of untangling a manifold without discovering its local low-dimensional structure? Does there exist a universal solution to manifold untangling by global operations such as homotopy (Hatcher, 2005)?
We argue that the answer is affirmative. Our basic intuition is based on the observation that it is easier to untangle a manifold in a higher-dimensional space (Fusi et al., 2016). A simple justification is based on the observation that a knot in three dimensions can be untied when placed in a four-dimensional space (Crowell and Fox, 2012). More generally, in higher dimensions than four, there is enough "space" to untie any knot by smoothly transforming it into a circle. Recent studies on unsupervised disentanglement of manifold (Horan et al., 2021) show that local isometry (related to embedding) and non-Gaussianity (required by linear generative models) make disentanglement possible. Both conditions are more easily satisfied in higher-dimensional spaces.
To quantify the effectiveness of manifold untangling, the manifold capacity (Mamou et al., 2020) has been derived from mean-field theoretic analysis. The basic idea is to find the maximum number of dichotomies that are linearly separable in a high-dimensional space, as shown in Fig. 3. Conceptually, manifold capacity can be enhanced by promoting selectivity (i.e., pushing decision boundaries) or tolerance (i.e., straightening curved surfaces). More rigorously, there are two complementary approaches to maximizing manifold capacity: manifold flattening (promoting tolerance) to facilitate linear separability and manifold embedding (promoting selectivity) into a higher-dimensional space.
Figure 2: An analogy for the illustration of manifold untangling problem using shoelaces of varying colors.
The main question lies in the construction of embedding or flattening functions to increase the manifold capacity.
## 3 Manifold Embedding and Flattening
### Manifold Embedding and Unknotting Theory
**Theorem 1.** **Whitney Embedding Theorem (1936)**
Any smooth manifold \(\mathbf{M}\) of dimension \(m\geq 2\) can be embedded into \(R^{2m+1}\).
In 1958, W.T. Wu proved that every connected \(n\)-manifold unknots in \(R^{2n+1}\) for \(n>1\) (Wu, 2008). The theory of differential manifolds was extended into surgery theory by J. Milnor in the 1960s, which became a major tool in high-dimensional topology. An important class of tools for smoothing manifolds comes from obstruction theory (Hirsch, 1963). Obstruction theory is concerned with when a topological manifold has a piecewise-linear structure and when a piecewise-linear manifold has a differential structure.
The basic ideas behind our approach to maximize the manifold capacity are as follows. On the one hand, we want to increase the number of distinct manifolds (the value \(P\) in Fig. 3a) by promoting the selectivity of data representations. This objective can be achieved by embedding the manifold into a higher-dimensional space using the generalized kernel trick. On the other hand, we want to increase the number of separable dichotomies by promoting the tolerance of data representations. This is aligned with the idea of manifold flattening by constructing identity-preserving transformations. Both embedding and flattening contribute to manifold untangling in a complementary manner.
### Global Manifold Embedding
#### 3.2.1 Generalized kernel method
A well-known method, the kernel trick, generalizes distance-based algorithms to operate in a feature space (Scholkopf, 2000). The key idea is to construct a nonlinear mapping function \(\phi:\mathbf{X}\rightarrow\mathbf{Y}\), where \(\mathbf{x}\in\mathbf{X}\) and \(\phi(\mathbf{x})\in\mathbf{Y}\) denote the input and feature spaces, respectively. The kernel trick is then implemented by considering the dot product in the feature space, i.e., \(k(\mathbf{x},\mathbf{x}^{\prime})=<\phi(\mathbf{x}),\phi(\mathbf{x}^{\prime})>\). For the class of positive definite kernels, rigorous results such as Mercer's theorem (Vapnik, 1999) guarantee the generalization of the distance metric for a wide range of kernel constructions (e.g., radial basis functions and the neural tangent kernel). As a concrete example, Fig. 4 illustrates the idea behind the kernel trick for a toy example of separating points within a circle from those outside.
The effectiveness of the kernel trick is often attributed to its nonlinearity related to the input space. However, dealing with nonlinearity is always challenging - e.g., despite the conceptual simplicity of the kernel trick, it is often much more difficult to reason with the optimality of different approaches to kernel construction. More importantly, as shown in Fig. 4, the blessing of dimensionality offers a refreshing perspective to understand the kernel trick. The new dimension introduced by the kernel geometrically warps the data points in such a way that they can be more easily separated by a linear classifier. Such a simple observation inspires us to tackle the manifold untangling by recursively applying the kernel trick.
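As a sanity check of the construction in Fig. 4, the sketch below (a minimal illustration, not a component of our framework) confirms that the kernel reproduces the inner products of the explicit map \(\phi\) and can be plugged into a linear method via a precomputed Gram matrix.

```python
# Minimal sketch: the kernel k(x, y) = <x, y> + ||x||^2 ||y||^2 reproduces the
# inner products of the explicit map phi((a, b)) = (a, b, a^2 + b^2) of Fig. 4,
# so a linear method in feature space can run without ever forming phi.
import numpy as np
from sklearn.svm import SVC

def phi(X):
    return np.column_stack([X, (X ** 2).sum(axis=1)])

def kernel(A, B):
    return A @ B.T + np.outer((A ** 2).sum(axis=1), (B ** 2).sum(axis=1))

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = ((X ** 2).sum(axis=1) < 1.0).astype(int)   # inside vs. outside the unit circle

# The Gram matrices of the explicit map and of the kernel function agree.
assert np.allclose(phi(X) @ phi(X).T, kernel(X, X))

clf = SVC(kernel="precomputed").fit(kernel(X, X), y)
print("training accuracy with the precomputed kernel:", clf.score(kernel(X, X), y))
```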
More specifically, we propose to generalize the nonlinear mapping function \(\phi:\mathbf{X}^{n}\rightarrow\mathbf{X}^{n+1},n\in N\), where \(\mathbf{x}^{n}\in\mathbf{X}^{n}\) and \(\phi^{n}(\mathbf{x}^{n})\in\mathbf{X}^{n+1},dim(\mathbf{X}^{n+1})>dim(\mathbf{X}^ {n})\) denote the input and output spaces at the \(n\)-th layer, respectively. Our intuition is that manifold untangling is closely
Figure 4: Kernel trick in the inner product space (left: input space, right: feature space). The kernel is given by \(\phi((a,b))=(a,b,a^{2}+b^{2})\) and \(K(\mathbf{x},\mathbf{y})=\mathbf{x}\cdot\mathbf{y}+\parallel\mathbf{x}\parallel^{2}\parallel\mathbf{y}\parallel^{2}\). Training points are mapped to a 3-dimensional space, where a separating hyperplane can be easily found.
Figure 3: Illustration of manifold untangling via a) global embedding into a higher-dimensional space (redrawn from (Mamou et al., 2020)), where the manifold capacity measures the linear separability of object manifolds; and b) local flattening where the number of directions with significant variance is reduced by identity-preserving transformations or decision boundary smoothing.
related to the approximation by nonlinear sigmoid functions (Cybenko, 1989).
**Theorem 2. Universal Approximation Theorem**.
For any continuous function \(f(x)\) and sigmoidal function \(\sigma\), there exists a universal approximation by \(g(x)=\sum_{j=1}^{N}\alpha_{j}\sigma(y_{j}^{T}x+\theta_{j})\) such that \(|f(x)-g(x)|<\epsilon\) for all \(x\in I_{n}\), where \(I_{n}\) denotes the n-dimensional unit cube.
The above approximation result can be interpreted as the untangling of the nonlinear function \(f(x)\) by the superposition of \(N\) sigmoid units in a single hidden layer. Each unit partially untangles the nonlinear function until the input function is straightened into a linear one. Connecting this result with our manifold untangling intuition, we can interpret multilayer feedforward networks as universal approximators (Hornik et al., 1989) that recursively untangle a nonlinear function (decision region) until reaching the linearly separable regime.
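A minimal numerical illustration of Theorem 2 is given below: the inner parameters \(y_{j},\theta_{j}\) are drawn at random and only the output weights \(\alpha_{j}\) are fit by least squares, so the approximation error shrinks as \(N\) grows (the target function and parameter ranges are arbitrary choices).

```python
# Minimal sketch of Theorem 2: approximate a smooth function on [0, 1] by
# g(x) = sum_j alpha_j * sigmoid(y_j * x + theta_j).  The inner parameters are
# drawn at random and only the output weights alpha are fit by least squares.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 400)
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(5 * x)   # target continuous function

for N in (4, 16, 64, 256):
    y_j = rng.uniform(-20, 20, N)        # random slopes
    theta_j = rng.uniform(-20, 20, N)    # random offsets
    H = sigmoid(np.outer(x, y_j) + theta_j)         # (len(x), N) design matrix
    alpha, *_ = np.linalg.lstsq(H, f, rcond=None)   # fit output weights
    err = np.max(np.abs(H @ alpha - f))
    print(f"N = {N:4d}: max |f(x) - g(x)| = {err:.4f}")
```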
#### 3.2.2 Hierarchical sparse coding
The equivalence relationship between the kernel method in a support vector machine (SVM) (Bartlett & Shawe-Taylor, 1999) and sparse coding (Olshausen & Field, 1997) has been well studied in the literature (Girosi, 1998). An important new insight brought about by this work is the generalization of the kernel trick by hierarchical sparse coding. As advocated in (DiCarlo et al., 2012), the organized hierarchy forms a closed loop from the primary visual cortex (V1) to the inferior temporal cortex (IT) and then back to V1. The hierarchical organization is reflected by the increasing field of view, as well as the improved tolerance of the IT population in object recognition. An intuitive explanation for such hierarchical organization is that it leads to a redundant but sparse representation that promotes the selectivity of visual stimuli.
More rigorously, we consider the class of hierarchical and redundant sparse representations (e.g., steerable pyramids (Simoncelli & Freeman, 1995) and overcomplete dictionaries (Olshausen & Field, 1997)) from the perspective of manifold embedding. They map the retinal image space to a much higher dimensional space with sparse coefficients. Unlike the nonlinearity argument supplied by (Olshausen & Field, 1997), we argue that exploiting the blessing of dimensionality plays a more fundamental role not only in V1 but also in the entire ventral stream processing. Note that this is consistent with H. Barlow's redundancy exploitation hypothesis (Barlow, 2001) because the strategy of sparse coding maximizes the capacity of associative memory (Olshausen & Field, 2004).
Under the framework of manifold capacity, we claim that hierarchical sparse coding increases the number of manifolds (\(P^{*}\)) while keeping the feature dimension (\(N\)) constant. A mathematical analysis of why sparse coding increases the capacity of associative memory can be found in (Okada, 1996). It was shown that the sparsely coded associative memory model achieves an extremely large storage capacity that diverges as the mean-firing rate decreases. Despite the increase in the total number of coefficients in redundant sparse representation, it is easy to observe that the \(ratio\) of significant coefficients (effective dimensionality of salient features corresponding to the mean firing rates) does not change due to the good localization properties of bases.
To show how improved sparsity increases the capacity of associative memory, we have the following result. We consider a non-holographic associative memory model in (Willshaw et al., 1969) which consists of \(N_{A}\times N_{B}\) grid points on a square lattice. Let \(r_{A}=\frac{M_{A}}{N_{A}}\) and \(r_{B}=\frac{M_{B}}{N_{B}}\) denote the ratio of active grid points responsible for the associative recall of \(R\) cross-link patterns. Then the memory capacity of such an associative net is given by
\[C=N_{c}\cdot log(p)\cdot log(1-p), \tag{1}\]
where \(N_{c}=N_{A}\times N_{B}\) and the collision probability \(p\) can be calculated by
\[1-p=exp(-R\cdot r_{A}\cdot r_{B}), \tag{2}\]
It is easy to observe that to maintain a low collision probability \(p\), both \(r_{A}\) and \(r_{B}\) need to be small, implying a small percentage of active grid points along the horizontal and vertical directions. The improvement in sparsity in the representation of the data helps reduce the probability of collision (less crosstalk) (Olshausen & Field, 2004) by promoting the selectivity of the associative representations. High-dimensional representations with mixed selectivity (Fusi et al., 2016) are known to allow for linear separable decision regions to support different potential responses.
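Eqs. (1)-(2) are easy to evaluate numerically; the sketch below (with arbitrary grid sizes and pattern counts) illustrates how sparser activity keeps the collision probability low and thus preserves capacity.

```python
# Minimal sketch evaluating Eqs. (1)-(2) for the Willshaw-style associative net:
# 1 - p = exp(-R * r_A * r_B)  and  C = N_c * log(p) * log(1 - p).
# The grid sizes and pattern count are arbitrary illustrative choices.
import numpy as np

N_A = N_B = 1000
N_c = N_A * N_B
R = 10_000  # number of stored cross-link patterns

for r in (0.05, 0.02, 0.01, 0.005):          # coding rate r_A = r_B = r
    p = 1.0 - np.exp(-R * r * r)             # Eq. (2): collision probability
    C = N_c * np.log(p) * np.log(1.0 - p)    # Eq. (1): memory capacity
    print(f"r_A = r_B = {r:5.3f}: p = {p:.3f}, C = {C:.3e}")
```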
### Local Manifold Flattening
#### 3.3.1 Identity-preserving transformations
The other important new insight deals with the discovery of local geometry on a manifold to promote tolerance within the same class/identity. The importance of tolerance for object recognition can be mathematically justified by flattening the manifold with identity-preserving transformations (see Fig. 2B in (DiCarlo et al., 2012)). More specifically, consider the curved surface of an object manifold (e.g., its projection onto a subspace) associated with position or scale; achieving tolerance (i.e., translation or scale invariance) is conceptually equivalent to unfurling the curved surface such that the flattened manifolds can be more easily separated by hyperplanes. Some quantitative evidence to validate the flattening hypothesis in deep learning has been reported in (Brahma et al., 2015).
The manifold untangling framework offers a refreshing perspective on the well-studied binding problem (Treisman, 1996). After manifold flattening, each untangled subspace is characterized by the neural population geometry whose representation simultaneously conveys explicit information about not only object identity but also tangled subspace attributes such as position, size, pose, and context. Even when multiple objects are present, one can imagine that identity-preserving transformations can flatten their corresponding manifolds to improve the manifold capacity. There is no need for rebinding those subspace attributes because they are implicitly embedded into identity-preserving transformations.
To better illustrate the concept of manifold flattening, we can think of the three pairs of legs in jacks as an analogy to the identity, position, and scale subspaces. Mathematically, these jacks can be interpreted as a 1D manifold embedded into a 3D Euclidean space. The problem of packing object manifolds is challenging because the legs of those jacks interfere with each other. Identity-preserving transformations facilitate the packing task by flattening the two subspaces of position and scale (we will discuss the biological implementation of this strategy later). In the transformed space after manifold untangling (i.e., conditioned on the knowledge about the position and scale), jacks are flattened to ellipsoids suitable for packing or linear separation.
#### 3.3.2 Decision boundary smoothing
An alternative approach to achieve the objective of manifold flattening is via smoothing the decision boundary among different classes/identities. Along this line of reasoning, several closely related ideas, such as manifold mixup (Verma et al., 2019), manifold charting (Mangla et al., 2020), and embedding propagation (Rodriguez et al., 2020), have been proposed recently and shown to be effective for few-shot classification.
The objective of manifold flattening is to reduce the number of directions with significant variance (refer to Fig. 3b). Following the notation in (Verma et al., 2019), we use \(\mathcal{X},\mathcal{H},\mathcal{Y}\) to denote the input space, representation space, and output space, respectively. The representation space can be the hidden states of a DNN, the support vectors of an SVM, or the sparse coefficients in hierarchical sparse coding. We can obtain the following theoretical result.
**Theorem 3. Manifold Flattening Theorem**.
Let \(\mathcal{H}\) be a space with dimension \(dim(\mathcal{H})\), and let \(d\) represent the number of classes/identities in the dataset. If \(dim(\mathcal{H})\geq d-1\), then there exists a linear function/dichotomy that can separate the \(d\) different classes.
The proof of the above result for the hidden state of DNN representations can be found in (Verma et al., 2019). Generally speaking, if the dimensionality of representation \(dim(\mathcal{H})\) is greater than the number of classes \(d\), then the resulting representations for that class will fall into a subspace of dimension \(dim(\mathcal{H})-d+1\).
It is enlightening to contrast the decision boundary smoothing strategy with that of identity-preserving transformations. The former improves the performance of the classifier in the presence of distribution shifts, outliers, and adversarial examples under a few-shot learning constraint (i.e., it does not require much training data). The latter requires more training data to achieve the desired objective of X-invariant recognition (where X refers to an environmental uncertainty factor) by learning identity-preserving transformations. These two approaches are complementary to each other because they flatten the manifold from different (inter-class vs. intra-class) perspectives.
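For concreteness, the following sketch illustrates the core manifold-mixup idea of interpolating hidden states and labels; it is a simplified toy version on random data, not the exact training recipe of (Verma et al., 2019).

```python
# Minimal sketch of the manifold-mixup idea (Verma et al., 2019): interpolate
# hidden representations and labels of random sample pairs during training.
# This is a simplified illustration, not the authors' exact recipe.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # maps inputs to hidden space H
classifier = nn.Linear(64, 10)                          # linear head over H
opt = torch.optim.SGD(list(encoder.parameters()) + list(classifier.parameters()), lr=0.1)

def manifold_mixup_step(x, y, alpha=2.0):
    h = encoder(x)                                          # hidden states
    lam = torch.distributions.Beta(alpha, alpha).sample()   # mixing coefficient
    perm = torch.randperm(x.size(0))
    h_mix = lam * h + (1 - lam) * h[perm]                   # mix in representation space
    logits = classifier(h_mix)
    loss = lam * nn.functional.cross_entropy(logits, y) \
         + (1 - lam) * nn.functional.cross_entropy(logits, y[perm])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))  # dummy batch
print("mixup loss:", manifold_mixup_step(x, y))
```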
## 4 Model-Agnostic Manifold Untangling
### Multi-view Visual Object Recognition
Visual object recognition has been extensively studied by the community of computer vision (Zhang et al., 2013), (Bakry and Elgammal, 2014). The three subspaces associated with object category, instance, and viewpoint/pose are often tangled in the observation of multiview image data. Conventional wisdom to achieve an untangled representation of the view-object manifold is to formulate a joint reconstruction problem with unknown category/instance and viewpoint. Through parameterization of the visual manifold by a mapping coefficient matrix and a nonlinear kernel map, one can formulate either a continuous inference problem (Zhang et al., 2013) or a discrete discrimination problem (Bakry and Elgammal, 2014). The objective of manifold untangling is therefore implicitly implemented by projecting onto the target subspace of category, instance, and viewpoint.
A fundamental weakness of those conventional approaches is their lack of generalization ability. It is often assumed a priori that the topology of the viewpoint manifold of individual objects is known. The derived manifold untangling solution easily breaks down when such an assumption becomes invalid (e.g., due to the tangling of other uncertainty factors such as scale, illumination, and clutter (Johnson and Hebert, 1999)). Meanwhile, the computational complexity of manifold reconstruction in both continuous and discrete settings can be prohibitive because of the required Markov chain Monte Carlo (MCMC) sampling and exhaustive search of subspace indexes (the curse of dimensionality). One cannot help wondering if there exists an explicit solution to manifold untangling without reconstruction.
This work offers attractive alternative solutions to multiview visual object recognition. On several challenging datasets with the presence of both pose and expression variations, it
has been shown in (Chen et al., 2013) that high-dimensional features (as large as 100K) can dramatically boost the performance of face verification. Such a blessing of dimensionality has been empirically verified for various local descriptors, from local binary patterns (LBP) (Ahonen et al., 2004) to Gabor filters (Liu and Wechsler, 2002). Our manifold embedding strategy offers a plausible theoretical interpretation - namely, as the dimensionality increases, the concatenation of features with varying landmark numbers and sampling scales promotes selectivity by offering complementary descriptions of the object category.
Identity-preserving transformations are often applied to generalize the performance of deep learning models to previously unseen data (Connor et al., 2021). They can be either constructed from a set of data augmentation tools (e.g., rotation, flipping, and scaling) or learned through a set of Lie group operators that define directions of motion on the manifold. Both classes can be unified into motion-induced identity-preserving transformations by generalizing the untangling factor from viewpoint only to motion-related variations. Broadly speaking, based on the observation that the identity of an object is temporally stable, identity-preserving transformations should include both micro-scale (e.g., saccadic-driven image translations) and macro-scale (e.g., egomotion-driven clutter variability). Additionally, deformable objects such as faces and bodies pose additional challenges to invariant recognition, which calls for recursive application of identity-preserving transformations (e.g., reentrant signaling (Edelman, 1993)).
A closely related idea to manifold untangling is the learning of disentangled representations. For example, the GAN for disentangled representation learning (DR-GAN) (Tran et al., 2017) can take one or multiple images as input and explicitly output the pose code along with an arbitrary number of synthetic images. Such a GAN-based deep generative model cleverly combines the pose code in the generator and the pose estimation in the discriminator into a closed loop. It can be interpreted as achieving tolerance by simultaneously resolving the uncertainty of identity and pose. It is mathematically equivalent to maximum a posteriori (MAP) estimation in the joint space of object identity and identity-preserving transformations (refer to Fig. 4D in (DiCarlo et al., 2012)).
### Invariant Speech and Language Recognition
Unlike image data, speech signals are characterized by dynamic patterns in the temporal domain. Since language is unique to humans, language models serve as a strong supervisor in speech recognition. From words and phrases to paragraphs and part-of-speech, the principle of hierarchical organization has been widely studied in natural language processing. Computational maps in the auditory cortex share an organizational principle similar to that in the visual cortex. Therefore, it is enlightening to understand invariant speech and language recognition from a manifold untangling perspective.
Compared with images, speech and language data are arguably less tangled due to their different physical origins. From a manifold untangling perspective, embedding plays a more important role than flattening for speech and language data than it does for images. Such a difference is supported by the popularity of word embedding models (e.g., word2vec (Goldberg and Levy, 2014) and GloVe (Pennington et al., 2014)). Even without any flattening, it is relatively easy to untangle the word manifold by embedding alone, as shown in recent work using two automatic speech recognition (ASR) models (Stephenson et al., 2019): a convolutional neural network (CNN)-based model (Kell et al., 2018) and Deep Speech 2 (DS2) (Amodei et al., 2016). The untangling of the word manifold has been clearly demonstrated by the increase in manifold capacity of both models in later layers. A similar observation has been made for the popular transformer-based language model BERT (Mamou et al., 2020).
### Perceptual Straightening of Video Data
By contrast, video data has been much less studied than image or speech data. Depending on the definition of the object category, we can revisit several classical video processing tasks from a manifold untangling perspective. First, the class of natural videos defines a manifold that is related to visual quality. The amount of perturbation (e.g., jittering artifacts) from the manifold of natural video is often correlated with the degradation of visual quality. One recent work (Henaff et al., 2019) has proposed a predictive coding hypothesis (Rao and Ballard, 1999) - i.e., temporal trajectories of visual input are perceptually straightened to make them more predictable. This hypothesis is consistent with the theory of manifold untangling because temporal straightening can be interpreted as a strategy of flattening the object manifold associated with the subspace of viewpoint. A key experimental finding from (Henaff et al., 2019) is that natural motion in video sequences corresponds to a flattened trajectory in the perceptual space. Such a manifold flattening viewpoint seems to offer a quantitative framework for evaluating the performance of video stabilization techniques (Roberto and Souza et al., 2022).
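One simple way to quantify such straightening, assuming frame representations are available as vectors, is the mean turning angle between consecutive displacement vectors along the trajectory; the sketch below compares a straight trajectory with a jittery one.

```python
# Minimal sketch: quantify how "straight" a temporal trajectory of frame
# representations is via the mean angle between consecutive displacement vectors.
# Frame representations are assumed to be given as one vector per time step.
import numpy as np

def mean_curvature(traj):
    """traj: array of shape (T, D); returns the mean turning angle in degrees."""
    diffs = np.diff(traj, axis=0)
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.clip(np.sum(diffs[:-1] * diffs[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosines)).mean()

rng = np.random.default_rng(0)
straight = np.outer(np.arange(20), rng.standard_normal(128))     # points along a line
jittery = straight + 0.5 * rng.standard_normal(straight.shape)   # noisy trajectory
print(f"straight: {mean_curvature(straight):.1f} deg, jittery: {mean_curvature(jittery):.1f} deg")
```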
Second, the concept of a probabilistic appearance manifold has been introduced for video-based face recognition (FR) (Lee et al., 2003). In (Lee et al., 2003), the local geometry of the nonlinear appearance manifold (associated with varying poses) is approximated by standard PCA-based hyperplanes. Such a linear approximation of the pose manifold is conceptually simple, but its optimality is often questionable.
The theory of manifold untangling offers a refreshing new perspective toward video-based FR - that is, one can flatten the pose manifold in the latent space (e.g., \(W+\) in StyleGAN (Shen et al., 2020)). After straightening the video of a given identity, one can interpret the warped video as augmented image observations obtained by pose normalization. It follows that even a simple fusion strategy, such as the sum rule, can be applied to the untangled video data. Note that such an idea of untangling manifolds can be easily generalized from the pose manifold to other facial attributes (e.g., age and expression).
Third, a dual problem to image-based object recognition is dynamic scene classification (Theriault et al., 2013), where the object category is semantically defined by the scene of the video data. By learning the slowest features with slow feature analysis (SFA) (Wiskott and Sejnowski, 2002), one can untangle the classes of different semantic categories. The key idea behind SFA is to learn invariant representations from transformation sequences, which is closely related to Laplacian eigenmaps (Sprekeler, 2011). From the perspective of manifold untangling, SFA can be interpreted as an alternative approach to selectivity and tolerance for learning invariance (Franzius et al., 2008). A similar idea has also found a successful application in untangling the manifold of motion for human action recognition (Zhang and Tao, 2012). One possible extension of SFA inspired by manifold embedding is to concatenate the learned SFA features from multiple modalities (e.g., color, SIFT, HOG); when motion information is represented by gait or skeleton, manifold flattening can be implemented with deformable shapes (Palafox et al., 2021).
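For reference, a minimal sketch of linear SFA is given below: after whitening the observed signal, the slowest features are the directions along which the temporal difference signal has minimal variance (the toy mixing matrix and sources are arbitrary illustrative choices).

```python
# Minimal sketch of linear slow feature analysis (SFA): after whitening the
# input signal, the slowest features are the directions of minimal variance
# of the temporal difference signal.
import numpy as np

def linear_sfa(X, n_components=1):
    """X: array of shape (T, D) time series; returns (T, n_components) slow features."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)              # whitening matrix (columns)
    Z = X @ W
    dZ = np.diff(Z, axis=0)                 # temporal differences in whitened space
    dvals, dvecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ dvecs[:, :n_components]      # smallest-eigenvalue directions = slowest

# Toy demo: a slow sine is mixed with a fast one; SFA recovers the slow source.
t = np.linspace(0, 2 * np.pi, 2000)
sources = np.column_stack([np.sin(t), np.sin(29 * t)])
X = sources @ np.array([[1.0, 0.4], [0.6, 1.0]])   # mixed observation
slow = linear_sfa(X)
corr = np.corrcoef(slow[:, 0], sources[:, 0])[0, 1]
print(f"|correlation| with the slow source: {abs(corr):.3f}")
```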
## 5 Biological Connections with Perceptual Memory
### Cortically Local Subspace Untangling in Ventral Stream
How is manifold untangling achieved by the ventral stream of the visual cortex? In (DiCarlo et al., 2012), it was hypothesized that the task is recursively implemented by a meta job description at different layers. At each layer, the objective for a local group of neuronal population is to ensure that the output representation becomes less tangled than the input one, which gives the term "cortically local subspace untangling". Two general classes of mechanism are conceived to be relevant to the task of manifold flattening: nonlinear network architecture (Serre et al., 2007; Riesenhuber and Poggio, 1999) and identity-preserving transformations (Mocz et al., 2021; Pagan et al., 2013), which we will briefly review here.
In the hierarchical model HMAX for object recognition (Riesenhuber and Poggio, 1999), two classes of cells (simple vs. complex) are responsible for selectivity and tolerance operations, respectively. There exists a canonical circuit for modeling simple and complex cells in V1 (Kouh and Poggio, 2008) based on nonlinear divisive normalization. Generally speaking, simple cells are modeled by AND-like or summation operators, which construct selective tuning for combinations of visual features; complex cells are modeled by OR-like or max-pooling operators, which achieve invariance/tolerance to variations in the visual stimuli (e.g., pose, location, and scale). The HMAX model and convolutional neural networks (CNNs) consist of several layers of alternating simple and complex cells, which can be interpreted as gradually untangling object manifolds (Brahma et al., 2015). However, unlike the convergent architecture of HMAX or CNNs, the visual cortex is known for its divergent topology (Barlow, 2001) (consistent with the blessing of dimensionality).
The temporal continuity hypothesis states that "input patterns that occur close together in time tend to lead to similar output responses" (DiCarlo et al., 2012). Since an object's identity is temporally stable/continuous, retinal images of the same object naturally serve as training data for learning identity-preserving transformations. For example, it is well known that inferotemporal cortex (IT) neurons are capable of responding similarly to the same object regardless of its retinal position. Such tolerance of spatial location can be explained by the large number of saccadic-driven translation experiences of retinal images that bootstrap the learning. Similar observations can be made with respect to the tolerance of an object's rotation, up to a certain angle. Meanwhile, the perirhinal cortex (PRH) is responsible for item memory, especially in representing familiar items; such familiarity of items can be interpreted as finer-grained untangling than position and rotation. Indeed, experimental results have confirmed that, along the flow of information from IT to PRH, the representation of a visual object becomes more untangled (Pagan et al., 2013).
### Trajectory Untangling in Motor Control
J. Gibson said, "we move because we see; we see because we move." This dual view of perception and motion inspires us to consider the problem of manifold untangling for the motor cortex as the dual of that for the visual cortex. It has been observed in (Russo et al., 2018) that, unlike muscle activity, neural activity is structured in such a manner as to avoid tangling - i.e., to avoid similar neural activity patterns leading to dissimilar action patterns in the future (an action-related counterpart of object recognition). How does the motor cortex encode muscle-like commands? Hypotheses about the encoding of movement velocity or direction exist in the literature; however, sophisticated tasks such as reaching and cycling (or more extended movements) suggest that neural activities are dominated by signals that are not muscle-like (and therefore cannot be explained by velocity/direction coding) at the population level.
Based on the premise that the present network state strongly influences the future state, we conjecture that the objective of _trajectory untangling_ is also recursively (though via hierarchical timescales instead of spatial scales) achieved by the motor cortex. Conceptually similar to tangling in object recognition, the principle of trajectory untangling implies that two similar patterns of neural activity, observed at different moments, should not produce highly dissimilar action patterns in the near future. Violation of such a principle often leads to trajectory tangling - a potential instability in the network dynamics of motor control. A key finding from the cycling experiment in (Russo et al., 2018) is that "muscle-like signals are present but are relatively modest 'ripples' riding on top of larger signals that confer minimal tangling."
The perspective of trajectory untangling is consistent with the closed-loop theory of motor learning (Adams, 1971). For closed-loop optimization, error feedback plays the role of reinforcement for learning simple movements. Trajectory untangling facilitates this task by decomposing the movement into the knowledge of results (trends) and the withdrawal of reinforcement (ripples). The learning procedure for motor skills is then abstracted as the gradual untangling of trajectories in the latent space of motor control. More recently, the problem of motor control has been studied more rigorously using dynamical systems theory. It was shown that motor learning at the scale of neuronal population dynamics involves multiple learning mechanisms operating at different timescales (Vyas et al., 2020).
### From Perceptual Untangling to Internal Representation
According to Helmholtz (Lee, 2015), the fundamental role of the neocortex is to construct an internal representation of the external environment. The mirroring of the physical world in the primate brain is achieved by the constant interaction between sensory and motor cortex. It has been suggested that the organizational principle of the cortex, regardless of object recognition or motor control, shares a similar association mechanism at the cellular level (Larkum, 2013). As shown in Fig. 5, pyramidal neurons play the role of coupling feed-forward with feedback streams that are driven by external stimuli and internal representation, respectively. This association mechanism at the cellular level succinctly explains the advantage of the cortical hierarchy, with its structured terminations at different layers. It also offers a plausible explanation for how neuronal populations in various areas can be 'bound' instantaneously to represent tangled features.
Thalamo-cortical interaction has to occur simultaneously in both feed-forward and feedback streams to support the predictive coding hypothesis in the visual cortex (Rao and Ballard, 1999). In the feedforward visual stream, manifold untangling conveys the external stimulus information to higher cortical areas; pyramidal neurons serve as the associative elements responsible for coincidence detection between the present stimuli and prior experience (the internal representation). The feedback stream then serves as the predictive coding scheme (Rao and Ballard, 1999) of the cortex, determining the firing of pyramidal neurons. Given the fact that 90% of the synaptic inputs to layer 1 (L1) come from long-range feedback connections, it has been shown that the backpropagation-activated coupling (BAC) (Larkum, 2013) firing mechanism of pyramidal neurons bridges the feed-forward (manifold untangling) and feedback (manifold projection) streams.
## 6 Conclusions
It has been hypothesized that, through neuronal population dynamics, the neocortex solves the problem of object recognition via perceptual untangling. In this paper, we formulate the problem of manifold untangling as an abstraction of object recognition. Two complementary approaches to untangling an object manifold are presented: embedding (selectivity-promoting) and flattening (tolerance-promoting). We have discussed two classes of embedding strategies (the generalized kernel method and hierarchical sparse coding) as well as flattening strategies (identity-preserving transformations and decision boundary smoothing). Under the framework of manifold untangling, we present a unified interpretation of multiview image recognition, invariant audio/language recognition, and the perceptual straightening of video. Finally, the theory of manifold untangling is connected with the neuroscience literature, demonstrating biologically plausible implementations of perceptual untangling.
Figure 5: Long-range architecture of the cortex (cited from (Larkum, 2013)). The feed-forward stream (marked by the color blue) is driven by external information influencing the sensory apparatus. The feedback stream (marked by the color red) is driven by an internal representation built from previous experiences. We conjecture that the feed-forward and feedback streams can be geometrically interpreted as manifold embedding and projection, respectively.
|
2310.12559
|
Application of quantum neural network model to a multivariate regression
problem
|
Since the introduction of the quantum neural network model, it has been
widely studied due to its strong expressive power and robustness to
overfitting. To date, the model has been evaluated primarily in classification
tasks, but its performance in practical multivariate regression problems has
not been thoroughly examined. In this study, the Auto-MPG data set (392 valid
data points, excluding missing data, on fuel efficiency for various vehicles)
was used to construct QNN models and investigate the effect of the size of the
training data on generalization performance. The results indicate that QNN is
particularly effective when the size of training data is small, suggesting that
it is especially suitable for small-data problems such as those encountered in
Materials Informatics.
|
Hirotoshi Hirai
|
2023-10-19T08:10:12Z
|
http://arxiv.org/abs/2310.12559v1
|
# Application of quantum neural network model to a multivariate regression problem
###### Abstract
Since the introduction of the quantum neural network model, it has been widely studied due to its strong expressive power and robustness to overfitting. To date, the model has been evaluated primarily in classification tasks, but its performance in practical multivariate regression problems has not been thoroughly examined. In this study, the Auto-MPG data set (392 valid data points, excluding missing data, on fuel efficiency for various vehicles) was used to construct QNN models and investigate the effect of the size of the training data on generalization performance. The results indicate that QNN is particularly effective when the size of training data is small, suggesting that it is especially suitable for small-data problems such as those encountered in Materials Informatics.
## 1 Introduction
Recently, there has been a surge of interest in quantum machine learning (QML), a technique to tackle machine learning problems through the use of
quantum computing and quantum information processing [1; 2]. The Harrow-Hassidim-Lloyd (HHL) method [3] is capable of solving the linear equation \(Ax=b\) for an \(N\times N\) matrix \(A\) on a time scale of \(O(poly(\log N))\), which is expected to be exponentially faster than the conjugate gradient method (\(O(N)\)), the most efficient classical algorithm currently available. Quantum linear regression [4; 5] and quantum support vector machines [6] have been suggested as QML techniques that capitalize on this. However, these methods necessitate a great number of quantum gates and cannot be executed on a noisy intermediate-scale quantum (NISQ) device; thus, we must wait for the advent of a fault-tolerant quantum computer (FTQC). On the other hand, a quantum neural network (QNN) method [7], also known as quantum circuit learning (QCL) [8], has been proposed as an algorithm for a NISQ device. It is a quantum-classical hybrid algorithm based on the variational quantum algorithm. The QNN attempts to reduce the discrepancy between the output of the quantum circuit and the labeled data by adjusting the circuit parameters to their optimal values. The advantage of QNN is that it is able to utilize high-dimensional quantum states as trial functions, which are difficult to generate on a classical computer [8]. Another advantage is that the unitarity of quantum circuits serves as a regularization to prevent overfitting [8]. In a conventional neural network model, a regularization term is added to the cost function to limit the norm of the learning parameters and reduce the expressibility of the model to prevent overfitting. On the other hand, in a QNN model, the norm of the parameters is automatically limited to 1 due to unitarity, so it can be said that the regularization function is inherently provided. The exploration of QNN models is a relatively new field, and previous studies on the development of regression models have been restricted to basic single-dimensional functions [8; 9; 10]. To clarify the performance of QNN models in practical multivariate regression analysis problems, we constructed QNN models on the Auto-MPG data set (mileage-per-gallon data for various vehicles, 392 valid data points excluding missing data) [11] and investigated the effect of the training data size on generalization performance.
## 2 Method
### Auto-MPG dataset
In order to evaluate the effectiveness of QNN models in multivariate regression, the Auto-MPG data set [11] from the UCI Machine Learning Repository was used. This data set consists of 392 instances related to city-cycle fuel consumption in miles per gallon (MPG). It includes seven attributes that can be used to predict the MPG: cylinders, displacement, horsepower, weight (vehicle weight), acceleration (time required to accelerate from 0 to 60 mph), model year (year of manufacture), and origin (country of production). Although some attributes are discrete-valued, they were not one-hot encoded (encoded as separate binary variables); instead, they were standardized and normalized to (-1,1) in the same way as the continuous-valued attributes and used as input values to the model. We conducted three experiments with different sizes of the training data set (1/5\(\times\)392=78, 2/5\(\times\)392=156, and 4/5\(\times\)392=312, with the remaining data used as test data) to study the effect of size on generalization performance [12].
### QNN models
The QNN architecture is composed of three components: an encoder, which transforms classical data into a quantum state; an ansatz, which is a quantum circuit with learning parameters; and a decoder, which converts the quantum state into an output value. Here, the Ry rotation gate (acting on each qubit initialized to \(|0\rangle\)) was used as the encoder. The rotation angle was set to \(\theta\)=arctan(\(x\))+\(\pi/2\) for the scaled attribute \(x\). The arctangent allows the scaled attribute to be uniquely converted to a rotation angle even if the value falls outside the range (-1,1) when the scaler is applied to the test data. In this study, we constructed a 7-qubit model with each attribute encoded in one qubit (circuit width \(w=1\)) and a 14-qubit model with each attribute encoded in two qubits (\(w=2\)), as shown in Fig. 1. For the ansatz, we used a circuit with Ry rotation gates and CNOT gates in a linear configuration (see Fig. 1). The learning parameters are the rotation angles of the Ry rotation gates. We built the model by stacking \(d\) depth-1 blocks (\(d\) is referred to as the depth of the circuit). The sum of the Z-axis projections of each qubit, \(\sum_{i}\sigma_{z}^{i}\), was used as the decoder.
The QNN models were implemented with Pytket [13], a Python module for quantum computing, and the quantum circuit calculations were performed using state vector calculations with the Qulacs [14] backend, a quantum computing emulator. The mean squared error (MSE) between the teacher data and the predictions was used as a cost function. The Powell method was used to optimize the learning parameters.
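To make the architecture explicit, the following self-contained NumPy statevector sketch reproduces the described 7-qubit (\(w=1\)) pipeline - arctangent Ry encoding, \(d\) blocks of Ry rotations followed by CNOTs in a linear configuration, and the \(\sum_{i}\sigma_{z}^{i}\) decoder - trained with SciPy's Powell optimizer on an MSE cost. It is not the authors' Pytket/Qulacs implementation, and the data are random placeholders rather than the Auto-MPG attributes.

```python
# Self-contained NumPy statevector sketch of the 7-qubit (w=1) QNN described
# above.  Placeholder data are used instead of the Auto-MPG attributes.
import numpy as np
from scipy.optimize import minimize

n_qubits = 7

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q):
    psi = np.moveaxis(state.reshape([2] * n_qubits), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cnot(state, ctrl, targ):
    psi = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[ctrl] = 1                                    # control = |1> subspace
    sub = psi[tuple(idx)].copy()
    psi[tuple(idx)] = np.flip(sub, axis=targ if targ < ctrl else targ - 1)
    return psi.reshape(-1)

def qnn_output(x, params):
    state = np.zeros(2 ** n_qubits); state[0] = 1.0
    for q in range(n_qubits):                        # encoder: Ry(arctan(x) + pi/2)
        state = apply_1q(state, ry(np.arctan(x[q]) + np.pi / 2), q)
    for layer in params:                             # ansatz: Ry layer + linear CNOTs
        for q in range(n_qubits):
            state = apply_1q(state, ry(layer[q]), q)
        for q in range(n_qubits - 1):
            state = apply_cnot(state, q, q + 1)
    probs = np.abs(state) ** 2                       # decoder: expectation of sum_i Z_i
    bits = (np.arange(2 ** n_qubits)[:, None] >> np.arange(n_qubits)) & 1
    return float(probs @ (1.0 - 2.0 * bits).sum(axis=1))

depth = 3
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, n_qubits))          # placeholder scaled attributes
y = X.mean(axis=1)                                   # placeholder targets, not real MPG

def mse(flat):
    params = flat.reshape(depth, n_qubits)
    pred = np.array([qnn_output(xi, params) for xi in X])
    return np.mean((pred - y) ** 2)

res = minimize(mse, rng.uniform(0, 2 * np.pi, depth * n_qubits),
               method="Powell", options={"maxiter": 10})
print("training MSE after a short Powell run:", res.fun)
```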
### Classical NN models
A conventional neural network (NN) model was also constructed for comparison. We employed an NN model consisting of an input layer with 7 nodes, two hidden layers with 100 nodes each, and an output layer with a single node. ReLU was used as the activation function. PyTorch [15] was used to build and train the NN model. The Adam optimizer [16], an extended version of stochastic gradient descent, was used with a learning rate of 0.02 over 10000 epochs. L2 regularization was applied to prevent overfitting. The training data were divided into two parts, with 80% used for training and the remaining 20% used for validating the L2 regularization weight parameter.
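A sketch of the described classical baseline is given below; the data are placeholders, and the weight-decay value standing in for the L2 regularization strength is an assumed setting (the paper tunes it on a validation split).

```python
# Sketch of the classical baseline described above: a 7-100-100-1 MLP with ReLU,
# Adam (lr = 0.02), and L2 regularization via weight decay.  The data below are
# placeholders, and the weight-decay value is an assumed (untuned) setting.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(7, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=0.02, weight_decay=1e-3)  # weight_decay assumed
loss_fn = nn.MSELoss()

X = torch.rand(78, 7) * 2 - 1    # placeholder for 78 scaled training samples
y = torch.rand(78, 1)            # placeholder MPG targets

for epoch in range(10000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```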
Figure 1: Quantum circuits (ansatz) used for QNN models, the left side: 7-qubit model (\(w=1\)), the right side: 14-qubit model (\(w=2\)).
## 3 Results and discussion
The \(R^{2}\) (coefficient of determination) values for the training data (expressibility) for each QNN model are shown in Fig. 2.
Figure 2 also shows the results of the classical NN models for comparison. A QNN model with a deeper (larger \(d\)) or wider (larger \(w\)) circuit has stronger expressibility. The \(R^{2}\) of the QNN models decreases as the size of the training data increases. This is the same behavior as the classical NN model with regularization. On the other hand, the classical NN model without regularization shows a perfect fit (\(R^{2}=1\)) at any data size. These results indicate that the automatic regularization worked in the QNN models.
Figure 2: \(R^{2}\) values for the training data.
The \(R^{2}\) for the test data (generalization performance) for each QNN model and the classical NN models are shown in Fig. 3.
QNNs are shown to have superior generalization performance compared to classical NN models when the amount of training data is limited. As the size of the training data increases, the gap between the QNN and the classical NN models narrows, and the smallest QNN model (\(d=3\), \(w=1\)) performs worse than the classical NN model at the largest data size. The smallest QNN model has limited expressibility due to its limited number of learning parameters. Conversely, a QNN model with a deeper or wider circuit has enough expressibility, and its generalization performance is superior to that of the classical NN models. This distinction is particularly evident when the amount of training
Figure 3: \(R^{2}\) values for the test data.
data is limited, and the QNN model is considered to be especially effective for small datasets. This advantage has also been confirmed in classification problems [17]. Thus, QNNs are considered to be particularly promising for Materials Informatics (MI) problems with small amounts of data [18, 19]. The use of QNNs offers a great advantage in that they do not need to be explicitly regularized and can still achieve excellent generalization performance without the need for hyperparameter tuning. Circuit width and depth may be adjustable parameters, but it appears that they should be set to the same level as the number of attributes. Since the problems handled by typical MI do not require a large number of attribute variables and can be adequately computed by emulation on a classical computer without using an actual quantum computer, the QNN approach may also be used as a quantum-inspired algorithm. One remaining challenge is that QNNs have been thought to require more time to train than classical NN models. However, a recent study [7] has shown that, with the use of a specific ansatz, QNNs can be more trainable than classical NNs, raising high expectations for their future progress.
## 4 Conclusion
We constructed QNN models on the Auto-MPG data set to clarify the performance of QNN models in practical multivariate regression analysis problems. Compared to classical NN models, QNNs showed better generalization performance when the size of the training data was limited. The results indicate that QNN is particularly effective when the data size is small, suggesting that it is especially suitable for small-data problems such as those encountered in Materials Informatics.
|
2310.05025
|
Synslator: An Interactive Machine Translation Tool with Online Learning
|
Interactive machine translation (IMT) has emerged as a progression of the
computer-aided translation paradigm, where the machine translation system and
the human translator collaborate to produce high-quality translations. This
paper introduces Synslator, a user-friendly computer-aided translation (CAT)
tool that not only supports IMT, but is adept at online learning with real-time
translation memories. To accommodate various deployment environments for CAT
services, Synslator integrates two different neural translation models to
handle translation memories for online learning. Additionally, the system
employs a language model to enhance the fluency of translations in an
interactive mode. In evaluation, we have confirmed the effectiveness of online
learning through the translation models, and have observed a 13% increase in
post-editing efficiency with the interactive functionalities of Synslator. A
tutorial video is available at: https://youtu.be/K0vRsb2lTt8.
|
Jiayi Wang, Ke Wang, Fengming Zhou, Chengyu Wang, Zhiyong Fu, Zeyu Feng, Yu Zhao, Yuqi Zhang
|
2023-10-08T06:05:55Z
|
http://arxiv.org/abs/2310.05025v1
|
# Synslator: An Interactive Machine Translation Tool
###### Abstract
Interactive machine translation (IMT) has emerged as a progression of the computer-aided translation paradigm, where the machine translation system and the human translator collaborate to produce high-quality translations. This paper introduces Synslator, a user-friendly computer-aided translation (CAT) tool that not only supports IMT, but is adept at online learning with real-time translation memories. To accommodate various deployment environments for CAT services, Synslator integrates two different neural translation models to handle translation memories for online learning. Additionally, the system employs a language model to enhance the fluency of translations in an interactive mode. In evaluation, we have confirmed the effectiveness of online learning through the translation models, and have observed a 13% increase in post-editing efficiency with the interactive functionalities of Synslator. A tutorial video is available at: [https://youtu.be/K0vRsb2lTt8](https://youtu.be/K0vRsb2lTt8).
## 1 Introduction and Related Works
We have witnessed consistent advancements made in the field of machine translation Lopez (2008); Koehn (2009); Sutskever et al. (2014); Bahdanau et al. (2014); Vaswani et al. (2017), which progressively enhance the quality of translations. These continuous advancements have prompted a transformation in the translation industry, with a shift from exclusive reliance on human translators to the integration of computer-aided translation (CAT) methods Bowker (2002); Bowker and Fisher (2010); Green et al. (2013); Laubli et al. (2013); Bowker (2014). For CAT, instead of translating from scratch, human translators engage in post-editing tasks, refining machine translation outcomes to yield the final approved results, and thus considerably improving the translation quality.
The post-editing process used to be generally static, wherein machines would cease to respond to human modifications as soon as human post-editing began Bowker and Fisher (2010); Green et al. (2013). Recent studies have explored interactive procedures and algorithms Green et al. (2015); Knowles and Koehn (2016); Santy et al. (2019); Chatterjee (2019); Wang et al. (2020); Ba et al. (2022); Wang et al. (2022), enabling a more collaborative process between humans and machines, where machines can dynamically adjust translations in line with the edits made by humans.
Translation Memory (TM) is a key component that can be optimally leveraged within the realm of CAT Green et al. (2014). As human translators undertake post-editing with CAT tools, incremental online TMs can be invariably accumulated. Hence, the capability to use TMs for online learning emerges as a critical attribute for CAT Wang et al. (2022). In fact, there are different environment settings for the deployment of CAT services. In environments where the deployment of CAT allows for authorized usage of TMs, it is feasible to utilize translation memories for model training Bulte and Tezcan (2019); Xu et al. (2020); Bapna and Firat (2019). While in different settings such as public cloud solutions for CAT services, translation memories are usually introduced online by users, and we are not authorized to train models using them. Then, it becomes more beneficial to have a translation model capable of handling online TMs during the inference phase. For instance, Khandelwal et al. (2020) predicts target words with a \(k\)-nearest-neighbor (\(k\)NN) classifier over a datastore of cached TM examples during inference.
In this paper, we present Synslator, a tool designed for computer-aided translation that supports interactive machine translation through the application of a subword-prefix decoding algorithm. This tool allows human translators to receive automated translation suggestions in real-time while
editing machine translations as needed, and grants users the flexibility to make edits at the character/subword level. To cater to the diverse deployment requirements of CAT services mentioned above, Synslator employs two distinct models to handle translation memory for online learning: an adaptive neural machine translation model named _adaptive_-TM-MT, and a nearest-neighbor-retrieval based machine translation model, which we call _simplified_-\(k\)NN-MT. Moreover, Synslator produces suggestions grounded purely in a GPT-based language model (LM), reflecting monolingual fluency and style, which serve as supplementary references for human translators. The subword-prefix decoding algorithm can accommodate all of these models given the human's subword-prefix inputs.
## 2 Synslator: The Proposed System
In this section, we will introduce the major functionalities of Synslator and the algorithm implementations behind this tool. The fundamental feature of Synslator is to allow users to create a translation project, configure its respective settings, and perform post-editing based on machine translation results 1. As depicted in the screenshots in Figure 1, there are two user interfaces that human translators utilize to finish a translation project. The interface (a) allows adjustments for project settings, while the interface (b) supports human post-editing.
Footnote 1: A tutorial video, available at this link: [https://youtu.be/K0vRsb2lTt8](https://youtu.be/K0vRsb2lTt8), demonstrates how to create a new project. The example project shown in the video focuses on legal translation from Chinese to English. We presume in this case that the CAT tool is set up as a public cloud service, and the user has already uploaded a suitable TM dataset.
### Project Setting Interface
Users are presented with choices for file parsing, selection of translation memory, selection of termbase, and the option to choose different machine translation engines. The file parsing functionality is employed to segment sentences if a document is uploaded for translation. We will focus on functions of translation memory, termbase and machine translation engines.
#### 2.1.1 Translation Memory and Termbase
After creating a translation project, users can upload related TMs and bilingual termbase. For each source sentence to be translated, Synslator will
Figure 1: The user interfaces of Synslator: (a) the project setting interface, (b) the post-editing interface.
present the most relevant TM, including its source and target translation, for human reference in the post-editing interface. The similarity search is initially carried out by an open-sourced distributed search engine, ElasticSearch 2, which retrieves at most 64 bilingual sentence pairs whose source sides exhibit the highest relevance scores. Subsequently, from these bilingual sentence pairs, we select the one with the highest similarity, again computed via edit distance on the source side. The minimum threshold for this similarity is denoted as the Minimum Match Rate, and its value can be set in the Project Settings interface. When translating terms present in the source sentence, Synslator simply uses an exact match strategy to locate their respective translations in the bilingual termbase. If multiple matches are found, all of them will be displayed in the post-editing interface.
Footnote 2: [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)
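The matching step described above can be summarized in a short sketch. The snippet below is illustrative only: the ElasticSearch retrieval is abstracted into a `candidates` list, and the helper names (`match_rate`, `select_tm`) as well as the rate-based thresholding are our own assumptions rather than Synslator's actual implementation.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with a standard dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]


def match_rate(src: str, cand: str) -> float:
    """Similarity in [0, 1] derived from the edit distance on the source side."""
    denom = max(len(src), len(cand)) or 1
    return 1.0 - edit_distance(src, cand) / denom


def select_tm(source: str, candidates: list[tuple[str, str]], min_match_rate: float = 0.7):
    """Pick the most similar (source, target) TM pair above the Minimum Match Rate, or None.

    `candidates` stands in for the (up to 64) pairs returned by the search engine.
    """
    best = max(candidates, key=lambda pair: match_rate(source, pair[0]), default=None)
    if best is not None and match_rate(source, best[0]) >= min_match_rate:
        return best
    return None
```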
#### 2.1.2 Machine Translation Engines
As discussed, the process of post-editing with the CAT tool results in incremental online TMs. In response to the varying deployment environments associated with CAT services, Synslator utilizes two distinct models for online learning accordingly.
adaptive-TM-MTWhen the usage of training TMs is granted, such as in establishing CAT services for private deployment, it becomes particularly advantageous to utilize TMs to boost the model performance in domain-specific translation via fine-tuning. For instance, beyond using parallel bilingual training data, it is feasible to employ related TMs as an additional input to the model for further enhancement. Bapna and Firat (2019) retrieves neighbors from TMs and incorporates them into the model through Conditional Source Target Memory. Inspired by their work, we propose the _adaptive_-TM-MT as illustrated in Figure 2.
Given a pre-trained Transformer model, we fine-tune it with TMs as domain-specific training data. For each parallel sentence pair of TMs, we first use the pre-trained encoder and decoder to encode the TM's source and target sequences; afterwards, we execute a retrieval process to locate the nearest neighbor from the remaining TMs in the same way as in Section 2.1.1. The retrieved source-target pair is subsequently encoded with additional Transformer layers. More specifically, the retrieved source is encoded within one Transformer encoder layer, and we integrate information from the encoder representation of the source sequence by using a cross-attention with it. The retrieved target is then encoded in a similar fashion, attending to the encoded retrieved source memory. Instead of using Gated Multi-Source Attention from Bapna and Firat (2019), we simply add one Transformer decoder layer upon the original decoder module, attending the encoded retrieved target memory. Once the _adaptive_-TM-MT is trained offline with historical TMs, it would obtain the capability of handling incremental TMs for online learning in CAT.
simplified-\(k\)nn-MTWhen training with TMs is not authorized, such as for public cloud solutions for CAT services where translation memories are commonly loaded online, it would be more effective to develop a model capable of handling plug-in TMs during inference. Motivated by \(k\)NN-MT (Khandelwal et al., 2020), we propose the _simplified-\(k\)_NN-MT, a pared-down variant of \(k\)NN-MT, with its architecture displayed in Figure 3.
In comparison with \(k\)NN-MT, we simplify datastore construction: instead of building the datastore from all TMs, we adopt the TM matching approach detailed in Section 2.1.1 to obtain a smaller set of TMs. For each source sentence, we collect up to 16 of the most pertinent neighboring TMs after ElasticSearch, identified through edit distance, to create a condensed datastore for \(k\)NN search. This optimized procedure effectively alleviates computational complexity and storage requirements, thereby enhancing its applicability in practical scenarios.
Figure 2: The _adaptive_-TM-MT framework, featuring dashed lines to represent cross-attention.
It is important to note that every source sentence due for translation is associated with a unique datastore. Once the datastore is constructed, the model can predict target words by interpolating the distribution of \(k\)NN predictions in the same manner as \(k\)NN-MT.
However, the condensed datastore could introduce noise when retrieving \(k\)NN based on representation distance for each target prediction. To mitigate the potential impact of such noise on translation performance, we set a maximum distance threshold, represented by \(\tau\). Only those neighbors with a distance smaller than \(\tau\) will be selected. If none meets this condition, there will be no \(k\)NN interpolation for generation. In scenarios of public cloud solutions for CAT services, all hyper-parameters including the newly introduced \(\tau\) can be tuned on the domain-specific development sets.
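To make the inference-time behaviour concrete, the following is a minimal numpy sketch of \(k\)NN interpolation over a condensed datastore with the distance threshold \(\tau\). All names (`knn_interpolate`, `keys`, `values`) and the exponential weighting with a temperature are illustrative assumptions; the sketch is not taken from the Synslator codebase.

```python
import numpy as np


def knn_interpolate(p_model, hidden, keys, values, vocab_size,
                    k=8, tau=10.0, temperature=10.0, lam=0.5):
    """Interpolate the NMT distribution with a kNN distribution from a condensed datastore.

    p_model : (V,) model probabilities for the next target token
    hidden  : (d,) current decoder representation used as the query
    keys    : (N, d) cached decoder representations built from the retrieved TMs
    values  : (N,)   target token ids aligned with `keys`
    tau     : maximum distance threshold; if no neighbor is close enough,
              the model distribution is returned unchanged.
    """
    dists = np.linalg.norm(keys - hidden, axis=1)      # L2 distances to the datastore
    order = np.argsort(dists)[:k]
    order = order[dists[order] < tau]                  # keep only neighbors within tau
    if order.size == 0:
        return p_model
    weights = np.exp(-dists[order] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[order], weights)           # aggregate weight per target token
    return lam * p_knn + (1.0 - lam) * p_model
```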
### Post-Editing Interface
Upon proper configuration of the project settings, we can enter the workbench to access the post-editing interface, which is depicted in the screenshot (b) in Figure 1. This interface displays all static machine translation results before modifications by human translators. As soon as human post-editing begins, the translation model will provide refined translation results based on human edits. The human translator can continually make adjustments, while the translation model also consistently refines its outputs based on human inputs. This iterative process of interaction endures until the translation meets the quality standard.
#### 2.2.1 Workflow of Post-Editing
Online Learning with TMThe corresponding translation memory and termbases, as described in Section 2.1.1, are displayed on the right side of the post-editing interface. Human translators can double-click these resources for immediate inclusion in the translation result, and make any modifications to them as required. More importantly, incremental online TMs gathered through human post-editing will be merged in real-time with previous memories, which establishes favorable conditions for the _adaptive_-TM-MT and _simplified_-\(k\)NN-MT models to facilitate online learning. Regarding the _adaptive_-TM-MT, the source sentence and the highest-ranking matching TM from the current total memories are fed into the model for generations. In the case of the _simplified_-\(k\)NN-MT, for every source sentence, Synslator will gather at most 16 relevant TMs to construct a condensed datastore for \(k\)NN retrievals during \(k\)NN-MT inference.
Translation RefinementHuman adjustments can be made at a character/subword level. The translation model, guided by the subword input from human translators and all previously generated target words, automatically completes the current target word and generates adjusted subsequent words to finalize a translation. For example, as shown in (b) of Figure 1, given the source sentence with id 14 and its original machine translation result, the human translator deletes the original words following "flush" and enters two characters "fo". Immediately, the translation model automatically completes the word "for" and generates subsequent target words. This mechanism is facilitated by our proposed subword-prefix decoding algorithm, which will be introduced in Section 2.2.2.
Prediction Highlighting with LikelihoodWhen human translators verify the correctness of the translation up to a certain target word, they can
Figure 3: The _simplified_-\(k\)NN-MT framework with sequentially numbered workflow steps.
click on the spot to lock in the preceding translated text. Besides, word predictions will be sequentially highlighted with a separate color as long as their translation probability remains high (e.g., above 0.6), indicating the confidence of the model. In the same example in (b) of Figure 1, the "O" following "for" is highlighted, which the translation model believes to be a highly likely correct word prediction. If human translators agree with the correctness of the highlighted predictions, they can use the TAB key to swiftly secure them.
Suggestion BoxBeneath each translated sentence, a suggestion box is featured. It furnishes the next 3-best translations generated by the translation model, excluding the highest-ranked one which is already displayed. The box also includes a suggestion derived from a GPT-based LM. This acts as a supplementary reference, offering insights into monolingual fluency and stylistic nuances. However, the precision of a GPT model's next-word prediction depends on the preceding context [11, 13]. The LM only provides a suggestion when the target prefix contains more than ten translated words. In our design, 3-best translation suggestions are limited to three-word predictions, while the LM predicts the next four words. The 3-best translation suggestions are de-duplicated for clarity.
#### 2.2.2 Subword-Prefix Decoding
During post-editing, when the last input from the human translator is a space, it indicates the presence of a fully-formed word preceding the space character. In this case, both the translation model and the GPT-based LM can anticipate the ensuing words through the application of a forced decoding mode. Otherwise, given the subword prefix from human inputs, we build a binary vector with the target vocabulary size, called Hit Vector, by looking up the subword prefix in the vocabulary using exact matching. In this vector, any index with a value of 1 represents a match with the subword prefix, denoting a successful "hit" by the subword prefix. An example of Hit Vector is illustrated in Figure 4. Among the words that the subword prefix hits, our model selects the one with the highest generation probability as the current word prediction. Subsequently, our model autoregressively generates all the following words using a forced decoding mode. The subword-prefix decoding algorithm not only facilitates the beam search decoding in translation models, but also supports the "top-\(k\) sampling" decoding strategy prevalent in the GPT-based LM 3. Please note that due to varied lengths of target prefix sequences, we employ a strategy of left-padding for batch decoding to enable parallel computation.
Footnote 3: As an example, the pseudo codes of the subword-prefix decoding algorithm for translation models are displayed in Algorithm 1 of Appendix A.
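A minimal sketch of the Hit Vector step is shown below. We assume here that a vocabulary entry is "hit" when it begins with the typed subword prefix; the function names are hypothetical, and the forced decoding of the subsequent words is omitted.

```python
import numpy as np


def hit_vector(vocab, prefix):
    """Binary vector over the target vocabulary: True where the entry matches the typed prefix."""
    return np.array([tok.startswith(prefix) for tok in vocab], dtype=bool)


def prefix_constrained_step(next_token_probs, vocab, prefix):
    """Complete the current word from the human subword prefix.

    next_token_probs : (V,) probabilities for the next target token given the locked prefix.
    Returns the id of the highest-probability vocabulary entry hit by the prefix,
    or the unconstrained argmax if nothing matches.
    """
    hits = hit_vector(vocab, prefix)
    if not hits.any():
        return int(np.argmax(next_token_probs))
    masked = np.where(hits, next_token_probs, -np.inf)
    return int(np.argmax(masked))
```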
## 3 Evaluation
We implement assessment protocols to measure the performance of Synslator from two perspectives: (1) evaluating the effectiveness of online learning for domain-specific translation tasks; and (2) assessing real-time post-editing efficiency with interactive functionalities.
### Evaluation for Online Learning
Datasets and Model SettingsWe utilize an in-house pre-trained Chinese-English neural machine translation (NMT) model to facilitate the implementation of the _adaptive_-TM-MT and the _simplified_-\(k\)NN-MT. This NMT model is based on the Transformer architecture [20]. For the GPT-based LM, we follow the same architecture as the decoder of the pre-trained NMT model and train it with in-house English monolingual data from scratch.
We employ an in-house Chinese-English IT-domain training set to train the _adaptive_-TM-MT. Its encoder and decoder are initialized with the pre-trained NMT model, while other layers are trained from scratch. Once the training reaches convergence, we proceed to evaluate online learning using the corresponding test set, where the training data serve as TMs for retrievals. In evaluating the _simplified_-\(k\)NN-MT, we use open-sourced Chinese-English Law and Subtitle in-domain training sets [23] as TMs for building condensed datastores and conducting \(k\)NN retrievals. We equip the pre-trained NMT model with the \(k\)NN search functionality during inference on the test
Figure 4: An example of Hit Vector.
sets, with all hyper-parameters tuned on the development set, as shown in Table 3 of Appendix A. The statistics of the in-domain datasets are presented in Table 4 of Appendix A.
Evaluation MetricsThe translation outcomes are assessed through both static and dynamic methods. Static translation results are evaluated in terms of BLEU score using the "multi-bleu.perl" script of Moses4. Moreover, we introduce a metric called N-gram Accuracy to assess prediction accuracy given dynamic target prefix inputs. In detail, we enumerate target prefix sequences from the golden references, which are treated as inputs from human translators, and let the translation model produce predictions for the subsequent N words. N-gram Accuracy is computed as the proportion of correct N-gram predictions relative to the total count of N-gram references, i.e.,
Footnote 4: [http://statmt.org/moses](http://statmt.org/moses)
\[\small Acc_{N-gram}=\frac{Count(Pred_{N-gram}=Ref_{N-gram})}{Count(Ref_{N-gram})}, \tag{1}\]
where \(Pred_{N-gram}\) and \(Ref_{N-gram}\) are the N-gram prediction and the N-gram reference given the target prefix input. A higher value of \(Acc_{N-gram}\) signifies better performance.
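The metric can be computed as in the following sketch, where `predict_ngram` is a hypothetical stand-in for the prefix-conditioned translation model (the source-side context is omitted for brevity).

```python
def ngram_accuracy(references, predict_ngram, n=3):
    """N-gram Accuracy as in Eq. (1).

    references    : list of reference token lists
    predict_ngram : callable mapping a target prefix (list of tokens) to the model's
                    next-n-token prediction; it stands in for the interactive model.
    """
    correct, total = 0, 0
    for ref in references:
        for i in range(len(ref) - n + 1):
            prefix, gold = ref[:i], ref[i:i + n]
            if predict_ngram(prefix) == gold:
                correct += 1
            total += 1
    return correct / total if total else 0.0
```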
Experimental ResultsThe experimental results are presented in Table 1. We compare the _adaptive_-TM-MT with the pre-trained NMT model, indicated as Vanilla NMT, and the model with classic fine-tuning Freitag and Al-Onaizan (2016), labeled as FT-NMT. It is clear that the _adaptive_-TM-MT demonstrates superior performance. It is also evident that the provision of the next 3-best translation suggestions, along with the LM suggestion, further boosts the N-gram Accuracy, demonstrating the importance of the suggestion box. On the other hand, it is noteworthy that the _simplified_-\(k\)NN-MT significantly outperforms the pre-trained NMT model in both the Law and Subtitle domains. In short, both the _adaptive_-TM-MT and the _simplified_-\(k\)NN-MT enable online learning effectively.
### Evaluation of Interactive Functionalities
We conduct real-time post-editing experiments to assess the efficiency of the interactive functionalities, which mainly include the subword-prefix decoding and the Suggestion Box. Ten independent translators, each with eight years of experience in translating Chinese-English in IT-related fields, are randomly split into two groups. Both groups participate in post-editing on identical IT-domain translation projects over a span of three weeks, utilizing the _adaptive_-TM-MT model for online learning. Projects are assigned randomly among the translators in either group. For comparison, one group (referred to as the MT-PE group) performs static post-editing only with matching TMs and termbase displayed, while the other group undertakes post-editing with Synslator. The statistics of the projects and the total elapsed time of human labor are summarized in Table 2. It has been observed that for a total of 213,976 words translated, the efficiency of post-editing is improved by 13% with the interactive functionalities of Synslator.
## 4 Conclusion
We have presented Synslator, a user-friendly IMT tool. In different deployment environments, it utilizes distinct translation models for online learning with real-time translation memories, and provides multiple translation suggestions through a subword-prefix decoding algorithm. In practical applications, Synslator assists human translators to perform efficient post-editing interactively, enhancing the overall translation workflow.
\begin{table}
\begin{tabular}{l|c|c c c} \hline \hline Model & BLEU & \(Acc_{1-gram}\) & \(Acc_{2-gram}\) & \(Acc_{3-gram}\) \\ \hline \multicolumn{5}{c}{In-house IT domain test set} \\ \hline Vanilla NMT & 25.47 & 46.38 & 30.47 & 21.13 \\ FT-NMT & 28.28 & 51.63 & 35.64 & 25.59 \\ \hline _adaptive_-TM-MT & **29.05** & 52.25 & 36.31 & 26.30 \\ +Next 3-best & - & 57.18 & 42.13 & 31.87 \\ +Next 3-best and LM & - & **62.60** & **44.07** & **32.45** \\ \hline \hline \multicolumn{5}{c}{Open-sourced Law domain test set} \\ \hline Vanilla NMT & 33.73 & 53.66 & 38.11 & 28.06 \\ _simplified_-\(k\)NN-MT & **37.21** & **56.45** & **42.43** & **33.17** \\ \hline \hline \multicolumn{5}{c}{Open-sourced Subtitle domain test set} \\ \hline Vanilla NMT & 18.81 & 38.90 & 21.95 & 12.97 \\ _simplified_-\(k\)NN-MT & **20.04** & **40.61** & **24.30** & **15.34** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation of translation results using BLEU Scores and N-gram Accuracy.
\begin{table}
\begin{tabular}{l|c|c c|c c|c} \hline \hline & & \multicolumn{2}{c|}{MT-PE} & \multicolumn{2}{c|}{Synslator} & \\ \hline Week & \#Word & Time (h) & Avg. & Time (h) & Avg. & Impr. \\ \hline
1 & 90,344 & 147.85 & 611.05 & 136.67 & 661.04 & +8\% \\
2 & 78,882 & 148.51 & 531.16 & 124.97 & 631.21 & +19\% \\
3 & 44,750 & 80.05 & 559.03 & 71.36 & 627.10 & +12\% \\ \hline Total & 213,976 & 376.41 & 568.46 & 333.00 & 642.57 & +13\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: The efficiency of real-time post-editing. The symbols #Word and Avg. represent the total number of source words in the translation projects and the average number of words completed per hour, respectively.
|
2308.00317
|
A new class of nonparametric tests for second-order stochastic dominance
based on the Lorenz P-P plot
|
Given samples from two non-negative random variables, we propose a family of
tests for the null hypothesis that one random variable stochastically dominates
the other at the second order. Test statistics are obtained as functionals of
the difference between the identity and the Lorenz P-P plot, defined as the
composition between the inverse unscaled Lorenz curve of one distribution and
the unscaled Lorenz curve of the other. We determine upper bounds for such test
statistics under the null hypothesis and derive their limit distribution, to be
approximated via bootstrap procedures. We then establish the asymptotic
validity of the tests under relatively mild conditions and investigate finite
sample properties through simulations. The results show that our testing
approach can be a valid alternative to classic methods based on the difference
of the integrals of the cumulative distribution functions, which require
bounded support and struggle to detect departures from the null in some cases.
|
Tommaso Lando, Sirio Legramanti
|
2023-08-01T06:28:51Z
|
http://arxiv.org/abs/2308.00317v3
|
A new class of nonparametric tests for second-order stochastic dominance based on the Lorenz P-P plot
###### Abstract
Given samples from two non-negative random variables, we propose a family of tests for the null hypothesis that one random variable stochastically dominates the other at the second order. Test statistics are obtained as functionals of the difference between the identity and the Lorenz P-P plot, defined as the composition between the inverse unscaled Lorenz curve of one distribution and the unscaled Lorenz curve of the other. We determine upper bounds for such test statistics under the null hypothesis and derive their limit distribution, to be approximated via bootstrap procedures. We then establish the asymptotic validity of the tests under relatively mild conditions and investigate finite sample properties through simulations. The results show that our testing approach can be a valid alternative to classic methods based on the difference of the integrals of the cumulative distribution functions, which require bounded support and struggle to detect departures from the null in some cases.
**Keywords**: Bootstrap, Lorenz curve, Stochastic order.
## 1 Introduction
The theory of stochastic orders deals with the problem of comparing pairs of random variables, or the corresponding distributions, with respect to concepts such as size, variability (or riskiness), shape, aging, or combinations of these aspects. The main notion in this context is generally referred to as the _usual stochastic order_ or _first-order stochastic dominance_ (FSD), which expresses the concept of one random variable being _stochastically larger_ than the other (Shaked and Shantikumar, 2007). For this reason, FSD has important applications in all those fields in which "more" is preferable to "less", clearly including economics. However, FSD is a restrictive criterion, and is rarely satisfied in real-world applications. This has pushed economic theorists to develop finer concepts, which formed the theory of _stochastic dominance_ (SD), taking into account variability and shape, in addition to size (Hadar and Russell, 1969; Hanoch and Levy, 1969; Whitmore and Findlay, 1978; Fishburn, 1980; Muliere and Scarsini, 1989; Wang and Young, 1998). In this regard, the most commonly used SD relation is the _second-order SD_ (SSD), expressing a preference for the random variable which is stochastically larger or at least less risky, therefore combining size and dispersion into a single preorder. This has applications in economics, finance, operations research, reliability, and many other fields in which decision-makers typically prefer larger or at least less uncertain outcomes.
Given a pair of samples from two unknown random variables of interest, statistical methods may be employed to establish whether such variables are stochastically ordered. In particular, we focus on a major problem in nonparametric statistics, that is testing the null hypothesis of dominance versus the alternative of non-dominance. About SSD, several procedures are available in the literature, some of which are described in the book of Whang (2019). We will now recall a few of these approaches. Davidson and Duclos (2000) proposed a test for SSD based on the distance between the integrals of the cumulative distribution functions (CDF). The problem with this test is that dominance is evaluated only on a fixed grid, which may lead to inconsistency. Barrett and Donald (2003) employed a similar approach, combined with bootstrap methods, to formulate a class of tests that are consistent under the assumption that the distributions under analysis are supported on a compact interval. Donald and Hsu (2016) leveraged a less conservative approach to determine critical values compared to Barrett and Donald (2003), avoiding the use of the least favourable configuration. Note that all the aforementioned papers deal more
generally with finite-order SD, and then obtain SSD as a special case. Alternatively, other works focused on tests for the so-called _Lorenz dominance_, which is a scale-free version of SSD that applies to non-negative random variables. For example, Barrett et al. (2014) proposed a class of consistent tests for the Lorenz dominance that rely on the distance between empirical Lorenz curves. In this case, supports may be unbounded. Critical values are determined by approximating the limit distribution of a stochastic upper bound of the test statistic, similar to Barrett and Donald (2003). Sun and Beare (2021) used a different and less conservative bootstrap approach to improve the power of such tests, and established asymptotic properties under less restrictive distributional assumptions.
The main idea of this paper follows from noticing that some stochastic orders, including FSD, can be expressed and tested via the classic P-P plot, also referred to as the _ordinal dominance curve_ (Hsieh and Turnbull, 1996; Schmid and Trede, 1996; Davidov and Herman, 2012; Beare and Moon, 2015; Tang et al., 2017; Beare and Clarke, 2022). Following a similar approach, we propose a new class of nonparametric tests for SSD between non-negative random variables, in which the test statistic is based on what we refer to as the _Lorenz P-P plot_ (LPP), a kind of P-P plot based on _unscaled Lorenz curves_. More precisely, the LPP is obtained as the functional composition of the inverse unscaled Lorenz curve of one distribution and the unscaled Lorenz curve of the other. The key property of the LPP is that, under SSD, it does not fall below the identity function on the unit interval. Therefore, the LPP stands out as a promising tool for detecting deviations from the null hypothesis of SSD. Namely, any functional that quantifies the positive part of the difference between the identity and the LPP can be used to construct a test statistic. This gives rise to a whole class of tests, depending on the choice of the functional. The \(p\)-values of such tests can then be computed via bootstrap procedures. In particular, we use a similar idea as in Barrett et al. (2014) to asymptotically bound the size of the test, and establish its consistency via the functional delta method. Note that the consistency of our family of tests is established without requiring a bounded support, which represents an advantage compared to classic methods based on integrals of CDFs. Moreover, our simulation studies show that our tests are often more reliable than the established KSB3
test by Barrett and Donald (2003), which may have problems detecting violations of the null hypothesis in some cases.
The LPP may also be used to define families of fractional-degree orders that are "between" FSD and SSD (or beyond SSD) by using a simple transformation. In this regard, we propose a method to define a continuum of SD relations, called _transformed SD_, in the spirit of the recent works by Muller et al. (2017), Lando and Bertoli-Barsotti (2020), and Huang et al. (2020). Interestingly, our tests can be easily adapted to this more general family of orders by simply transforming the samples through the same transformation used in the definition of transformed SD. In particular, FSD can be obtained as a limiting case, in which the empirical LPP of the transformed sample tends to the classic empirical P-P plot. This opens up the possibility of applying our class of tests to a wide family of stochastic orders.
The paper is organised as follows. Section 2 introduces the LPP and describes the idea behind the proposed family of tests. In Section 3, we propose an estimator of the LPP and study its properties. The empirical process associated with the LPP is investigated in Section 4, where we establish a weak convergence result that can be used to derive asymptotic properties of the tests. Namely, in Section 5, we establish bounded size under the null hypothesis and consistency under the alternative one, for both independent and paired samples. The extension to a family of fractional-degree orders is discussed in Section 6. In Section 7, we illustrate the finite sample properties of the tests through simulation studies, focusing on tests arising from sup-norm and integral-based functionals. Finally, Section 8 contains our concluding remarks. All the tables and proofs are reported in the Appendix.
## 2 Preliminaries
Throughout this paper, \(H\) denotes a general CDF supported on the non-negative half line, with finite mean \(\mu_{H}\). In particular, we consider a pair of non-negative random variables \(X\) and \(Y\) with CDFs \(F\) and \(G\), respectively, and finite expectations. When \(F\) and \(G\) are
absolutely continuous, we will denote their densities with \(f\) and \(g\), respectively. Given that stochastic orders depend only on distribution functions, for any order relation \(\succ\) we may write \(X\succ Y\) or \(F\succ G\) interchangeably.
Let \(L^{p}(0,1)\), for \(p\geq 1\), be the class of real-valued functions on the unit interval equipped with the \(L^{p}\) norm \(||.||_{p}\), that is, for \(v\in L^{p}(0,1)\), \(||v||_{p}=(\int_{0}^{1}|v(t)|^{p}dt)^{1/p}\), and \(L^{\infty}(0,1)\) be the class of bounded real-valued functions equipped with the uniform norm \(||.||_{\infty}\). Moreover, let \(C[0,1]\) be the space of continuous real-valued functions on [0,1] also equipped with the uniform norm. Henceforth, "increasing" means "non-decreasing" and "decreasing" means "non-increasing". Given a function \(r\), we denote with \(r_{+}=\max(0,r)\) its positive part. If \(r\) is increasing, \(r^{-1}(y)=\inf\{x:r(x)>y\}\) denotes its right-continuous generalised inverse. Finally, \(\rightsquigarrow\) denotes weak convergence, while \(\to_{p}\) denotes convergence in probability.
### Stochastic dominance
We say that \(X\) is larger than \(Y\) with respect to FSD, denoted as \(X\geq_{1}Y\), if \(F(x)\leq G(x),\forall x\). Equivalently, \(X\geq_{1}Y\) if and only if \(\mathbb{E}u(X)\geq\mathbb{E}u(Y)\) for any increasing function \(u\). Within an economic framework, coherently with the expected-utility approach, one may assume that \(X\) and \(Y\) represent monetary lotteries and \(u\) is a utility function. Under this perspective, FSD represents all non-satiable decision-makers, that is all those with an increasing utility, and therefore can be seen as one of the strongest ordering principles. On the other hand, FSD has a limited range of applicability since, in real-world applications, CDFs often cross and hence distributions cannot be ordered using this criterion.
For this reason, weaker ordering relations have been introduced, among which the most important is the SSD. We say that \(X\) is larger than \(Y\) with respect to SSD, denoted as \(X\geq_{2}Y\), if \(\int_{-\infty}^{x}F(t)dt\leq\int_{-\infty}^{x}G(t)dt,\forall x\). Equivalently, \(X\geq_{2}Y\) if and only if \(\mathbb{E}u(X)\geq\mathbb{E}u(Y)\) for any increasing and concave function \(u\). In economics, SSD generally represents all non-satiable and risk-averse decision-makers, expressing a preference for the random variable with larger values or smaller dispersion. For example, \(X\geq_{2}Y\) entails that \(\mathbb{E}X\geq\mathbb{E}Y\) and, in case of equality, \(\mathrm{Var}(X)\leq\mathrm{Var}(Y)\) and \(\Gamma(X)\leq\Gamma(Y)\), where \(\Gamma\) denotes the Gini
coefficient. The above definitions may be generalised to \(k\)-th order SD, denoted as \(X\geq_{k}Y\), \(k=1,2,3,...\), and represented by the following integral inequality \(F^{[k]}(x)\leq G^{[k]}(x),\forall x\), where \(H^{[1]}=H\) and \(H^{[k]}(x)=\int_{-\infty}^{x}H^{[k-1]}(t)dt\), for \(k\geq 2\).
Besides the classic definitions of SD discussed above, different notions -- often including FSD and SSD as special or limiting cases -- have been studied in the literature. Notable examples are the _inverse_ SD (Muliere and Scarsini, 1989), which is based on recursive integration of the quantile function instead of the CDF, and coincides with classic SD at degrees 1 and 2, and also some fractional-degree SD relations that interpolate FSD and SSD (see, e.g., Muller et al., 2017), as discussed in more detail in Section 6.
### The Lorenz P-P plot
The goal of this paper is to test the null hypothesis \(\mathcal{H}_{0}:X\geq_{2}Y\) versus the alternative \(\mathcal{H}_{1}:X\ngeq_{2}Y\). This requires estimating some kind of distance between the situation of dominance and the situation of non-dominance. The classic solution (Davidson and Duclos, 2000; Barrett and Donald, 2003) is to construct test statistics based on an empirical version of the difference \(\int_{-\infty}^{x}F(t)dt-\int_{-\infty}^{x}G(t)dt\), which is expected to be large, at least at some point, if \(\mathcal{H}_{0}\) is false. However, the main issue with the usual definition of SSD, based on these integrated CDFs, is that these are unbounded in \([0,\infty)\), so there are no uniformly consistent estimators for such integrals unless both distributions have bounded support. Not by chance, Barrett and Donald (2003) require that \(F\) and \(G\) have common bounded support \([0,a]\), with \(a\) finite, to derive consistent tests of stochastic dominance of order \(k\), including SSD. To avoid this limitation, we rely on an alternative but equivalent definition of SSD in terms of the unscaled Lorenz curve, which is always bounded. In particular, we observe that some stochastic orders may be alternatively expressed in terms of a Q-Q plot (Lando et al., 2023) or a P-P plot (Lehmann and Rojo, 1992). Similarly, SSD can be characterised by the modified P-P plot described below.
Let \(H^{-1}\) be the (left-continuous) quantile function of the CDF \(H\). The unscaled Lorenz curve of \(H\) is defined as \(L_{H}(p)=\int_{0}^{p}H^{-1}(t)dt,p\in[0,1]\). The symbol \(L_{H}\) is often used for the scaled version of the Lorenz curve, that is \(L_{H}/\mu_{H}\), while we use \(L_{H}\) to denote the
_unscaled_ Lorenz curve, for the sake of simplicity. Also note that \(L_{H}:[0,1]\to[0,\mu_{H}]\) is increasing, convex and continuous in the unit interval. However, for technical reasons, we also let \(L_{H}(p)=+\infty\) for \(p>1\). With this extension, the generalised inverse function \(L_{H}^{-1}:[0,\infty)\to[0,1]\) is increasing, concave, and continuous in \([0,\mu_{H}]\), while \(L_{H}^{-1}(y)=1\) for \(y>\mu_{H}\).
Now, given the pair of CDFs \(F\) and \(G\), consider the increasing continuous function
\[Z(p)=L_{G}^{-1}\circ L_{F}(p),\qquad p\in[0,1],\]
which takes values in \([0,1\wedge L_{G}^{-1}(\mu_{F})]\), where \(x\wedge y\) denotes the minimum between two real numbers \(x\) and \(y\). Letting \(\nu=1\wedge L_{F}^{-1}(\mu_{G})\), note that, if \(\mu_{G}<\mu_{F}\), then \(\nu<1\) and \(Z(p)=1\) for \(p\in(\nu,1]\). Given some point \(y=L_{F}(p)\), for \(p\in[0,1]\), the graph of \(Z\) is a P-P plot with coordinates \((L_{F}^{-1}(y),L_{G}^{-1}(y))\), which will be referred to as the Lorenz P-P plot (LPP). Within an economic framework, \(Z(p)\) returns the probability given by \(G\) to the average level of income corresponding to \(L_{F}(p)\). In particular, if such a level cannot be reached under \(G\), we have \(Z(p)=1\). The LPP is scale-free, like the classic P-P plot; in particular, if \(X\) and \(Y\) are multiplied by a positive scale factor, then \(Z\) remains unchanged.
To see how \(Z\) can be leveraged to characterize SSD, first recall that \(X\geq_{2}Y\) if and only if \(L_{F}(p)\geq L_{G}(p),\ \forall p\in[0,1]\), see, e.g., Shaked and Shantikumar (2007, Ch. 4). Such a relation can be equivalently expressed in terms of \(Z\):
\[X\geq_{2}Y\iff Z(p)\geq p,\ \forall p\in[0,1]. \tag{1}\]
It is generally complicated to obtain an explicit expression of \(Z\) for parametric probabilistic models. Explicit calculations for the case of a Weibull versus a unit exponential distribution are provided in Example 1 below, while a graphical illustration is given in Figure 1. Differently, and more importantly for our testing purposes, the LPP can be computed quite easily in the empirical case, as discussed in Section 3.
**Example 1**.: Consider the Weibull distribution \(F(x)=1-\exp(-(x/b)^{a})\), with \(a,b>0\), and the unit exponential distribution, \(G(x)=1-\exp(-x)\), both supported on \(x\geq 0\). In
this case, the LPP has the following expression (see the Appendix for details):
\[Z(p)=1\wedge\mathcal{R}\left[1-\exp\left(1+W_{-1}\left(\frac{b\left(\Gamma\left(1 +1/a\right)-\Gamma\left(1+1/a,-\log(1-p)\right)\right)-1}{e}\right)\right) \right],\]
where \(\mathcal{R}\) indicates the real part of a complex number, \(\Gamma(\cdot,x)\) is the incomplete gamma function and \(W_{-1}\) is the Lambert function (Corless et al., 1996). Using the properties of SSD and the crossing conditions described in Shaked and Shantikumar (2007), it is easy to verify that \(F\geq_{2}G\) if and only if \(a\geq 1\) and \(\mu_{F}=b\)\(\Gamma\left(1+1/a\right)\geq\mu_{G}=1\). Figure 1 shows the behaviour of \(Z\) when \(F\geq_{2}G\) and \(F\not\geq_{2}G\).
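As a numerical illustration of Example 1 (not part of the original derivation), the LPP can also be evaluated without the closed form by integrating the parametric quantile functions on a grid. The sketch below, assuming only numpy, checks that \(Z(p)\geq p\) for a parameter choice under which \(F\geq_{2}G\); the grid size and the trapezoidal approximation are our own choices.

```python
import numpy as np


def unscaled_lorenz(quantile, grid):
    """L_H(p) = int_0^p H^{-1}(t) dt, approximated by the trapezoidal rule on `grid`."""
    q = quantile(grid)
    return np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(grid))))


a, b = 2.0, 1.5                                   # Weibull shape/scale as in Example 1
grid = np.linspace(0.0, 1.0 - 1e-6, 20001)
L_F = unscaled_lorenz(lambda t: b * (-np.log1p(-t)) ** (1.0 / a), grid)   # Weibull quantile
L_G = unscaled_lorenz(lambda t: -np.log1p(-t), grid)                      # unit exponential quantile

# Z(p) = L_G^{-1}(L_F(p)); values beyond the mean of G are mapped to 1.
Z = np.where(L_F <= L_G[-1], np.interp(L_F, L_G, grid), 1.0)
print("min of Z(p) - p:", (Z - grid).min())       # approximately non-negative when F >=_2 G
```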
### Detecting deviations from SSD
Denote the identity function by \(I\). The representation of SSD in (1) can be leveraged to construct a test. In fact, \(\mathcal{H}_{0}:X\geq_{2}Y\) is false if and only if \(I-Z\) is strictly positive at some point in the unit interval. Accordingly, departures from SSD can be detected by quantifying the positive part of the difference between \(I\) and \(Z\). This may be represented by some functional \(\mathcal{T}\) applied to the difference \(I-Z\)\(\in C[0,1]\), with \(\mathcal{T}\) satisfying some desirable properties. In particular, we propose a family of test statistics obtained as empirical versions of the functionals
\[\mathcal{T}_{p}(I-Z)=||(I-Z)_{+}||_{p},\]
for \(p\geq 1\), including \(p=\infty\).
Figure 1: The LPP in Example 1 for: \(a\)=2, \(b\)=1.5 (dashed); \(a\)=2, \(b\)=0.8 (dotted); \(a\)=0.6, \(b\)=1.2 (dot-dashed).
It can be shown that functionals of this type satisfy the following properties.
**Proposition 1**.: _For every \(v_{1},v_{2}\in C[0,1]\) and for every \(p\geq 1\),_
1. _If_ \(v_{1}(x)=0,\forall x\in[0,1]\)_, then_ \(\mathcal{T}_{p}(v_{1})=0\) _;_
2. _if_ \(v_{1}(x)\leq 0,\forall x\in[0,1]\)_, then_ \(\mathcal{T}_{p}(v_{2})\leq\mathcal{T}_{p}(v_{2}-v_{1}),\forall v_{2}\)_;_
3. _if_ \(v_{1}(x)>0\) _for some_ \(x\in[0,1]\)_, then_ \(\mathcal{T}_{p}(v_{1})>0\)_;_
4. \(|\mathcal{T}_{p}(v_{1})-\mathcal{T}_{p}(v_{2})|\leq||v_{1}-v_{2}||_{\infty}\)_;_
5. \(c\mathcal{T}_{p}(v_{1})=\mathcal{T}_{p}(cv_{1})\)_, for any positive constant_ \(c>0\)_;_
6. \(\mathcal{T}_{p}\) _is convex._
7. _For any_ \(p_{2}\geq p_{1}\geq 1\)_,_ \(\mathcal{T}_{p_{2}}(v_{1})\geq\mathcal{T}_{p_{1}}(v_{1})\)_._
Henceforth, we will denote simply by \(\mathcal{T}\) any general functional satisfying the above properties 1)-6). These properties determine a family of functionals which may be used to obtain consistent tests. In particular, properties 2) and 3) completely characterise SSD, in that \(\mathcal{T}(I-Z)=0\) if and only if \(X\geq_{2}Y\), while \(\mathcal{T}(I-Z)>0\) if and only if \(X\not\geq_{2}Y\). Differently, property 7) deals just with the class \(\mathcal{T}_{p}\) and shows that functionals of this kind measure the deviations from \(\mathcal{H}_{0}\) in a monotone way, that is, smaller (larger) values of \(p\) downsize (emphasize) deviations, represented by the function \((I-Z)_{+}\). Proposition 1 generalises Lemma 2 of Barrett et al. (2014), which deals with the special cases of \(\mathcal{T}_{1}\) and \(\mathcal{T}_{\infty}\). They introduced tests for the Lorenz dominance by applying \(\mathcal{T}\) to the difference between the (scaled) Lorenz curves, that is \(\mathcal{T}(L_{G}/\mu_{G}-L_{F}/\mu_{F})\). One may extend their approach to SSD by considering \(\mathcal{T}(L_{G}-L_{F})\) (see, e.g., Zhuang et al., 2023). However, in this paper, we propose leveraging \(\mathcal{T}(I-Z)\), which has some advantages over \(\mathcal{T}(L_{G}-L_{F})\). For instance, \(I-Z\) is scale-free by properties of the LPP. On the contrary, if \(X\) and \(Y\) are multiplied by a positive scale factor \(c>0\), then the difference between the unscaled Lorenz curves becomes \(c(L_{G}-L_{F})\). Moreover, \(|L_{G}-L_{F}|<\max(\mu_{F},\mu_{G})\) whereas \(|I-Z|<1\).
## 3 Estimation of the LPP
### Sampling assumptions
Let \(\mathcal{X}=\{X_{1},...,X_{n}\}\) and \(\mathcal{Y}=\{Y_{1},...,Y_{m}\}\) be i.i.d. random samples from \(F\) and \(G\), respectively. As in Barrett et al. (2014), we will deal with two different sampling schemes: independent sampling and matched pairs. In the first scheme, the two samples \(\mathcal{X}\) and \(\mathcal{Y}\) are independent of each other, and sample sizes \(n\) and \(m\) may differ. In contrast, in the matched-pairs scheme, \(n=m\) and we have \(n\) i.i.d. pairs \(\{(X_{1},Y_{1}),...,(X_{n},Y_{n})\}\) drawn from a bivariate distribution with \(F\) and \(G\) as marginal CDFs. For both sampling schemes, we will consider the asymptotic regime in which \(n\rightarrow\infty\), \(\lim_{n\rightarrow\infty}nm/(n+m)=\infty\) and \(\lim_{n\rightarrow\infty}n/(n+m)=\lambda\in[0,1]\). These assumptions are quite standard in the literature (see, e.g., Barrett and Donald, 2003) and imply that, as \(n\) diverges, \(m\) also goes to infinity with the same order.
### Empirical LPP
The abovementioned random samples \(\mathcal{X}\) and \(\mathcal{Y}\) yield the empirical CDFs
\[F_{n}(x)=(1/n)\sum\nolimits_{i=1}^{n}\mathds{1}(X_{i}\leq x)\quad\text{and} \quad\ G_{m}(x)=(1/m)\sum\nolimits_{i=1}^{m}\mathds{1}(Y_{i}\leq x),\]
respectively. We denote the order statistics of rank \(k\) from \(\mathcal{X}\) and \(\mathcal{Y}\) with \(X_{(k)}\) and \(Y_{(k)}\), and their sample means with \(\overline{X}_{n}\) and \(\overline{Y}_{m}\), respectively. By the plugin method, the empirical counterparts of \(L_{F}\) and \(L_{G}^{-1}\) are \(L_{F_{n}}\) and \(L_{G_{m}}^{-1}\), where \(L_{F_{n}}(p)=\int_{0}^{p}F_{n}^{-1}(t)dt\) for \(p\in[0,1]\), \(L_{G_{m}}\) is defined similarly, and \(L_{G_{m}}^{-1}\) is the inverse of \(L_{G_{m}}\). Coherently with our definition of \(L_{G}^{-1}\), we let \(L_{G_{m}}^{-1}(p)=1\) for \(p>\overline{Y}_{m}\). Note that \(L_{F_{n}}\) coincides with the empirical unscaled Lorenz curve (Shorrocks, 1983), that is a piecewise linear function joining the points \((k/n,(1/n)\sum_{i=0}^{k}X_{(i)})\), \(k=0,...,n\), with \(X_{(0)}:=0\).
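For concreteness, a minimal numpy sketch of the piecewise-linear empirical LPP \(Z_{n,m}\) and of the sup-norm statistic \(\mathcal{T}_{\infty}(I-Z_{n,m})\) is given below; the function names, the grid-based evaluation, and the assumption of non-negative samples are our own choices rather than part of the formal construction.

```python
import numpy as np


def empirical_unscaled_lorenz(sample):
    """Node points of L_{H_n}: (k/n, (1/n) * sum of the k smallest observations), k = 0..n."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    p = np.arange(n + 1) / n
    L = np.concatenate(([0.0], np.cumsum(x) / n))
    return p, L


def empirical_lpp(sample_x, sample_y, grid):
    """Z_{n,m}(p) = L_{G_m}^{-1}(L_{F_n}(p)) on `grid`, with L_{G_m}^{-1} = 1 beyond the sample mean of Y."""
    p_x, L_F = empirical_unscaled_lorenz(sample_x)
    p_y, L_G = empirical_unscaled_lorenz(sample_y)
    L_F_grid = np.interp(grid, p_x, L_F)            # piecewise-linear L_{F_n}
    Z = np.interp(L_F_grid, L_G, p_y)               # piecewise-linear inverse of L_{G_m}
    return np.where(L_F_grid > L_G[-1], 1.0, Z)


# Example: sup-norm statistic T_infinity(I - Z_{n,m}) on an equally spaced grid.
grid = np.linspace(0.0, 1.0, 1001)
rng = np.random.default_rng(0)
Z_nm = empirical_lpp(rng.weibull(2.0, 500) * 1.5, rng.exponential(1.0, 400), grid)
T_inf = np.maximum(grid - Z_nm, 0.0).max()
```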
Our definitions of \(L_{F_{n}}\) and \(L_{G_{m}}^{-1}\) differ from the ones, based on step functions, in Csorgo
et al. (2013):
\[\widetilde{L}_{F_{n}}(p)=\begin{cases}(1/n)\sum_{i=1}^{[np]+1}X_{(i)}&p\in[0,1),\\ \overline{X}_{n}&p=1,\\ +\infty&p>1,\end{cases}\]
\[\widetilde{L}_{G_{m}}^{-1}(p)=\begin{cases}0&p\in[0,Y_{(1)}/m]\\ (k-1)/m&p\in[(1/m)\sum_{i=1}^{k-1}Y_{(i)},(1/m)\sum_{i=1}^{k}Y_{(i)}),\quad 2 \leq k\leq m,\\ 1&p\geq\overline{Y}_{m}\end{cases}\]
where clearly \(\widetilde{L}_{G_{m}}^{-1}(p)=\inf\{u:\widetilde{L}_{G_{m}}(u)>p\}\). Note that \(L_{F_{n}}\) and \(\widetilde{L}_{F_{n}}\) coincide at points \(X_{(i)}\), \(i=1,..,n\), and, likewise, \(L_{G_{m}}^{-1}\) and \(\widetilde{L}_{G_{m}}^{-1}\) coincide at points \(i/n\), so these alternative empirical versions of \(L_{F}\) and \(L_{G}^{-1}\) are clearly asymptotically equivalent.
According to the different empirical versions of \(L_{F}\) and \(L_{G}^{-1}\), we may obtain different estimators of \(Z\). One may consider \(Z_{n,m}=L_{G_{m}}^{-1}\circ L_{F_{n}}\), which is a continuous piecewise linear function, or alternatively \(\widetilde{Z}_{n,m}=\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}\), which is a step function with jumps in the points \(i/n\) (\(i=1,...,n\)), taking values in \(\{j/m:j=1,...,m\}\). In an economic framework, \(\widetilde{L}_{G_{m}}^{-1}\) gives the relative frequency of observations from \(Y\) whose level of income is at most \((1/m)\sum_{j=1}^{k}Y_{(j)}\). Therefore, \(\widetilde{Z}_{n,m}(k/n)\) returns the relative frequency of observations from \(Y\) whose level of income is at most \((1/n)\sum_{i=1}^{k}X_{(i)}\). Note that the value of \(Z_{n,m}\) at its "node" points \(i/n\) does not generally coincide with the value of \(\widetilde{Z}_{n,m}\) at its jump points. In this paper, we will use \(\widetilde{Z}_{n,m}\) or \(Z_{n,m}\) as is more convenient, since the two are asymptotically equivalent. In fact, the sup-distance among \(\widetilde{Z}_{n,m}\) and \(Z_{n,m}\) tends to zero as \(n\) and \(m\) diverge, as established in the following proposition.
**Proposition 2**.: _For any \(n,m>0\), \(\sup|\widetilde{Z}_{n,m}-Z_{n,m}|\leq 1/m\). Hence, as \(m\to\infty\), \(\sup|\widetilde{Z}_{n,m}-Z_{n,m}|\to 0\)._
In our asymptotic scenario, when \(n\to\infty\) we also have \(m\to\infty\), hence the second part of Proposition 2 holds. Moreover, based on the strong uniform consistency of Lorenz curve estimators and their inverse functions (Goldie, 1977; Csorgo et al., 2013), we can prove the strong uniform consistency of \(Z_{n,m}\) and \(\widetilde{Z}_{n,m}\).
**Proposition 3**.: _As \(n,m\rightarrow\infty\), \(Z_{n,m}\to Z\) and \(\widetilde{Z}_{n,m}\to Z\) a.s. and uniformly in \([0,1]\)._
## 4 Weak convergence of the LPP process
The empirical process associated with \(Z\), henceforth referred to as the LPP process, may be useful to characterize the limit distribution of the test statistic under the null hypothesis of SSD. In this section, we study the asymptotic properties of such a process. Define the LPP process as
\[\mathcal{Z}_{n}(p)=\sqrt{r_{n}}(Z_{n,m}(p)-Z(p)),\qquad p\in[0,1],\]
where \(r_{n}=nm/(n+m)\), and let \(\nu_{n}=1\wedge L_{F_{n}}^{-1}(\overline{Y}_{m})\) be the empirical counterpart of \(\nu=1\wedge L_{F}^{-1}(\mu_{G})\). For \(\nu<1\) we know that \(Z(t)=1\) when \(t\in(\nu,1]\). In this case we have \(\sup|\mathcal{Z}_{n}\mathbb{1}(\nu,1]|\to 0\) a.s., since also \(Z_{n,m}(t)=1\) for \(t\in(\nu_{n},1]\) and \(\nu_{n}\rightarrow\nu\) a.s. In other words, the interval \((\nu_{n},1]\) contains no information. Accordingly, we are particularly interested in the asymptotic behaviour of \(\mathcal{Z}_{n}\) restricted to \([0,\nu_{n}]\), namely \(\mathcal{Z}_{n}\mathbb{1}[0,\nu_{n}]\). Weak convergence of the LPP process can be derived under the following assumptions.
**Assumption 1**.: Both \(F\) and \(G\) are continuously differentiable with strictly positive density, and have a finite moment of order \(2+\epsilon\) for some \(\epsilon>0\). Moreover, \(F(0)=G(0)=0\).
**Assumption 2**.: There exists some number \(c>0\) such that \(G^{-1}(0)=c\).
The latter assumption does not represent a limitation in terms of applicability. In fact, if \(G^{-1}(0)=0\), one can apply the test to the shifted samples \(\mathcal{X}+\epsilon\) and \(\mathcal{Y}+\epsilon\), for some small \(\epsilon>0\), recalling that \(X\geq_{2}Y\) if and only if \(X+\epsilon\geq_{2}Y+\epsilon\). In our simulations we set \(\epsilon=10^{-4}\), obtaining results that are almost indistinguishable from those under \(\epsilon=0\). However, since the unscaled Lorenz curve is not translation invariant, the outcome of any test based on it (such as, e.g., Andreoli, 2018; Zhuang et al., 2023) may depend on the shift \(\epsilon\). Actually, in our experiments, we noted that larger values of \(\epsilon\) may even improve the power of the test.
The following theorem establishes the weak convergence of \(\mathcal{Z}_{n}\) under Assumptions 1-2, leveraging some recent results in Kaji (2018) that enable the derivation of the Hadamard
differentiability of the map from CDFs to quantile functions. As discussed in Sun and Beare (2021, Section 2.4), this extends the applicability of earlier Hadamard differentiability conditions, based on stronger distributional assumptions such as bounded supports (Van der Vaart and Wellner, 1996, Lemma 3.9.23). Then, the weak convergence of \(\mathcal{Z}_{n}\) follows by the functional delta method (Van der Vaart and Wellner, 1996, Sect. 3.9).
Let \(\mathcal{B}\) be a centered Gaussian element of \(C[0,1]\times C[0,1]\) with covariance function \(Cov(\mathcal{B}(x_{1},y_{1}),\mathcal{B}(x_{2},y_{2}))=C(x_{1}\wedge x_{2},y_{ 1}\wedge y_{2})-C(x_{1},y_{1})C(x_{2},y_{2})\). Under the independent-sampling scheme, \(C(x_{1},y_{1})=x_{1}y_{1}\) is the product copula, whereas, under the matched-pairs scheme, \(C\) is the copula associated with the pair \((X_{i},Y_{i})\), \(i=1,...,n\). Now, let \(\mathcal{B}_{1}(x_{1})=\mathcal{B}(x_{1},1)\) and \(\mathcal{B}_{2}(x_{2})=\mathcal{B}(1,x_{2})\). The random elements \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) are Brownian bridges that are independent under the independent-sampling scheme, but may be dependent under the matched-pairs one.
**Theorem 1**.: _Under Assumptions 1-2 and both independent-sampling and matched-pairs schemes, we have \(\sqrt{r_{n}}(Z_{n,m}-Z)\rightsquigarrow\mathcal{Z}\mathds{1}[0,\nu]\) in \(C[0,1],\) where_
\[\mathcal{Z}=\frac{-\sqrt{\lambda}\int_{0}^{Z(\cdot)}\mathcal{B}_{2}(p)\,dG^{-1}(p)+\sqrt{1-\lambda}\int_{0}^{\cdot}\mathcal{B}_{1}(p)\,dF^{-1}(p)}{G^{-1}\circ Z}.\]
It is interesting to observe that, if \(X=_{d}Y\), the result of Theorem 1 boils down to
\[\sqrt{r_{n}}(Z_{n,m}(t)-t)\rightsquigarrow\frac{1}{F^{-1}(t)}\int_{0}^{t}(\sqrt {\lambda}\mathcal{B}_{2}(u)-\sqrt{1-\lambda}\mathcal{B}_{1}(u))dF^{-1}(u)\]
\[=\frac{1}{F^{-1}(t)}\int_{0}^{t}\widetilde{\mathcal{B}}(u)dF^{-1}(u)\quad \text{in }C[0,1],\]
where \(\widetilde{\mathcal{B}}\) is the Brownian bridge defined as \(\widetilde{\mathcal{B}}=-\sqrt{1-\lambda}\mathcal{B}_{1}+\sqrt{\lambda} \mathcal{B}_{2}\). Finally, note that, by the asymptotic equivalence implied by Proposition 2, all the results in this section still hold if one replaces \(Z_{n,m}\) with \(\widetilde{Z}_{n,m}\).
## 5 Asymptotic properties of the test
As discussed in Section 2.3, deviations from \(\mathcal{H}_{0}:X\geq_{2}Y\) can be measured via the test statistic \(\mathcal{T}_{n}=\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m}).\) Intuitively, we reject \(\mathcal{H}_{0}\) if \(\mathcal{T}_{n}\) is large enough.
However, since the null hypothesis is nonparametric, the main issue is how to determine the distribution of \(\mathcal{T}_{n}\), or alternatively of an upper bound for \(\mathcal{T}_{n}\), under \(\mathcal{H}_{0}\). Following the approach of Barrett et al. (2014), it is easily seen that, under \(\mathcal{H}_{0}\), the test statistic \(\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m})\) is dominated by \(\sqrt{r_{n}}\ \mathcal{T}(Z-Z_{n,m})\), which therefore can be used to simulate \(p\)-values or critical values via bootstrap, thus ensuring that the size of the test is asymptotically bounded by some arbitrarily small probability \(\alpha\). By the continuous mapping theorem, \(\sqrt{r_{n}}\ \mathcal{T}(Z-Z_{n,m})\) is asymptotically distributed as \(\mathcal{T}(\mathcal{Z})\), allowing us to derive large-sample properties of the test. The limit behaviour of \(\mathcal{T}_{n}\) under the null and the alternative hypotheses is established in the following lemma.
**Lemma 1**.:
1. _Under_ \(\mathcal{H}_{0}\)_,_ \(\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m})\leq\sqrt{r_{n}}\ \mathcal{T}(Z-Z_{n,m})\leadsto\mathcal{T}( \mathcal{Z})\)_. Moreover, for any_ \(\alpha<1/2\)_, the_ \((1-\alpha)\) _quantile of_ \(\mathcal{T}(\mathcal{Z})\) _is positive, finite, and unique._
2. _Under_ \(\mathcal{H}_{1}\)_,_ \(\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m})\rightarrow_{p}\infty\)_._
From a practical point of view, the limit distribution of \(\sqrt{r_{n}}\mathcal{T}(Z-Z_{n,m})\) under the null hypothesis may be approximated using a bootstrap approach, as discussed in the next subsection.
### Bootstrap decision rule
Let us denote the bootstrap estimators of the empirical CDFs \(F_{n}\) and \(G_{m}\) as \(F_{n}^{*}\) and \(G_{m}^{*}\), respectively:
\[F_{n}^{*}(x)=(1/n)\sum\nolimits_{i=1}^{n}M_{i}^{1}\mathbb{1}\,(X_{i}\leq x),\qquad G_{m}^{*}(x)=(1/m)\sum\nolimits_{i=1}^{m}M_{i}^{2}\mathbb{1}\,(Y_{i}\leq x),\]
where \(M^{1}=(M_{1}^{1},...,M_{n}^{1})\) and \(M^{2}=(M_{1}^{2},...,M_{m}^{2})\) are independent of the data and are drawn from a multinomial distribution according to the chosen sampling scheme. In particular, under the independent-sampling scheme, \(M^{1}\) and \(M^{2}\) are independently drawn from multinomial distributions with uniform probabilities over \(n\) and \(m\) trials, respectively. Under the matched-pairs scheme, we have \(M^{1}=M^{2}\) drawn from the multinomial
distribution with uniform probabilities over \(n=m\) trials, which means that we sample (with replacement) pairs of data, from the \(n\) pairs \(\{(X_{1},Y_{1}),...,(X_{n},Y_{n})\}\). Correspondingly, by applying the definitions in Section 2, we obtain the bootstrap estimators of the unscaled Lorenz curves, denoted with \(L_{F_{n}^{*}}\) and \(L_{G_{m}^{*}}\), as well as the inverse \(L_{G_{m}^{*}}^{-1}\), and we define \(Z_{n,m}^{*}=L_{G_{m}^{*}}^{-1}\circ L_{F_{n}^{*}}\). As is shown below, the random process \(\sqrt{r_{n}}\ \mathcal{T}(Z_{n,m}-Z_{n,m}^{*})\) has the same limiting distribution as \(\mathcal{T}(\mathcal{Z})\). Therefore, bootstrap \(p\)-values are determined by
\[p=P\{\sqrt{r_{n}}\ \mathcal{T}(Z_{n,m}-Z_{n,m}^{*})>\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m})\},\]
and can be approximated, based on \(K\) bootstrap replicates, by
\[p\approx(1/K)\sum\nolimits_{k=1}^{K}\mathbb{1}\{\sqrt{r_{n}}\ \mathcal{T}(Z_{n,m}-Z_{k;n,m}^{*})>\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m})\},\]
where \(Z_{k;n,m}^{*}\) is the \(k\)-th resampled realisation of \(Z_{n,m}^{*}\). As usual, the test rejects \(\mathcal{H}_{0}\) if \(p<\alpha\). The asymptotic behaviour of the test is addressed by the following proposition.
**Proposition 4**.: _Under Assumptions 1-2 and the sampling schemes in Section 3.1,_
1. _If_ \(\mathcal{H}_{0}\) _is true,_ \(\lim_{n\to\infty}P\{\text{reject }\mathcal{H}_{0}\}\leq\alpha\)_;_
2. _If_ \(\mathcal{H}_{1}\) _is true,_ \(\lim_{n\to\infty}P\{\text{reject }\mathcal{H}_{0}\}=1\)_._
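In practice, the decision rule above is straightforward to code. The following Python sketch is ours (it is not the authors' R implementation linked in Section 7): it uses the step-function version \(\widetilde{Z}_{n,m}\) of the LPP, which by Proposition 2 differs from \(Z_{n,m}\) by at most \(1/m\), together with the sup-type functional; the factor \(\sqrt{r_{n}}\) is omitted because it multiplies both sides of the comparison defining the \(p\)-value and cancels.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_steps(x, w):
    """Jump points of the empirical unscaled Lorenz curve of sample x with
    (bootstrap) multinomial weights w: abscissae p_k and ordinates L(p_k)."""
    order = np.argsort(x)
    xs, ws = x[order], w[order]
    return np.cumsum(ws) / ws.sum(), np.cumsum(ws * xs) / ws.sum()

def step_eval(p, L, t):
    """Right-continuous step function through (p_k, L_k); equal to 0 before p_1."""
    idx = np.searchsorted(p, t, side="right") - 1
    return np.where(idx < 0, 0.0, L[np.clip(idx, 0, len(L) - 1)])

def step_inverse(p, L, s):
    """Generalised inverse inf{q : L(q) >= s}; values beyond the last jump are
    capped at 1, since the LPP is only identified on [0, nu]."""
    return p[np.clip(np.searchsorted(L, s, side="left"), 0, len(p) - 1)]

def lpp(x, y, wx, wy, grid):
    """Step-function LPP  Z(t) = L_G^{-1}(L_F(t))  evaluated on a grid in [0, 1]."""
    px, Lx = lorenz_steps(x, wx)
    py, Ly = lorenz_steps(y, wy)
    return step_inverse(py, Ly, step_eval(px, Lx, grid))

def t_sup(f):
    """Sup-type functional T(f) = sup_t (f(t))_+ over the grid values f."""
    return np.maximum(f, 0.0).max()

def bootstrap_pvalue(x, y, K=500):
    """Bootstrap p-value for H_0: X >=_2 Y under the independent-sampling scheme."""
    n, m = len(x), len(y)
    grid = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)
    z_hat = lpp(x, y, np.ones(n), np.ones(m), grid)
    stat = t_sup(grid - z_hat)                       # T(I - Z_{n,m})
    exceed = 0
    for _ in range(K):
        wx = rng.multinomial(n, np.full(n, 1.0 / n))
        wy = rng.multinomial(m, np.full(m, 1.0 / m))
        z_star = lpp(x, y, wx, wy, grid)
        exceed += t_sup(z_hat - z_star) > stat       # T(Z_{n,m} - Z*_{n,m}) vs T(I - Z_{n,m})
    return exceed / K
```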
## 6 Extension to fractional-degree SD
An important topic in SD theory is represented by SD relations that are "between" FSD and SSD. This is motivated by the fact that FSD is a strong requirement, but, on the other hand, SSD corresponds to total risk-aversion, which is quite restrictive in some cases (Muller et al., 2017). There are different ways to define classes of orders that interpolate between FSD and SSD, and each leads to a different family of SD relations, typically parametrised by some real number that represents the strength of the dominance. The first attempt in this direction is ascribable to Fishburn (1980), who used fractional-degree integration to interpolate the classic \(k\)-th order SD at all integer orders \(k\geq 1\). More recently, Muller et al. (2017), Huang et al. (2020), and Lando and Bertoli-Barsotti (2020)
proposed different parametrizations, with different interpretations and properties, which coincide with classic SD only at orders 1 and 2. In this section, we introduce a simple but very general family of fractional-degree orders, which have the advantage that they can be easily tested using the LPP method discussed earlier. Such a family can be defined as follows.
Let \(\mathcal{U}\) be the family of increasing absolutely continuous functions \(u\) over the non-negative half line. Under an economic perspective, \(u\) may be understood as a utility function, assigning values to monetary outcomes. For some \(u\in\mathcal{U}\), we say that \(X\) dominates \(Y\) with respect to \(u\)-_transformed stochastic dominance_ (\(u\)-TSD), and write \(X\geq_{u}^{T}Y\), if \(u(X)\geq_{2}u(Y)\). TSD has been studied by Meyer (1977), who denoted it as SSD with respect to \(u\), and by Huang et al. (2020), who focused on a particular parametric choice of \(u\). In fact, since \(u\)-TSD represents SSD between the transformed random variables \(u(X)\) and \(u(Y)\), then it can be simply expressed through the LPP of \(u(X)\) and \(u(Y)\).
The behaviour of TSD clearly depends on the choice of \(u\). To understand this behaviour, let \(u,\tilde{u}\in\mathcal{U}\) be two transformation functions defined on the same interval. Generalizing Chan et al. (1990), we say that \(u\) is _more convex_ than \(\tilde{u}\) and write \(u\geq_{c}\tilde{u}\) iff \(u\circ\tilde{u}^{-1}\) is convex. The following theorem shows that TSD can be equivalently expressed in terms of expected utilities, thus generalizing Theorem 1 of Huang et al. (2020).
**Theorem 2**.: \(X\geq_{u}^{T}Y\) _if and only if \(\mathbb{E}(\phi(X))\geq\mathbb{E}(\phi(Y))\), for every increasing utility \(\phi\) such that \(u\geq_{c}\phi\)._
It is easy to see that, if \(u\) and \(\phi\) are twice differentiable, the condition \(u\geq_{c}\phi\) is equivalent to \(\rho_{\phi}(x)\geq\rho_{u}(x),\forall x\), where \(\rho_{g}(x)=-g^{\prime\prime}(x)/g^{\prime}(x)\) is the Arrow-Pratt index of absolute risk aversion associated with the utility function \(g\). Moreover, the following general properties hold.
**Theorem 3**.:
1. _If_ \(u_{1}\geq_{c}u_{2}\) _then_ \(X\geq_{u_{1}}^{T}Y\implies X\geq_{u_{2}}^{T}Y\)_;_
2. \(X\geq_{1}Y\) _if and only if_ \(X\geq_{u}^{T}Y,\forall u\in\mathcal{U}\)_._
Intuitively, the degree of convexity of the function \(u\) determines the strength of the SD relation, and SSD is obtained by taking \(u\) to be the identity function, whereas FSD is obtained when \(u\) is infinitely "steep".
Families of utility functions within \(\mathcal{U}\) can be obtained easily by composing the quantile function and the CDF of two absolutely continuous random variables. For example, one may consider the class of utility functions studied by Huang et al. (2020) and given by \(u_{c}(x)=\exp\left((1/c-1)x\right)\), for \(c\in(0,1)\). Since this paper deals with tests for non-negative random variables, we focus on a simpler choice, that is \(u_{\theta}(x)=x^{\theta}\), with \(\theta\geq 0\). Correspondingly, hereafter we denote the ordering relation \(X\geq_{u_{\theta}}^{T}Y\) with \(X\geq_{1+1/\theta}^{T}Y\), thus yielding a continuum of SD relations that get stronger and stronger as \(\theta\) grows. By Theorem 3, this order is characterised by those utility functions that have an Arrow-Pratt index larger than or equal to \((\theta-1)/x\).
Since \(X\geq_{1+1/\theta}^{T}Y\) is equivalent to \(X^{\theta}\geq_{2}Y^{\theta}\), a test for \(\mathcal{H}_{0}^{1+1/\theta}:X\geq_{1+1/\theta}^{T}Y\) versus \(\mathcal{H}_{1}^{1+1/\theta}:X\not\geq_{1+1/\theta}^{T}Y\) is readily obtained by applying our method to the LPP of the transformed random samples \(\mathcal{X}^{\theta}\) and \(\mathcal{Y}^{\theta}\). In particular, we consider the _generalised LPP_, given by \(\widetilde{Z}_{n,m}^{\theta}=(\widetilde{L}_{G_{m}}^{\theta})^{-1}\circ \widetilde{L}_{F_{n}}^{\theta}\), where \(\widetilde{L}_{F_{n}}^{\theta}\) and \(\widetilde{L}_{G_{m}}^{\theta}\) are the empirical (step-valued) unscaled Lorenz curves corresponding to the transformed samples \(\{X_{i}^{\theta}:i=1,...,n\}\), and \(\{Y_{j}^{\theta}:j=1,...,m\}\). \(\widetilde{Z}_{n,m}^{\theta}\) is a generalised P-P plot, in that it coincides with \(\widetilde{Z}_{n,m}\) for \(\theta=1\). More interestingly, we prove that, as \(\theta\rightarrow\infty\), \(\widetilde{Z}_{n,m}^{\theta}\) tends to the classic P-P plot of the non-transformed samples, that is to \(G_{m}\circ F_{n}^{-1}\), as depicted in Figure 2. In particular, one may always find some \(\theta\) large enough such that the two P-P plots coincide, meaning that our tests may be also applied to FSD, expressed as \(G\circ F^{-1}(x)\geq x\). (tests for FSD based on the P-P plot have been studied, e.g., by Davidov and Herman (2012) and Beare and Clarke (2022)). In fact, this idea is coherent with the intuition that, for \(\theta\rightarrow\infty\), the stochastic inequality \(X\geq_{1+1/\theta}^{T}Y\) reduces to \(X\geq_{1}Y\), expressed as \(F(x)\leq G(x),\forall x\), as formally established in the following theorem.
**Theorem 4**.:
1. _For_ \(\theta\rightarrow\infty\)_,_ \(X\geq_{1}Y\) _if and only if_ \(X\geq_{1+1/\theta}^{T}Y\)_;_
2. _There exists some_ \(\theta_{0}\) _such that, for_ \(\theta>\theta_{0}\)_, the generalised LPP coincides with the classic P-P plot, that is,_ \(\widetilde{Z}_{n,m}^{\theta}=G_{m}\circ F_{n}^{-1}\)_._
To test FSD as a limit case of TSD, one should choose a value of \(\theta\) that ensures the result above. However, if \(\theta\) is too large, computations may be difficult, depending on the precision of the software used. We recommend using \(\theta=50\), which corresponds to testing \(\geq_{1.02}^{T}\), for a good approximation of FSD.
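In practice, testing \(\geq_{1+1/\theta}^{T}\) therefore amounts to feeding the power-transformed samples to any SSD routine. A minimal sketch, reusing the hypothetical `bootstrap_pvalue` helper from the example in Section 5.1:

```python
def tsd_pvalue(x, y, theta, K=500):
    """Bootstrap p-value for H_0: X >=^T_{1+1/theta} Y, i.e. SSD between x**theta
    and y**theta; theta = 1 recovers SSD, while theta = 50 approximates FSD."""
    # dividing by the pooled maximum leaves both orders unchanged and avoids
    # numerical overflow when theta is large
    c = max(x.max(), y.max())
    return bootstrap_pvalue((x / c) ** theta, (y / c) ** theta, K=K)
```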
## 7 Simulations
We perform numerical analyses to investigate the finite-sample properties of the proposed tests. In all simulations, we consider a significance level \(\alpha=0.1\), and run 500 experiments,
Figure 2: The P-P plot (dashed) of two samples of size \(n=m=20\) versus the generalised LPP \(\widetilde{Z}_{n,m}^{\theta}\) (solid), for \(\theta=1,2,5,10\). In this example, the plots coincide for \(\theta\geq 48\).
with 500 bootstrap replicates for each experiment. For simplicity, we set \(n=m\), so henceforth we will drop the subscript \(m\). Namely, we consider \(n=m=50,100,200,500,1000\). The shift \(\epsilon\) is set to \(10^{-4}\), as discussed in Section 4. All computations have been performed in R, and the code is openly available at [https://github.com/siriolegramanti/SSD](https://github.com/siriolegramanti/SSD).
In light of Proposition 2, instead of \(Z_{n}\) we use \(\widetilde{Z}_{n}\), which can be computed faster. Accordingly, we consider two different test statistics, namely \(\mathcal{T}_{\infty}(I-\widetilde{Z}_{n})\) and \(\mathcal{T}_{1}(I-\widetilde{Z}_{n})\); see Section 2.3. For \(n=m\), \(\mathcal{T}_{\infty}\) and \(\mathcal{T}_{1}\) can be rewritten, respectively, as
\[\mathcal{T}_{\infty}(I-\widetilde{Z}_{n})=\max_{i}\left(\frac{i}{n}- \widetilde{Z}_{n}\left(\frac{i}{n}\right)\right),\qquad\mathcal{T}_{1}(I- \widetilde{Z}_{n})=\sqrt{n}\frac{1}{n}\sum\nolimits_{i=1}^{n}\Psi\left(\frac{ 2i-1}{2n}\right),\]
where \(\Psi(t)=(t-\widetilde{Z}_{n}(t))_{+}\). Our results are compared with those obtained from the tests of Barrett and Donald (2003), which represent the state of the art for SSD tests. In particular, Barrett and Donald (2003) propose three bootstrap-based tests, based on a least favourable configuration, denoted as KSB1, KSB2, and KSB3, which differ just for the bootstrap method employed to simulate the \(p\)-values. We focus on KSB3 since it is based on the approach that is most similar to ours. Moreover, KSB3 seems to provide the best results compared to KSB1 and KSB2 as far as concerns SSD; see tables II-A and II-B of Barrett and Donald (2003). The \(p\)-values of KSB3 were computed using a grid of evenly spaced values \(t_{1}<\ldots<t_{r}\), where \(t_{1}\) and \(t_{r}\) are the smallest and the largest values in the pooled sample, respectively. As for the number of grid points, we set \(r=100\) as in Barrett and Donald (2003), but we did not notice substantial differences in increasing \(r\).
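In code, the two statistics displayed at the start of this section can be computed directly from the step LPP; the sketch below assumes the `lpp` helper from the bootstrap example in Section 5.1 is in scope and keeps the \(\sqrt{n}\) factor exactly as displayed.

```python
import numpy as np

def test_statistics(x, y):
    """T_infty and T_1 for n = m, following the displayed formulas."""
    n = len(x)
    ones = np.ones(n)
    grid_sup = np.arange(1, n + 1) / n                   # points i/n
    grid_mid = (2 * np.arange(1, n + 1) - 1) / (2 * n)   # midpoints (2i-1)/(2n)
    t_inf = np.max(grid_sup - lpp(x, y, ones, ones, grid_sup))
    psi = np.maximum(grid_mid - lpp(x, y, ones, ones, grid_mid), 0.0)
    t_one = np.sqrt(n) * psi.mean()
    return t_inf, t_one
```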
Note that one pair of distributions gives rise to two different hypothesis tests. In fact, one may test \(\mathcal{H}_{0}:F\geq_{2}G\) versus \(\mathcal{H}_{1}:F\not\geq_{2}G\), but also the reverse hypothesis test, denoted as \(\mathcal{H}_{0}^{R}:G\geq_{2}F\) versus \(\mathcal{H}_{1}^{R}:G\not\geq_{2}F\). Except for the trivial case \(F=G\), if \(Z\) does not cross the identity we may have that \(\mathcal{H}_{0}\) is true while \(\mathcal{H}_{0}^{R}\) is false, or vice versa; by contrast, if \(Z\) crosses the identity, \(\mathcal{H}_{0}\) and \(\mathcal{H}_{0}^{R}\) are both false.
### Size properties
To investigate the behaviour of the tests under the null hypothesis, we simulate samples from the Weibull family, denoted by \(W(a,b)\), with CDF \(F_{W}(x;a,b)=1-\exp\{-\left(x/b\right)^{a}\}\).
Since the mean of a \(W(a,b)\) is \(b/q_{a}\), where \(q_{a}=1/\Gamma(1+1/a)\), we let \(F\sim W(a,q_{a})\), for \(a=1,1.25,1.5,1.75,2\), and fix \(G\sim W(1,1)\). All these distributions have mean 1, and in all these cases \(\mathcal{H}_{0}\) holds. Clearly, for \(a=1\) we have \(F=G\), whereas the dominance of \(F\) over \(G\) becomes stronger, and more apparent, for larger values of \(a\).
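A sketch of this data-generating setup (the seed and helper name are ours; the last comment refers to the bootstrap helper sketched in Section 5.1):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def weibull_unit_mean(a, size):
    """W(a, q_a) with q_a = 1/Gamma(1+1/a), so that the mean equals 1."""
    return rng.weibull(a, size) / math.gamma(1.0 + 1.0 / a)

x = weibull_unit_mean(1.5, 200)      # F ~ W(1.5, q_1.5)
y = rng.exponential(1.0, 200)        # G ~ W(1, 1), i.e. the unit exponential
# Here H_0: F >=_2 G holds, so bootstrap_pvalue(x, y) should rarely fall below alpha.
```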
The results in Tables 1, 2a, 3a and 4a confirm that the proposed tests, both with \(\mathcal{T}_{\infty}\) and \(\mathcal{T}_{1}\), behave as described in Proposition 4, part 1. Namely, the rejection rate tends to be bounded by \(\alpha=0.1\) under \(\mathcal{H}_{0}\). More specifically, we observe that the rejection rate of the proposed tests tends to \(\alpha\) when \(F=G\) (see Table 1), while it tends to 0 when \(F\) strictly dominates \(G\) (see Tables 2a, 3a and 4a). The rejection rate for the KSB3 test by Barrett and Donald (2003) is also asymptotically bounded by \(\alpha\) but, when the dominance is stronger, it is still about \(\alpha\) for \(n=1000\). For such a sample size, the rejection rate of both the proposed tests has already reached 0.
### Power properties
We now investigate the behaviour of the tests under \(\mathcal{H}_{1}\). In particular, we focus on cases in which \(F\) is dominated by \(G\), so that \(\mathcal{H}_{0}\) should be rejected quite easily since \(Z\) is always below the identity. As we discuss in 7.2.1, the three tests considered behave quite similarly in such cases. We also focus on critical cases in which neither of the two distributions dominates the other, and therefore \(Z\) crosses the identity. In particular, the most critical situation for our class of tests is when \(Z\) is above the identity except for a small interval (see Figure 4a). The simulation results in 7.2.2 and 7.2.3 show that, in some of the most difficult cases, \(\mathcal{T}_{1}\) and KSB3 struggle to reject \(\mathcal{H}_{0}\), whereas the proposed \(\mathcal{T}_{\infty}\) test stands out as the most reliable.
#### 7.2.1 Weibull distribution
Using the same distributions as in Section 7.1, except for the case \(F=G\), we have that \(F>_{2}G\) (strictly) and therefore \(G\not\geq_{2}F\). In these cases, \(Z\) is always above the identity. The results, reported in Tables 2b, 3b and 4b, show that the power of the tests increases with the sample size. In particular, \(\mathcal{T}_{1}\) seems to outperform \(\mathcal{T}_{\infty}\) for smaller sample sizes,
while both the proposed \(\mathcal{T}_{1}\) and \(\mathcal{T}_{\infty}\) tests provide larger power compared to the KSB3 test by Barrett and Donald (2003).
#### 7.2.2 Lognormal mixture vs. lognormal distribution
As a more critical example, we focus on a special case considered by Barrett and Donald (2003, Case 5). Here, \(F\) is a mixture of lognormal distributions, namely \(F=0.9F_{LN}(\cdot;0.85;0.4)+0.1F_{LN}(\cdot;0.4,0.4)\), whereas \(G=F_{LN}(\cdot;0.86,0.6)\). These CDFs cross multiple times, and also \(Z\) crosses the identity from below so that \(F\not\geq_{2}G\) but also \(G\not\geq_{2}F\). In other words, both \(\mathcal{H}_{0}\) and \(\mathcal{H}_{0}^{R}\) are false. In the latter case, the null hypothesis is hard to reject, because \(Z\) crosses the identity from above, and it exceeds the identity just in a small subset of the unit interval. Note that Barrett and Donald (2003) just apply their test to \(\mathcal{H}_{0}\) versus \(\mathcal{H}_{1}\), overlooking the reverse situation \(\mathcal{H}_{0}^{R}\) versus \(\mathcal{H}_{1}^{R}\). As illustrated in Table 5, KSB3 seems to outperform our tests in detecting \(\mathcal{H}_{1}\). In particular, \(\mathcal{T}_{1}\) exhibits quite a poor performance with the sample sizes considered (to increase its power up to 0.68, we need to reach \(n=5000\)). Conversely, KSB3 has a really poor performance in rejecting \(\mathcal{H}_{0}^{R}\), while the proposed \(\mathcal{T}_{\infty}\) and \(\mathcal{T}_{1}\) tests provide a large power in this critical setting.
#### 7.2.3 Singh-Maddala Distribution
As a third case, let us consider the Singh-Maddala distribution, denoted as SM\((a,q,b)\), with CDF \(F_{\text{SM}}(x;a,q,b)=1-[1+(x/b)^{a}]^{-q}\). In all the following scenarios, the scale parameter \(b\) is set to 1 and hence omitted, while the two shape parameters \(a\) and \(q\) vary. As in Section 7.2.2, we generate scenarios in which \(Z\) crosses the identity. In particular, we target the worst-case scenarios for our proposed tests, to investigate their limitations, by setting \(F\sim\text{SM}(1.5,q)\) and \(G\sim\text{SM}(1,q)\), for \(q=1.2,1.5,1.8\). As shown in Figure 3, larger values of \(q\) correspond to cases in which it is harder to detect the difference between \(Z\) and the identity, especially using \(\mathcal{T}_{1}\). Tables 5(a), 6(a) and 7(a) show that KSB3 delivers larger power compared to our tests in such critical cases. In particular, while the performance of \(\mathcal{T}_{\infty}\) significantly improves for larger samples and lower \(q\), the power of \(\mathcal{T}_{1}\) is constantly close to 0, even for \(n=1000\) and \(q=1.2\). In light of part 7) of Proposition 1, this is
due to the fact that \(\mathcal{T}_{1}\) downsizes the deviations from the null, which are hardly classified as "large", at least with the sample sizes considered. Indeed, the \(p\)-values of \(\mathcal{T}_{1}\) tend to decrease and to be less variable as \(n\) grows, coherently with Proposition 4 (see Figure 4a). This suggests that the power of \(\mathcal{T}_{1}\) may eventually tend to 1 for larger samples. However, when applied to the reverse hypotheses \(\mathcal{H}_{0}^{R}\) and \(\mathcal{H}_{1}^{R}\), the proposed tests \(\mathcal{T}_{\infty}\) and \(\mathcal{T}_{1}\) exhibit good performance, with rejection rates significantly increasing with \(n\); see Tables 6b, 7b and 8b. On the contrary, KSB3 struggles to detect non-dominance and its power remains close to 0, even for large samples.
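Since the Singh-Maddala CDF has a closed-form inverse, samples for the scenarios above can be drawn by inverse-transform sampling; a minimal sketch (function name and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def singh_maddala(a, q, size, b=1.0):
    """Inverse-CDF sampling from SM(a, q, b): F^{-1}(u) = b*((1-u)**(-1/q) - 1)**(1/a)."""
    u = rng.uniform(size=size)
    return b * ((1.0 - u) ** (-1.0 / q) - 1.0) ** (1.0 / a)

x = singh_maddala(1.5, 1.2, 500)     # F ~ SM(1.5, 1.2)
y = singh_maddala(1.0, 1.2, 500)     # G ~ SM(1, 1.2)
```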
### Paired samples
To simulate dependent samples we first draw a sample \(\{(Z_{i}^{1},Z_{i}^{2}):i=1,\ldots,n\}\) from a bivariate normal distribution, with standard marginals and correlation coefficient \(\rho\). Then, by transforming the data via the standard normal CDF \(\Phi\), we obtain a dependent sample from a bivariate distribution with uniform marginals \(\{U_{i}^{1}=\Phi(Z_{i}^{1}):i=1,\ldots,n\}\) and \(\{U_{i}^{2}=\Phi(Z_{i}^{2}):i=1,\ldots,n\}\). Finally, a dependent sample from a bivariate distribution with margins \(F\) and \(G\) is obtained as \(\{(F^{-1}(U_{i}^{1}),G^{-1}(U_{i}^{2})):i=1,\ldots,n\}\). In particular, we consider \(\rho=0.25,0.5,0.75\). As in the previous subsections, we compare our results with those of KSB3. Note that, although Barrett and Donald (2003) assume independence to
Figure 3: The behaviour of \(Z\) for \(q=1.2\) (solid), \(q=1.5\) (dashed) and \(q=1.8\) (dotted) in the Singh-Maddala case. Especially for \(q=1.8\), it becomes very hard to detect deviations from \(\mathcal{H}_{0}\). In the reverse cases, the LPPs are just the inverse functions of these.
prove the consistency properties of such a test, our simulations reveal that KSB3 exhibits a good performance even in the dependent case.
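The matched-pairs data-generating scheme just described can be sketched as follows, reusing the Singh-Maddala quantile function from the previous snippet (names are ours):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def paired_sample(qf_x, qf_y, n, rho):
    """Dependent pairs with margins F and G via a Gaussian copula with correlation rho;
    qf_x and qf_y are the quantile functions F^{-1} and G^{-1}."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = norm.cdf(z)                                   # correlated uniform margins
    return qf_x(u[:, 0]), qf_y(u[:, 1])

sm_quantile = lambda u, a, q: ((1.0 - u) ** (-1.0 / q) - 1.0) ** (1.0 / a)
x, y = paired_sample(lambda u: sm_quantile(u, 1.5, 1.2),
                     lambda u: sm_quantile(u, 1.0, 1.2), n=500, rho=0.5)
```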
In this paired setting, we consider the same Singh-Maddala distributions as in Section 7.2.3, focusing on the cases \(q=1.2\) and \(q=1.8\). The results, reported in Tables 9-14, confirm the ones reported in the previous tables, although it can be seen that a stronger dependence generally leads to larger rejection rates. Even \(\mathcal{T}_{1}\), which was struggling with independent Singh-Maddala samples, shows a more evident decreasing trend in the empirical distribution of the \(p\)-values when samples are paired (see Figure 3(b)).
### Test for FSD
As discussed in Section 6, our methodology also allows us to test TSD, including an approximation of FSD, obtained as \(\geq_{1+1/\theta}^{T}\) with \(\theta\rightarrow\infty\). We then apply the method described in Section 6 to the same Singh-Maddala distributions studied in Section 7.2.3. Since SSD does not hold in these cases, _a fortiori_ the FSD null hypothesis, denoted as \(\mathcal{H}_{0}^{1}:F\geq_{1}G\), is also false. This hypothesis can be tested using a sufficiently large value of \(\theta\), as discussed in Section 6. In particular, we set \(\theta=50\), which corresponds to
Figure 4: Box-plots of the simulated \(p\)-values of \(\mathcal{T}_{1}\)
approximating the FSD null hypothesis, \(\mathcal{H}_{0}^{1}\), with \(\mathcal{H}_{0}^{1.02}\). Our method is compared with the FSD version of the KSB3 test described in Barrett and Donald (2003). In contrast to the KSB3 test for SSD, this latter test may be shown to be consistent even in the case when the distributions have unbounded supports.
All the tests considered tend to provide a larger simulated power compared to the SSD case. This is logical since FSD is more stringent than SSD, and therefore, for the same pairs of distributions, it is easier to detect violations of FSD rather than of SSD. The results in Tables 15-17 show that KSB3 tends to provide larger power than our \(\mathcal{T}_{\infty}\) and \(\mathcal{T}_{1}\) tests under \(\mathcal{H}_{1}^{1}:F\not\succeq_{1}G\). On the contrary, under the reverse alternative \((\mathcal{H}_{1}^{1})^{R}:G\not\succeq_{1}F\), KSB3 exhibits a worse performance, also showing an unexpected behaviour, in that its rejection rates first increase and then decrease as \(n\) grows.
## 8 Concluding remarks
In this paper, we proposed leveraging the LPP as a new tool to detect deviations from SSD in the case of non-negative random variables. The same approach can be used to test TSD, hence including FSD as a limit case. The asymptotic properties in Section 5 and the numerical results in Section 7 show that our family of tests can be a valid alternative to the established tests based on the difference between integrals of CDFs, such as the tests in Barrett and Donald (2003). In particular, the KSB3 test is outperformed by our proposed sup-based test \(\mathcal{T}_{\infty}\) in most of the cases analysed, sometimes with a remarkable gap.
Among the two tests proposed, our simulations reveal that the sup-based test \(\mathcal{T}_{\infty}\) is also overall more reliable than the integral-based \(\mathcal{T}_{1}\), which has lower power in the most critical cases. However, both tests may be useful. In fact, in light of Proposition 1 part 7), and according to our numerical results in Section 7.2.3 and Section 7.2.2, \(\mathcal{T}_{\infty}\) performs better than \(\mathcal{T}_{1}\) when deviations from \(\mathcal{H}_{0}\) are subtle, while \(\mathcal{T}_{1}\) provides higher power than \(\mathcal{T}_{\infty}\) when deviations are more apparent. Therefore, in applications, it could be useful to use both tests and compare the \(p\)-values. It is also worth noting that our proposed tests
seem to improve in terms of power when the samples are dependent.
In general, the advantage of using the LPP instead of integrals of CDFs is that it can be approximated uniformly, which makes it possible to establish asymptotic properties without requiring bounded supports; moreover, the LPP has a different sensitivity in detecting violations of SSD compared to other methods. Finally, the power of our tests may be improved further by combining the same proposed test statistics with different, less conservative bootstrap schemes. The latter represents an interesting direction for future work.
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.25 & 0.17 & 0.43 \\
100 & 0.43 & 0.15 & 0.59 \\
200 & 0.58 & 0.10 & 0.74 \\
500 & 0.89 & 0.08 & 0.98 \\
1000 & 0.99 & 0.08 & 1.00 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.29 & 0.45 & 0.14 \\
100 & 0.37 & 0.50 & 0.13 \\
200 & 0.56 & 0.66 & 0.15 \\
500 & 0.87 & 0.88 & 0.26 \\
1000 & 0.99 & 0.99 & 0.48 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.41 & 0.57 & 0.16 \\
100 & 0.56 & 0.67 & 0.18 \\
200 & 0.81 & 0.85 & 0.28 \\
500 & 0.99 & 0.99 & 0.51 \\
1000 & 1.00 & 1.00 & 0.88 \\ \hline \end{tabular}
\end{table}
Table 4: Rejection rates under \(F\sim W(1.3,q_{1.3})\) and \(G\sim W(1,1)\); independent samples
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.04 & 0.05 & 0.12 \\
100 & 0.04 & 0.02 & 0.13 \\
200 & 0.02 & 0.00 & 0.12 \\
500 & 0.01 & 0.00 & 0.11 \\
1000 & 0.00 & 0.00 & 0.10 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.29 & 0.45 & 0.14 \\
100 & 0.37 & 0.50 & 0.13 \\
200 & 0.56 & 0.66 & 0.15 \\
500 & 0.87 & 0.88 & 0.26 \\
1000 & 0.99 & 0.99 & 0.48 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.41 & 0.57 & 0.16 \\
100 & 0.56 & 0.67 & 0.18 \\
200 & 0.81 & 0.85 & 0.28 \\
500 & 0.99 & 0.99 & 0.51 \\
1000 & 1.00 & 1.00 & 0.88 \\ \hline \end{tabular}
\end{table}
Table 3: Rejection rates under \(F\sim W(1.2,q_{1.2})\) and \(G\sim W(1,1)\); independent samples
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.02 & 0.00 & 0.38 \\
100 & 0.04 & 0.00 & 0.52 \\
200 & 0.05 & 0.00 & 0.63 \\
500 & 0.11 & 0.00 & 0.88 \\
1000 & 0.23 & 0.00 & 0.97 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.56 & 0.77 & 0.06 \\
100 & 0.76 & 0.87 & 0.03 \\
200 & 0.97 & 0.98 & 0.02 \\
500 & 1.00 & 1.00 & 0.01 \\
1000 & 1.00 & 1.00 & 0.03 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.43 & 0.68 & 0.04 \\
100 & 0.63 & 0.77 & 0.01 \\
200 & 0.92 & 0.94 & 0.00 \\
500 & 1.00 & 1.00 & 0.00 \\
1000 & 1.00 & 1.00 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 7: Rejection rates under \(F\sim\mathrm{SM}(1.5,1.5)\), \(G\sim\mathrm{SM}(1,1.5)\); independent samples
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.04 & 0.01 & 0.49 \\
100 & 0.05 & 0.00 & 0.59 \\
200 & 0.09 & 0.00 & 0.74 \\
500 & 0.20 & 0.00 & 0.94 \\
1000 & 0.44 & 0.00 & 1.00 \\ \hline \hline \end{tabular}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.75 & 0.93 & 0.07 \\
100 & 0.94 & 0.99 & 0.04 \\
200 & 1.00 & 1.00 & 0.02 \\
500 & 1.00 & 1.00 & 0.02 \\
1000 & 1.00 & 1.00 & 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Rejection rates under \(F\sim\mathrm{SM}(1.5,1.8)\), \(G\sim\mathrm{SM}(1,1.8)\), dependent samples with \(\rho=0.5\)
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.03 & 0.00 & 0.58 \\
100 & 0.08 & 0.00 & 0.70 \\
200 & 0.13 & 0.00 & 0.87 \\
500 & 0.32 & 0.00 & 0.99 \\
1000 & 0.72 & 0.00 & 1.00 \\ \hline \hline \end{tabular}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.92 & 0.99 & 0.07 \\
100 & 1.00 & 1.00 & 0.04 \\
200 & 1.00 & 1.00 & 0.04 \\
500 & 1.00 & 1.00 & 0.05 \\
1000 & 1.00 & 1.00 & 0.15 \\ \hline \hline \end{tabular}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.34 & 0.58 & 0.03 \\
100 & 0.56 & 0.71 & 0.01 \\
200 & 0.89 & 0.91 & 0.01 \\
500 & 1.00 & 1.00 & 0.00 \\
1000 & 1.00 & 1.00 & 0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Rejection rates under \(F\sim\mathrm{SM}(1.5,1.2)\), \(G\sim\mathrm{SM}(1,1.2)\), dependent samples with \(\rho=0.25\)
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.21 & 0.05 & 0.76 \\
100 & 0.38 & 0.01 & 0.85 \\
200 & 0.64 & 0.01 & 0.95 \\
500 & 0.96 & 0.01 & 0.99 \\
1000 & 1.00 & 0.03 & 0.99 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.42 & 0.68 & 0.03 \\
100 & 0.69 & 0.84 & 0.00 \\
200 & 0.96 & 0.97 & 0.00 \\
500 & 1.00 & 1.00 & 0.00 \\
1000 & 1.00 & 1.00 & 0.00 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.61 & 0.88 & 0.02 \\
100 & 0.91 & 0.97 & 0.00 \\
200 & 1.00 & 1.00 & 0.00 \\
500 & 1.00 & 1.00 & 0.00 \\
1000 & 1.00 & 1.00 & 0.00 \\ \hline \end{tabular}
\end{table}
Table 13: Rejection rates under \(F\sim\mathrm{SM}(1.5,1.2)\), \(G\sim\mathrm{SM}(1,1.2)\), dependent samples with \(\rho=0.5\)
\begin{table}
\begin{tabular}{r r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.08 & 0.09 & 0.15 \\
100 & 0.11 & 0.06 & 0.24 \\
200 & 0.20 & 0.05 & 0.39 \\
500 & 0.64 & 0.10 & 0.85 \\
1000 & 0.96 & 0.29 & 0.97 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.08 & 0.44 & 0.32 \\
100 & 0.19 & 0.56 & 0.44 \\
200 & 0.52 & 0.80 & 0.52 \\
500 & 0.99 & 0.99 & 0.39 \\
1000 & 1.00 & 1.00 & 0.22 \\ \hline \end{tabular}
\begin{tabular}{r r r} \hline \hline \(n\) & \(\mathcal{T}_{\infty}\) & \(\mathcal{T}_{1}\) & KSB3 \\ \hline
50 & 0.10 & 0.59 & 0.47 \\
100 & 0.30 & 0.70 & 0.63 \\
200 & 0.74 & 0.91 & 0.78 \\
500 & 0.99 & 1.00 & 0.78 \\
1000 & 1.00 & 1.00 & 0.67 \\ \hline \end{tabular}
\end{table}
Table 17: FSD test. Rejection rates under \(F\sim\mathrm{SM}(1.5,1.8)\), \(G\sim\mathrm{SM}(1,1.8)\)
## Appendix B: Proofs
Calculations of Example 1.: The unscaled Lorenz curve of \(F\) is
\[L_{F}(p)=b\left(\Gamma\left(1+\frac{1}{a}\right)-\Gamma\left(1+\frac{1}{a},-\log( 1-p)\right)\right),\]
while
\[L_{G}(p)=p+(1-p)\log(1-p).\]
It is well known (e.g. Goldie, 1977) that \(L_{G}\) can be expressed as \(L_{G}(p)=M_{G}\circ G^{-1}(p)\), with
\[M_{G}(x)=\int_{0}^{x}t\ dG(t)=1-e^{-x}(x+1),\qquad x\geq 0.\]
Noting that \(M_{G}(x)\leq\mu_{G}=1\) for any \(x\geq 0\), this function can be inverted using the Lambert \(W_{-1}\) function (Corless et al., 1996), that is \(M_{G}^{-1}(t)=-1-W_{-1}\left((t-1)/e\right)\). Accordingly,
\[L_{G}^{-1}(t)=G\circ M_{G}^{-1}(t)=1-\exp\left(1+W_{-1}\left(\frac{t-1}{e} \right)\right).\]
Finally, by composition, we obtain the expression of \(Z\) in Example 1.
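For reference, these closed forms are easy to evaluate numerically. The following sketch is ours and assumes, as in the calculation above, that \(F\) is \(W(a,b)\) and \(G\) is the unit exponential; it uses the regularised incomplete gamma function and the \(W_{-1}\) branch available in SciPy.

```python
import numpy as np
from scipy.special import gamma, gammaincc, lambertw

def L_F(p, a, b):
    """Unscaled Lorenz curve of W(a, b): b*(Gamma(1+1/a) - Gamma(1+1/a, -log(1-p)))."""
    s = 1.0 + 1.0 / a
    return b * gamma(s) * (1.0 - gammaincc(s, -np.log(1.0 - p)))

def L_G_inv(t):
    """Inverse unscaled Lorenz curve of Exp(1), via the Lambert W_{-1} branch."""
    return 1.0 - np.exp(1.0 + lambertw((t - 1.0) / np.e, k=-1).real)

def Z(p, a, b):
    return L_G_inv(L_F(p, a, b))

p = np.linspace(0.01, 0.99, 5)
print(Z(p, a=1.5, b=1.0 / gamma(1.0 + 1.0 / 1.5)))   # F with unit mean, as in Section 7.1
```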
Proof of Proposition 1.:
1. This follows from the properties of the \(L^{p}\) norm.
2. If \(v_{2}(x)\leq 0,\forall x\in[0,1]\) then \(v_{1}(x)-v_{2}(x)\geq v_{1}(x),\forall x\in[0,1]\) which implies \((v_{1}(x)-v_{2}(x))_{+}^{p}\geq(v_{1}(x))_{+}^{p},\forall x\in[0,1]\) and therefore \(||(v_{1}-v_{2})_{+}||_{p}\geq||(v_{1})_{+}||_{p}\) by monotonicity of integrals.
3. The proof is the same as in Lemma 2 of Barrett et al. (2014) and relies on the fact that \(v_{1}\in C[0,1]\).
4. Minkowski's inequality implies that, for some pair of functions \(u,v\in C[0,1]\), \(||u||_{p}=||(u-v)+v||_{p}\leq||u-v||_{p}+||v||_{p}\), so that \(||u||_{p}-||v||_{p}\leq||u-v||_{p}\), and similarly, \(||u-v||_{p}\geq||v||_{p}-||u||_{p}\), therefore, \(|||u||_{p}-||v||_{p}|\leq||u-v||_{p}\). Then \[|||(v_{1})_{+}||_{p}-||(v_{2})_{+}||_{p}|\leq||(v_{1})_{+}-(v_{2})_{+}||_{p} \leq||v_{1}-v_{2}||_{p}\leq||v_{1}-v_{2}||_{\infty},\] where the second inequality follows from the fact that, for every \(x\in[0,1]\), \(|(v_{1}(x))_{+}-(v_{2}(x))_{+}|\leq|v_{1}(x)-v_{2}(x)|\).
5. The proof follows from absolute homogeneity of the \(L^{p}\) norm.
6. Let \(\beta\in[0,1]\). By convexity of the function \((\cdot)_{+}\), Minkowski's inequality, and absolute homogeneity of the \(L^{p}\) norm, \[\mathcal{T}_{p}(\beta(v_{2})+(1-\beta)v_{1})=||(\beta v_{2}+(1- \beta)v_{1})_{+}||_{p}\leq||\beta(v_{2})_{+}+(1-\beta)(v_{1})_{+}||_{p}\] \[\leq\beta||(v_{2})_{+}||_{p}+(1-\beta)||(v_{1})_{+}||_{p}=\beta \mathcal{T}_{p}(v_{2})+(1-\beta)\mathcal{T}_{p}(v_{1}).\]
7. This follows from basic properties of \(L^{p}\) norms.
Proof of Proposition 2.: \(\widetilde{Z}_{n,m}-Z_{n,m}\) can be expressed as
\[\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}-L_{G_{m}}^{-1}\circ L_{ F_{n}}=(\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}-\widetilde{L}_{G_{m }}^{-1}\circ L_{F_{n}})+(\widetilde{L}_{G_{m}}^{-1}\circ L_{F_{n}}-L_{G_{m}}^ {-1}\circ L_{F_{n}}).\]
For the first summand, which is the difference between two step functions, we have \(\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}(p)\geq\widetilde{L}_{G_{m}}^{-1}\circ L_{F_{n}}(p)\) for every \(p\in[0,1]\), since \(\widetilde{L}_{F_{n}}(p)\geq L_{F_{n}}(p)\) for every \(p\in[0,1]\). Moreover, \(\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}(k/n)=\widetilde{L}_{G_{m}}^{-1}\circ L_{F_{n}}(k/n)\) for \(k=0,...,n\), while, within each interval \(((k-1)/n,k/n)\), the difference \(\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}(p)-\widetilde{L}_{G_{m}}^{-1}\circ L_{F_{n}}(p)\) is bounded above by the height of the jumps of \(\widetilde{L}_{G_{m}}^{-1}\), that is, \(1/m\). For the second summand, \(\widetilde{L}_{G_{m}}^{-1}\circ L_{F_{n}}-L_{G_{m}}^{-1}\circ L_{F_{n}}\in[-\frac{1}{m},0]\), since clearly \(L_{G_{m}}^{-1}\circ L_{F_{n}}\) is the linear interpolator of the jump points of the step function \(\widetilde{L}_{G_{m}}^{-1}\circ L_{F_{n}}\). Hence, the result follows.
Proof of Proposition 3.: As proved in Theorem 10.1 and Theorem 13.2 of Csorgo et al. (2013), \(\widetilde{L}_{G_{m}}^{-1}\) and \(\widetilde{L}_{F_{n}}\) converge strongly and uniformly to \(L_{G}^{-1}\) and \(L_{F}\), respectively. Since \(L_{G}^{-1}\) is uniformly continuous in \([0,\infty)\) and \(\sup_{p}|L_{F_{n}}(p)-L_{F}(p)|\to 0\) almost surely, we obtain that \(\sup_{p}|L_{G}^{-1}\circ\widetilde{L}_{F_{n}}(p)-L_{G}^{-1}\circ L_{F}(p)|\to 0\) almost surely. Then, for every \(p\in(0,1)\),
\[|\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}(p)-L_{G}^{- 1}\circ L_{F}(p)| \leq|\widetilde{L}_{G_{m}}^{-1}\circ\widetilde{L}_{F_{n}}(p)-L_{G }^{-1}\circ\widetilde{L}_{F_{n}}(p)|+|L_{G}^{-1}\circ\widetilde{L}_{F_{n}}(p)- L_{G}^{-1}\circ L_{F}(p)|\] \[\leq\sup_{p}|\widetilde{L}_{G_{m}}^{-1}(p)-L_{G}^{-1}(p)|+\sup_{p }|L_{G}^{-1}\circ\widetilde{L}_{F_{n}}(p)-L_{G}^{-1}\circ L_{F}(p)|.\]
Since both terms in the right-hand side converge to \(0\) with probability \(1\), we obtain that \(\widetilde{Z}_{n,m}\) converges strongly and uniformly to \(Z\) in \([0,1]\). By Proposition 2, \(|\widetilde{Z}_{n,m}-Z_{n,m}|\to 0\) for \(n\to\infty\) and \(m\to\infty\), therefore the same property is satisfied by \(Z_{n,m}\)
Proof of Theorem 1.: Let \(\mathbb{L}\) be the space of maps \(z:\mathbb{R}\to\mathbb{R}\) with \(\lim_{x\to-\infty}z(x)=0\) and \(\lim_{x\to\infty}z(x)=1\), and the norm \(||z||_{\mathbb{L}}=\max\{||z||_{\infty},||1-z||_{1}\}\). As shown by Kaji (2018), under assumption i), the map \(\phi(F)=F^{-1}\), from CDFs to quantile functions, is Hadamard differentiable at \(F\), tangentially to the set \(\mathbb{L}_{0}\) of continuous functions in \(\mathbb{L}\), with derivative map
\[\phi^{\prime}_{F}(z)=-(z\circ F^{-1})(F^{-1})^{\prime}.\]
The linear map \(\psi(F^{-1})=\int_{0}^{\cdot}F^{-1}(t)dt\) coincides with its Hadamard derivative. Accordingly, by the chain rule (Van der Vaart and Wellner, 1996, Lemma 3.9.3), the composition map \(\psi\circ\phi:F\to L_{F}\) is also Hadamard differentiable at \(F\) tangentially to \(\mathbb{L}_{0}\), with derivative
\[(\psi\circ\phi)^{\prime}_{F}(z)=\psi^{\prime}_{\phi(F)}\circ\phi^{\prime}_{F} (z)=-\int_{0}^{\cdot}z\circ F^{-1}(p)dF^{-1}(p).\]
Now, observe that
\[\begin{pmatrix}\sqrt{n}(F_{n}-F)\\ \sqrt{m}(G_{m}-G)\end{pmatrix}\rightsquigarrow\begin{pmatrix}\mathcal{B}_{1} \circ F\\ \mathcal{B}_{2}\circ G\end{pmatrix}\text{ in }\mathbb{L}\times\mathbb{L},\]
as shown in Lemma 5.1 of Sun and Beare (2021). Then, the functional delta method (Van der Vaart and Wellner, 1996, Theorem 3.9.13) implies the joint weak convergence
\[\begin{pmatrix}\sqrt{n}(L_{F_{n}}-L_{F})\\ \sqrt{m}(L_{G_{m}}-L_{G})\end{pmatrix}=\begin{pmatrix}\sqrt{n}(\psi\circ\phi( F_{n})-\psi\circ\phi(F))\\ \sqrt{m}(\psi\circ\phi(G_{m})-\psi\circ\phi(G))\end{pmatrix}\rightsquigarrow \begin{pmatrix}(\psi\circ\phi)^{\prime}_{F}(\mathcal{B}_{1}\circ F)\\ (\psi\circ\phi)^{\prime}_{G}(\mathcal{B}_{2}\circ G)\end{pmatrix}\\ =\begin{pmatrix}-\int_{0}^{\cdot}\mathcal{B}_{1}(t)dF^{-1}(t)\\ -\int_{0}^{\cdot}\mathcal{B}_{2}(t)dG^{-1}(t)\end{pmatrix}=:\begin{pmatrix} \mathcal{L}_{F}\\ \mathcal{L}_{G}\end{pmatrix}\text{ in }C[0,1]\times C[0,1]. \tag{2}\]
Now, consider the process \(\sqrt{m}(L_{G_{m}}^{-1}(t)-L_{G}^{-1}(t))\), for \(t\in[0,\mu_{G}]\). The LC \(L_{G}\) is increasing and continuous on \([0,1]\), therefore the inverse function \(L_{G}^{-1}\) is increasing and continuous on \([0,\mu_{G}]\), moreover the derivative \(L_{G}^{\prime}=G^{-1}\) is strictly positive in the unit interval (note that assumption ii) entails that \(G^{-1}(0)=c>0\)). Then, by the inverse map theorem (Van der Vaart and Wellner, 1996, Lemma 3.9.20) the map \(\eta:L_{G}\to L_{G}^{-1}\) is Hadamard-differentiable at \(L_{G}\), tangentially to the set of bounded functions on \([0,1]\), with derivative
\[\eta^{\prime}_{L_{G}}(z)=-\frac{z\circ L_{G}^{-1}}{L_{G}^{\prime}\circ L_{G}^ {-1}}=-\frac{z\circ L_{G}^{-1}}{G^{-1}\circ L_{G}^{-1}}.\]
Since \(r_{n}/n\to 1-\lambda\) and \(r_{n}/m\to\lambda\), by (2) and the functional delta method, the above result implies
\[\sqrt{r_{n}}\begin{pmatrix}L_{F_{n}}-L_{F}\\ L_{G_{m}}^{-1}-L_{G}^{-1}\end{pmatrix}\rightsquigarrow\begin{pmatrix}\sqrt{1-\lambda}\,\mathcal{L}_{F}\\ \sqrt{\lambda}\,\eta^{\prime}_{L_{G}}(\mathcal{L}_{G})\end{pmatrix}=:\begin{pmatrix}\sqrt{1-\lambda}\,\mathcal{L}_{F}\\ \sqrt{\lambda}\,\mathcal{C}_{G}\end{pmatrix}\text{ in }C[0,1]\times C[0,\mu_{G}], \tag{3}\]
where \(\mathcal{C}_{G}\) is defined as
\[\mathcal{C}_{G}(t)=\frac{\int_{0}^{L_{G}^{-1}(t)}\mathcal{B}_{2}(p)dG^{-1}(p) }{G^{-1}\circ L_{G}^{-1}(t)},\qquad t\in[0,\mu_{G}].\]
Now, consider the maps \(\pi=\psi\circ\phi:F\to L_{F}\), \(\theta:h\to L_{G}^{-1}\circ h\) and the composition map \(\zeta:C[0,1]\times C[0,\mu_{G}]\to C[0,\nu]\) defined by \(\zeta(\pi,\theta)(x)=\theta\circ\pi(x)\). Recall that the Hadamard derivative of \(\theta\) is \(\theta^{\prime}_{L_{F}}(\alpha)=((L_{G}^{-1})^{\prime}\circ L_{F})\alpha\), and \(L_{G}^{-1}\) is uniformly norm-bounded, since \((L_{G}^{-1})^{\prime}\leq 1/c\), therefore we can apply Lemma 3.2.27 of Van der Vaart and Wellner (1996), which establishes that \(\zeta\) is Hadamard differentiable at \((\pi,\theta)\), tangentially to the set \(C[0,1]\times UC[0,\mu_{G}]\), where \(UC[0,\mu_{G}]\) is the family of uniformly continuous functions on \([0,\mu_{G}]\), with derivative
\[\zeta^{\prime}_{\pi,\theta}(\alpha,\beta)(x)=\beta\circ\pi(x)+\theta^{\prime}_ {\pi(x)}(\alpha(x))=\beta\circ L_{F}(x)+\theta^{\prime}_{L_{F}}(\alpha(x))= \beta\circ L_{F}(x)+((L_{G}^{-1})^{\prime}\circ L_{F})\alpha(x).\]
Now, since \(Z=\zeta(L_{F},L_{G}^{-1})\), using (3), the functional delta method and the Hadamard differentiability of the composition map \(\zeta\) give
\[\sqrt{r_{n}}(Z_{n,m}-Z)\mathbb{1}[0,\nu] =\sqrt{r_{n}}(\zeta(L_{F_{n}},L_{G_{m}}^{-1})-\zeta(L_{F},L_{G}^{-1}))\mathbb{1}[0,\nu]\] \[\rightsquigarrow\zeta^{\prime}_{L_{F},L_{G}^{-1}}(\sqrt{1-\lambda}\,\mathcal{L}_{F},\sqrt{\lambda}\,\mathcal{C}_{G})=\sqrt{\lambda}\,\mathcal{C}_{G}\circ L_{F}+\sqrt{1-\lambda}\,\frac{\mathcal{L}_{F}}{G^{-1}\circ L_{G}^{-1}\circ L_{F}}\] \[=\frac{-\sqrt{1-\lambda}\int_{0}^{\cdot}\mathcal{B}_{1}(p)\,dF^{-1}(p)+\sqrt{\lambda}\int_{0}^{Z}\mathcal{B}_{2}(p)\,dG^{-1}(p)}{G^{-1}\circ Z}\quad\text{in }C[0,\nu],\]
which implies the statement since \(\mathcal{Z}_{n}\mathbb{1}(\nu,1]\rightsquigarrow 0\).
Proof of Lemma 1.: Bear in mind that \(\mathcal{Z}\mathbb{1}[0,\nu]\) is a mean-zero Gaussian process since it is obtained by integrating and normalizing Gaussian processes. Under \(\mathcal{H}_{0}\), \(p-Z(p)\leq 0,\forall p\in[0,1]\), hence
\[\sqrt{r_{n}}\ \mathcal{T}(I-Z_{n,m})\leq\sqrt{r_{n}}\ \mathcal{T}(Z-Z_{n,m})= \mathcal{T}(\sqrt{r_{n}}(Z-Z_{n,m}))\rightsquigarrow\mathcal{T}(\mathcal{Z}),\]
where the last step follows from the continuous mapping theorem, since the map \(\mathcal{T}(f)\) satisfies \(|\mathcal{T}(f)-\mathcal{T}(g)|\leq||f-g||_{\infty}\), where \(f,g\) are continuous functions on the unit interval, as proved in Lemma 2 of Barrett et al. (2014). The \((1-\alpha)\) quantile of the distribution of \(\mathcal{T}(\mathcal{Z})\) is positive, finite, and unique because \(\mathcal{Z}\) is a mean zero Gaussian process, so the proof follows by the same arguments used in the proof of Lemma 4 in Barrett et al. (2014). Since, by Proposition 3, \(Z_{n,m}\) converges strongly and uniformly to \(Z\), under \(\mathcal{H}_{1}\) we have \(\mathcal{T}(I-Z_{n,m})\to_{p}\mathcal{T}(I-Z)>0\). Finally, multiplying by \(\sqrt{r_{n}}\), we obtain the second result.
Proof of Proposition 4.: As proved in Lemma 5.2 of Sun and Beare (2021),
\[\begin{pmatrix}\sqrt{n}(F_{n}^{*}-F_{n})\\ \sqrt{m}(G_{m}^{*}-G_{m})\end{pmatrix}\underset{M}{\overset{ass*}{\rightsquigarrow}}\begin{pmatrix}\mathcal{B}_{1}\circ F\\ \mathcal{B}_{2}\circ G\end{pmatrix}\text{ in }\mathbb{L}\times\mathbb{L},\]
where \(\underset{M}{\overset{ass*}{\rightsquigarrow}}\) denotes weak convergence conditional on the data a.s., see (Kosorok, 2008, p.20). The proof of Theorem 1 establishes the Hadamard-differentiability of the maps \(\psi\circ\phi:F\to L_{F}\) and \(\eta\circ\psi\circ\phi:G\to L_{G}^{-1}\), so that the functional delta method for the bootstrap implies
\[\sqrt{r_{n}}\begin{pmatrix}L_{F_{n}^{*}}-L_{F_{n}}\\ L_{G_{m}^{*}}^{-1}-L_{G_{m}}^{-1}\end{pmatrix}\underset{M}{\overset{P}{\rightsquigarrow}}\begin{pmatrix}\sqrt{1-\lambda}\,\mathcal{L}_{F}\\ \sqrt{\lambda}\,\mathcal{C}_{G}\end{pmatrix}\text{ in }C[0,1]\times C[0,\mu_{G}],\]
where \(\underset{M}{\overset{P}{\rightsquigarrow}}\) denotes weak convergence conditional on the data in probability (see Kosorok, 2008, p.20). Using the Hadamard differentiability of the composition map \(\zeta(L_{F},L_{G}^{-1})=L_{G}^{-1}\circ L_{F}\), the functional delta method for bootstrap implies \(\sqrt{r_{n}}(Z_{n,m}^{*}-Z_{n,m})\underset{M}{\overset{P}{\rightsquigarrow}} \mathcal{Z}\), which entails that \(\mathcal{T}(\sqrt{r_{n}}(Z_{n,m}-Z_{n,m}^{*}))\underset{M}{\overset{P}{ \rightsquigarrow}}\mathcal{T}(-\mathcal{Z})=_{d}\mathcal{T}(\mathcal{Z})\) by the continuous mapping theorem. The test rejects the null hypothesis if the test statistic exceeds the bootstrap threshold \(c_{n}^{*}(\alpha)=\inf\{y:P(\sqrt{r_{n}}\ \mathcal{T}(Z_{n,m}-Z_{k;n,m}^{*})>y| \mathcal{X},\mathcal{Y})\leq\alpha\}\), but the weak convergence result implies \(c_{n}^{*}(\alpha)\to_{p}c(\alpha)=\inf\{y:P(\mathcal{T}(\mathcal{Z})>y)\leq\alpha\}\). Hence, Lemma 1 yields the result.
Proof of Theorem 2.: Integrating by substitution, we can see that \(X\geq_{u}^{T}Y\) if and only if \(u(X)\geq_{2}u(Y)\), since \(P(u(X)\leq t)=F_{X}\circ u^{-1}(t)\), and similarly for \(Y\). Hence, by setting
\(\phi=g\circ u\), the proof follows from the classic characterisation of SSD, since \(\mathbb{E}(g\circ u(X))\geq\mathbb{E}(g\circ u(Y))\), for any increasing concave function \(g\).
Proof of Theorem 3.: Point 1) follows from the fact that \(u_{1}(X)\geq_{2}u_{1}(Y)\) implies \(\mathbb{E}(u_{2}\circ u_{1}^{-1}\circ u_{1}(X))=\mathbb{E}(u_{2}(X))\geq \mathbb{E}(u_{2}(Y))\), because the composition \(u_{2}\circ u_{1}^{-1}\) is concave by construction. The "only if" part of point 2) is trivial. The "if" part follows from the characterisation of FSD, taking into account that the equivalent condition of Theorem 2, that is, \(\mathbb{E}(\phi(X))\geq\mathbb{E}(\phi(Y)),\forall\phi\leq_{c}u\), for every \(u\in\mathcal{U}\), implies that such an inequality holds just for every increasing \(\phi\in\mathcal{U}\). Since any increasing function may be approximated by a sequence of functions in \(\mathcal{U}\), we have \(X\geq_{1}Y\).
Proof of Theorem 4.: 1. \(X\geq_{1+1/\theta}^{T}Y\) can be expressed as
\[\int_{-\infty}^{x}F(t)du(t)\leq\int_{-\infty}^{x}G(t)du(t),\qquad\forall x. \tag{4}\]
Integrating by parts and by substitution, we obtain that, for both \(H=F\) and \(H=G\),
\[\mathcal{I}_{H}^{\theta}(x)=\int_{0}^{x}H(t)du_{\theta}(t)=u_{\theta}(x)H(x)- \int_{0}^{x}u_{\theta}(t)dH(t)=u_{\theta}(x)H(x)-\int_{0}^{H(x)}u_{\theta} \circ H^{-1}(y)dy.\]
Hence,
\[\frac{\mathcal{I}_{H}^{\theta}(x)}{u_{\theta}(x)}=H(x)-\int_{0}^{H(x)}\frac{u _{\theta}\circ H^{-1}(y)}{u_{\theta}(x)}dy=H(x)-\int_{0}^{H(x)}\left(\frac{H^ {-1}(y)}{x}\right)^{\theta}dy\to H(x),\]
by the Lebesgue dominated convergence theorem, recalling that \(H^{-1}(y)/x\leq 1\) as \(y\leq H(x)\). Now, it is readily seen that \(X\geq_{1+1/\theta}^{T}Y\) if and only if \(\mathcal{I}_{F}^{\theta}(x)/u_{\theta}(x)\leq\mathcal{I}_{G}^{\theta}(x)/u_{ \theta}(x)\) for any \(x\), which implies the result.
2. Let \(x_{1},...,x_{n}\) and \(y_{1},...,y_{m}\) be ordered realisations from \(X\) and \(Y\), respectively. By properties of the power function, there exists some number \(\theta_{0}\) such that, for \(\theta>\theta_{0}\), \((1/n)\sum_{k=1}^{i}x_{k}^{\theta}\in((1/m)\sum_{k=1}^{j-1}y_{k}^{\theta},(1/m )\sum_{k=1}^{j}y_{k}^{\theta})\) if and only if \(x_{i}/n\in(y_{j-1}/m,y_{j}/m)\). In fact,
\[x_{i}\left(\frac{1}{n}(\sum_{k=1}^{i-1}\left(\tfrac{x_{k}}{x_{i}}\right)^{ \theta}+1)\right)\in\left(y_{j-1}\left(\frac{1}{m}(\sum_{k=1}^{j-2}\left( \tfrac{y_{k}}{y_{j-1}}\right)^{\theta}+1)\right),y_{j}\left(\frac{1}{m}(\sum_ {k=1}^{j-1}\left(\tfrac{y_{k}}{y_{j}}\right)^{\theta}+1)\right)\right).\]
Accordingly, for any \(i=1,...,n\) and \(\theta>\theta_{0}\), \(\widetilde{Z}_{n,m}^{\theta}\) returns \(j/m\) if \(x_{i}/n\in(y_{j-1}/m,y_{j}/m)\), which coincides with the P-P plot \(G_{m}\circ F_{n}^{-1}\).
## Funding
This research was supported by the Italian funds ex-MURST60%. T.L. was also supported by the Czech Science Foundation (GACR) under the project 20-16764S and by VSB-TU Ostrava (SGS project SP2021/15).
_Conflict of interest:_ None declared.
|
2302.04894
|
Two-loop hard thermal loops for any model
|
Hard thermal loops describe how soft gauge fields are screened and damped in
hot plasmas. As such they are used to calculate transport coefficients,
Sphaleron rates, equations of state, and particle production. However, most
calculations are done using one-loop self-energies. And two-loop contributions
can be large. To that end this paper provides vector two-loop self-energies for
generic models: Any scalar, fermion, or vector representation; and all possible
renormalizable terms. Several examples are given to showcase the results.
Two-loop results for higher-point functions are also given.
|
Andreas Ekstedt
|
2023-02-09T19:00:18Z
|
http://arxiv.org/abs/2302.04894v1
|
# Two-loop hard thermal loops for any model
###### Abstract
Hard thermal loops describe how soft gauge fields are screened and damped in hot plasmas. As such they are used to calculate transport coefficients, Sphaleron rates, equations of state, and particle production. However, most calculations are done using one-loop self-energies. And two-loop contributions can be large. To that end this paper provides vector two-loop self-energies for generic models: Any scalar, fermion, or vector representation; and all possible renormalizable terms. Several examples are given to showcase the results. Two-loop results for higher-point functions are also given.
DESY-23-016
## 1 Introduction
Be it phase transitions [1, 2, 3]; Baryon violation [4, 5]; photon emission from heavy-ion collisions [6, 7]; or axion production [8, 9, 10]; thermal field theory is indispensable all the same. Whilst the picture is quite complicated for generic systems, the physics is considerably simpler if we look at length-scales of the order \(L\gg T^{-1}\). For in that case high-energy modes with \(E\sim T\) behave as quasiparticles [11, 12]. And much intuition from plasma physics directly carries over. So as charged particles move in, say an electric field, they redistribute themselves to screen the field. Likewise, free charges resist a changing magnetic field in accordance with Lenz's law. In both cases the field is screened by high-energy modes.
The best way to incorporate this screening depends on the situation. In equilibrium, for example, the scalar potential (\(A^{0}\)) effectively obtains a thermal Debye mass [1]. Since there is no time dependence, it is useful to describe such systems with a three-dimensional field theory [13, 14]. A different effective description, known as hard thermal loops, can be used when fields vary slowly in time.
These hard thermal loops are particularly important when the system is pushed from equilibrium. This is because deviations from equilibrium are driven back by scattering processes; and the characteristic momentum transfer, and thus the cross-section, is set by the screening length. As such hard thermal loops are key for calculating transport coefficients [15; 16; 17; 18; 19], particle production [20; 21; 22], and colour conductivity [23; 24; 25].
Though hard thermal loops are important, little is known about them beyond leading order. Existing studies are limited to quantum electrodynamics at high temperatures [26; 27] and at finite chemical potential [28]. Reason being that direct evaluations are hampered by an increased complexity at two loops. Nevertheless, in this paper we use kinetic theory to simplify the calculations. Our method of choice is rather compact and admits neat expressions for generic models--including non-abelian theories.
The first section of the paper describes the calculation; section 3 provides results for general models; section 4 provides higher-point correlators; and additional details are given in the appendices. The results are also given in the accompanying HTLGen.m file.
## 2 The real-time formalism
Throughout this article we use the mostly-plus metric: \(P^{2}=-(p^{0})^{2}+\vec{p}^{2}\), and all four-vectors are denoted by capitalized letters, while spatial vectors are denoted by lowercase ones. To save ink we also use the notation \(p^{2}\equiv\vec{p}^{2}\).
Because we are interested in real-time dynamics we have to double the field content [29; 30]: Here we follow [6; 31; 32] and use retarded and advanced fields, otherwise known as the r/a basis. In this basis there are three propagators for each field. For a free theory these are1
Footnote 1: See [32; 33] for a clear diagrammatic representation of Feynman rules in this basis.
\[\Delta^{rr}_{B/F}(P)=2\pi\delta(P^{2})\left\{\theta(p^{0})N_{B/F}^{+}(p^{0},\vec{p})+\theta(-p^{0})N_{B/F}^{-}(-p^{0},-\vec{p})\right\}, \tag{2.1}\] \[\Delta^{R}(P)=\frac{-i}{P^{2}-i\eta p^{0}},\quad\Delta^{A}(P)=\frac{-i}{P^{2}+i\eta p^{0}},\quad N_{B/F}(p^{0},\vec{p})=\frac{1}{2}\pm n_{B/F}(p^{0}). \tag{2.2}\]
To condense the notation we denote \(rr\) propagators by
\[\Delta_{X}(P)=2\pi\delta(P^{2})\left\{\theta(p^{0})N_{X}(p^{0},\vec{p})+\theta (-p^{0})\overline{N}_{X}(-p^{0},-\vec{p})\right\}, \tag{2.3}\]
where \(X=V,F,S\) depending on the particle. In this case the \(rr\) propagator for vectors, fermions, and scalars is
\[D^{rr}_{\mu\nu}(P)=g_{\mu\nu}\Delta_{V}(P),\quad S^{rr}_{F}=-\not{p}\Delta_{F} (P),\quad D^{rr}_{S}(P)=\Delta_{S}(P), \tag{2.4}\]
where we have used Feynman gauge.
To handle divergences we use dimensional regularization. This means that our integration measures are
\[\int_{P}\equiv\left(\frac{\mu^{2}e^{\gamma}}{4\pi}\right)^{\epsilon}\int\frac{d^{D}P}{(2\pi)^{D}},\quad\int_{p}\equiv\left(\frac{\mu^{2}e^{\gamma}}{4\pi}\right)^{\epsilon}\int\frac{d^{d}p}{(2\pi)^{d}}, \tag{5}\]
where \(D=4-2\epsilon\) and \(d=3-2\epsilon\).
### Hard thermal loops from transport equations
As of yet, two-loop hard thermal loops are only known for quantum electrodynamics [26, 27, 28]. These calculations are quite involved and have so far been done using Feynman diagrams2. To make our calculations tractable, we instead use transport equations. This method is well-known, and is a clean way to derive hard thermal loops at leading order [12, 35, 36, 37]. Here we extend the method to the next order. Essentially we use the fact that fields with typical momenta \(p\thicksim T\) can be treated as quasiparticles. For example, we can describe electrons with the Vlasov equation:
Footnote 2: See [34] for results with general external momenta.
\[\dot{N}_{F}^{\pm}+\vec{v}\cdot\vec{\nabla}N_{F}^{\pm}\pm e\left(\vec{E}+\vec{ v}\times\vec{B}\right)\cdot\vec{\nabla}^{p}N_{F}^{\pm}=0. \tag{6}\]
If we now assume that the electrons are driven slightly away from equilibrium by the electric field, we can expand the electron distribution as
\[N_{F}^{\pm}=\frac{1}{2}-n_{F}-\delta n_{F}^{\pm},\quad v\cdot \partial\delta n_{F}^{\pm}(\vec{p},x)=\mp e\vec{v}\cdot\vec{E}\frac{d}{dp}n_{ F}(p), \tag{7}\] \[v^{\mu}=(1,\vec{v}),\quad\vec{v}\equiv\frac{\vec{p}}{p^{0}}, \quad n_{F}(p)=\left(e^{p/T}+1\right)^{-1}. \tag{8}\]
The photon self-energy then follows from the electron current [12, 35]:
\[\partial_{\mu}F^{\nu\mu}=e\left\langle\bar{\Psi}\gamma^{\nu}\Psi\right\rangle\thicksim e\int_{P}v^{\nu}(N_{F}^{+}-N_{F}^{-}), \tag{9}\]
where \(\left\langle\,\cdot\,\right\rangle\) denotes the average over hard modes with characteristic momenta \(p\thicksim T\). That is
\[\partial_{\mu}F^{\nu\mu}=-e\int\frac{d^{4}p}{(2\pi)^{4}}\text{Tr}\left[\not{p}\gamma^{\nu}\right]\Delta_{F}(p)=2e\int\frac{d^{3}p}{(2\pi)^{3}}v^{\nu}\left[N^{+}(p,x)-N^{-}(p,x)\right] \tag{10}\] \[=4e^{2}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{v^{\nu}\vec{v}\cdot\vec{E}(x)}{v\cdot K}n_{F}^{\prime}(p)=-\frac{e^{2}T^{2}}{3}\int\frac{d\Omega_{v}}{4\pi}\frac{v^{\nu}\vec{v}\cdot\vec{E}(x)}{v\cdot K}\equiv\Pi^{\nu\mu}A_{\mu}, \tag{11}\]
where \(\vec{E}=-\vec{\nabla}A^{0}-\dot{\vec{A}}\).
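The angular averages over the direction of \(\vec{v}\) appearing here (and in the two-loop expressions below) reduce to one-dimensional integrals over \(\cos\theta\). As a quick numerical illustration (ours, not part of the paper), the basic average can be checked against its standard closed form, using the mostly-plus convention \(v\cdot K=-k^{0}+|\vec{k}|\cos\theta\) and taking \(k^{0}>|\vec{k}|>0\):

```python
import numpy as np
from scipy.integrate import quad

def angular_average_numeric(k0, k):
    """int dOmega_v/(4 pi) 1/(v.K)  with  v.K = -k0 + k*cos(theta)  (mostly-plus)."""
    val, _ = quad(lambda c: 0.5 / (k * c - k0), -1.0, 1.0)
    return val

def angular_average_closed(k0, k):
    """Standard closed form -(1/(2k)) * log((k0 + k)/(k0 - k)), for k0 > k > 0."""
    return -np.log((k0 + k) / (k0 - k)) / (2.0 * k)

k0, k = 2.0, 0.7
print(angular_average_numeric(k0, k), angular_average_closed(k0, k))   # should agree
```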
Note that the kinetic approach works because quantum fields with \(p\thicksim eT\) behave classically at high temperatures. In generic situations we have no right to expect classical equations of motion. We should also remember that scattering processes become important at time scales of order \(t\thicksim(e^{4}T)^{-1}\)[11, 23, 24, 25], and that our results only hold for soft fields: \(\dot{A}\thicksim\vec{\nabla}A\thicksim(eT)A\).
### Using kinetic theory beyond leading order
There are two ways that we can go about applying the kinetic approach at two loops. First, we can include resummed self-energies directly in the transport equations and use this to derive effective particle distributions [37]. While possible, this approach involves evaluating self-energies at finite external momentum. Instead we elect to only use leading-order transport equations--two-loop results are then obtained by calculating corrections to the fermion current \(\left\langle\bar{\Psi}\gamma^{\nu}\Psi\right\rangle\). At first glance it seems like we are back to brute-force evaluating diagrams. Be that as it may, working with currents is considerably easier than calculating self-energies. And as we shall see, the results for different kinds of particles involve the same compact expressions.
### Two-loop hard thermal loops
Let us demonstrate our approach for quantum electrodynamics. The two-loop contribution to the electron current is shown in figure 0(a):
\[\left\langle\bar{\Psi}\gamma^{\mu}\Psi\right\rangle_{\text{2-loop}}=e^{2}\int_{PQ}F^{\mu}\left\{\Delta_{F}(P)\Delta_{V}(Q)\left[\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+Q)\right]\right.\] \[\left.+\Delta_{F}(P)\Delta_{F}(P+Q)\left[\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\right]\right.\] \[\left.+\Delta_{F}(P+Q)\Delta_{V}(Q)\left[\Delta^{R}(P)\Delta^{A}(P)\right]\right\}, \tag{2.12}\]
where \(F^{\mu}=\text{Tr}\not{\!\!\!p}\gamma^{\mu}\not{\!\!\!p}\gamma^{\alpha}\left( \not{\!\!\!p}+Q\right)\gamma_{\alpha}=-4(D-2)\left[(P+Q)^{2}p^{\mu}-P^{2}q^{ \mu}-Q^{2}p^{\mu}\right]\).
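The Dirac trace quoted for \(F^{\mu}\) can be verified numerically at \(D=4\) with explicit gamma matrices satisfying \(\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}\) in the mostly-plus metric; the following check is our own sanity test, not part of the paper's derivation:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])                 # mostly-plus metric
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]
zero2, eye2 = np.zeros((2, 2)), np.eye(2)
gam = [1j * np.block([[eye2, zero2], [zero2, -eye2]])]          # i * (Dirac gamma^0)
gam += [1j * np.block([[zero2, s], [-s, zero2]]) for s in sig]  # i * (Dirac gamma^i)

def slash(P):                                      # P_mu gamma^mu (g is diagonal)
    return sum(g[m, m] * P[m] * gam[m] for m in range(4))

def dot(A, B):
    return sum(g[m, m] * A[m] * B[m] for m in range(4))

rng = np.random.default_rng(4)
P, Q = rng.normal(size=4), rng.normal(size=4)
inner = sum(g[a, a] * gam[a] @ slash(P + Q) @ gam[a] for a in range(4))  # gamma^a (P+Q)-slash gamma_a
lhs = np.array([np.trace(slash(P) @ gam[mu] @ slash(P) @ inner) for mu in range(4)])
rhs = -4.0 * (4 - 2) * (dot(P + Q, P + Q) * P - dot(P, P) * Q - dot(Q, Q) * P)
print(np.allclose(lhs, rhs))                       # True
```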
To evaluate the integrals we have to define
\[\delta(P^{2})\Delta^{R/A}(P). \tag{2.13}\]
This expression contains terms with two delta functions--these must be regulated. To do so we use the original approach [30]:
\[\pi\delta(P^{2})\Delta^{R/A}(P)=\frac{-i}{P^{2}\mp i\eta p^{0}} \frac{\eta}{P^{4}+\eta^{2}}=\pm p^{0}\left[\frac{\eta}{P^{4}+\eta^{2}}\right]^ {2}-i\frac{P^{2}\eta}{(P^{4}+\eta^{2})^{2}} \tag{2.14}\] \[=\pm p^{0}\left[\pi\delta(P^{2})\right]^{2}-\frac{i}{2}\frac{ \partial}{\partial p_{0}^{2}}\left[\pi\delta(P^{2})\right]. \tag{2.15}\]
For a given topology all \(\left[\pi\delta(P^{2})\right]^{2}\) terms cancel, while the remaining pieces can be handled by integration-by-parts.
As an example, consider
\[\int_{PQ}F^{\mu}\Delta_{F}(P)\Delta_{V}(Q)\left[\Delta^{R}(P)\Delta^{R}(P+Q)+ \Delta^{A}(P)\Delta^{A}(P+Q)\right]. \tag{2.16}\]
After using equation (2.14) we find
\[\pi\delta(P^{2})\big{[}\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+Q)\big{]} \tag{17}\] \[=p^{0}\big{[}\pi\delta(P^{2})\big{]}^{2}\big{[}\Delta^{R}(P+Q)-\Delta^{A}(P+Q)\big{]}-\frac{i}{2}\frac{\partial}{\partial p_{0}^{2}}\big{[}\pi\delta(P^{2})\big{]}\big{[}\Delta^{R}(P+Q)+\Delta^{A}(P+Q)\big{]}.\]
The first term vanishes, so we are left with the second term. Now, for the \(P^{2}q^{\mu}\) term to contribute, the \(\frac{\partial}{\partial p_{0}^{2}}\) derivative must hit \(P^{2}\). So this term is proportional to
\[\int_{PQ}q^{\mu}\Delta_{F}(P)\Delta_{V}(Q)\bigg{[}\frac{1}{(P+Q)^{2}}\bigg{]}. \tag{18}\]
Naively we expect a collinear (\(\vec{p}\parallel\vec{q}\)) divergence from the angular integration, but these cancel once we sum all contributions.
The \(p^{\mu}(P+Q)^{2}\) factor results in a term proportional to
\[\int_{PQ}\frac{N_{V}(q)}{qp}\left\{\big{[}\partial_{0}^{p}N_{F}-\partial_{0}^{p}\overline{N}_{F}\big{]}v_{p}^{\mu}-\frac{v_{p}^{\mu}-n^{\mu}}{p}(N_{F}-\overline{N}_{F})\right\},\quad n^{\mu}=\big{(}1,\vec{0}\big{)}\,. \tag{19}\]
Finally, the \(Q^{2}p^{\mu}\) term does not contribute as \(\Delta_{V}(Q)\) sets \(Q^{2}=0\).
The remaining terms in \(\big{\langle}\bar{\Psi}\gamma^{\mu}\Psi\big{\rangle}_{\text{2-loop}}\) are obtained in the same way. After performing the integrals and using the formulas in appendix A, we find
\[\Pi^{\mu\nu}_{\text{NLO}}(K)=-\frac{e^{4}T^{2}}{8\pi^{2}}\int\frac{d\Omega_{v} }{4\pi}\left\{v^{\mu}v^{\nu}\bigg{[}\frac{(k^{0})^{2}}{(v\cdot K)^{2}}-\frac{2 k^{0}}{v\cdot K}\bigg{]}+[v^{\mu}n^{\nu}+n^{\mu}v^{\nu}]\frac{k^{0}}{v\cdot K}-n^{ \mu}n^{\nu}\right\}, \tag{20}\]
which can be compared with the leading-order self-energy
\[\Pi^{\mu\nu}_{\text{LO}}(K)=-\frac{e^{2}T^{2}}{3}\int\frac{d\Omega_{v}}{4\pi }\bigg{[}n^{\mu}n^{\nu}+v^{\mu}v^{\nu}\frac{k_{0}}{v\cdot K}\bigg{]}. \tag{21}\]
This result is in agreement with previous calculations [27, 28]. For completeness we have to add power-corrections. This is done in section 3.4.
### Procedure for general diagrams
Irrespective of the diagram or particle, the only terms that contribute are of the form
\[\int_{PQ}(ap^{\mu}+bq^{\mu})(P+Q)^{2}\Delta_{X}(P)\Delta_{Y}(Q)\big{[}\Delta^ {R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+Q)\big{]}. \tag{22}\]
The piece going with \(p^{\mu}\) gives terms proportional to
\[\int_{PQ}\frac{N_{Y}(q)}{qp}\left\{\big{[}\partial_{0}^{p}N_{X}(p)-\partial_{0}^{p}\overline{N}_{X}(p)\big{]}\,v_{p}^{\mu}-\frac{v_{p}^{\mu}-n^{\mu}}{p}(N_{X}(p)-\overline{N}_{X}(p))\right\}, \tag{23}\]
and the term multiplying \(q^{\mu}\) gives terms of the form
\[\int_{PQ}v_{q}^{\mu}\left(N_{Y}(q)-\overline{N}_{Y}(q)\right)\frac{1}{p^{2}} \left\{\partial_{0}^{p}\big{[}N_{X}(p)+\overline{N}_{X}(p)\big{]}-\frac{1}{p} \left(N_{X}(p)+\overline{N}_{X}(p)\right)\right\}. \tag{2.24}\]
In our example we only had the first type, but the second type of terms appears in non-abelian theories. Physically the first Lorentz structure corresponds to deviations from the ballistic approximation:
\[v_{p}^{\mu}=\frac{p^{\mu}}{p^{0}}\to v_{p}^{\mu}-\frac{m^{2}}{2p^{2}}\Big{[}v_ {p}^{\mu}-n^{\mu}\Big{]}+\ldots, \tag{2.25}\]
where \(m^{2}\thicksim\int p^{-1}n_{B/F}(p)\) represents hard charges obtaining a thermal mass.
The second structure, on the other hand, represents a renormalization of the hard distributions themselves. Connected with this, the momentum integral in equation (2.24) contains divergences3.
Footnote 3: These cancel once counter-term insertions are added.
We also note that the calculation is simpler in Feynman gauge. In particular, scalar and vector currents contain terms of the form
\[\left\langle A_{\mu}^{a}R^{i}R^{j}\right\rangle,\quad\left\langle A_{\mu}^{a}A_{\nu}^{b}A^{\nu,c}\right\rangle, \tag{2.26}\]
which at two loops give the diagrams shown in figures 2a and 2b. However, in Feynman gauge these diagrams vanish. In addition, the ghost current shown in figure 1d does not contribute at two loops in Feynman gauge.
## 3 Generic models
We denote scalar particles by \(i,j,k,\ldots\); vector particles by \(a,b,c,\ldots\); and fermions by \(I,J,K,\ldots\). To parametrize a general model we use the Lagrangian [38; 39; 40; 41]
\[\mathcal{L}= -\frac{1}{2}R_{i}(-\delta_{ij}\partial_{\mu}\partial^{\mu}+\mu_{ij})R_{j}-\frac{1}{4}F_{\mu\nu}^{a}F^{\mu\nu,b}\delta_{ab}-\frac{1}{2\xi_{a}}(\partial_{\mu}A^{a,\mu})^{2}\] \[-\partial^{\mu}\overline{\eta}^{a}\partial_{\mu}\eta^{a}+i\psi^{\dagger,I}\overline{\sigma}^{\mu}\partial_{\mu}\psi_{I}-\frac{1}{2}(M^{IJ}\psi_{I}\psi_{J}+\text{h.c.})+\mathcal{L}_{\text{int}}\] \[\mathcal{L}_{\text{int}}= -\frac{1}{4!}\lambda^{ijkm}R_{i}R_{j}R_{k}R_{m}-\frac{1}{2}(Y^{iIJ}R_{i}\psi_{I}\psi_{J}+\text{h.c.}) \tag{3.1}\] \[+g_{J}^{a,I}A_{\mu}^{a}\psi^{\dagger,J}\overline{\sigma}^{\mu}\psi_{I}-g_{jk}^{a}A_{\mu}^{a}R_{j}\partial^{\mu}R_{k}-\frac{1}{2}g_{jn}^{a}g_{kn}^{b}A_{\mu}^{a}A^{\mu,b}R_{j}R_{k}-g^{abc}A^{\mu,a}A^{\nu,b}\partial_{\mu}A_{\nu}^{c}\] \[-\frac{1}{4}g^{abe}g^{cde}A^{\mu,a}A^{\nu,b}A_{\mu}^{c}A_{\nu}^{d}+g^{abc}A_{\mu}^{a}\eta^{b}\partial^{\mu}\overline{\eta}^{c}\,.\]
In this notation \(R_{i}\) are scalar fields in a real basis; \(A_{\mu}^{a}\) are vector bosons; \(\eta^{a}\) are ghosts; and \(\psi_{I}\) are Weyl fermions [42]. The sigma matrices are defined as
\[\sigma^{\mu}=\big{(}\mathbb{1},\sigma^{i}\big{)},\quad\overline{\sigma}^{\mu} =\left(-\mathbb{1},\sigma^{i}\right)\,, \tag{3.2}\]
and satisfy
\[\left\{\sigma_{\mu},\overline{\sigma}_{\nu}\right\}=-2g_{\mu\nu},\quad g_{\mu\nu}=\text{diag}\left(-1,+1,+1,+1\right). \tag{3.3}\]
The couplings are normalized such that for the Standard-model we have
\[\delta_{ab}\text{Tr}\big{[}g_{V}^{a}g_{V}^{b}\big{]}=-24g_{s}^{2} -6g_{w}^{2},\quad\delta_{ab}\text{Tr}\big{[}g_{S}^{a}g_{S}^{b}\big{]}=-3g_{w}^ {2}-g_{Y}^{2},\] \[\delta_{ab}\text{Tr}\big{[}g_{F}^{a}g_{F}^{b}\big{]}=N_{F}\left(1 6g_{s}^{2}+6g_{w}^{2}+\frac{10}{3}g_{Y}^{2}\right).\]
For a generic model these coupling tensors can be calculated by hand, but they are also straightforward to find from GroupMath [43].
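For illustration (this snippet is ours, not part of the paper), the Standard-Model values quoted above follow from elementary group-theory counting, assuming \(g^{abc}_{V}=g\,f^{abc}\) for each simple factor, that \(N_{F}\) counts complete generations, and the standard hypercharge assignments for the left-handed Weyl fields.

```python
from fractions import Fraction as F

# Sanity check (illustrative) of the quoted Standard-Model normalizations.
# Vector trace: delta_ab Tr[g_V^a g_V^b] = -g^2 f^{acd} f^{acd} = -g^2 N(N^2-1) per SU(N).
adj = lambda N: N * (N * N - 1)
print("delta_ab Tr[g_V g_V] =", f"-{adj(3)} g_s^2 - {adj(2)} g_w^2")   # -24 g_s^2 - 6 g_w^2

# Fermion trace per generation: each Weyl fermion in a fundamental adds T(R) = 1/2 per
# generator of that gauge group; hypercharge adds Y^2 per field component.
weyl = [  # (SU(3) dim, SU(2) dim, Y), assuming Y(Q,u^c,d^c,L,e^c) = (1/6,-2/3,1/3,-1/2,1)
    (3, 2, F(1, 6)),   # Q
    (3, 1, F(-2, 3)),  # u^c
    (3, 1, F(1, 3)),   # d^c
    (1, 2, F(-1, 2)),  # L
    (1, 1, F(1, 1)),   # e^c
]
tr_s = sum(F(1, 2) * 8 * d2 for d3, d2, _ in weyl if d3 == 3)   # 8 gluon generators
tr_w = sum(F(1, 2) * 3 * d3 for d3, d2, _ in weyl if d2 == 2)   # 3 weak generators
tr_y = sum(d3 * d2 * Y * Y for d3, d2, Y in weyl)
print("delta_ab Tr[g_F g_F] per generation:", tr_s, tr_w, tr_y)  # 16, 6, 10/3
```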
To calculate hard thermal loops we use resummed distributions. For a general model these are [12, 44]
\[N_{V}^{\pm}\to N_{V}^{ab,\pm}=\delta^{ab}\bigg{[}\frac{1}{2}+n_{B}(p^{0}) \bigg{]}+\delta N_{V}^{ab,\pm}(p^{0},\overline{p}), \tag{3.4}\] \[N_{S}^{\pm}\to N_{S}^{ij,\pm}=\delta^{ij}\bigg{[}\frac{1}{2}+n_{B}(p^{0} )\bigg{]}+\delta N_{S}^{ij,\pm}(p^{0},\overline{p}),\] (3.5) \[N_{F}^{\pm}\to N_{F,J}^{I,\pm}=\delta_{J}^{I}\bigg{[}\frac{1}{2}-n_{F} (p^{0})\bigg{]}-\delta N_{F,J}^{I,\pm}(p^{0},\overline{p}), \tag{3.6}\]
where
\[n_{B}(p)=\left(e^{p/T}-1\right)^{-1},\quad n_{F}(p)=\left(e^{p/T }+1\right)^{-1}. \tag{3.7}\]
We can condense the notation further:
\[\delta N_{V}^{ab}\equiv-ig^{abc}\delta N_{V}^{c},\quad\delta N_{ S}^{ij}\equiv-ig_{ij}^{c}\delta N_{S}^{c},\quad\delta N_{F,J}^{I}\equiv g_{J}^{c, I}\delta N_{F}^{c}, \tag{3.8}\]
where the distributions satisfy4
Footnote 4: We are for the moment omitting higher-point functions. These are calculated in section 4.
\[v\cdot\partial\,\delta N_{X}^{\pm,a}=\mp\vec{v}\cdot\vec{E}^{a}n_{X}^{\prime}(p),\quad\vec{E}^{a}=-\dot{\vec{A}}^{a}-\vec{\nabla}A^{0,a}. \tag{3.9}\]
Figure 1: Figures a and b represent corrections to the fermion current; figures b, c, e, k, j, and e represent corrections to the vector current; figure d represents corrections to the ghost current; figures f, f, m, and h represent corrections to the scalar current.
Figure 2: Additional diagrams that contribute to the vector self-energy at next-to-leading order. Diagrams c and d correspond to mass insertions, and diagrams a and b vanish in Feynman gauge.
### Conventions and structure of the calculation
All correlators that contribute at next-to-leading order are shown in figures 1 and 2, and the details are given in appendices B, C, D, and E. We use
\[\Pi_{1}^{\mu\nu} =\int\frac{d\Omega_{\nu}}{4\pi}\bigg{(}n^{\mu}n^{\nu}+k^{0}\frac{v ^{\mu}v^{\nu}}{v\cdot K}\bigg{)},\quad n^{\mu}=\big{(}1,\vec{0}\big{)} \tag{3.10}\] \[\Pi_{2}^{\mu\nu} =\int\frac{d\Omega_{\nu}}{4\pi}\left\{v^{\mu}v^{\nu}\bigg{[}\frac{ (k^{0})^{2}}{(v\cdot K)^{2}}-\frac{2k^{0}}{v\cdot K}\bigg{]}+[v^{\mu}n^{\nu}+n ^{\mu}v^{\nu}]\frac{k^{0}}{v\cdot K}-n^{\mu}n^{\nu}\right\}, \tag{3.11}\]
to signify the two Lorentz structures that appear. Note that these satisfy \(K_{\mu}\Pi^{\mu\nu}=0\), so the self-energy is automatically transverse.
To derive the self-energy we need various currents:
\[\partial_{\mu}F^{\nu\mu,a}=j_{F}^{a,\nu}+j_{S}^{a,\nu}+j_{g}^{a,\nu}+j_{V}^{a,\nu}. \tag{3.12}\]
The fermion current is given by
\[j_{F}^{a,\nu}=g_{I}^{a,J}\left\langle\psi^{\dagger,I}\overline{ \sigma}^{\nu}\psi_{J}\right\rangle. \tag{3.13}\]
The scalar current is
\[j_{S}^{a,\nu}=\frac{1}{2!}g_{ij}^{a}\left\langle\partial^{\nu}R _{i}R_{j}-R_{i}\partial^{\nu}R_{j}\right\rangle. \tag{3.14}\]
The ghost current is
\[j_{g}^{a,\nu}=g^{abc}\left\langle\eta^{b}\partial^{\nu}\overline{\eta}^{c}\right\rangle. \tag{3.15}\]
Finally, the vector current is
\[j_{V}^{a,\nu}=-g^{abc}\left\langle\partial_{\mu}A^{\nu,b}A^{ \mu,c}+A^{\nu,b}\partial\cdot A^{c}+A_{\mu}^{b}\partial^{\nu}A^{\mu,c}-A^{b} \cdot\partial A^{\nu,c}\right\rangle. \tag{3.16}\]
### One-loop hard thermal loops
As mentioned, one-loop results are well known [45, 46, 12, 35]. With our notation the results are
\[\Pi_{\text{LO}}^{\mu\nu}=\Pi_{V}^{\mu\nu}+\Pi_{F}^{\mu\nu}+\Pi_ {S}^{\mu\nu}, \tag{3.17}\]
where
\[\Pi_{V}^{\mu\nu} =-(D-2)\text{Tr}\big{[}g_{V}^{a}g_{V}^{b}\big{]}\int_{p}n_{B}^{ \prime}(p)\Pi_{1}^{\mu\nu}=\frac{T^{2}}{3}\text{Tr}\big{[}g_{V}^{a}g_{V}^{b} \big{]}\Pi_{1}^{\mu\nu}+\mathcal{O}(\epsilon), \tag{3.18}\] \[\Pi_{F}^{\mu\nu} =2\text{Tr}\big{[}g_{F}^{a}g_{F}^{b}\big{]}\int_{p}n_{F}^{\prime} (p)\Pi_{1}^{\mu\nu}=-\frac{T^{2}}{6}\text{Tr}\big{[}g_{F}^{a}g_{F}^{b}\big{]} \Pi_{1}^{\mu\nu}+\mathcal{O}(\epsilon),\] (3.19) \[\Pi_{S}^{\mu\nu} =-\text{Tr}\big{[}g_{S}^{a}g_{S}^{b}\big{]}\int_{p}n_{B}^{\prime} (p)\Pi_{1}^{\mu\nu}=\frac{T^{2}}{6}\text{Tr}\big{[}g_{S}^{a}g_{S}^{b}\big{]} \Pi_{1}^{\mu\nu}+\mathcal{O}(\epsilon). \tag{3.20}\]
### Two-loop hard thermal loops
At two loops various diagrams introduce factors of \(D=4-2\epsilon\); below we have only kept the \(\mathcal{O}(\epsilon^{0})\) contribution, but the full results are given in the appendices. We separate the result as
\[\Pi^{\mu\nu,ab}_{\text{NLO}}=-\Big{[}\Pi^{\mu\nu,ab}_{\text{V}}+ \Pi^{\mu\nu,ab}_{\text{SV}}+\Pi^{\mu\nu,ab}_{\text{FV}}+\Pi^{\mu\nu,ab}_{\text {SF}}\Big{]}, \tag{3.21}\]
signifying pure vector, scalar-vector, fermion-vector, and scalar-fermion-vector type interactions respectively. All repeated indices are summed.
Let us start with the pure-vector contribution:
\[\Pi^{\mu\nu,ab}_{\text{V}}=11T^{2}\frac{\log(\frac{\mu e^{\gamma}}{4\pi T})-\frac{1}{22}}{36\pi^{2}}g^{adc}_{V}g^{cef}_{V}g^{dfm}_{V}g^{emb}_{V}\Pi^{\mu\nu}_{1}-\frac{T^{2}}{12\pi^{2}}g^{adc}_{V}g^{cef}_{V}g^{dfm}_{V}g^{emb}_{V}\Pi^{\mu\nu}_{2}. \tag{3.22}\]
The scalar-vector contribution is
\[\Pi^{\mu\nu,ab}_{\text{SV}}=\frac{T^{2}}{192\pi^{2}}\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]}_{jl}\,\lambda^{jlnn}\Pi^{\mu\nu}_{2}+\frac{1}{8\pi^{2}}\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]}_{ij}\,\mu^{ij}\Pi^{\mu\nu}_{2}\] \[-T^{2}\frac{\log\frac{\mu e^{\gamma}}{4\pi T}+1}{288\pi^{2}}\text{Tr}\big{[}g^{a}_{S}g^{c}_{S}\big{]}\text{Tr}\big{[}g^{c}_{S}g^{b}_{S}\big{]}\Pi^{\mu\nu}_{1}-\frac{T^{2}}{32\pi^{2}}\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}g^{c}_{S}g^{c}_{S}\big{]}\Pi^{\mu\nu}_{2}\] \[-T^{2}\frac{\log\frac{\mu e^{\gamma}}{4\pi T}}{24\pi^{2}}g^{ace}_{V}g^{bdc}_{V}\text{Tr}\big{[}g^{d}_{S}g^{e}_{S}\big{]}\Pi^{\mu\nu}_{1}+\frac{T^{2}}{48\pi^{2}}g^{ace}_{V}g^{bdc}_{V}\text{Tr}\big{[}g^{d}_{S}g^{e}_{S}\big{]}\Pi^{\mu\nu}_{2}\] \[+T^{2}\frac{\log\frac{\mu e^{\gamma}}{4\pi T}-3}{72\pi^{2}}\left\{\text{Tr}\big{[}g^{a}_{S}g^{c}_{S}\big{]}\text{Tr}\big{[}g^{c}_{V}g^{b}_{V}\big{]}+\text{Tr}\big{[}g^{a}_{V}g^{c}_{V}\big{]}\text{Tr}\big{[}g^{c}_{S}g^{b}_{S}\big{]}\right\}\Pi^{\mu\nu}_{1}. \tag{3.23}\]
The fermion-vector contribution is
\[\Pi^{\mu\nu,ab}_{\text{FV}}=T^{2}\frac{\log\frac{\mu e^{\gamma}}{4\pi T}}{24\pi^{2}}g^{ace}_{V}g^{bcd}_{V}\text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\Pi^{\mu\nu}_{1}-T^{2}\frac{\log\frac{\mu e^{\gamma}}{4\pi T}-\frac{1}{2}+\log(4)}{72\pi^{2}}\text{Tr}\big{[}g^{a}_{F}g^{c}_{F}\big{]}\text{Tr}\big{[}g^{c}_{F}g^{b}_{F}\big{]}\Pi^{\mu\nu}_{1}\] \[-T^{2}\frac{\log\frac{\mu e^{\gamma}}{4\pi T}+\frac{3}{2}-8\log(2)}{288\pi^{2}}\left\{\text{Tr}\big{[}g^{a}_{F}g^{c}_{F}\big{]}\text{Tr}\big{[}g^{c}_{V}g^{b}_{V}\big{]}+\text{Tr}\big{[}g^{a}_{V}g^{c}_{V}\big{]}\text{Tr}\big{[}g^{c}_{F}g^{b}_{F}\big{]}\right\}\Pi^{\mu\nu}_{1}\] \[+\frac{T^{2}}{16\pi^{2}}\text{Tr}\big{[}g^{c}_{F}g^{c}_{F}g^{a}_{F}g^{b}_{F}\big{]}\Pi^{\mu\nu}_{2}-\frac{T^{2}}{48\pi^{2}}g^{ace}_{V}g^{bcd}_{V}\text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\Pi^{\mu\nu}_{2}+\frac{1}{8\pi^{2}}\text{Tr}\big{[}g^{a}_{F}M_{F}M^{\dagger}_{F}g^{b}_{F}\big{]}\Pi^{\mu\nu}_{2}. \tag{3.24}\]
And finally, the mixed fermion-scalar contribution is
\[\Pi^{\mu\nu,ab}_{\text{SF}}=T^{2}\frac{5\log\frac{\mu e^{\gamma}}{4\pi T}-1+8\log(2)}{576\pi^{2}}\left\{\text{Tr}\big{[}g^{a}_{S}g^{c}_{S}\big{]}\text{Tr}\big{[}g^{c}_{F}g^{b}_{F}\big{]}+\text{Tr}\big{[}g^{a}_{F}g^{c}_{F}\big{]}\text{Tr}\big{[}g^{c}_{S}g^{b}_{S}\big{]}\right\}\Pi^{\mu\nu}_{1}\] \[+\frac{T^{2}}{32\pi^{2}}\big{[}g^{a}_{F}g^{b}_{F}\big{]}^{I}_{J}\left(YY^{c}\right)^{J}_{I}\Pi^{\mu\nu}_{2}+\frac{T^{2}}{192\pi^{2}}\big{[}g^{a}_{S}g^{b}_{S}\big{]}_{ij}\left(YY^{c}+Y^{c}Y\right)^{ij}\Pi^{\mu\nu}_{2}. \tag{3.25}\]
In the traces over generators the contractions are made with the conventions
\[\text{Tr}\big{[}g^{a}_{V}g^{b}_{V}\big{]}=g^{acd}g^{bdc},\quad \text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]}=g^{a}_{ij}g^{b}_{ji},\quad\text{Tr} \big{[}g^{a}_{F}g^{b}_{F}\big{]}=g^{a,I}_{J}g^{b,J}_{I}. \tag{3.26}\]
Note that our two-loop results in equation (3.22) ensure that
\[\Pi^{\mu\nu}=\Pi^{\mu\nu}_{\text{LO}}+\Pi^{\mu\nu}_{\text{NLO}}, \tag{3.27}\]
is renormalization-scale invariant. As such one should choose \(\mu\thicksim T\) to ensure that no large logarithms are present.
### Power corrections from one-loop diagrams
Power corrections modify the kinetic terms and are, for example, responsible for anomalous dimensions. We forgo using transport equations since the diagrams are straightforward to evaluate [27, 28, 47].
We use a convention where the Debye mass is given by
\[(m_{D}^{2})^{ab}=-\lim_{k^{0}\to 0}\Pi^{00,ab}_{\text{NLO}}\,, \tag{3.28}\]
with \(\Pi^{\mu\nu,ab}_{\text{NLO}}\) defined by equation (3.21). This means that we have rescaled our vector fields to make the \(A^{0}\) kinetic term canonical when \(k^{0}=0\). To wit, we have moved all renormalization-scale dependence (and some finite pieces) away from the power corrections5. The original results--before field-redefinitions--are given in appendix E.4.
Footnote 5: Our convention makes the Lorentz structure easy at two loops, in addition, the result is manifestly renormalization-scale invariant.
That said, the scalar loop gives
\[\Pi^{\mu\nu,ab}_{S}(K)=-\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]}\int_{P} \left(2P+K\right)^{\mu}\left(2P+K\right)^{\nu}\Delta_{S}(P)\Delta^{R}(P+K). \tag{3.29}\]
We are only interested in the sub-leading correction scaling as \(K^{2}\thicksim k^{2}\thicksim(gT)^{2}\). After expanding the integral, and adding counter-terms, we find
\[g_{\mu\nu}\Pi^{\mu\nu,ab}_{S}(K)=\text{Tr}\big{[}g^{a}_{S}g^{b} _{S}\big{]}\frac{K^{2}}{16\pi^{2}}\left\{-\frac{2}{3}+k^{0}L(K)\right\}, \tag{3.30}\] \[\Pi^{00,ab}_{S}(K)=-\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]} \frac{k^{2}}{16\pi^{2}}\left\{\frac{1}{3}\frac{(k^{0})^{2}}{k^{2}}(k^{0}L(K)- 1)\right\} \tag{3.31}\]
The fermion loop gives
\[\Pi^{\mu\nu,ab}_{F}(K)=\text{Tr}\big{[}g^{a}_{F}g^{b}_{F}\big{]} \int_{P}F^{\mu\nu}\Delta_{F}(P)\Delta^{R}(P+K), \tag{3.32}\] \[F^{\mu\nu}=2\big{[}-g^{\mu\nu}\big{(}K\cdot P+P^{2}\big{)}+2p^{ \mu}p^{\nu}+k^{\mu}p^{\nu}+k^{\nu}p^{\mu}\big{]}\,. \tag{3.33}\]
After expanding the integral, and adding counter-terms, we find
\[g_{\mu\nu}\Pi_{F}^{\mu\nu,ab}(K)=\mathrm{Tr}\big{[}g_{F}^{a}g_{F}^{b} \big{]}\frac{K^{2}}{16\pi^{2}}\left\{\frac{4}{3}+4k^{0}L(K)\right\}, \tag{3.34}\] \[\Pi_{F}^{00,ab}(K)=-\mathrm{Tr}\big{[}g_{F}^{a}g_{F}^{b}\big{]} \frac{k^{2}}{16\pi^{2}}\left\{\frac{2}{3}k^{0}\left(3-\frac{(k^{0})^{2}}{k^{2} }\right)L(K)+\frac{2}{3}\frac{(k^{0})^{2}}{k^{2}}\right\}.\]
For non-abelian diagrams we group ghosts and vectors together. After adding counter-terms we find
\[g_{\mu\nu}\Pi_{V}^{\mu\nu,ab}(K)=\mathrm{Tr}\big{[}g_{V}^{a}g_{V} ^{b}\big{]}\frac{K^{2}}{16\pi^{2}}\left\{\frac{4}{3}+10k^{0}L(K)\right\}, \tag{3.35}\] \[\Pi_{V}^{00,ab}(K)=-\mathrm{Tr}\big{[}g_{V}^{a}g_{V}^{b}\big{]} \frac{k^{2}}{16\pi^{2}}\left\{\frac{2}{3}k^{0}\left(6-\frac{(k^{0})^{2}}{k^{2} }\right)L(K)+\frac{2(k^{0})^{2}}{3k^{2}}\right\}.\]
### Transverse and longitudinal self-energies
It is useful to write the vector self-energy in terms of transverse and longitudinal components [48]:
\[\Pi^{\mu\nu}=\Pi_{T}\Pi_{T}^{\mu\nu}+\Pi_{L}P_{L}^{\mu\nu},\qquad P _{T}^{ij}=\delta^{ij}-\frac{p^{i}p^{j}}{p^{2}},\quad P_{L}^{\mu\nu}=g^{\mu\nu}- \frac{K^{\mu}K^{\nu}}{K^{2}}-P_{T}^{\mu\nu}. \tag{3.36}\]
We then find
\[\Pi_{T}=\frac{1}{d-1}\left[g_{\mu\nu}\Pi^{\mu\nu}+\frac{K^{2}}{k^{ 2}}\Pi^{00}\right],\quad\Pi_{L}=-\frac{K^{2}}{k^{2}}\Pi^{00}. \tag{3.37}\]
Since our results are built from the Lorentz structures \(\Pi_{1}^{\mu\nu}\) and \(\Pi_{2}^{\mu\nu}\) defined in equation 3.10, we only need to find the traces of these. To wit
\[\Pi_{1}^{00}=1-k^{0}L[K],\quad g_{\mu\nu}\Pi_{1}^{\mu\nu}=-1, \tag{3.38}\] \[\Pi_{2}^{00}=-1-\frac{(k^{0})^{2}}{K^{2}},\quad g_{\mu\nu}\Pi_{2}^{\mu\nu}=1+2k^{0}L[K],\] (3.39) \[L[K]\equiv\frac{1}{2k}\log\frac{k^{0}+k+i\eta}{k^{0}-k+i\eta},\quad\eta=0^{+}, \tag{3.40}\]
where we have used known results for the angular integrals [49, 28].
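These angular traces are easy to verify numerically. The sketch below is purely illustrative: it assumes the mostly-plus metric \(g_{\mu\nu}=\text{diag}(-1,+1,+1,+1)\) with \(v^{\mu}=(1,\hat{v})\), so that \(v\cdot K=-k^{0}+k\,\hat{v}\cdot\hat{k}\), and evaluates the angular averages by Gauss-Legendre quadrature; a timelike point \(k^{0}>k\) is chosen so that \(v\cdot K\) never vanishes and the branch cut of \(L[K]\) is avoided.

```python
import numpy as np

# Illustrative check of eqs. (3.38)-(3.40): angular averages of Pi_1 and Pi_2,
# assuming v.K = -k0 + k*cos(theta) (mostly-plus metric), at a timelike point.
k0, k = 2.0, 1.0
L = np.log((k0 + k) / (k0 - k)) / (2.0 * k)

c, w = np.polynomial.legendre.leggauss(200)   # nodes/weights in cos(theta)
w = w / 2.0                                   # d(Omega)/(4 pi) -> (1/2) d cos(theta)
vK = -k0 + k * c

Pi1_00 = np.sum(w * (1.0 + k0 / vK))
Pi2_00 = np.sum(w * (k0**2 / vK**2 - 1.0))
tr_Pi2 = np.sum(w * (1.0 - 2.0 * k0 / vK))    # g_{mu nu} Pi_2^{mu nu}; note v.v = 0, n.n = -1

K2 = -k0**2 + k**2
print(Pi1_00, 1.0 - k0 * L)        # ~ -0.0986 each
print(Pi2_00, -1.0 - k0**2 / K2)   # ~  0.3333 each
print(tr_Pi2, 1.0 + 2.0 * k0 * L)  # ~  3.197  each
```

The trace \(g_{\mu\nu}\Pi_{1}^{\mu\nu}=-1\) needs no quadrature, since \(v\cdot v=0\) holds pointwise on the light cone.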
### Examples
Consider now the gluon self-energy with \(N_{q}\) quarks:
\[\Pi_{\text{NLO}}^{\mu\nu} =\frac{g_{s}^{4}(N_{q}+6)T^{2}\Big{[}(4N_{q}-66)\log\frac{\mu e^{ \gamma}}{4\pi T}-2N_{q}+8N_{q}\log(2)+3\Big{]}}{288\pi^{2}}\Pi_{1}^{\mu\nu} \tag{3.41}\] \[-\frac{g_{s}^{4}(N_{q}-18)T^{2}}{48\pi^{2}}\Pi_{2}^{\mu\nu}. \tag{3.42}\]
Next, the Standard-Model. The gluon self-energy is
\[\Pi_{\text{NLO}}^{\mu\nu}= -\frac{g_{s}^{4}T^{2}\Big{[}14\log\frac{\mu e^{\nu}}{4\pi T}+3-16 \log(2)\Big{]}}{8\pi^{2}}\Pi_{1}^{\mu\nu} \tag{3.43}\] \[-\frac{g_{s}^{2}T^{2}\Big{[}-48g_{s}^{2}+27g_{w}^{2}+11g_{Y}^{2}+ 12y_{t}^{2}\Big{]}}{192\pi^{2}}\Pi_{2}^{\mu\nu}. \tag{3.44}\]
Here \(g_{s}\) is the strong coupling constant, \(g_{w}\) the weak one, \(g_{Y}\) is the hypercharge coupling, and \(y_{t}\) is the top-Yukawa coupling.
Finally, take an SO(10) gauge theory with \(N_{F}\) fermions in the spinor (16) representation, and a \(45\oplus 16\) Higgs. The gauge self-energy is
\[\Pi_{\text{NLO}}^{\mu\nu} =\frac{g_{x}^{4}(N_{F}+14)T^{2}\Big{[}(2N_{F}-41)\log\frac{\mu e^{ \gamma}}{4\pi T}+N_{F}(\log(16)-1)+5\Big{]}}{36\pi^{2}}\Pi_{1}^{\mu\nu} \tag{3.45}\] \[-\frac{g_{x}^{4}(71N_{F}-1415)T^{2}}{192\pi^{2}}\Pi_{2}^{\mu\nu}. \tag{3.46}\]
## 4 Higher-point hard thermal loops
So far we have focused on the self-energy, but higher-point correlators can be extracted from the results in section 3.3. In particular, it is well-known that at one loop all higher-point functions can be derived by using [44, 12, 35]
\[\big{[}\nu\cdot D,\delta N_{X}^{\pm}\big{]}^{a}=\mp\vec{v}\cdot \vec{E}^{a}n_{X}^{\prime}(p), \tag{4.1}\]
where the covariant derivative is \(\big{[}D_{\mu}N\big{]}^{a}=\partial_{\mu}N^{a}+g^{abc}A_{\mu}^{b}N^{c}\). We can then expand the currents as
\[j_{\mu}^{a}=\Pi_{\mu\nu}^{ab}A^{\nu,b}+\frac{1}{2}\Gamma_{\mu\nu \rho}^{abc}A^{\nu,b}A^{\rho,c}+\ldots \tag{4.2}\]
To find these higher-point functions we can use the results in section 3.1 together with the replacements:
\[C^{ab}\Pi_{1}^{\mu\nu} \to C^{ab}\int\frac{d\Omega_{\nu}}{4\pi}\bigg{[}\frac{\nu^{\mu} \vec{v}\cdot\vec{E}}{\nu\cdot D}\bigg{]}^{b}, \tag{4.3}\] \[D^{ab}\Pi_{2}^{\mu\nu} \to D^{ab}\int\frac{d\Omega_{\nu}}{4\pi}\left\{\nu^{\mu}\left(- \frac{D_{0}}{(\nu\cdot D)^{2}}-\frac{1}{\nu\cdot D}\right)\vec{v}\cdot\vec{E}- \frac{\nu^{\mu}-n^{\mu}}{\nu\cdot D}\vec{v}\cdot\vec{E}\right\}^{b}, \tag{4.4}\]
where now \(E^{a,i}=\partial^{i}A^{0,a}-\partial^{0}A^{i,a}+g^{abc}A^{i,b}A^{0,c}\).
Consider the first Lorentz-structure, which coincides with the one-loop one. The corresponding three-point vertex is well-known [35, 45, 50]:
\[C^{ab}\Pi_{1}^{\mu\nu}\to-iC^{ae}g^{ebc}\Gamma_{1}^{\mu\nu\rho}(P,Q,R),\ \ \ \ \Gamma_{1}^{\mu\nu\rho}(P,Q,R)=\int\frac{d\Omega_{\nu}}{4\pi}\frac{\nu^{\mu} \nu^{\nu}\nu^{\rho}}{\nu\cdot P}\bigg{[}\frac{q^{0}}{\nu\cdot Q}-\frac{r^{0}}{ \nu\cdot R}\bigg{]}. \tag{4.5}\]
Likewise, it is possible to find the three-point vertex corresponding to \(\Pi_{2}^{\mu\nu}\) by expanding the covariant derivatives. Yet it is easier to exploit that this new Lorentz structure arises because the ballistic approximation ceases to hold:
\[v_{p}^{\mu}\to v_{p}^{\mu}-\frac{m^{2}}{2p^{2}}(v_{p}^{\mu}-n^{\mu})+\ldots \tag{4.6}\]
As such we can use equation 4.5--together with the correction above6--and collect all terms proportional to \(m^{2}\):
Footnote 6: We have to remember that the original term depends on \(\int dpp^{2}n^{\prime}(E)\), which when \(E\approx p+\frac{m^{2}}{2p}\) becomes \(\int dp\Big{[}p^{2}n^{\prime}(p)-\frac{m^{2}}{2}n^{\prime}(p)\Big{]}\).
\[D^{ab}\Pi_{2}^{\mu\nu} \to-iD^{ae}g^{ebc}\Pi_{2}^{\mu\nu\rho}(P,Q,R), \tag{4.7}\] \[\Gamma_{2}^{\mu\nu\rho}(P,Q,R)=\int\frac{d\Omega_{\nu}}{4\pi} \frac{-2\nu^{\mu}v^{\nu}v^{\rho}+(n^{\mu}v^{\nu}v^{\rho}+\text{perm})}{v\cdot P }\Bigg{[}\frac{q^{0}}{v\cdot Q}-\frac{r^{0}}{v\cdot R}\Bigg{]}\] (4.8) \[+\int\frac{d\Omega_{\nu}}{4\pi}\frac{v^{\mu}v^{\nu}v^{\rho}}{v \cdot P}\Bigg{[}\frac{p^{0}}{v\cdot P}\Bigg{(}\frac{q^{0}}{v\cdot Q}-\frac{r^{ 0}}{v\cdot R}\Bigg{)}+\Bigg{(}\frac{(q^{0})^{2}}{(v\cdot Q)^{2}}-\frac{(r^{0}) ^{2}}{(v\cdot R)^{2}}\Bigg{)}\Bigg{]}. \tag{4.9}\]
Note that the Ward identity is automatically satisfied since
\[P_{\mu}\Gamma_{1}^{\mu\nu\rho}(P,Q,R) =\Pi_{1}^{\nu\rho}(Q)-\Pi_{1}^{\nu\rho}(R), \tag{4.10}\] \[P_{\mu}\Gamma_{2}^{\mu\nu\rho}(P,Q,R) =\Pi_{2}^{\nu\rho}(Q)-\Pi_{2}^{\nu\rho}(R). \tag{4.11}\]
The same procedure can be applied to four-point interactions, which at one-loop are given in [45, 50].
## 5 Conclusions
In this paper we have provided hard thermal loops, for vector boson self-energies, at two loops for any renormalizable model. This was made possible by using transport equations to simplify the calculations--thus extending known one-loop methods [12, 35, 36, 37]. In particular, this approach provides compact expressions for each particle type; the result is independent of the matching scale; and known results for Debye masses are reproduced in the appropriate limit. We also demonstrated how higher-point functions can be extracted from the results.
The results of this paper can be used to study particle production in the early universe; transport coefficients; and wall speeds in first-order phase transitions [51]. The effect of including two-loop contributions is likely significant for the strong interaction, since the coupling constant is rather large \(N\alpha_{S}\sim 0.3\) when \(T\sim 100\) GeV.
The next step is to provide two-loop hard thermal loops for fermion propagators. Performing these calculations for quarks, by using Feynman diagrams, is likely arduous beyond leading order. However, we expect that similar methods as used in this paper will prove useful in this endeavour.
## Acknowledgements
I am grateful to the high-energy physics group at the University of Granada for their hospitality as this work was being finished. I also want to thank Geraldine Servant for help with the manuscript. This work has been supported by the Swedish Research Council, project number VR:2021-00363 and by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306.
## Appendix A Derivatives of resummed distributions
The distributions satisfy
\[\delta N_{X}^{\pm}=\mp\frac{e\,p_{\alpha}F^{\alpha\beta}}{p\cdot\partial}\,\partial_{\beta}^{p}\,n_{X}(p^{0}).\] (A.1)
So taking the derivative \(\frac{\partial}{\partial p^{0}}\) and going to momentum space we find
\[\int pdp\frac{\partial}{\partial p^{0}}\delta N_{X}^{\pm}\to\mp\int dp\left[ \frac{k^{0}}{(\nu\cdot K)^{2}}-\frac{1}{\nu\cdot K}\right]\vec{v}\cdot\vec{E}( K)n_{X}^{\prime}(p).\] (A.2)
### Momentum integrals
We use dimensional regularization where \(d=3-2\epsilon\). When evaluating the self-energy we encounter the integrals
\[T^{2\epsilon}\int dpp^{d-1}n_{B}^{\prime}(p) = -\frac{1}{3}\pi^{2}T^{2}+\frac{1}{3}\pi^{2}T^{2}(-24\log(A)+3+ \log(4)+2\log(\pi))\epsilon,\] (A.3) \[T^{2\epsilon}\int dpp^{d-1}n_{F}^{\prime}(p) = -\frac{1}{6}\pi^{2}T^{2}+\frac{1}{6}\pi^{2}T^{2}(-24\log(A)+3+ \log(16)+2\log(\pi))\epsilon,\] (A.4) \[T^{2\epsilon}\int dpp^{d-3}n_{B}(p) = -\frac{T}{2\epsilon}+\mathcal{O}(\epsilon),\quad T^{2\epsilon} \int dpp^{d-3}n_{F}(p)=T\log(2)+\mathcal{O}(\epsilon),\] (A.5) \[T^{2\epsilon}\int dpp^{d-3}n_{B}^{\prime}(p) = \frac{1}{2}+\mathcal{O}(\epsilon),\quad T^{2\epsilon}\int dpp^{d- 3}n_{F}^{\prime}(p)=-\frac{1}{2}+\mathcal{O}(\epsilon)\] (A.6)
Here A\(\approx 1.28243\) is the Glaisher constant.
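The \(\epsilon^{0}\) values above, and the appearance of the Glaisher constant at \(\mathcal{O}(\epsilon)\), can be checked numerically; the sketch below (illustrative only, with \(T=1\)) evaluates the convergent integrals directly and extracts the \(\mathcal{O}(\epsilon)\) coefficient of the first bosonic integral by a central finite difference in \(\epsilon\).

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check of the epsilon^0 values quoted above (T = 1).
nF = lambda p: 1.0 / (np.exp(p) + 1.0)
nB_p = lambda p: -np.exp(p) / np.expm1(p) ** 2
nF_p = lambda p: -np.exp(p) / (np.exp(p) + 1.0) ** 2

I = lambda f, a: quad(lambda p: p**a * f(p), 1e-12, 60.0, limit=400)[0]
print(I(nB_p, 2), -np.pi**2 / 3)   # int dp p^2 nB'(p)
print(I(nF_p, 2), -np.pi**2 / 6)   # int dp p^2 nF'(p)
print(I(nF, 0), np.log(2))         # int dp nF(p)
print(I(nF_p, 0), -0.5)            # int dp nF'(p)

# O(epsilon) coefficient of int dp p^{2-2 eps} nB'(p), compared with the quoted
# combination containing the Glaisher constant A.
A = 1.2824271291
f = lambda eps: quad(lambda p: p**(2 - 2 * eps) * nB_p(p), 1e-12, 60.0, limit=400)[0]
eps = 1e-4
deriv = (f(eps) - f(-eps)) / (2 * eps)
print(deriv, (np.pi**2 / 3) * (-24 * np.log(A) + 3 + np.log(4) + 2 * np.log(np.pi)))
```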
## Appendix B Non-abelian gauge theories
Note that all (collinear) divergences resulting from angular integrations cancel. We will however obtain divergences--real ones--from radial integrations: \(\int_{p}\frac{n_{B}(p)}{p^{2}}\sim-\frac{T}{2\epsilon}\). The \(\epsilon\) poles from these terms cancel once zero-temperature counterterms are used.
Throughout this and the following sections we keep factors of \(D=4-2\epsilon\) explicit. There are four contributions. Corrections to the vector current are shown in figures 1c, 1e, 1f, 1g, 1h, 1i, and 1j; sunset corrections to the ghost current are shown in figure 1d. We also note that diagram 2b vanishes.
### Vector current
We start with the vector current. The sunset diagram gives
\[\Pi^{\mu}_{\tt 1C}=-\frac{1}{4}g^{abc}g^{hgn}g^{def}\int_{PQ}F^{\mu} \left\{\delta^{en}\delta^{cd}\Delta^{bh}_{V}(P)\Delta^{gf}_{V}(Q)\left[\Delta ^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+Q)\right]\right.\] \[\left.\delta^{cd}\delta^{gf}\Delta^{bh}_{V}(P)\Delta^{en}_{V}(P+Q )\left[\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\right]\right.\] \[\left.\delta^{cd}\delta^{bh}\Delta^{gf}_{V}(Q)\Delta^{en}_{V}(P+Q )\Delta^{A}(P)\Delta^{R}(P)\right\} \tag{11}\]
where
\[F^{\mu}(P,Q)= (P+Q)^{2}\left[(5-4D)p^{\mu}+2(2D-3)q^{\mu}\right]+P^{2}\left[(5-6 D)p^{\mu}\right]\] \[+Q^{2}\left[(11-8D)p^{\mu}+2(3-2D)q^{\mu}\right]. \tag{12}\]
We can rewrite the terms so that they all multiply \(\Lambda^{ab}=g^{anc}_{V}g^{cef}_{V}g^{nfm}_{V}g^{emb}_{V}\). Explicitly,
\[\Pi^{\mu}_{\tt 1C}=\Lambda^{ab}\frac{(4D-5)}{4}\int_{PQ}n_{B}(q) \frac{1}{pq}\left\{\partial^{p}_{0}\left[N(p)_{V}-\overline{N}_{V}(p)\right] v^{\mu}_{p}-\frac{v^{\mu}_{p}-n^{\mu}}{p}\left(N_{V}(p)-\overline{N}_{V}(p) \right)\right\}^{b}\] \[+\Lambda^{ab}\frac{(2D-3)}{4}\int_{PQ}v^{\mu}_{p}\frac{1}{q^{2} }\left(n_{B}(q)-qn^{\prime}_{B}(q)\right)\left(N_{V}(p)-\overline{N}_{V}(p) \right)^{b} \tag{13}\]
We now turn to the bubble diagram. We find
\[\Pi^{\mu}_{\tt 1j}=\frac{1}{4}g^{abl}_{V}g^{ecg}_{V}g^{fdg}_{V}\delta^{dl} \int_{PQ}F^{\mu}\Delta^{bc}_{V}(P)\left(\Delta^{R}(P)+\Delta^{A}(p)\right) \Delta^{fg}_{V}(Q),\quad F^{\mu}=-4(D-1)^{2}p^{\mu}.\]
The result is
\[\Pi^{\mu}_{\tt 1j}=-\frac{(D-1)^{2}}{2}\Lambda^{ab}\int_{PQ}n_{B}(q) \frac{1}{pq}\left\{\partial^{p}_{0}\left[N_{V}(p)-\overline{N}_{V}(p)\right] v^{\mu}_{p}-\frac{v^{\mu}_{p}-n^{\mu}}{p}\left(N_{V}(p)-\overline{N}_{V}(p) \right)\right\}^{b}.\]
### Ghost diagrams
We now consider diagrams with internal ghosts. There are two contributions: The ghost-current with an internal vector and the vector current with a ghost loop--shown in figures
1d and 1e respectively. The latter diagram vanishes, so we only need the former one:
\[\Pi^{\mu}_{\ref{eq:1}}=\frac{1}{2}g^{abc}_{V}g^{hgn}_{V}g^{def}_{V} \int_{PQ}F^{\mu}\left\{\delta^{en}\delta^{cd}\Delta^{bh}_{V}(P) \Delta^{,gf}_{V}(Q)\left[\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+ Q)\right]\right.\] \[\left.\delta^{cd}\delta^{sf}\Delta^{bh}_{V}(P)\Delta^{en}_{V}(P+ Q)\left[\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\right]\right.\] \[\left.\delta^{cd}\delta^{bh}\Delta^{gf}_{V}(Q)\Delta^{en}_{V}(P+ Q)\Delta^{A}(P)\Delta^{R}(P)\right\}, \tag{11}\] \[F^{\mu}=(P+Q)^{2}(q^{\mu}-p^{\mu}/2)+P^{2}p^{\mu}/2-Q^{2}(q^{\mu }+\frac{3}{2}p^{\mu}).\]
We find
\[\Pi^{\mu}_{\ref{eq:1}}= -\Lambda^{ab}\frac{1}{4}\int_{PQ}\frac{n_{B}(q)}{pq}\left\{\partial ^{p}_{0}\left[N_{V}(p)-\overline{N}_{V}(p)\right]\nu^{\mu}_{p}-\frac{\nu^{\mu} _{p}-n^{\mu}}{p}\left(N_{V}(p)-\overline{N}_{V}(p)\right)\right\}^{b}\] \[-\frac{1}{4}\Lambda^{ab}\int_{PQ}\nu^{\mu}_{p}\frac{n_{B}(q)-qn^{ \prime}_{B}(q)}{q^{2}}\left(N_{V}(p)-\overline{N}_{V}(p)\right)^{b}. \tag{12}\]
### Total contribution from non-abelian diagrams
We find
\[\Pi^{\mu\nu}_{\ref{eq:1}}+\Pi^{\mu\nu}_{\ref{eq:1}}+\Pi^{\mu\nu}_{\ref{eq:1 }}=T^{2}\frac{(D-2)^{2}}{48\pi^{2}}\Lambda^{ab}\Pi^{\mu\nu}_{2}-2(D-2)\Lambda^ {ab}I_{\text{VV}}\Pi^{\mu\nu}_{1}, \tag{13}\]
where
\[I_{\text{VV}}=\int_{PQ}n^{\prime}_{B}(q)\frac{1}{p^{2}}\left(n^{\prime}_{B}(p) -\frac{n_{B}(p)}{p}\right)=T^{2}\left\{\frac{1}{48\pi^{2}\epsilon}+\frac{(24 \log(A)+4\log\frac{\mu}{4\pi T}+2\gamma-1)}{48\pi^{2}}\right\},\]
and \(\Lambda^{ab}=g^{anc}_{V}g^{cef}_{V}g^{nfm}_{V}g^{emb}_{V}\).
## Appendix C Fermion diagrams
### C.1 Fermion current
We now turn to diagrams with fermions. We will omit collinear divergences as they cancel once we sum fermion and vector currents. The fermion current gives
\[\Pi^{\mu}_{\ref{eq:1}}=g^{an}_{I}g^{cJ}_{K}g^{dL}_{M}\int_{PQ}F^{ \mu}(P,Q)\left\{\delta^{M}_{N}\delta^{K}_{L}\Delta^{I}_{F,J}(P)\Delta^{cd}_{V} (Q)\left[\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+Q)\right]\right.\] \[\left.\delta^{ac}\delta^{M}_{N}\Delta^{I}_{F,J}(P)\Delta^{K}_{F,L }(P+Q)\left[\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\right]\right.\] \[\left.\delta^{M}_{N}\delta^{I}_{J}\Delta^{K}_{F,L}(P+Q)\Delta^{ ac}_{V}(Q)\left[\Delta^{R}(P)\Delta^{A}(P)\right]\right\},\] \[F^{\mu}=-2(D-2)\left[(P+Q)^{2}p^{\mu}-P^{2}q^{\mu}-Q^{2}p^{\mu}\right] \tag{14}\]
We find
\[\Pi^{\mu}_{\ref{eq:1}a}=\frac{D-2}{2}\text{Tr}g^{c}_{F}g^{c}_{F}g^{a}_{F}g^{b}_{F} \int_{PQ}\frac{N_{B}(q)-N_{F}(q)}{pq}\left\{\partial_{0}^{p}\left[N_{F}^{+}(p)- N_{F}^{-}(p)\right]v_{p}^{\mu}-\frac{v_{p}^{\mu}-n^{\mu}}{p}(N_{F}^{+}(p)-N_{F}^{-}(p)) \right\}^{b}.\]
Here we should use the leading-order relation: \(N_{B}(q)-N_{F}(q)=n_{B}(q)+n_{F}(q)\). After inserting the resummed propagators and performing the integrals we find
\[\Pi^{\mu\nu}_{\ref{eq:1}a}=-\frac{(D-2)T^{2}}{32\pi^{2}}\text{Tr}g^{c}_{F}g^{ c}_{F}g^{a}_{F}g^{b}_{F}\Pi^{\mu\nu}_{2}.\] (C.2)
### C.2 Vector current
Consider now fermion corrections to the vector current:
\[\Pi^{\mu}_{\ref{eq:1}b}=-\frac{1}{2}g^{ace}_{V}g^{d,J}_{J}g^{f,K}_ {L} \int_{PQ}F^{\mu}\delta^{J}_{K}\delta^{ef}\,\Delta^{cd}_{V}(P) \Delta^{L}_{F,I}(Q)\left[\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A} (P+Q)\right]\] \[+\delta^{I}_{L}\delta^{ef}\,\Delta^{cd}_{V}(P)\Delta^{J}_{F,K}(P+ Q)\left[\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\right]\] \[+\delta^{ef}\,\delta^{cd}\,\Delta^{J}_{F,K}(P+Q)\Delta^{L}_{F,I}( Q)\left[\Delta^{R}(P)\Delta^{A}(P)\right],\] (C.3)
where
\[F^{\mu}=-2i\left\{(P+Q)^{2}\left[(D-2)p^{\mu}+2q^{\mu}\right]-(D-2)P^{2}p^{ \mu}+Q^{2}\left[(D-4)p^{\mu}-2q^{\mu}\right]\right\}.\] (C.4)
We are left with
\[\Pi^{\mu}_{\ref{eq:1}b}=-\frac{(D-2)}{2}g^{ace}_{V}g^{bcd}_{V} \text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\int_{PQ}\frac{n_{F}(q)}{qp}\left\{ \partial_{0}^{p}\left[N_{V}(p)-\overline{N}_{V}(p)\right]v_{p}^{\mu}-\frac{( v_{p}^{\mu}-n^{\mu})}{p}\left(N_{V}(p)-\overline{N}_{V}(p)\right)\right\}^{b}\] \[+ig^{ace}_{V}\text{Tr}\big{[}(g^{c}_{F}g^{e}_{F}-g^{e}_{F}g^{e}_{ F})g^{b}_{F}\big{]}\int_{PQ}(N_{F}(q)-\overline{N}_{F}(q))^{b}\frac{v_{q}^{\mu}}{p^{2 }}\left\{n_{B}^{\prime}(p)-\frac{n_{B}(p)}{p}\right\}\] (C.5)
This result can be further simplified because \(-ig^{ace}_{V}\text{Tr}\big{[}(g^{c}_{F}g^{e}_{F}-g^{e}_{F}g^{c}_{F})g^{b}_{F}\big{]}=g^{ace}_{V}g^{bcd}_{V}\text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\), so the entire diagram is proportional to the structure \(g^{ace}_{V}g^{bcd}_{V}\text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\). In any case, after inserting the resummed propagators and performing the integrals we find
\[\Pi^{\mu\nu}_{\ref{eq:1}b}=(D-2)\frac{T^{2}}{96\pi^{2}}g^{ace}_{V}g^{bcd}_{V} \text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\Pi^{\mu\nu}_{2}+T^{2}g^{ace}_{V}g^{ bcd}_{V}\text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}I_{\text{FV}}\Pi^{\mu\nu}_{1},\] (C.6)
where
\[I_{\text{FV}}=\int_{PQ}n^{\prime}_{F}(q)\frac{1}{p^{2}}\left(n^{\prime}_{B}(p) -\frac{n_{B}(p)}{p}\right)=T^{2}\left\{\frac{1}{48\pi^{2}\epsilon}-\frac{-24 \log A-4\log\frac{\mu}{4\pi T}-2\gamma+1+\log(4)}{48\pi^{2}}\right\},\]
contains divergences that cancel against counter-term insertions.
### Yukawa diagrams
There are two diagrams with Yukawa couplings, one from the fermion current and one from the scalar current. The sum of the two gives
\[\Pi^{\mu\nu}_{1\hskip-1.0pt1}+\Pi^{\mu\nu}_{1\hskip-1.0pt1}=-\frac{T^{2}}{32\pi ^{2}}\big{[}g^{a}_{F}g^{b}_{F}\big{]}^{J}_{J}\big{(}YY^{c}\big{)}^{J}_{I}\Pi^{ \mu\nu}_{2}-\frac{T^{2}}{192\pi^{2}}\big{[}g^{a}_{S}g^{b}_{S}\big{]}_{ij}\left( YY^{c}+Y^{c}Y\right)^{ij}\Pi^{\mu\nu}_{2}\] (C.7)
## Appendix D Scalar Diagrams
### Scalar current
The vector sunset gives
\[\Pi^{\mu}_{1\hskip-1.0pt1}=\frac{1}{2}g^{a}_{ni}g^{c}_{jk}g^{d}_{ lm}\int_{PQ}F^{\mu}\Big{\{}\delta^{kl}\delta^{mn}\Delta^{ij}_{S}(P) \Delta^{cd}_{V}(Q)\big{[}\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A} (P+Q)\big{]}\] \[+\delta^{mn}\delta^{cd}\Delta^{ij}_{S}(P)\Delta^{kl}_{S}(P+Q) \big{[}\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\big{]}\] \[+\delta^{ij}\delta^{mn}\Delta^{cd}_{V}(Q)\Delta^{kl}_{S}(P+Q) \big{[}\Delta^{R}(P)\Delta^{A}(P)\big{]}\Big{\}}\,,\] (D.1) \[F^{\mu}=4ip^{\mu}\left\{(P+Q)^{2}+P^{2}-\frac{1}{2}Q^{2}\right\}.\]
Since the scalar-vector bubble give the same combination of couplings we can group the diagrams together. We find
\[\Pi^{\mu}_{1\hskip-1.0pt1}+\Pi^{\mu}_{1\hskip-1.0pt1}= -\frac{D+2}{8}\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}g^{c}_{S}g^{c}_{S} \big{]}\int_{PQ}\frac{n_{B}(q)}{qp}\left\{\partial^{p}_{0}\big{[}N_{S}(p)- \overline{N}_{S}(p)\big{]}\,v^{\mu}_{p}-\frac{(v^{\mu}_{p}-n^{\mu})}{p}\left( N_{S}(p)-\overline{N}_{S}(p)\right)\right\}^{b}.\]
After performing the integrals we obtain
\[\Pi^{\mu\nu}_{1\hskip-1.0pt1}+\Pi^{\mu\nu}_{1\hskip-1.0pt1}= \frac{T^{2}(D+2)}{192\pi^{2}}\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}g^{c}_{S}g^{c} _{S}\big{]}\Pi^{\mu\nu}_{2}.\] (D.2)
The scalar-bubble gives
\[\Pi^{\mu}_{1\hskip-1.0pt1}=\frac{1}{4}g^{a}_{li}\lambda^{jlmn} \int_{PQ}F^{\mu}\delta^{kl}\Delta^{ij}_{S}(P)\Delta^{mn}_{S}(Q)\big{[}\Delta^{ R}(P)+\Delta^{A}(P)\big{]}\] \[F^{\mu}=2p^{\mu},\] (D.3)
which simplify to
\[\Pi^{\mu}_{1\hskip-1.0pt1}=\frac{1}{8}\big{[}g^{a}_{S}g^{b}_{S}\big{]}_{jl} \,\lambda^{jlnn}\int_{PQ}\frac{n_{B}(q)}{qp}\left\{\partial^{p}_{0}\big{[}N_{ S}(p)-\overline{N}_{S}(p)\big{]}\,v^{\mu}_{p}-\frac{(v^{\mu}_{p}-n^{\mu})}{p} \left(N_{S}(p)-\overline{N}_{S}(p)\right)\right\}^{b}.\]
After performing the integrals we find
\[\Pi^{\mu\nu}_{1\hskip-1.0pt1}=-\frac{T^{2}}{192\pi^{2}}\text{Tr}\big{[}g^{a}_ {S}g^{b}_{S}\big{]}_{jl}\,\lambda^{jlnn}\Pi^{\mu\nu}_{2}.\] (D.4)
We can also have scalar-mass insertions from one loop diagrams:
\[\Pi^{\mu}_{2d}=ig^{a}_{ki}\mu^{jk}\int_{P}\Delta^{ij}_{S}(P)\left(\Delta^{R}(P)+\Delta^{A}(P)\right),\] (D.5)
which gives
\[\Pi^{\mu}_{2d}=\left[g^{a}_{S}g^{b}_{S}\right]_{ij}\mu^{jj}\int_{p}\frac{1}{p} \left\{\partial^{p}_{0}\left[N_{S}(p)-\overline{N}_{S}(p)\right]\nu^{\mu}_{p}- \frac{(\nu^{\mu}_{p}-n^{\mu})}{p}\left(N_{S}(p)-\overline{N}_{S}(p)\right) \right\}^{b}.\]
After performing the integral we find
\[\Pi^{\mu\nu}_{2d}=-\frac{1}{8\pi^{2}}\text{Tr}\left[g^{a}_{S}g^{b}_{S}\right] _{ij}\mu^{jj}\Pi^{\mu\nu}_{2}.\] (D.6)
### Vector current
The scalar sunset gives
\[\Pi^{\mu}_{1g}=\frac{1}{2}g^{ace}_{V}g^{d}_{jn}g^{f}_{mi}\int_{PQ} F^{\mu}\left\{\delta^{nm}\delta^{ef}\Delta^{cd}_{V}(P)\Delta^{ij}_{S}(Q) \left[\Delta^{R}(P)\Delta^{R}(P+Q)+\Delta^{A}(P)\Delta^{A}(P+Q)\right]\right.\] \[\left.+\delta^{ef}\delta^{ij}\Delta^{cd}_{V}(P)\Delta^{nm}_{S}(P+ Q)\left[\Delta^{R}(P)\Delta^{A}(Q)+\Delta^{A}(P)\Delta^{R}(Q)\right]\right.\] \[\left.+\delta^{ef}\delta^{cd}\Delta^{ij}_{S}(Q)\Delta^{nm}_{S}(P+ Q)\left[\Delta^{R}(P)\Delta^{A}(P)\right]\right\},\] (D.7) \[F^{\mu}=i\left[(4q^{\mu}-2p^{\mu})(P+Q)^{2}+2P^{2}p^{\mu}-Q^{2}( 6p^{\mu}+2q^{\mu})\right],\]
or after simplifying
\[\Pi^{\mu\nu}_{1g}=-g^{ace}_{V}g^{bdc}_{V}\text{Tr}\left[g^{d}_{S} g^{f}_{S}\right]\int_{PQ}\frac{n_{B}(q)}{qp}\left\{\partial^{p}_{0}\left[N_{S}(p) -\overline{N}_{S}(p)\right]\nu^{\mu}_{p}-\frac{(\nu^{\mu}_{p}-n^{\mu})}{p} \left(N_{S}(p)-\overline{N}_{S}(p)\right)\right\}^{b}\] \[+\frac{1}{4}g^{ace}_{V}g^{bdc}_{V}\text{Tr}\left[g^{d}_{S}g^{f}_{ S}\right]\int_{PQ}(N_{V}(q)-\overline{N}_{V}(q))^{b}\frac{\nu^{\mu}_{q}}{p^{2}} \left\{d_{0}(N_{S}+\overline{N}_{S})-\frac{1}{p}(N_{S}+\overline{N}_{S})\right\}\] (D.8)
After performing the integrals we find
\[\Pi^{\mu\nu}_{1g}=\frac{T^{2}}{24\pi^{2}}g^{aec}_{V}g^{bdc}_{V}\text{Tr}\left[ g^{d}_{S}g^{e}_{S}\right]\Pi^{\mu\nu}_{2}-T^{2}I_{\text{SV}}g^{aec}_{V}g^{bdc}_{V} \text{Tr}\left[g^{d}_{S}g^{e}_{S}\right]\Pi^{\mu\nu}_{1},\] (D.9)
where
\[I_{\text{SV}}=\int_{PQ}n^{\prime}_{B}(q)\frac{1}{p^{2}}\left(n^{\prime}_{B}(p) -\frac{n_{B}(p)}{p}\right)=\left\{\frac{1}{48\pi^{2}\epsilon}+\frac{T^{2}(24 \log(A)+4\log\frac{\mu}{4\pi T}+2\gamma-1)}{48\pi^{2}}\right\},\] (D.10)
Finally, the scalar bubble gives
\[\Pi^{\mu}_{1k}=\frac{1}{2}g^{ace}_{V}H^{df}_{V,ij}\int_{PQ}F^{\mu}\delta^{ef} \Delta^{cd}_{V}(P)\Delta^{ij}_{S}(Q)\left[\Delta^{R}(P)+\Delta^{A}(P)\right], \quad F^{\mu}=-2(D-1)p^{\mu},\]
which after simplifying gives
\[\Pi^{\mu}_{1\mathbb{1}k}=\frac{(D-1)}{2}g^{aec}_{V}g^{bdc}_{V}\text{Tr}\big{[}g^{ d}_{S}g^{e}_{S}\big{]}\int_{PQ}\frac{n_{B}(q)}{qp}\left\{\tilde{\sigma}^{p}_{0} \big{[}N_{V}(p)-\overline{N}_{V}(p)\big{]}v^{\mu}_{p}-\frac{(v^{\mu}_{p}-n^{\mu} )}{p}\big{(}N_{V}(p)-\overline{N}_{V}(p)\big{)}\right\}^{b}.\]
After doing the integrals we find
\[\Pi^{\mu\nu}_{1g}+\Pi^{\mu\nu}_{1\mathbb{1}k}=-T^{2}\frac{(D-3)}{48\pi^{2}}g^{ aec}_{V}g^{bdc}_{V}\text{Tr}\big{[}g^{d}_{S}g^{e}_{S}\big{]}\Pi^{\mu\nu}_{2}-T^{2}I_ {S\text{V}}g^{aec}_{V}g^{bdc}_{V}\text{Tr}\big{[}g^{d}_{S}g^{e}_{S}\big{]}\Pi^ {\mu\nu}_{1}.\] (D.11)
## Appendix E Counter-term contributions
To renormalize we need wave-function and coupling counter-terms. These are all well known [38; 39; 40]. The anomalous dimensions are7
Footnote 7: We here omit all Yukawa couplings since their counter-term contributions cancel.
\[\gamma^{I}_{J}=\frac{1}{16\pi^{2}}\left\{-\big{[}g^{c}_{F}g^{c}_{F }\big{]}^{I}_{J}\right\},\] (E.1) \[\gamma^{ab}_{V}=\frac{1}{16\pi^{2}}\left\{-\frac{5}{3}\text{Tr} \big{[}g^{a}_{V}g^{b}_{V}\big{]}-\frac{2}{3}\text{Tr}\big{[}g^{a}_{F}g^{b}_{F }\big{]}+\frac{1}{6}\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]}\right\},\] (E.2) \[\gamma^{ab}_{g}=-\frac{1}{16\pi^{2}}\left\{\frac{1}{2}\text{Tr} \big{[}g^{a}_{V}g^{b}_{V}\big{]}\right\},\quad\gamma^{ij}_{S}=\frac{1}{16\pi^{ 2}}\left\{-2\big{[}g^{c}g^{c}\big{]}^{ij}\right\}.\] (E.3)
Next the vector and fermion-vector trilinear couplings:
\[\delta g^{a,I}_{J}=\frac{1}{32\pi^{2}\epsilon}\left\{-2\big{[}g^ {c}_{F}g^{a}_{F}g^{c}_{F}\big{]}^{I}_{J}+6ig^{abc}_{V}\big{[}g^{b}_{F}g^{c}_{ F}\big{]}^{I}_{J}-\gamma^{ab}_{V}g^{b,I}_{F,J}-g^{a,I}_{F,K}\gamma^{K}_{J}-g^{a,K}_{ F,J}\gamma^{s,I}_{K}\right\}.\] (E.4) \[\delta g^{abc}=\frac{1}{32\pi^{2}\epsilon}\left\{-2\text{Tr} \big{[}g^{a}_{V}g^{b}_{V}g^{c}_{V}\big{]}-g^{abe}_{V}\gamma^{ec}_{V}-\gamma^{ae }_{g}g^{ebc}_{V}-\gamma^{be}_{g}g^{acc}_{V}\right\}.\] (E.5)
For the scalar-coupling we only need the combination \(H^{ab}_{ij}=g^{a}_{ik}g^{b}_{kj}+g^{b}_{ik}g^{a}_{kj}\). The counter-term is
\[\delta H^{ab}_{ij}=-\frac{1}{16\pi^{2}\epsilon}\left\{\frac{8}{3}g ^{aec}_{V}g^{bef}_{V}H^{cf}_{ij}+2\big{[}g^{c}_{S}(g^{a}_{S}g^{b}_{S}+g^{b}_{S} g^{a}_{S})g^{c}_{S}\big{]}_{ij}\right.\] (E.6) \[\left.-\frac{1}{2}\Big{[}\gamma^{in}_{S}H^{ab}_{nj}+\gamma^{jn}_ {S}H^{ab}_{in}+\gamma^{ac}_{V}H^{cb}_{ij}+\gamma^{bc}_{V}H^{ac}_{ij}\Big{]}\right\}\] (E.7)
### Vector loops
Using the counter-terms from section E we find
\[\Pi^{\mu\nu}_{\text{CT,V}}=-\frac{(D-2)}{4\pi^{2}\epsilon}g^{acd}g^{def}g^{cfn }g^{bnx}\int_{p}n^{\prime}_{B}(p)\Pi^{\mu\nu}_{1},\] (E.8)
where
\[\int_{p}n^{\prime}_{B}(p)=-\frac{T^{2}}{6}-\frac{T^{2}}{6}\Big{(}24\log(A)+2 \log\frac{\mu}{4\pi T}-1\Big{)}\epsilon+\mathcal{O}(\epsilon^{2}).\] (E.9)
### Fermion loops
Using the counter-terms from section E we find
\[\Pi^{\mu\nu}_{\text{CT},\text{F}}=-\frac{1}{4\pi^{2}\epsilon}g^{ ace}_{V}g^{bcd}_{V}\text{Tr}\big{[}g^{d}_{F}g^{e}_{F}\big{]}\int_{P}n^{\prime}_{F}(p) \Pi^{\mu\nu}_{1},\] (E.10)
where
\[\int_{P}n^{\prime}_{F}(p)=-\frac{T^{2}}{12}-\frac{T^{2}}{12}\Big{(}24\log(A)+2 \log\frac{\mu}{4\pi T}-1-\log 4\Big{)}\epsilon+\mathcal{O}(\epsilon^{2}).\] (E.11)
### Scalar loops
Using the counter-terms from appendix E we find
\[\Pi^{\mu\nu}_{\text{CT},\text{S}}=\frac{1}{8\pi^{2}\epsilon}g^{ ace}_{V}g^{bdc}_{V}\text{Tr}\big{[}g^{d}_{S}g^{e}_{S}\big{]}\int_{P}n^{\prime}_{B}(p) \Pi^{\mu\nu}_{1}.\] (E.12)
### Power corrections before field redefinitions
The scalar loop gives
\[g_{\mu\nu}\Pi^{\mu\nu,ab}_{S}(K)=\text{Tr}\big{[}g^{a}_{S}g^{b}_{ S}\big{]}\frac{K^{2}}{16\pi^{2}}\left\{\frac{1}{2\epsilon}+\log\frac{\mu e ^{\gamma}}{4\pi T}+k^{0}L(K)\right\},\] (E.13) \[\Pi^{00,ab}_{S}(K)=-\text{Tr}\big{[}g^{a}_{S}g^{b}_{S}\big{]} \frac{1}{3}\frac{k^{2}}{16\pi^{2}}\left\{\frac{1}{2\epsilon}+\log\frac{\mu e^{ \gamma}}{4\pi T}+1+\frac{(k^{0})^{2}}{k^{2}}(k^{0}L(K)-1)\right\}\] (E.14)
The fermion loop gives
\[g_{\mu\nu}\Pi^{\mu\nu,ab}_{F}(K)=\text{Tr}\big{[}g^{a}_{F}g^{b} _{F}\big{]}\frac{K^{2}}{16\pi^{2}}\left\{\frac{2}{\epsilon}+4\left(\log\frac{ \mu e^{\gamma}}{4\pi T}+\log 4\right)-2+4k^{0}L(K)\right\},\] (E.15) \[\Pi^{00,ab}_{F}(K)=-\text{Tr}\big{[}g^{a}_{F}g^{b}_{F}\big{]} \frac{1}{3}\frac{k^{2}}{16\pi^{2}}\left\{\frac{2}{\epsilon}+4\left(\log\frac{ \mu e^{\gamma}}{4\pi T}+\log 4\right)-2+2k^{0}\left(3-\frac{(k^{0})^{2}}{k^{2}} \right)L(K)+2\frac{(k^{0})^{2}}{k^{2}}\right\}.\]
For non-abelian diagrams we group ghosts and vectors together, the result is
\[g_{\mu\nu}\Pi^{\mu\nu,ab}_{V}(K)=\text{Tr}\big{[}g^{a}_{V}g^{b} _{V}\big{]}\frac{K^{2}}{16\pi^{2}}\left\{\frac{5}{\epsilon}+10\log\frac{\mu e ^{\gamma}}{4\pi T}-3+10k^{0}L(K)\right\},\] (E.16) \[\Pi^{00,ab}_{V}(K)=-\text{Tr}\big{[}g^{a}_{V}g^{b}_{V}\big{]} \frac{1}{3}\frac{k^{2}}{16\pi^{2}}\left\{\frac{5}{\epsilon}+10\log\frac{\mu e ^{\gamma}}{4\pi T}-1+2k^{0}\left(6-\frac{(k^{0})^{2}}{k^{2}}\right)L(K)+2 \frac{(k^{0})^{2}}{k^{2}}\right\}.\]
|
2303.02114
|
Lag selection and estimation of stable parameters for multiple
autoregressive processes through convex programming
|
Motivated by a variety of applications, high-dimensional time series have
become an active topic of research. In particular, several methods and
finite-sample theories for individual stable autoregressive processes with
known lag have become available very recently. We, instead, consider multiple
stable autoregressive processes that share an unknown lag. We use information
across the different processes to simultaneously select the lag and estimate
the parameters. We prove that the estimated process is stable, and we establish
rates for the forecasting error that can outmatch the known rate in our
setting. Our insights on the lag selection and the stability are also of
interest for the case of individual autoregressive processes.
|
Somnath Chakraborty, Johannes Lederer, Rainer von Sachs
|
2023-03-03T17:57:04Z
|
http://arxiv.org/abs/2303.02114v1
|
Lag selection and estimation of stable parameters for multiple autoregressive processes through convex programming
###### Abstract
Motivated by a variety of applications, high-dimensional time series have become an active topic of research. In particular, several methods and finite-sample theories for individual stable autoregressive processes with known lag have become available very recently. We, instead, consider multiple stable autoregressive processes that share an unknown lag. We use information across the different processes to simultaneously select the lag and estimate the parameters. We prove that the estimated process is stable, and we establish rates for the forecasting error that can outmatch the known rate in our setting. Our insights on the lag selection and the stability are also of interest for the case of individual autoregressive processes.
+
Footnote †: _Keywords and phrases:_ regularised least square, LASSO, autoregressive process, hierarchical-group norm, dual norm, stability, sample complexity.
## 1 Introduction
Today's world of acquisition of complex data in areas as diverse as macroeconomics and finance, everyday weather predictions, brain imaging, and many more, has called for intelligent modelling approaches that avoid using a (too) high number of model parameters relative to the available sample size. Moreover, often these data are of high dimensionality - as they arise together in a panel or in the form of a multivariate vector. These stylized facts make predicting the evolution of these data into the (near) future genuinely challenging. To face this challenge, choosing a data-generating model that assumes some _common_ underlying structure relating the different components of the observed multivariate data set will not only turn out to be advantageous but also reflects the observation that the different series do not behave independently from each other - they might actually be driven by a latent (i.e. unobservable) mechanism (such as a leading economic indicator, or a global climate trend, etc., often modelled by a latent factor model). Moreover, we almost always observe _serial correlation_ between present and past observations, which traditionally has been modelled by assuming some sort of weak dependence over time (translating into dynamic latent factor models, e.g., Forni et al. (2000)).
In this context, as factor modelling does not necessarily allow for component-wise prediction, the approach of (parametric) vector autoregression (VAR) has long been a prominent tool for modeling such multivariate time series - with, in particular, the idea that the common serial dependence is limited by the existence of a common maximal lag-order for all components. However, as the number of component series is increased, VAR models have the known tendency to become overparametrized. Since this amounts to a high-dimensional parameter estimation problem, more recent possibilities to address this issue are _regularized_ approaches, such as the LASSO for estimating the parameters of these models (essentially by some kind of regularised least-squares approach, see, for example, Nardi and Rinaldo (2011)). This is in contrast to more traditional approaches (based
mostly on information criteria for lag-order selection such as AIC, BIC, etc.) which address overparametrization by selecting a low lag order, based on the assumption of short range dependence, assuming that a universal lag order applies to all components. For a good forecast performance in a high-dimensional context, these approaches turned out to fall behind the LASSO - which, until recently, did however not incorporate the notion of lag order selection. It has been only the recent work by Nicholson et al. (2020) that proposed a class of hierarchical lag structures that embed the notion of lag selection into a convex regularizer. The key modeling tool has been a group LASSO with nested groups which guarantees that the sparsity pattern of lag coefficients honors the VAR's ordered structure. For more details on the literature on dimension reduction methods which address the VAR's overparametrization problem we refer to Section 2 of the mentioned work by Nicholson et al. (2020). A clear shortcoming, however, of this approach is the necessity to model all components of the observed multivariate time series to be of the same data length, a constraint in classical VAR-modelling that cannot be circumvented.
Motivated by the approach of Nicholson et al. (2020), in this paper, we propose a method to analyse multiple stable autoregressive processes of (potentially) _different lengths_ in the framework of regularized LASSO, where the regularization is achieved via an overlapping group-norm that induces sparsity at the group level. Moreover, we show that, even in absence of any information on the maximum lag of the processes, the proposed framework estimates the true lag and the coefficients of the AR model. Finally, we show that the model fitted with the AR coefficients returned by this proposed method is stable. As our results on statistical guarantees are essentially of non-asymptotic nature - interesting even in the context of observing a single time series - we first review the (sparse) literature on those non-asymptotic results in a time series context, before we turn in more detail to the similarities and differences between our and the approach of Nicholson et al. (2020).
Most of the research in time series analysis -- until recently -- focused on deriving asymptotic behaviour of the predictors. This severely restricts applicability of these results, especially in the regime of a low sample-to-predictor ratio. Popular approaches to overcome this nuisance of dependency have been using the assumption of stability (leading to stationarity) of the data-generating process. For example, both Negahban and Wainwright (2011) and Loh and Wainwright (2012) used stability in deriving the guarantees in small sample regimes; however, these works established these results under the condition that the coefficient matrices are severely norm-bounded (namely, the sum of the operator-norms of the coefficient matrices is smaller than 1), which is much stronger than stability of the process determined by those coefficients. Recently, Basu and Michailidis (2015) made a big stride towards understanding the effect of temporal and cross-sectional dependency in the small sample regime. The underlying hypothesis in that work was that data be amenable to modelling via a stable vector autoregressive process; they tracked the restrictions on the spectral domain -- as enforced by stability of the process -- and derived non-asymptotic prediction guarantees for high-dimensional vector autoregressive processes with Gaussian white noise innovation. Several follow-up works (see _e.g._ Wong et al. (2020), Masini et al. (2022), and the references therein) then extended their results to the case where the innovations are heavy-tailed, and moreover, they derived guarantees assuming only the stationarity and finiteness of the second moment of the underlying process -- conditions weaker than stability.
In the remainder of this Introduction we go now into more details about the relation of our approach to the one of Nicholson et al. (2020). Essentially, the latter contributed by finding out that algorithmically the method by Mairal et al. (2011) (and its computationally faster amendment by Tseng (2009)) can be used in such a setting to address, in the presence of prior information on an upper bound \(L\) of the true unknown lag order, estimation of order \(L\) vector-autoregressive models of dimension \(M\), abbreviated VAR\({}_{M}(L)\) in the sequel. Our
work can now be seen as fitting a common (i.e. "diagonal vector") autoregressive model to a multivariate time series of dimension \(M\), i.e. a panel of \(M\) observed (univariate) time series of _in general not equal_ lengths \(n_{m},1\leq m\leq M\). We address the challenging question of how to choose, solely from the information available in the model for the observed data, a common appropriate lag order \(L\) that allows us to phrase and solve our problem via a penalised LASSO approach. We derive non-asymptotic bounds on the multivariate one-step ahead prediction error and estimate the collection of the autoregressive coefficients \(\beta\mathrel{\mathop{:}}=(\beta_{1}^{m},\ldots,\beta_{L}^{m})_{1\leq m\leq M}\) under the paradigm of sparseness. Assuming that there is a common true unknown lag order \(L_{0}\) that generated our \(M\) time series, our algorithm, akin to Nicholson et al. (2020) (and Mairal et al. (2011)) for fitting a common model, is based on a modification of a hierarchical group LASSO approach: We first determine an appropriate (minimal) upper bound \(L\geq L_{0}\) depending essentially on the sample size \(n_{\min}\) of the shortest observed time series component (and on \(M\), of course) from a thorough analysis of the theoretical complexity of our group-LASSO based approach. With this appropriate \(L\), necessary to embed our autoregression problem into the framework of high-dimensional multivariate regression, we transfer existing technology on LASSO estimation (with overlapping hierarchically constructed groups) to our problem. We derive statistical guarantees for the estimators \(\hat{\beta}\), solution of our aforementioned learning algorithm, and for the estimator \(\hat{L}_{0}\) (essentially taken from the support of \(\hat{\beta}\)). More specifically, we deliver non-asymptotic bounds on the multivariate one-step ahead prediction error, on the estimation error of \(\beta\), on the false discoveries for the support of \(\beta\), and quite innovatively on the stability of the fitted model. For the latter, we show that the fitted autoregressive model of order \(\hat{L}_{0}\) with estimated coefficients \(\hat{\beta}\) fulfils the conditions of the true model for stability (via a more explicit concept of \(\varepsilon\)-stability that we introduce to assess the difficulty of the statistical estimation problem).
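To make the nested-group construction concrete, the sketch below is purely illustrative (the function name, group weights, and array shapes are ours, not the paper's notation): it evaluates a hierarchical group penalty for \(M\) autoregressive coefficient vectors sharing an upper lag bound \(L\), where the \(l\)-th group collects all coefficients of lag at least \(l\) across the \(M\) processes, so that a common estimated lag \(\hat{L}_{0}\) corresponds to every group with \(l>\hat{L}_{0}\) being set to zero.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a nested "hierarchical" group penalty
# for M autoregressive processes that share an unknown lag. Group l is the joint
# tail {lags l, l+1, ..., L} taken across all M processes.
def hierarchical_group_norm(beta, weights=None):
    """beta: array of shape (M, L) of AR coefficients; returns sum_l w_l * ||beta[:, l:]||_F."""
    M, L = beta.shape
    weights = np.ones(L) if weights is None else weights
    return sum(weights[l] * np.linalg.norm(beta[:, l:]) for l in range(L))

beta = np.zeros((3, 6))                                # M = 3 series, upper bound L = 6
beta[:, :2] = [[0.5, -0.2], [0.3, 0.1], [0.4, 0.0]]    # common true lag L0 = 2
print(hierarchical_group_norm(beta))                   # only the groups reaching lags 1 and 2 are nonzero
```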
In the following paragraph we are even more explicit about the exact nature of our contributions, motivated by the existing limitations of the current approaches we found in the literature.
Current limitations and our contributions. Some of the questions that are not sufficiently addressed in recent existing work on non-asymptotic time series analysis are as follows.
* Transferring existing results from a VAR\({}_{M}(L)\) modeling approach can be cumbersome in practice for situations in which the number of available samples for each individual component time series is not the same: needing to chop off samples in order to work with the minimal individual sample size could result in weaker theoretical guarantees and practical performance (see below).
* Applying a hierarchical group norm enables us to derive prediction guarantees that depend on the _average_ number of samples per component (instead of the minimum number of samples per model, as would be the case had we translated naively to an \(M\)-dimensional VAR model). More specifically, with perfect information on the true lag \(L_{0}\) and with \(n_{1}=L_{0}+T_{1},\cdots,n_{M}=L_{0}+T_{M}\) samples for the respective \(M\) individual components, the VAR\({}_{M}(L)\) translation would result in the following provable error (see Nicholson et al. (2020)), holding with high probability: (1) \[\|\hat{\mathbf{\beta}}-\mathbf{\beta}\|_{2}=O\left(\sqrt{\frac{\log(M^{2}L_{0})}{n_{\min}M}}\right)\,,\] whereas the error from the algorithm we describe is of the order (see equation (33) below) (2) \[\|\hat{\mathbf{\beta}}-\mathbf{\beta}\|_{2}=O\left(\sqrt{\frac{\log(ML)}{D}}\right)\,.\] Here \(D=T_{1}+\cdots+T_{M}\) is the total number of "post-samples" (\(T_{m}:=n_{m}-L_{0}\)). Furthermore, our strategy results in a weaker dependency of the error on the behaviour of the reverse characteristic polynomial on the unit disk, unlike in the relevant VAR\({}_{M}(L)\) model translation (compare with Basu and Michailidis (2015)).
* Recall that a (univariate) autoregressive process \(X_{t}=a_{1}X_{t-1}+\cdots+a_{L}X_{t-L}+U_{t}\) is stable if the "reverse characteristic polynomial" \(1-a_{1}z-\cdots-a_{L}z^{L}\) has no complex roots on the closed unit disk; it is known that stability implies stationarity (see Section 2 below). But what about stability for _fitted_ autoregressive models? While this question has an affirmative answer in the special case of Yule-Walker estimation of the coefficients of a univariate autoregressive process (known however to be less efficient), the question of stability or stationarity of the process reconstructed from the parameters estimated by LASSO-based approaches does not seem to have been addressed in the literature. However, when the input observations of such algorithms are generated by a stable process, it is reasonable to expect that the reconstructed process be stable (and thus, that multi-step predictions be reliable as well).
* We show, in Theorem 4.10, that the process reconstructed from the parameters returned by our algorithm is stable when the samples available as input are generated by stable processes. As Basu and Michailidis (2015) showed, a measure of stability for an autoregressive process is, equivalently, a boundedness criterion on the spectral density, and the boundedness in the Fourier domain translates, in a sense, to 'smoothness' of the process in the temporal domain. Thus, since the stable autoregressive processes intuitively form a 'smooth' subclass, it is desirable that any algorithm for learning the parameters of processes from this smooth subclass should return estimators lying in this subclass; in this paper, this is ensured by Theorem 4.10.
It is important to mention here that the results in this paper demonstrate that our overlapping group-lasso approach yields a stable process when the underlying process is stable as well. The aim of the paper is not to choose optimal tuning parameters or absolute constants, but rather to show that suitable choices exist by proposing reasonable candidates for such parameters/constants.
Organization of the paper. The paper is organized as follows. In Section 2, we recall relevant definitions from existing literature, and we set notations. In Section 3, (1) we specify the model and formulate the learning problem as a regularized group-LASSO with overlapping group norm, where the data matrix is a block-diagonal matrix -- each block of which consists of the data matrix that treats a least-squares problem corresponding to the associated component time series; (2) we present the learning algorithm based on the group-LASSO problem. Section 4 contains the bulk of the technical contents, in particular, the proof of the statements of the main results, already presented at the end of Section 3. This Section 4 is divided into three subsections: Subsection 4.1 presents an oracle inequality bounding the one-step-ahead prediction error (Theorem 4.2), as well as a high-probability bound on the effective noise of the model (Theorem 4.3). Subsection 4.2 starts with restricted eigenvalue bounds (Proposition 4.6) for the blocks of the data matrix, and goes on to integrate the blockwise results to finally arrive at an estimate (Theorem 4.9) of the error in estimating the AR-coefficients. Finally, combining the results from these subsections, stability of the estimated AR model (Theorem 4.10) is established in Subsection 4.3. All proofs of auxiliary results are deferred to a series of Appendices.
## 2 Preliminaries
### General notations
* \(M_{d}(\mathbb{F})\) denotes the ring of \(d\times d\) matrices with entries in the field \(\mathbb{F}\in\{\mathbb{R},\mathbb{C}\}\), and for \(M\in M_{d}(\mathbb{F})\), we write \(M^{\top}\) for the transpose; if \(\mathbb{F}=\mathbb{C}\), then \(M^{\star}\) denotes the conjugate transpose.
* \(\mathbb{D}\) is the complex closed unit disk \(\mathbb{D}:=\{z\in\mathbb{C}:|z|\leq 1\}\), and its boundary is \(\partial\mathbb{D}:=\{z\in\mathbb{C}:|z|=1\}\).
* For any integer \(d>0\), if \(x\in\mathbb{R}^{d}\), then \(\|x\|_{2}=\sqrt{x^{\top}x}\), and \(\mathbb{S}^{d-1}:=\{x\in\mathbb{R}^{d}:\|x\|_{2}=1\}\); in particular, under standard identification \(\mathbb{C}=\mathbb{R}^{2}\), we have \(\mathbb{S}^{1}=\partial\mathbb{D}\).
* For integer \(n>0\), we will denote the set \(\{1,2,\cdots,n\}\) by \([n]\).
* For a vector \(\hat{\boldsymbol{\beta}}_{m}=(\hat{\boldsymbol{\beta}}_{m,1},\cdots,\hat{ \boldsymbol{\beta}}_{m,L})\) and an integer \(0<L_{0}\leq L\), we write (3) \[\hat{\boldsymbol{\beta}}_{m}(L_{0}):=(\hat{\boldsymbol{\beta}}_{m,1},\cdots, \hat{\boldsymbol{\beta}}_{m,L_{0}})\,.\]
### Notations for autoregressive process
**Conventions:**: We use \(X,Y,Z,\dots\) to denote random variables, and \(\boldsymbol{X},\boldsymbol{Y},\boldsymbol{Z},\dots\) to denote random vectors. On the other hand, we use \(a,b,c,\dots\) to denote real (or complex) constants, and \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\dots\) for vector-valued constants.
**Notations:**:
* For \(d\)-dimensional autoregressive process (4) \[X_{t}:=A_{1}X_{t-1}+\cdots+A_{L}X_{t-L}+U_{t}\,,\] and \(z\in\mathbb{C}\), we write (5) \[\mathcal{A}_{z}:=I-A_{1}z-\cdots-A_{L}z^{L}\,.\]
* \(L\) will denote an initially determined "ad-hoc" upper-bound on the true lag of the process in (4), and \(L_{0}\) the true lag.
**Definition 2.1** (Weak stationarity).: A \(d\)-dimensional time series \(\{X_{t}\}_{t\in\mathbb{Z}}\) is said to be _weakly stationary_ if the following holds: \(a)\)\(\mathbb{E}[\|X_{t}\|_{2}^{2}]<\infty\) for all \(t\in\mathbb{Z}\), \(b)\)\(\mathbb{E}[X_{t}]=\mu\) for all \(t\in\mathbb{Z}\), and \(c)\)\(\mathbb{E}[X_{t}X_{t-h}^{\top}]=\Gamma(h)\) for all \(t,h\in\mathbb{Z}\).
**Definition 2.2** (Strong stationarity).: A \(d\)-dimensional time series \(\{X_{t}\}_{t\in\mathbb{Z}}\) is said to be _strongly stationary_ if for each integer \(n>0\), and all integers \(t_{1},\cdots,t_{n},h\), the distributions of the vectors \((X_{t_{1}},\cdots,X_{t_{n}})\) and \((X_{t_{1}+h},\cdots,X_{t_{n}+h})\) are identical.
**Definition 2.3** (Autoregressive time series).: A \(d\)-dimensional time series \(\{X_{t}\}_{t\in\mathbb{Z}}\) is autoregressive of lag at most \(L>0\) if there are \(d\times d\) matrices \(A_{1},\cdots,A_{L}\) such that
\[X_{t}=A_{1}X_{t-1}+\cdots+A_{L}X_{t-L}+U_{t}\,. \tag{6}\]
holds for all \(t\in\mathbb{Z}\), for some random white noise process \(\{U_{t}\}_{t\in\mathbb{Z}}\).
Associated to each \(d\)-dimensional lag-\(L\) autoregressive process \(\{X_{t}\}_{t\in\mathbb{Z}}\) -- as in equation (6) -- is the associated order-1 process \(\mathbf{X}_{t}=\mathbf{A}\mathbf{X}_{t-1}+\mathbf{U}_{t}\), where
\[\mathbf{A}:=\begin{pmatrix}A_{1\to L},A_{L}\\ \mathbf{I}_{dL-d},\,\mathbf{0}\end{pmatrix}\,,\hskip 28.452756pt\mathbf{U}_{t}:= \begin{pmatrix}U_{t}\\ \mathbf{0}\end{pmatrix}\,, \tag{7}\]
and \(A_{1\to L}\) is the block matrix \((A_{1}\cdots A_{L-1})\).
**Definition 2.4** (Stability).: A \(d\)-dimensional lag-\(L\) autoregressive process \(\{X_{t}\}_{t\in\mathbb{Z}}\) -- as in equation 6 -- is said to be stable if \(\det(\mathbf{I}-\mathbf{A}z)\neq 0\) for \(|z|\leq 1\). Equivalently, the process is stable if \(\det(\mathcal{A}_{z})\neq 0\) for \(|z|\leq 1\).
**Definition 2.5** (Reverse characteristic polynomial).: The polynomial \(\det(\mathcal{A}_{z})\) is called the _reverse characteristic polynomial_ of the process in equation 6.
We note the equality \(\det(\mathbf{I}-\mathbf{A}z)=\det(\mathcal{A}_{z})\). In particular, the process in equation (6) is stable if and only if every eigenvalue of \(\mathbf{A}\) is inside the open unit disk.
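To make the stability criterion concrete, the following sketch (in Python/NumPy; the helper names are ours and not part of the paper) builds the companion matrix \(\mathbf{A}\) of (7) from given coefficient matrices \(A_{1},\ldots,A_{L}\) and checks whether all of its eigenvalues lie in the open unit disk. It is an illustration only, not part of the estimation procedure developed below.

```python
import numpy as np

def companion(A_list):
    """Companion matrix A of the order-1 representation in (7);
    A_list = [A_1, ..., A_L], each a (d, d) array."""
    d = A_list[0].shape[0]
    L = len(A_list)
    top = np.hstack(A_list)                                   # (A_1 ... A_L), shape (d, d*L)
    if L == 1:
        return top
    bottom = np.hstack([np.eye(d * (L - 1)), np.zeros((d * (L - 1), d))])
    return np.vstack([top, bottom])

def is_stable(A_list):
    """Stability (Definition 2.4): every eigenvalue of the companion matrix
    lies strictly inside the open unit disk."""
    return np.max(np.abs(np.linalg.eigvals(companion(A_list)))) < 1.0

# Univariate (d = 1) AR(2) example: X_t = 0.5 X_{t-1} - 0.3 X_{t-2} + U_t.
print(is_stable([np.array([[0.5]]), np.array([[-0.3]])]))    # True
```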
**Definition 2.6** (\(\epsilon\)-stability).: A stable autoregressive process, as in equation (6), is said to be \(\epsilon\)-stable for an \(\epsilon\in(0,1)\) if the following holds:
\[\epsilon\leq\min_{|z|=1}|\det(\mathcal{A}_{\mathbf{z}})|\leq\max_{|z|=1}|\det (\mathcal{A}_{\mathbf{z}})|\leq\epsilon^{-1}\,. \tag{8}\]
_Remark._: By maximum modulus principle, this is equivalent to saying that
\[\epsilon\leq\min_{|z|\leq 1}|\det(\mathcal{A}_{\mathbf{z}})|\leq\max_{|z|\leq 1}|\det( \mathcal{A}_{\mathbf{z}})|\leq\epsilon^{-1}\,.\]
The lower-bound here is a convenient quantification of the notion of stability, which demands that \(\min_{|z|\leq 1}|\det(\mathcal{A}_{\mathbf{z}})|>0\). Note that we also require the upper bound to derive our statistical guarantees.
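For a given stable process, the quantity \(\epsilon\) in (8) can be assessed numerically by evaluating \(|\det(\mathcal{A}_{z})|\) on a grid over the unit circle; the sketch below (Python; the grid size is an arbitrary choice of ours) returns the observed minimum and maximum, and any \(\epsilon\in(0,1)\) below both the minimum and the reciprocal of the maximum witnesses \(\epsilon\)-stability.

```python
import numpy as np

def modulus_range_on_circle(A_list, n_grid=2000):
    """Min and max of |det(A_z)| over |z| = 1, where A_z = I - A_1 z - ... - A_L z^L.
    For a stable process these also control the closed disk (cf. the remark above)."""
    d = A_list[0].shape[0]
    zs = np.exp(2j * np.pi * np.arange(n_grid) / n_grid)
    vals = []
    for z in zs:
        Az = np.eye(d, dtype=complex)
        for l, Al in enumerate(A_list, start=1):
            Az = Az - Al * z ** l
        vals.append(abs(np.linalg.det(Az)))
    return min(vals), max(vals)

lo, hi = modulus_range_on_circle([np.array([[0.5]]), np.array([[-0.3]])])
# Any eps in (0, 1) with eps <= lo and eps <= 1/hi satisfies the inequalities in (8).
print(lo, hi)
```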
A well-known fact about autoregressive processes is the following: see Lutkepohl (2005, proposition 2.1) for details.
**Lemma 2.7** (Stability implies weak stationarity).: _A stable autoregressive process is weakly stationary._
## 3 Statistical Model and Estimator
This section introduces our statistical model and estimator, and presents an algorithm to learn the parameters of the model from observed samples.
### Statistical Model
We start with the model. Suppose that we observe time-samples generated by \(M\) univariate autoregressive processes, for which we know a uniform upper-bound \(L\) on the true lag-order. Then, we can aggregate these \(M\) univariate autoregressive processes of lag at most \(L\)
\[X_{t}^{1} = \beta_{1}^{1}X_{t-1}^{1}+\cdots+\beta_{L}^{1}X_{t-L}^{1}+U_{t}^{1}\,;\] \[\vdots \tag{9}\] \[X_{t}^{M} = \beta_{1}^{M}X_{t-1}^{M}+\cdots+\beta_{L}^{M}X_{t-L}^{M}+U_{t}^{M }\,.\]
In this paper, we work under the simplified assumption that the true lag of all the \(M\) component processes is identical, namely, \(L_{0}\), and that \(L\geq L_{0}\) is generic; neither \(L_{0}\) nor \(L\) is known _a priori_. Additionally, we assume mean-zero, Gaussian white-noise innovations; that is, for each \(m\in[M]\) the set \(\{U_{t}^{m}\}_{t\in\mathbb{Z}}\) consists of independent mean-zero, univariate Gaussians with coordinate-wise standard deviation \(\sigma_{m}\in(0,\infty)\). Additionally, we assume that for each \(t\in\mathbb{Z}\), the noise variables \(U_{t}^{1},\ldots,U_{t}^{M}\) are independent. We summarize the parameters of the model in a matrix \(\Theta\in\mathbb{R}^{M\times L}\) via \(\Theta_{ml}:=\beta_{l}^{m}\) to refer to groups of parameters more easily later. Our goal is 1. to estimate the parameters of _all_ models simultaneously; and 2. to assess the lag \(L_{0}\), which is assumed to be the same over all \(M\) processes.
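As a running illustration of this model, the following sketch (Python; all coefficient values, noise levels, and lengths are made-up examples of ours) simulates \(M=3\) stable univariate AR processes with a common true lag \(L_{0}=2\), independent Gaussian innovations, and deliberately unequal sample sizes \(n_{1},n_{2},n_{3}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar(coeffs, sigma, n, burn_in=500):
    """n samples of X_t = coeffs[0] X_{t-1} + ... + coeffs[L0-1] X_{t-L0} + U_t,
    with U_t ~ N(0, sigma^2); a burn-in period is discarded to approach stationarity."""
    L0 = len(coeffs)
    x = np.zeros(burn_in + n)
    for t in range(L0, burn_in + n):
        x[t] = np.dot(coeffs, x[t - L0:t][::-1]) + sigma * rng.normal()
    return x[burn_in:]

# Hypothetical example: M = 3 components, common true lag L0 = 2, unequal lengths.
beta_true = [np.array([0.5, -0.3]), np.array([0.4, 0.2]), np.array([-0.6, 0.1])]
sigmas = [1.0, 0.5, 2.0]
lengths = [120, 200, 80]                      # n_1, n_2, n_3
series = [simulate_ar(b, s, n) for b, s, n in zip(beta_true, sigmas, lengths)]
```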
We make the following assumption on the absolute value of the smallest non-zero \(\beta\)-coefficient, which is widely known as the \(\beta\)_-min assumption_ in the LASSO literature; see, for example, Bunea (2008). We note that this assumption will only be needed to achieve the bound in Theorem 3.1 after \(\lambda\)-thresholding; in particular, when no thresholding is employed, the analysis in this paper does not require the assumption.
**Assumption 1** (\(\beta\)-_min assumption_).: _There is an absolute constant \(c_{\beta}>0\) such that the true autoregressive coefficient vector \(\boldsymbol{\beta}\) satisfies_
\[\boldsymbol{\beta}_{j}^{m}\neq 0\,\Rightarrow\,\boldsymbol{\beta}_{j}^{m} \geq c_{\beta}\,. \tag{10}\]
Broadly speaking, this assumption ensures that the non-zero coefficients can be detected in the first place. We now set out to define a regularizer. Let \(n_{j}\) denote the total number of samples from
\[X_{t}^{j}=\beta_{1}^{j}X_{t-1}^{j}+\cdots+\beta_{L}^{j}X_{t-L}^{j}+U_{t}^{j}\,,\]
and \(T_{j}:=n_{j}-L\). The main idea is as follows. Suppose that \(L\geq L_{0}\) is some integer, and for each \(t\in\{-L+1,\ldots,T_{m}\}\) and \(m\in\{1,\ldots,M\}\), we have an observation \(x_{t}^{m}\) of \(X_{t}^{m}\). Denote \(\boldsymbol{\beta}_{m}:=(\beta_{1}^{m},\ldots,\beta_{L}^{m})^{\top}\) for each \(m\in[M]\), and let \(\boldsymbol{\beta}:=(\boldsymbol{\beta}_{1}^{\top},\ldots,\boldsymbol{\beta}_{M}^{\top})^{\top}\). We define \(\mathcal{G}_{1},\ldots,\mathcal{G}_{L}\subset S_{ML}:=\{1,2,\ldots,M\}\times\{1,2,\ldots,L\}\) by
\[\mathcal{G}_{l}\,:=\{1,2,\ldots,M\}\times\{l,l+1\ldots,L\} \tag{11}\]
for all \(l\in\{1,\ldots,L\}\). The groups are nested: \(\mathcal{G}_{1}\supset\cdots\supset\mathcal{G}_{L}\). Let \(\boldsymbol{\beta}_{\mathcal{G}_{l}}\in\mathbb{R}^{M\times(L-l+1)}\) be the submatrix of \(\boldsymbol{\Theta}\), consisting of columns having index larger or equal to \(l\). We set the group norm to be
\[\|\boldsymbol{\beta}\|_{\mathcal{G}} :=\sum_{l=1}^{L}\sqrt{M(L-l+1)}\,\|\boldsymbol{\beta}_{\mathcal{G }_{l}}\|_{\mathbb{F}}\,, \tag{12}\] \[\text{where}\qquad\|\boldsymbol{\beta}_{\mathcal{G}_{l}}\|_{ \mathbb{F}} :=\sqrt{\sum_{m=1}^{M}\sum_{j=l}^{L}|\beta_{j}^{m}|^{2}}\,.\]
is the Frobenius norm of \(\mathbf{\beta}_{\mathcal{G}_{l}}\). We will alternatively write \(\mathcal{N}(\mathbf{\beta})\) for the group norm \(\|\mathbf{\beta}\|_{\mathcal{G}}\), for the sake of notational ease.
The overall post-sample size is denoted
\[D:=T_{1}+\cdots+T_{M}\,.\]
In order to estimate the coefficient vector \(\mathbf{\beta}\), we propose solving the following constrained convex program
\[\text{minimize}\qquad\qquad\frac{1}{D}\sum_{m=1}^{M}\sum_{t=1}^{T_{m}}\bigl{(} x_{t}^{m}-\beta_{1}^{m}x_{t-1}^{m}-\cdots-\beta_{L}^{m}x_{t-L}^{m}\bigr{)}^{2}+ \lambda\|\mathbf{\beta}\|_{\mathcal{G}}\,, \tag{13}\]
with an appropriate tuning parameter \(\lambda>0\).
The objective function can be put in a concise form. For this, we define the vector \(\mathbf{y}\in\mathbb{R}^{D}\), the matrix \(X\in\mathbb{R}^{D\times(ML)}\), and the parameter \(\mathbf{\beta}\in\mathbb{R}^{ML}\), as follows:
\[\mathbf{y} :=(x_{1}^{1},\ldots,x_{T_{1}}^{1},x_{1}^{2},\ldots,x_{T_{2}}^{2}, \cdots,x_{1}^{M},\ldots,x_{T_{M}}^{M})^{\top}\,; \tag{14}\] \[X :=\begin{pmatrix}x_{1-1}^{1},\ldots,x_{1-L}^{1}&\\ \vdots&\\ x_{T_{1}-1}^{1},\ldots,x_{T_{1}-L}^{1}&\\ &x_{1-1}^{2},\ldots,x_{1-L}^{2}&\\ &\vdots&\\ x_{T_{2}-1}^{2},\ldots,x_{T_{2}-L}^{2}&\\ &&\ddots&\\ &&&x_{1-1}^{M},\ldots,x_{1-L}^{M}\\ &&&\vdots\\ &&&x_{T_{M}-1}^{M},\ldots,x_{T_{M}-L}^{M}\end{pmatrix};\] \[\mathbf{\beta} :=(\beta_{1}^{1},\ldots,\beta_{L}^{1},\beta_{1}^{2},\ldots,\beta_ {L}^{2},\cdots,\cdots,\beta_{1}^{M},\ldots,\beta_{L}^{M})^{\top}\,.\]
It is immediate that
\[\sum_{m=1}^{M}\sum_{t=1}^{T_{m}}\bigl{(}x_{t}^{m}-\beta_{1}^{m}x_{t-1}^{m}- \cdots-\beta_{L}^{m}x_{t-L}^{m}\bigr{)}^{2}\,=\,\|\mathbf{y}-X\mathbf{\beta}\|_{2}^{2}\,.\]
In conclusion, the above estimation program is equivalent to
\[\widehat{\mathbf{\beta}}\,\in\,\operatorname*{argmin}_{\mathbf{\beta}\in\mathbb{R}^{ ML}}\biggl{\{}\frac{1}{D}\|\mathbf{y}-X\mathbf{\beta}\|_{2}^{2}+\lambda\|\mathbf{\beta}\|_{ \mathcal{G}}\biggr{\}}\,. \tag{15}\]
Hence, the estimator can be cast as a modified group-lasso estimator, which means that we can use established group-lasso algorithms that allow for overlapping groups (Mairal et al., 2011). In essence, the above estimator generalizes the _elementwise_ estimator HLag\({}^{\text{E}}\) in Nicholson et al. (2020) to multiple time series. Note that the ordinary LASSO estimator -- as well as any group-LASSO estimator with non-overlapping groups -- enforces sparsity by setting coefficients to zero without paying heed to the fact that, when only an upper-bound \(L\) on the true lag \(L_{0}\) is an input to the regression, _all_ coefficients indexed between \(L_{0}+1\) and \(L\) are supposed to be zero before any coefficient with index smaller than or equal to \(L_{0}\); however, the penalty obtained via the chained groups \(\mathcal{G}_{1}\supseteq\cdots\supseteq\mathcal{G}_{L}\) precisely achieves this feat.
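To fix ideas, here is a minimal sketch (Python; the function names are ours) of how the vector \(\boldsymbol{y}\) and the block-diagonal data matrix \(X\) of (14) can be assembled from the \(M\) observed series, continuing the simulated example above; the \(L\) lags preceding each target are taken from the same series, so each block has \(T_{m}=n_{m}-L\) rows.

```python
import numpy as np

def series_block(x, L):
    """Targets and lagged regressors for one component series (one block of X in (14))."""
    x = np.asarray(x, dtype=float)
    T = len(x) - L
    y = x[L:]                                                   # x_1, ..., x_T
    rows = np.array([[x[L + i - l] for l in range(1, L + 1)]    # (x_{t-1}, ..., x_{t-L})
                     for i in range(T)])
    return y, rows

def build_design(series, L):
    """Stack the M blocks into y in R^D and the block-diagonal X in R^{D x ML}."""
    M = len(series)
    pieces = [series_block(x, L) for x in series]
    D = sum(len(p[0]) for p in pieces)
    y = np.concatenate([p[0] for p in pieces])
    X = np.zeros((D, M * L))
    r = 0
    for m, (_, rows) in enumerate(pieces):
        X[r:r + rows.shape[0], m * L:(m + 1) * L] = rows
        r += rows.shape[0]
    return y, X

# For instance, with an ad-hoc upper bound L = 4 on the true lag:
# y, X = build_design(series, L=4)
```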
Example (Regularizer).We consider the case of two univariate lag (at most) three autoregressive processes; that is, \(M=2\) and \(L=3\). The corresponding groups are the following:
\[\mathcal{G}_{1} =\{(1,1),(1,2),(1,3),(2,1),(2,2),(2,3)\}\;;\] \[\mathcal{G}_{2} =\{(1,2),(1,3),(2,2),(2,3)\}\;;\] \[\mathcal{G}_{3} =\{(1,3),(2,3)\}\,.\]
Thus, the group norm of \(\boldsymbol{\beta}\in\mathbb{R}^{2\times 3}\) is
\[\|\boldsymbol{\beta}\|_{\mathcal{G}}=\sqrt{6}\sqrt{(\beta_{1}^{1 })^{2}+(\beta_{1}^{2})^{2}+(\beta_{2}^{1})^{2}+(\beta_{2}^{2})^{2}+(\beta_{3}^ {1})^{2}+(\beta_{3}^{2})^{2}}\] \[\qquad\qquad\qquad\qquad\qquad+\,\sqrt{4}\sqrt{(\beta_{2}^{1})^{2 }+(\beta_{2}^{2})^{2}+(\beta_{3}^{1})^{2}+(\beta_{3}^{2})^{2}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\,\sqrt{2}\sqrt {(\beta_{3}^{1})^{2}+(\beta_{3}^{2})^{2}}\,.\]
Notice that, when the (regularized) LASSO sets a certain group (say the second group above) to \(\mathbf{0}\), it automatically sets all the following groups to \(\mathbf{0}\) as well. More specifically, when the (regularized) LASSO sets \(\boldsymbol{\beta}_{\mathcal{G}_{l}}\) to \(\mathbf{0}\), then the hierarchical structure \(\mathcal{G}_{l}\supseteq\mathcal{G}_{l+1}\supseteq\cdots\) means that for all \(r>0\), each coordinate of \(\boldsymbol{\beta}_{\mathcal{G}_{l+r}}\) comes as a coordinate of \(\boldsymbol{\beta}_{\mathcal{G}_{l}}\), thus ensuring \(\boldsymbol{\beta}_{\mathcal{G}_{l+r}}=\mathbf{0}\) for each \(r\geq 0\).
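The regularizer is easy to compute directly from the matrix form \(\Theta\in\mathbb{R}^{M\times L}\). The sketch below (Python; a plain transcription of (12), with an example matrix of our own choosing) can be checked against the \(M=2\), \(L=3\) closed form displayed above.

```python
import numpy as np

def group_norm(Theta):
    """Hierarchical group norm (12): sum over l of sqrt(M(L-l+1)) * ||Theta[:, l-1:]||_F,
    where row m of Theta holds (beta_1^m, ..., beta_L^m)."""
    M, L = Theta.shape
    return sum(np.sqrt(M * (L - l + 1)) * np.linalg.norm(Theta[:, l - 1:])
               for l in range(1, L + 1))

# M = 2, L = 3 example from above: the weights are sqrt(6), sqrt(4), sqrt(2).
Theta = np.array([[0.5, -0.3, 0.0],
                  [0.4,  0.2, 0.1]])
print(group_norm(Theta))
```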
In what follows, we use the following notations. For \(m\in[M]\) and \(l\in[L]\), and \(t\leq T_{m}\), we write
\[\boldsymbol{U}^{(m)} :=(U_{1}^{m},\cdots,U_{T_{m}}^{m})^{\top}\,;\] \[X^{(m,l)} :=(X_{1-l}^{m},\ldots,X_{T_{m}-l}^{m})^{\top}\,; \tag{16}\] \[X_{t}^{(m)} :=(X_{t-1}^{m},\ldots,X_{t-L}^{m})\,;\] \[X^{(m)} :=(X^{(m,1)},\ldots,X^{(m,L)})\,.\]
Moreover, we will write \(X_{<j}^{(m)}\) to denote any of the variables \(X_{j^{\prime}}^{m}\) for \(j^{\prime}<j\).
### Estimation Pipeline
Learning autoregressive coefficients of multiple time series of potentially different lengths and identical true lag is more complex than just the usual group-LASSO problem, where the groups form a partition of the index set. From a methodological perspective, some immediate technical challenges are 1. deciding what \(L\) should be used in the formulation of the convex problem (13), and 2. how to disentangle the dual of the group norm (in order to apply Hölder's inequality to derive oracle prediction guarantees as in subsection 4.1 below); from a practical perspective, the challenge lies in incorporating the varying number of samples into the convex problem.
We now give a high-level overview of our estimation pipeline (described below), which takes the multiple time series as input, forms the appropriate convex problem in the form of (15), and solves this convex problem via the stochastic proximal gradient method as in Nicholson et al. (2020). In essence, the idea is to first find the samples corresponding to the component which has the minimum number of samples (breaking ties arbitrarily). We then use this component to find the initial input lag, \(L\), as described in (38). In the next step, we solve the regularized least-squares problem in (13) -- where the penalty function is the group norm \(\mathcal{N}\), discussed further below (see (12)). The procedure converges at a quadratic rate in the number of computational steps, as discussed in Nicholson et al. (2020).
The following is the main theorem on the theoretical properties of the output of the estimation pipeline. This theorem is essentially a summary of the results contained in Section 4, and will be discussed in all detail in subsections 4.2.2 and 4.3, as indicated below.
**Theorem 3.1** (Main Theorem).: _Let \(\hat{\boldsymbol{\beta}}\) be the output of the group-LASSO in (15) above, with_
\[\lambda=24(84Ae)^{\frac{1}{2}}\zeta^{-1}\sigma_{\max}^{2}C_{ \sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})\sqrt{\frac{L\log\left( \frac{ML}{\delta}\right)}{DM}}\,,\]
_where \(\zeta=6^{-3}\epsilon^{4}\), and \(L\geq L_{0}\) satisfying_
\[n_{\min}=L+84Ae\zeta^{-2}L\log\left(\frac{ML}{\delta}\right)\,.\]
_Suppose that the \(\beta\)-min condition (10) holds with \(c_{\beta}=\lambda\). If the total number \(D\) of post-samples satisfies_
\[D\geq 3^{9}\cdot(84Ae)C_{\sharp}^{5}(1+\epsilon^{-2}+\epsilon^{-4}) ^{2}\left(\frac{\sigma_{\max}}{\sigma_{\min}}\right)^{4}(\epsilon^{3}\zeta)^{ -2}M^{a}L_{0}^{2}L^{3}\log\left(\frac{ML}{\delta}\right)\log(2L)\,,\]
_and if \(T_{\min}\geq 84eA\zeta^{-2}L_{0}\log L\), then the following holds with high probability:_
1. _the estimation error is given by_ \[\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\|_{2}\leq\frac{81(84Ae)^{\frac{ 1}{2}}LL_{0}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon ^{-4})}{\zeta\alpha\epsilon^{2}}\sqrt{\frac{\log\left(\frac{ML}{\delta} \right)}{D}}\,;\]
2. _if_ \(S_{\lambda}:=\{j:|\hat{\boldsymbol{\beta}}_{j}|>\lambda\}\)_, then the false discovery is bounded by the following inequality:_ \[|S_{\lambda}\setminus\mathrm{supp}(\boldsymbol{\beta})|\,\leq\frac{243(84Ae)^ {\frac{1}{2}}LL_{0}^{\frac{3}{2}}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+ \epsilon^{-2}+\epsilon^{-4})}{\zeta\alpha\epsilon^{2}\lambda}\sqrt{\frac{ \log\left(\frac{ML}{\delta}\right)}{D}}\,.\]
3. _the AR-models -- fitted with coefficients_ \(\hat{\boldsymbol{\beta}}_{0}\) _returned by Algorithm_ 1 _("AR Coefficient Estimation Pipeline") -- are stable, with high probability._
Proof of Theorem 3.1.: Subject to the stated \(\beta\)-min condition (10), this follows immediately from Theorem (4.9) in combination with Theorem (4.10).
In the pipeline above, it is necessary to consider two distinct confidence parameters \(A\geq 1\) and \(\delta\), as it is not possible to integrate them into a single parameter -- due mainly to the different numbers of samples from the component processes in our set-up. Also, we separately mention the two cases (of \(\boldsymbol{\beta}_{m}\)'s being identical (or not) for all \(m\)) in order to specifically emphasize that in the first case, our algorithm requires a smaller number of samples than in the latter case (which allows savings of a factor of \(M\) in the sample complexity).
Descent algorithms like the proximal gradient method used in the subroutine above might produce small non-zero-valued parameters as numerical artifacts. One could consider other types of algorithms instead, but the required number of computational steps could be much higher. Moreover, those artifacts, and statistical false positives more generally, can also be controlled by standard \(\lambda\)-thresholding as mentioned in the algorithm.
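For concreteness, here is a rough sketch (Python) of the data-dependent choices appearing in Theorem 3.1: the input lag is taken as the largest integer \(L\) compatible with (38) for the observed \(n_{\min}\), and \(\lambda\) is then computed from (26). The constants are the non-optimized ones from the statements (so the resulting values can be very conservative in practice), and the function names and the default values of \(A\) and \(\delta\) are assumptions of ours.

```python
import numpy as np

def zeta(eps):
    return eps ** 4 / 6 ** 3                      # zeta = 6^{-3} eps^4

def select_L(n_min, M, eps, A=1.0, delta=0.05):
    """Largest integer L with L + 84*A*e*zeta^{-2} * L * log(M*L/delta) <= n_min,
    reading (38) as an upper bound; returns 1 if no larger L qualifies."""
    c = 84 * A * np.e / zeta(eps) ** 2
    best = 1
    for L in range(1, n_min + 1):
        if L + c * L * np.log(M * L / delta) <= n_min:
            best = L
    return best

def tuning_lambda(L, D, M, eps, sigma_max, C_sharp, A=1.0, delta=0.05):
    """Tuning parameter of eq. (26)."""
    return (24 * np.sqrt(84 * A * np.e) / zeta(eps) * sigma_max ** 2 * C_sharp ** 1.5
            * (1 + eps ** -2 + eps ** -4)
            * np.sqrt(L * np.log(M * L / delta) / (D * M)))
```

With such an \(L\) and \(\lambda\) in hand, the group-LASSO (15) can be solved by the proximal method recalled in Section 5, and the \(\lambda\)-thresholded support \(S_{\lambda}\) read off as in Theorem 3.1.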
## 4 Statistical Guarantees
This section contains the main theoretical results of this paper. We begin with deriving bounds for the one-step ahead prediction error, first formulated by an oracle inequality (see Theorem 4.2), which depends on the tuning parameter \(\lambda\) of our least-squares penalisation approach. Then we control this tuning parameter by controlling the effective noise of our Lasso-optimisation problem. Both things together will finally yield more explicit rates of our one-step ahead prediction error. The second part of this section treats control of the estimated autoregressive coefficients, including control of false discovery for their support (Theorem 4.9). In the end we present our result on stability of the estimated AR-model (Theorem 4.10).
To start with, we briefly recall the notion of the dual norm of our group-LASSO norm \(\mathcal{N}\) defined above.
Our underlying space is \(\mathbb{R}^{ML}\). Observe that
\[\boldsymbol{\beta}\mapsto\mathcal{N}(\boldsymbol{\beta}):=\sum_{l=1}^{L}\sqrt{|\mathcal{G}_{l}|}\cdot\|\boldsymbol{\beta}_{\mathcal{G}_{l}}\|_{2}\]
is a norm for any \(\mathcal{G}=\{\mathcal{G}_{1},\cdots,\mathcal{G}_{L}\}\) that covers \([ML]\). However, this norm is singular at any point where all the coordinates in a group vanish; moreover, at such a point all the coordinates in all of the smaller (nested) groups vanish as well, which is important in our analysis, since we mostly care about sparse solutions. Thus, we cannot appeal to differential techniques to get bounds on the norm, but rather need to appeal to the dual norm approach.
**Definition 4.1** (Dual norm).: For a norm \(\mathcal{N}\) on \(\mathbb{R}^{ML}\), the dual norm \(\mathcal{N}_{\star}(\boldsymbol{\alpha})\) of \(\boldsymbol{\alpha}\in\mathbb{R}^{ML}\) is the optimal value of the following convex program:
maximize \[\langle\boldsymbol{\alpha},\boldsymbol{\beta}\rangle\] subject to \[\mathcal{N}(\boldsymbol{\beta})\leq 1\,.\]
The dual norm is used to encapsulate the effective noise of LASSO. More explicitly, the dual norm shows up in the form \(\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})\), called the _effective noise_ of the LASSO in equation (15). Recall that the effective noise vector \(X^{\top}\boldsymbol{U}\) can be thought of as the "projection" of the noise vector \(\boldsymbol{U}\) on the column space of \(X\), and thus, \(\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})\) is a measure of the "true" noise present in the data.
In Appendix D, we obtain a generic bound on the dual norm, which will be used to obtain the statistical guarantees of this paper.
### Prediction Error
Here we now give the announced theoretical results on non-asymptotic bounds for the one-step ahead prediction error, first formulated by an oracle inequality (see Theorem 4.2), and then, in the following subsection, including control of the tuning parameter \(\lambda\) by control of the effective noise of our Lasso-optimisation problem. This enables us to formulate concrete rates for the prediction error. Note that this approach delivers an explicit way to select \(L\) (via equation (24), and subsequently (38)), the input parameter for Step 2 of Algorithm 1 ("AR Coefficient Estimation Pipeline").
#### 4.1.1 Oracle Prediction Error
The problem (15) has a solution because, given any specific realization of the time series (thus, effectively, fixing \(\boldsymbol{y}\) and \(X\)) and any \(\lambda>0\), the convex function
\[f_{\lambda}\left(\boldsymbol{\beta}\right)=\frac{1}{D}\|\boldsymbol{y}-X \boldsymbol{\beta}\|_{2}^{2}+\lambda\|\boldsymbol{\beta}\|_{\mathcal{G}} \tag{17}\]
is continuous, and
\[\lim_{\|\boldsymbol{\beta}\|_{2}\to\infty}f_{\lambda}(\boldsymbol{\beta})= \infty\.\]
This also shows that we can consider this as a convex program on a compact domain, because we should (at least in theory) be able to restrict the domain of minimization to be an \(\ell^{2}\)-ball of suitable radius, say \(R_{r,X,\boldsymbol{y}}\in(0,\infty)\). Let \(\hat{\boldsymbol{\beta}}\) be a solution of this convex program.
Now write the time series observations in the matrix form as \(\mathcal{X}\). We are interested in an estimate of the 'risk', equivalently, the in-sample, one-step-ahead mean squared forecast error \(\mathbb{E}[\|\boldsymbol{y}-X\hat{\boldsymbol{\beta}}\|_{2}^{2}/D\mid\mathcal{X}]\). In the derivations below, we follow a well-known approach for its control, as appeared (for example) in (Lederer, 2021, Chapter 6). We defer the proof to Appendix A.
**Theorem 4.2** (Prediction Guarantee): _Suppose that \(\lambda\geq\frac{2}{D}\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})\), where \(\mathcal{N}(\boldsymbol{\alpha})=\sum_{l=1}^{L}\sqrt{|\mathcal{G}_{l}|}\cdot \|\boldsymbol{\alpha}_{(\geq l)}\|_{2}\) as in Proposition D.3. Write \(\overline{\sigma}^{2}=D^{-1}(T_{1}\sigma_{1}^{2}+\cdots+T_{M}\sigma_{M}^{2})\); then_
\[\frac{1}{D}\mathbb{E}\left[\|\boldsymbol{y}-X\hat{\boldsymbol{\beta}}\|_{2}^{ 2}\mid\mathcal{X}\right]\leq\overline{\sigma}^{2}+\min_{\alpha\in\mathbb{R}^{ ML}}\left(\frac{1}{D}\|X(\boldsymbol{\beta}-\boldsymbol{\alpha})\|_{2}^{2}+2 \lambda\mathcal{N}(\boldsymbol{\alpha})\right),\]
_In particular, the following inequality holds:_
\[\mathbb{E}\left[\|\boldsymbol{y}-X\hat{\boldsymbol{\beta}}\|_{2}^{2}\mid \mathcal{X}\right]-\mathbb{E}\left[\|\boldsymbol{y}-X\boldsymbol{\beta}\|_{2 }^{2}\mid\mathcal{X}\right]\leq\min_{\alpha\in\mathbb{R}^{ML}}\left(\|X( \boldsymbol{\beta}-\boldsymbol{\alpha})\|_{2}^{2}+2D\lambda\mathcal{N}( \boldsymbol{\alpha})\right)\,.\]
_If, moreover, \(\lambda\geq\frac{4}{D}\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})\), then_
\[\frac{1}{D}\mathbb{E}\left[\|\boldsymbol{y}-X\hat{\boldsymbol{\beta}}\|_{2}^ {2}\mid\mathcal{X}\right]\leq\overline{\sigma}^{2}+\frac{\lambda}{2}\min\left\{ 3\mathcal{N}(\boldsymbol{\beta}-\hat{\boldsymbol{\beta}}),3\mathcal{N}( \boldsymbol{\beta})-\mathcal{N}(\hat{\boldsymbol{\beta}})\right\}. \tag{18}\]
_Consequently,_
\[\frac{1}{D}\mathbb{E}\left[\|\boldsymbol{y}-X\hat{\boldsymbol{\beta}}\|_{2}^ {2}\mid\mathcal{X}\right]\leq \overline{\sigma}^{2}+2\lambda\sum_{\ell=1}^{L_{0}}\sqrt{|\mathcal{G}_{\ell} |}\cdot\|\boldsymbol{\beta}-\hat{\boldsymbol{\beta}}\|_{\mathcal{G}_{\ell}}. \tag{19}\]
This result is in the form of standard oracle inequalities in high-dimensional statistics (Lederer, 2021, Chapter 6). It shows that the estimator minimizes the one-step-ahead mean-squared forecast risk up to a complexity term that is linear in the tuning parameter and the model complexity. Hence, the above bound yields an upper bound for the rate of convergence once we can control the tuning parameter \(\lambda\) via an upper bound on the effective noise.
Thanks to Theorem 4.2 and Proposition D.3, in order to find the smallest tuning parameter \(\lambda\) fulfilling \(\lambda\geq 4\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})/D\) it suffices to derive a high-probability upper-bound on \(D^{-1}L^{-\frac{1}{2}}\|X^{\top}\boldsymbol{U}\|_{\infty}\) (which is precisely the bound on the dual norm of \(X^{\top}U\)).
#### 4.1.2 Control of the Effective Noise for bounding the tuning parameter
We can prove the following high-probability bound on the tuning parameter (equivalently, on the effective noise). Again, its proof appears in Appendix B. Note that this is a major inequality, and while the arguments are well-known, we have applied those arguments to the case where the sparsity enforcing regularizer is induced by the overlapping group norm.
**Theorem 4.3** (Bound on the Effective Noise).: _Let \(C_{\sharp}:=T_{\max}/T_{\min}\). For any \(\eta>0\) satisfying_
\[\eta\geq\frac{8C_{\sharp}\sigma_{\max}^{2}(1+\epsilon^{-2}+\epsilon^{-4})}{M}\,, \tag{20}\]
_and any \(\delta>0\), if \(D=T_{1}+\cdots+T_{m}\) satisfies_
\[D\geq\frac{8\sigma_{\max}^{2}(1+\epsilon^{-2}+\epsilon^{-4})}{c_{0}\eta}\log \left(\frac{ML}{\delta}\right)\,, \tag{21}\]
_then the following inequality holds:_
\[\mathbb{P}\left[\frac{2}{D}\mathcal{N}_{\star}(X^{\top}\mathbf{U})\geq\frac{3\eta }{2\sqrt{L}}\right]\ \leq\ \delta\,. \tag{22}\]
_Here \(c_{0}>0\) is the absolute constant from the Gaussian concentration inequality in proposition E.1._
The interpretation of the above result is that for the LASSO oracle inequality (Theorem 4.2) to hold with high probability, it is sufficient to choose \(\lambda\) to be just as large as \((3\eta)/(2\sqrt{L})\), but it is not necessary to take it larger. Indeed, equation (22) shows that the probability that \(\frac{2}{D}\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})\) exceeds \((3\eta)/(2\sqrt{L})\) is small; thus, we may assume that \(\frac{2}{D}\mathcal{N}_{\star}(X^{\top}\boldsymbol{U})<(3\eta)/(2\sqrt{L})\) holds with probability at least \(1-\delta\).
The above result bounds the tails of the effective noise. For any \(A\geq 1\), we will now set
\[\eta=\ C_{0}\sqrt{AC}C_{\epsilon}C_{\sharp}^{\frac{3}{2}}\sigma_{\max}^{2} \sqrt{\frac{L\log\left(\frac{ML}{\delta}\right)}{DM}}\,,\]
for some absolute constant \(C_{0}>0\) and parameter \(C_{\epsilon}>0\) that depends only on the stability parameter \(\epsilon\). Henceforth, we take
\[\eta:=8(84Ae)^{\frac{1}{2}}\zeta^{-1}(1+\epsilon^{-2}+\epsilon^{-4})C_{\sharp }^{\frac{3}{2}}\sigma_{\max}^{2}\sqrt{\frac{L\log\left(\frac{ML}{\delta} \right)}{DM}}\,, \tag{23}\]
which is obtained by plugging the values of \(C_{0}\) and \(C_{\epsilon}\) -- as specified in the proof of Lemma 4.4 below -- into the display above; we do not attempt to optimize these constants.
**Lemma 4.4** (Data-dependent selection of \(L\)).: _There is an absolute constant \(C\in(0,\infty)\) and a parameter \(C_{\epsilon}>0\) that depends only on the stability parameter \(\epsilon>0\) such that if \(\eta\) is as in (23) and_
\[T_{\min}\leq 84Ae\zeta^{-2}L\log\left(\frac{ML}{\delta}\right)\,, \tag{24}\]
_then the inequality (20) holds._
This result shows that there is a suitable \(\eta\) for our theory to hold. The question of finding such an \(\eta\) in practice will need to be discussed in more applied future work.
Proof.: We set \(C_{0}:=(12)^{3}\sqrt{84e}\), and
\[C_{\epsilon}:=\epsilon^{-4}(1+\epsilon^{-2}+\epsilon^{-4})\,.\]
Note that
\[C_{\sharp}T_{\min}=T_{\max}\geq\ \frac{D}{M}\,. \tag{25}\]
Now, with \(\eta\) as in (23), the inequality (20) is ensured by the following sequence of inequalities:
\[(84Ae)^{\frac{1}{2}}\zeta^{-1}\sqrt{L\log\left(\frac{ML}{\delta} \right)} \geq\sqrt{T_{\min}}\] \[\Rightarrow (84Ae)^{\frac{1}{2}}\zeta^{-1}C_{\sharp}^{\frac{1}{2}}\sqrt{L \log\left(\frac{ML}{\delta}\right)} \geq\sqrt{\frac{D}{M}}\] \[\Rightarrow 8(84Ae)^{\frac{1}{2}}\zeta^{-1}(1+\epsilon^{-2}+\epsilon^{-4}) C_{\sharp}^{\frac{3}{2}}\sigma_{\max}^{2}\sqrt{\frac{L\log\left(\frac{ML}{ \delta}\right)}{DM}}\geq\frac{8C_{\sharp}\sigma_{\max}^{2}(1+\epsilon^{-2}+ \epsilon^{-4})}{M}\,.\]
The inequality (24) provides for the theoretical support of our choice of \(L\) in formulating the penalized least squares program in (15).
The explicit nature of the parameters in the proof above is crucial here, in order to satisfy both (20) and (21) above, as well as remaining compatible with the requirements involving the restricted eigenvalue property (as in Corollary 4.7) below. Note that the current LASSO-based literature often ignores combining the requirements coming from standard oracle inequality type result (Theorem 4.2) and the restricted eigenvalue type results.
Remark (On choosing \(L\) via equation (24)).The use of such an upper-bound \(L\) conforms with the (by now) well-known restricted isometry property of sub-sampled Gaussian matrices (that is, matrices whose entries are iid Gaussian) in the compressed-sensing literature; for more details, see (for example) (Candes and Tao, 2006, section 1.E) and the references therein.
For \(D\) as in (21), this gets us the following bound:
\[\mathbb{P}\left[\mathcal{N}_{\star}\left(\frac{2}{D}X^{\top}\mathbf{U}\right)>12( 84Ae)^{\frac{1}{2}}\zeta^{-1}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+ \epsilon^{-2}+\epsilon^{-4})\sqrt{\frac{L\log\left(\frac{ML}{\delta}\right)}{ DM}}\right]\leq\delta,\]
In view of Theorem 4.3, we correspondingly set
\[\lambda=24(84Ae)^{\frac{1}{2}}\zeta^{-1}\sigma_{\max}^{2}C_{\sharp}^{\frac{3} {2}}(1+\epsilon^{-2}+\epsilon^{-4})\sqrt{\frac{L\log\left(\frac{ML}{\delta} \right)}{DM}} \tag{26}\]
to force (18) to be true.
Remark. From the above, we note that \(\lambda\) decreases when either of \(M,D\) increases (as the function \(x^{-1}\log x\to 0\) when \(x\to\infty\)). This is intuitive, since a larger \(M\) requires that the LASSO must not set too many coefficients to 0, and the larger \(D\) is, the better the non-regularized estimator approximates the true coefficients.
Together with Theorem 4.2, the above yields the following bound:
**Corollary 4.5** (Concrete rates of Prediction Error).: _Suppose that_
\[\eta=\ 8(84Ae)^{\frac{1}{2}}\zeta^{-1}(1+\epsilon^{-2}+\epsilon^{-4})C_{\sharp}^{ \frac{3}{2}}\sigma_{\max}^{2}\sqrt{\frac{L\log\left(\frac{ML}{\delta}\right)}{ DM}}\,,\]
_as set in Equation (23) above, and_
\[D\geq\frac{8\sigma_{\max}^{2}(1+\epsilon^{-2}+\epsilon^{-4})}{c_{0}\eta}\log \left(\frac{ML}{\delta}\right)\,.\]
_Then, the following inequality holds with probability at least \(1-\delta\):_
\[\mathbb{E}\left[\frac{1}{D}\|\boldsymbol{y}-X\hat{\boldsymbol{ \beta}}\|_{2}^{2}\,|\,\mathcal{X}\right]-\overline{\sigma}^{2}\] \[\leq \min_{\alpha\in\mathbb{R}^{ML}}\left(\frac{1}{D}\|X(\boldsymbol {\beta}-\boldsymbol{\alpha})\|_{2}^{2}+\frac{24}{\zeta}(84Ae)^{\frac{1}{2}} \sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4}) \sqrt{\frac{L\log\left(\frac{ML}{\delta}\right)}{DM}}\mathcal{N}(\boldsymbol {\alpha})\right)\,.\]
### Estimating error of parameters, false discoveries
We now deliver the treatment of assertions 1. and 2. on estimation error and support (via control of false discoveries) of our Main Theorem 3.1. For this we need to cope with the _Restricted Eigenvalue Property_ as it typically arises in LASSO analysis (see Basu and Michailidis (2015)).
#### 4.2.1 Control of the restricted eigenvalue property
Our prediction guarantees stated so far did not impose restrictions on the minimum number of samples from each component. For estimation and stability guarantees, however, we need to demand a minimum number of those samples. In particular, we will need the following proposition, on the restricted eigenvalue property of the Gram matrix \(X^{\top}X\). The proof presented in Appendix C follows the same lines of arguments as in Basu and Michailidis (2015). Because of the non-uniform nature of the block dimensions of the data matrix in (14), as well as the individual weights assigned to the components of the coefficient \(\boldsymbol{\beta}\), we can only hope for a block-wise result as stated below.
The following proposition delivers a lower bound on the error-expression \(\|X_{m}\boldsymbol{v}_{m}\|_{2}^{2}\) with \(\boldsymbol{v}_{m}\) the \(m\)-th component of \(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\).
**Proposition 4.6** (Bound on the Restricted Eigenvalue).: _For any confidence parameter \(\delta>0\) and all vectors \(\boldsymbol{v}=(\boldsymbol{v}_{1}^{\top},\ldots,\boldsymbol{v}_{M}^{\top})^{ \top}\in(\mathbb{R}^{L})^{M}\), the following inequality holds with \(\zeta:=\frac{\epsilon^{4}}{216}\) and \(s_{m}\) even positive integers:_
\[\mathbb{P}\left[\forall\ m\in[M]\inf_{\boldsymbol{v}_{m}\in \mathbb{R}^{L}}\left(\|X_{m}\boldsymbol{v}_{m}\|_{2}^{2}-\frac{T_{m}\sigma_{m} ^{2}\epsilon^{2}}{2}\left(\|\boldsymbol{v}_{m}\|_{2}^{2}-\frac{2}{s_{m}}\| \boldsymbol{v}_{m}\|_{1}^{2}\right)\right)\geq 0\right] \tag{27}\] \[\geq 1-2\sum_{m=1}^{M}\exp\left(-\frac{T_{m}\min\{\zeta,\zeta^{2} \}}{2}+s_{m}\min\{\log L,\log(21eL/s_{m})\}\right)\,.\]
A proof of Proposition 4.6 appears in Appendix C.
Setting
\[s_{m}:=2\Big{\lfloor}\frac{T_{m}\zeta^{2}}{8\log L}\Big{\rfloor}\,, \tag{28}\]
we get the following immediate corollary to Proposition 4.6:
**Corollary 4.7** (Bound on the Restricted Eigenvalue for specific \(s_{m}\)).: _If_
\[T_{\min}\geq 84e\zeta^{-2}\log L\,, \tag{29}\]
_where \(\zeta=6^{-3}\epsilon^{4}\), then the following holds:_
\[\mathbb{P}\left[\forall m\in[M]\inf_{\mathbf{v}_{m}\in\mathbb{R}^{L} }\left(\|X_{m}\mathbf{v}_{m}\|_{2}^{2}-\frac{T_{m}\sigma_{m}^{2}\epsilon^{2}}{2} \|\mathbf{v}_{m}\|_{2}^{2}\left(1-\frac{8L\log L}{T_{m}\zeta^{2}}\right)\right) \geq 0\right] \tag{30}\] \[\geq 1-2\sum_{m=1}^{M}e^{-\frac{T_{m}\zeta^{2}}{4}}\,.\]
Proof.: We use \(\|\mathbf{v}_{m}\|_{1}^{2}\leq L\|\mathbf{v}_{m}\|_{2}^{2}\) in Proposition 4.6.
#### 4.2.2 Bounds on estimation error of the autoregressive coefficients, and false discovery
This section contains the bound on the error of estimation of the autoregressive coefficients, and the false discovery.
The following is a compact notation used in the proof of the theorem below (also in the proof of Theorem 4.2).
**Definition 4.8** (Notation).: (31) \[\mathcal{N}_{\leq L_{0}}(\mathbf{v}):=\sum_{\ell=1}^{L_{0}}\sqrt{|\mathcal{G}_{ \ell}|}\cdot\|\mathbf{v}\|_{\mathcal{G}_{\ell}}\,.\]
The theorem below bounds the \(\ell^{2}\)-error in estimating the autoregressive coefficients \(\mathbf{\beta}\), as well as the size of the set of false positives, using the penalized LASSO formulation as in (15).
**Theorem 4.9** (Bounds on the Estimation Error and False Discovery).: _Let \(\hat{\mathbf{\beta}}\) be the solution to the group-regularized LASSO in (15) with \(\lambda\) as in Equation (26) above -- namely,_
\[\lambda=24(84Ae)^{\frac{1}{2}}\zeta^{-1}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{ 2}}(1+\epsilon^{-2}+\epsilon^{-4})\sqrt{\frac{L\log\left(\frac{ML}{\delta} \right)}{DM}}\]
_where \(\zeta=6^{-3}\epsilon^{4}\); if_
\[\alpha:=\min_{m\in[M]}\frac{T_{m}\sigma_{m}^{2}}{D}\,, \tag{32}\]
_and_
\[D\geq\frac{8\sigma_{\max}^{2}(1+\epsilon^{-2}+\epsilon^{-4})}{c_{0}\eta}\log \left(\frac{ML}{\delta}\right)\,,\]
_then, for any confidence parameter \(\delta>0\), the inequality_
\[\|\hat{\mathbf{\beta}}-\mathbf{\beta}\|_{2}\leq\frac{81(84Ae)^{\frac{1}{2}}LL_{0} \sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})}{ \zeta\alpha\epsilon^{2}}\sqrt{\frac{\log\left(\frac{ML}{\delta}\right)}{D}} \tag{33}\]
holds with probability at least_
\[(1-\delta)\left(1-2\sum_{m=1}^{M}L^{-\frac{T_{m}}{4\log L}}\right)\,, \tag{34}\]
_provided \(T_{\min}\geq 84eA\zeta^{-2}L_{0}\log L\) with \(\zeta=6^{-3}\epsilon^{4}\)._
_Moreover, introducing the notation \(S_{\lambda}:=\{j:|\hat{\boldsymbol{\beta}}_{j}|>\lambda\}\), then the false discovery is bounded by the following inequality holding with probability as in (34) above:_
\[|S_{\lambda}\setminus\operatorname{supp}(\boldsymbol{\beta})|\,\leq\frac{243( 84Ae)^{\frac{1}{2}}LL_{0}^{\frac{3}{2}}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{ 2}}(1+\epsilon^{-2}+\epsilon^{-4})}{\zeta\alpha\epsilon^{2}\lambda}\sqrt{ \frac{\log\left(\frac{ML}{\delta}\right)}{D}}\,. \tag{35}\]
Remark. In the above, the term inside the square root decreases asymptotically, and the terms outside may change or stay fixed (depending on how the number of samples for each component increases).
Proof of Theorem 4.9.: For ease of notation, write
\[\boldsymbol{v}:=\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\,.\]
By Proposition 4.6 above (which can be applied since \(T_{\min}\geq 84e\zeta^{-2}\log L\)), one has
\[\frac{1}{D}\|X\boldsymbol{v}\|_{2}^{2} \geq\frac{1}{D}\sum_{m=1}^{M}\|X_{m}\boldsymbol{v}_{m}\|_{2}^{2}\] \[\geq\frac{\epsilon^{2}}{2}\sum_{m=1}^{M}\frac{\sigma_{m}^{2}T_{m }}{D}\|\boldsymbol{v}_{m}\|_{2}^{2}\left(1-\frac{8L\log L}{T_{m}\zeta^{2}}\right)\] \[>\frac{4\alpha\epsilon^{2}}{9}\|\boldsymbol{v}\|_{2}^{2}\,.\]
To this, we now apply (see (45) in Appendix A) the inequality
\[\frac{1}{D}\|X(\boldsymbol{\beta}-\hat{\boldsymbol{\beta}})\|_{2}^{2}\leq 3 \lambda\mathcal{N}_{\leq L_{0}}(\boldsymbol{\beta}-\hat{\boldsymbol{\beta}})\,,\]
to obtain
\[\|\boldsymbol{v}\|_{2}^{2} \leq\frac{27\lambda}{8\alpha\epsilon^{2}}\mathcal{N}_{\leq L_{0} }(\boldsymbol{v})\] \[\leq\frac{27\lambda L_{0}\sqrt{ML}}{8\alpha\epsilon^{2}}\| \boldsymbol{v}\|_{2}\] \[\Rightarrow \|\boldsymbol{v}\|_{2} \leq\frac{27\lambda L_{0}\sqrt{ML}}{8\alpha\epsilon^{2}}\]
With the choice of \(\lambda\) as in equation (26), namely,
\[\lambda=24(84Ae)^{\frac{1}{2}}\zeta^{-1}\sigma_{\max}^{2}C_{\sharp}^{\frac{3} {2}}(1+\epsilon^{-2}+\epsilon^{-4})\sqrt{\frac{L\log\left(\frac{ML}{\delta} \right)}{DM}}\,,\]
it now follows that
\[\|\boldsymbol{v}\|_{2}\leq\frac{81(84Ae)^{\frac{1}{2}}LL_{0}\sigma_{\max}^{2} C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})}{\zeta\alpha\epsilon^{2}} \sqrt{\frac{\log\left(\frac{ML}{\delta}\right)}{D}}\,,\]
as claimed. To get the bound on the false discovery, write \(S:=\operatorname{supp}(\boldsymbol{\beta})\); the bound on the false discovery can be obtained as follows:
\[|S_{\lambda}\setminus S| =\sum_{j\in[ML]\setminus S}1_{|\hat{\boldsymbol{\beta}}_{j}|> \lambda}(\hat{\boldsymbol{\beta}})\] \[=\sum_{j\in[ML]\setminus S}1_{|\hat{\boldsymbol{v}}_{j}|> \lambda}(\hat{\boldsymbol{v}})\] \[\leq\frac{1}{\lambda}\sum_{j\in[ML]\setminus S}|\hat{\boldsymbol {v}}_{j}|\] by remark (A) \[\leq\frac{3}{\lambda}\sum_{j\in S}|\hat{\boldsymbol{v}}_{j}|\] \[\leq\frac{3\sqrt{L_{0}}}{\lambda}\|\hat{\boldsymbol{v}}\|_{2}\]
which yields the bound in (35) by inequality (33).
Remark.For the explicit choice of \(\lambda\) as in equation (26), namely,
\[\lambda=24(84Ae)^{\frac{1}{2}}\zeta^{-1}\sigma_{\max}^{2}C_{\sharp} ^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})\sqrt{\frac{L\log\left(\frac{ML}{ \delta}\right)}{DM}}\]
with \(\zeta=6^{-3}\epsilon^{4}\), the inequality (35) becomes
\[|S_{\lambda}\setminus\operatorname{supp}(\boldsymbol{\beta})|\ \leq\frac{81(ML)^{\frac{1}{2}}L_{0}^{\frac{3}{2}}}{8\alpha\epsilon^{2}}\,. \tag{36}\]
### Stability of the estimated AR model
We finally prove that the coefficients estimated as per the overlapping-group-LASSO in (15), with \(\lambda\) as in (26), lie in the region of stability of univariate lag-\(L\) autoregressive processes, even when the number of post-samples is 'not too large'. More specifically, we have the following theorem.
**Theorem 4.10** (Stability Guarantee).: _Let \(A\geq 1\) be a confidence parameter. Let \(\hat{\boldsymbol{\beta}}\) be the output of the group-LASSO in (15) above, using \(D\) post-samples, where_
\[D\geq 3^{9}\cdot(84Ae)C_{\sharp}^{5}(1+\epsilon^{-2}+\epsilon^{-4}) ^{2}\left(\frac{\sigma_{\max}}{\sigma_{\min}}\right)^{4}(\epsilon^{3}\zeta)^{ -2}M^{a}L_{0}^{2}L^{3}\log\left(\frac{ML}{\delta}\right)\log(2L). \tag{37}\]
_If \(T_{\min}\geq 84eA\zeta^{-2}L_{0}\log L\), and \(L\) satisfies_
\[n_{\min}=L+84Ae\zeta^{-2}L\log\left(\frac{ML}{\delta}\right)\,, \tag{38}\]
_with \(\zeta=6^{-3}\epsilon^{4}\), then the AR-models -- fitted with coefficients \(\hat{\boldsymbol{\beta}}_{0}\) returned by Algorithm 1 ("AR Coefficient Estimation Pipeline") -- are stable, with probability as in (34) -- in the following scenarios:_
1. _when all the time series are different realizations of a unique underlying stochastic process,_ \(a\geq 1\)_, and_ \[\hat{\boldsymbol{\beta}}_{0}:=\frac{1}{M}\sum_{m=1}^{M}\hat{\boldsymbol{\beta }}_{m}\,;\]
2. _when all the time series are realizations of different underlying stochastic processes (equivalently, all the_ \(\boldsymbol{\beta}_{m}\)_'s are different),_ \(a\geq 2\)_, and_ \(\hat{\boldsymbol{\beta}}_{0}:=\hat{\boldsymbol{\beta}}\)_._
The lower bound on \(T_{\min}\) aligns with the upper bound in Equation (24) as \(L\geq L_{0}\). The stated value of \(n_{\min}\) chooses the smallest value of \(L\) to satisfy both the upper and lower bound on \(T_{\min}\).
Proof of Theorem 4.10.: 1. We start with the first case. The idea of the proof is as follows. All the components \(\hat{\boldsymbol{\beta}}_{m}\) of \(\hat{\boldsymbol{\beta}}\) are approximations of the same underlying \(\boldsymbol{\beta}_{0}:=\boldsymbol{\beta}_{m}\) for all \(m\in[M]\). Therefore, by the bound in Theorem 4.9 above and by convexity of the square function, their mean must be \(\ell^{2}\)-close to \(\boldsymbol{\beta}_{0}\), with \(D=O(ML_{0}^{2}L^{3}\log(ML)\log(2L))\) many samples. Moreover, the \(0.5L^{-1}\epsilon\) perturbation of the coefficients preserves stability of the \(\epsilon\)-stable process.
More explicitly, we have
\[\left\|\left(\sum_{m=1}^{M}\frac{1}{M}\hat{\boldsymbol{\beta}}_{m }\right)-\boldsymbol{\beta}_{0}\right\|_{2}^{2} =\left\|\frac{1}{M}\sum_{m=1}^{M}(\hat{\boldsymbol{\beta}}_{m}- \boldsymbol{\beta}_{m})\right\|_{2}^{2}\] \[\text{convexity}\,\Rightarrow \leq\frac{1}{M}\sum_{m=1}^{M}\|\hat{\boldsymbol{\beta}}_{m}- \boldsymbol{\beta}_{m}\|_{2}^{2}\] \[=\frac{1}{M}\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\|_{2} ^{2},\]
and by Theorem 4.9, this yields
\[\left\|\left(\sum_{m=1}^{M}\frac{1}{M}\hat{\boldsymbol{\beta}}_{m}\right)-\boldsymbol{\beta}_{0}\right\|_{2} \leq\frac{1}{\sqrt{M}}\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}\|_{2} \tag{39}\] \[\leq\frac{81(84Ae)^{\frac{1}{2}}LL_{0}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})}{\zeta\alpha\epsilon^{2}}\sqrt{\frac{\log\left(\frac{ML}{\delta}\right)}{MD}}\]
with high probability. Note that
\[\alpha=\min_{m\in[M]}\frac{T_{m}\sigma_{m}^{2}}{D}\geq\frac{T_{\min}\sigma_{ \min}^{2}}{D}\geq\frac{T_{\min}\sigma_{\min}^{2}}{MT_{\max}}\geq\frac{\sigma_ {\min}^{2}}{MC_{\sharp}}\,. \tag{40}\]
When
\[D\geq 3^{9}\cdot(84Ae)C_{\sharp}^{5}(1+\epsilon^{-2}+\epsilon^{-4})^{2} \left(\frac{\sigma_{\max}}{\sigma_{\min}}\right)^{4}(\epsilon^{3}\zeta)^{-2} ML_{0}^{2}L^{3}\log\left(\frac{ML}{\delta}\right)\log(2L)\,,\]
this yields
\[\left\|\left(\sum_{m=1}^{M}\frac{1}{M}\hat{\boldsymbol{\beta}}_{m}\right)-\boldsymbol{\beta}_{0}\right\|_{2}\leq\frac{81(84Ae)^{\frac{1}{2}}LL_{0}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})}{\zeta\alpha\epsilon^{2}}\sqrt{\frac{\log\left(\frac{ML}{\delta}\right)}{MD}}\leq\frac{\epsilon\sigma_{\min}^{2}}{\sqrt{3}C_{\sharp}M\alpha}\sqrt{\frac{1}{L\log(2L)}}\,,\] and hence, by (40), \[\left\|\left(\sum_{m=1}^{M}\frac{1}{M}\hat{\boldsymbol{\beta}}_{m}\right)-\boldsymbol{\beta}_{0}\right\|_{2}<\frac{\epsilon}{\sqrt{3}}\sqrt{\frac{1}{L\log(2L)}}\,.\]
Since \(\epsilon\in(0,1)\), the stability follows by the triangle inequality. More explicitly, if \(\hat{\mathbf{\beta}}_{0}:=M^{-1}(\hat{\mathbf{\beta}}_{1}+\cdots+\hat{\mathbf{\beta}}_{M})\), then for every \(z\in\mathbb{D}\), we have : \[|\mathbf{f}_{\hat{\mathbf{\beta}}_{0}}(z)| =|1-\hat{\mathbf{\beta}}_{0}\cdot(z,z^{2},\cdots,z^{L})|\] \[=|1-\mathbf{\beta}_{0}\cdot(z,z^{2},\cdots,z^{L})-(\hat{\mathbf{\beta}}_{ 0}-\mathbf{\beta}_{0})\cdot(z,z^{2},\cdots,z^{L})|\] \[\geq|\mathbf{f}_{\mathbf{\beta}_{0}}(z)|-\|\hat{\mathbf{\beta}}_{0}-\mathbf{\beta }_{0}\|_{2}\cdot\|(z,z^{2},\cdots,z^{L})\|_{2}\] \[>\epsilon-\frac{\epsilon}{\sqrt{3}}\sqrt{\frac{1}{\log(2L)}}\] \[>0\,.\] Evidently, this shows that the autoregressive process with \(\hat{\mathbf{\beta}}_{0}\) coefficients is stable.
2. The arguments are similar in the second case. Since \[D\geq 3^{9}\cdot(84Ae)C_{\sharp}^{5}(1+\epsilon^{-2}+\epsilon^{-4})^{2}\left(\frac{\sigma_{\max}}{\sigma_{\min}}\right)^{4}(\epsilon^{3}\zeta)^{-2}M^{2}L_{0}^{2}L^{3}\log\left(\frac{ML}{\delta}\right)\log(2L)\,,\] by Theorem 4.9, for any \(m\in[M]\) we have \[\|\hat{\mathbf{\beta}}_{m}-\mathbf{\beta}_{m}\|_{2} \leq\|\hat{\mathbf{\beta}}-\mathbf{\beta}\|_{2}\leq\frac{81(84Ae)^{\frac{1}{2}}LL_{0}\sigma_{\max}^{2}C_{\sharp}^{\frac{3}{2}}(1+\epsilon^{-2}+\epsilon^{-4})}{\zeta\alpha\epsilon^{2}}\sqrt{\frac{\log\left(\frac{ML}{\delta}\right)}{D}}\,,\] and hence, by (40), \[\|\hat{\mathbf{\beta}}_{m}-\mathbf{\beta}_{m}\|_{2}<\frac{\epsilon}{\sqrt{3}}\sqrt{\frac{1}{L\log(2L)}}\,.\] As in the first case, this implies stability for each of the components.
## 5 Algorithmic Aspects
In this section we briefly introduce the main ingredient of our algorithm -- namely, the proximal operator -- needed to understand the procedure for solving the LASSO as was done in Nicholson et al. (2020). To sketch the outline of the standard procedure for solving the regularized LASSO penalized with an overlapping group norm, we start with the following definition (see Mairal et al. (2011), for example).
**Definition 5.1** (Proximal operator).: Given a norm \(\mathcal{N}\) on \(\mathbb{R}^{ML}\), and a tuning parameter \(\lambda\), the associated proximal operator \(\operatorname{Prox}_{\mathcal{N},\lambda}\) is defined for every \(\mathbf{\alpha}\in\mathbb{R}^{ML}\) as the optimal solution (i.e., the minimizer) of the following convex problem:
\[\operatorname*{Prox}_{\mathcal{N},\lambda}(\mathbf{\alpha})\coloneqq\operatorname* {argmin}_{\mathbf{\beta}}\left\{\frac{1}{2}\|\mathbf{\beta}-\mathbf{\alpha}\|_{2}^{2}+ \lambda\mathcal{N}(\mathbf{\beta})\right\}\,.\]
By Zhao et al. (2009), the proximal operator \(\operatorname{Prox}_{\mathcal{N},\lambda}\) -- for \(\mathcal{N}\) as defined in (12) -- is the composition
\[\operatorname*{Prox}_{\mathcal{G}_{1},\lambda}\circ\cdots\circ\operatorname* {Prox}_{\mathcal{G}_{L},\lambda}\]
of the proximal operators for the individual groups and can be computed inductively -- starting from \(\operatorname{Prox}_{\mathcal{G}_{L},\lambda}\), which is the well-known soft-thresholding -- in \(O(ML)\) computational
steps. By Combettes and Wajs (2005, Proposition 3.1(iii)b), the solutions to the regularized LASSO problem in (15) are precisely the fixed points of the operator
\[\boldsymbol{\beta}\longmapsto\operatorname*{\mathrm{Prox}}_{\mathcal{N}, \lambda}\left(\boldsymbol{\beta}+\frac{2\lambda}{D}\boldsymbol{X}^{\top}( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta})\right)\,. \tag{41}\]
Here we use an appropriate \(\lambda\) (as given in (26) above). As Nicholson et al. (2020) observed, the proximal operator can be evaluated via duality. The proximal gradient method of Mairal et al. (2011) then finds the fixed point (which exists and is unique by convexity of (15)) of this proximal operator. For pseudo-code of this procedure, see Nicholson et al. (2020), where an accelerated version of the proximal descent method was employed for achieving quadratic convergence rate.
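As an illustration of this machinery, the sketch below (Python) implements the composed proximal operator for the nested groups (innermost group \(\mathcal{G}_{L}\) applied first, as described above) together with a plain, non-accelerated proximal gradient loop for (15). The step size and iteration count are arbitrary choices of ours, and this is only a sketch, not the accelerated routine of Nicholson et al. (2020).

```python
import numpy as np

def block_soft_threshold(block, thresh):
    """Shrink a whole block toward 0 in Euclidean norm."""
    nrm = np.linalg.norm(block)
    return np.zeros_like(block) if nrm <= thresh else (1.0 - thresh / nrm) * block

def prox_hierarchical(Theta, t):
    """Prox of t * sum_l sqrt(M(L-l+1)) * ||Theta[:, l-1:]||_F, computed as the
    composition Prox_{G_1} o ... o Prox_{G_L}: the smallest group G_L is applied first."""
    M, L = Theta.shape
    out = Theta.copy()
    for l in range(L, 0, -1):                              # l = L, L-1, ..., 1
        w = np.sqrt(M * (L - l + 1))                       # sqrt(|G_l|)
        out[:, l - 1:] = block_soft_threshold(out[:, l - 1:], t * w)
    return out

def proximal_gradient(X, y, M, L, lam, step=None, n_iter=2000):
    """Plain proximal gradient for (1/D)||y - X beta||_2^2 + lam * ||beta||_G,
    with beta stored as an (M, L) matrix flattened row-wise, matching (14)."""
    D = len(y)
    if step is None:                                       # 1 / Lipschitz constant of the gradient
        step = D / (2.0 * np.linalg.norm(X, 2) ** 2)
    beta = np.zeros(M * L)
    for _ in range(n_iter):
        grad = -(2.0 / D) * X.T @ (y - X @ beta)
        beta = prox_hierarchical((beta - step * grad).reshape(M, L), step * lam).ravel()
    return beta
```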
## 6 Conclusion
We have established a set-up (see Equation (15) and the discussion preceding this equation) in which LASSO -- regularized with a hierarchical group norm -- can be used to derive statistical guarantees in terms of the one-step ahead prediction error (Theorem 4.2) in the setting of multiple \(\epsilon\)-stable (Definition 2.6) univariate autoregressive processes of different lengths but identical true lag \(L_{0}\). The results presented here assume no prior knowledge of the true lag (or any upper-bound of the true lag); in fact, we show that the sample size itself suggests a certain lag \(\hat{L}\) to be used for the group-LASSO and that, given an appropriately large sample size, this \(\hat{L}\) will be an upper-bound on the true lag \(L_{0}\). Moreover, this \(\hat{L}\) will be of the order that is required for our theoretical guarantees to hold. We proved that the group-LASSO formulated with a suitable tuning parameter \(\lambda\) estimates the AR coefficients with an arbitrarily high degree of accuracy. We also showed that the support of the estimated coefficient-set approximately matches the support of the original parameters (Theorem 4.9). Finally, we proved that the fitted models with coefficients as estimated by the group-LASSO are \(\epsilon\)-stable (Theorem 4.10), a property that is known in the literature solely for Yule-Walker estimates of the parameters of univariate autoregressive processes.
From a theoretical perspective, it will be interesting to investigate adaptations of the group-LASSO method to the case of multiple decoupled AR processes with multivariate components. We expect that this will require, among other things, (1) additional techniques to deal with the group-norm, and (2) integrating the stability issues and the restricted eigenvalue issues; these will be technically far more demanding in the multivariate-component setting. On the practical front, the most important question is to get a better hold on the tuning parameter \(\lambda\) (see Equation (26)), which requires better constants/parameters than, for example, those appearing in the proof of Lemma 4.4. Better control of these will enable a more realistic estimate of \(\hat{L}\) -- to be used by the group-LASSO as an upper-bound on the true lag \(L_{0}\).
## Funding
S.C. and J.L. acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG) under grant number 451920280. S.C. was partially supported by a 'Research Assistantship Fellowship for Early Postdocs' from Ruhr-Universität Bochum Research School by means of the German Academic Exchange Service (DAAD) STIBET funds.
|
2310.15550
|
PET Synthesis via Self-supervised Adaptive Residual Estimation
Generative Adversarial Network
|
Positron emission tomography (PET) is a widely used, highly sensitive
molecular imaging in clinical diagnosis. There is interest in reducing the
radiation exposure from PET but also maintaining adequate image quality. Recent
methods using convolutional neural networks (CNNs) to generate synthesized
high-quality PET images from low-dose counterparts have been reported to be
state-of-the-art for low-to-high image recovery methods. However, these methods
are prone to exhibiting discrepancies in texture and structure between
synthesized and real images. Furthermore, the distribution shift between
low-dose PET and standard PET has not been fully investigated. To address these
issues, we developed a self-supervised adaptive residual estimation generative
adversarial network (SS-AEGAN). We introduce (1) An adaptive residual
estimation mapping mechanism, AE-Net, designed to dynamically rectify the
preliminary synthesized PET images by taking the residual map between the
low-dose PET and synthesized output as the input, and (2) A self-supervised
pre-training strategy to enhance the feature representation of the coarse
generator. Our experiments with a public benchmark dataset of total-body PET
images show that SS-AEGAN consistently outperformed the state-of-the-art
synthesis methods with various dose reduction factors.
|
Yuxin Xue, Lei Bi, Yige Peng, Michael Fulham, David Dagan Feng, Jinman Kim
|
2023-10-24T06:43:56Z
|
http://arxiv.org/abs/2310.15550v1
|
# PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network
###### Abstract
Positron emission tomography (PET) is a widely used, highly sensitive molecular imaging in clinical diagnosis. There is interest in reducing the radiation exposure from PET but also maintaining adequate image quality. Recent methods using convolutional neural networks (CNNs) to generate synthesized high-quality PET images from 'low-dose' counterparts have been reported to be 'state-of-the-art' for low-to-high image recovery methods. However, these methods are prone to exhibiting discrepancies in texture and structure between synthesized and real images. Furthermore, the distribution shift between low-dose PET and standard PET has not been fully investigated. To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN). We introduce (1) An adaptive residual estimation mapping mechanism, AE-Net, designed to dynamically rectify the preliminary synthesized PET images by taking the residual map between the low-dose PET and synthesized output as the input, and (2) A self-supervised pre-training strategy to enhance the feature representation of the coarse generator. Our experiments with a public benchmark dataset of total-body PET images show that SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
Low-Dose PET, high-quality PET synthesis, GAN, Residual Estimation, self-supervised pre-training.
## I Introduction
Positron Emission Tomography (PET), an ultrasensitive and non-invasive molecular imaging technique, is considered the main imaging instrument for oncology [1], neurology [2], and cardiology [3]. Compared with other imaging modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), PET can image the functional properties of living tissue and detect disease-related functional activity within organs by injecting radioactive tracers into the body [4]. Unfortunately, the ionizing radiation dose from the injected radioactive tracer increases patients' radiation exposure and therefore limits its application [5]. According to the dose level of the tracer, the reconstructed PET images can be classified as standard-dose PET (_sPET_), which refers to the commonly used imaging protocol for PET scans that typically involves injecting a patient with a radiotracer dose of about 3-5 mCi (millicuries) and acquiring images over 30-60 minutes, and low-dose PET (_IPET_), which uses a lower radiation dose than standard-dose PET. These 'high-quality' _sPET_ images contain better structural details and have higher signal-to-noise ratios (SNR) of the radiotracer distribution when compared with their _IPET_ counterparts. However, _sPET_ requires higher cumulative radiation exposure, which raises potential health risks and results in restricted usage, e.g., among children [48]. Motivated by these challenges, there has been great research interest in developing new image analysis methods that reconstruct _sPET_ images from _IPET_ images [9]-[13]. One category of studies has focused on reconstructing high-quality PET from low-dose sinogram data [9]-[11]. Although these algorithms achieved good results, their applicability may be constrained by slow convergence and longer scan times. An alternative is to reconstruct _sPET_ as a post-reconstruction process from _IPET_. Denoising methods, including combining complementary wavelet and curvelet transforms [12] and non-local means [13], were utilized to improve _IPET_ image quality. However, these methods aimed at reducing the noise of _IPET_ by removing unwanted distortions such as random noise, artifacts, and interference.
Deep learning methods based on convolutional neural networks (CNNs) have achieved great success in medical image analysis tasks, e.g., automated tumor segmentation and classification [14]-[16]. Motivated by the great feature representation capacity of CNNs, investigators have attempted to use them for reconstructing _sPET_ from _IPET_ images. Xiang _et al_. [16] integrated multiple CNN modules following an auto-context strategy to estimate _sPET_ from _IPET_. Kim _et al_. [17] proposed a local linear fitting (LLF) function and a denoising convolutional neural network (DnCNN) to enhance image quality from _IPET_. Spuhler _et al_. [18] adopted a U-Net architecture and replaced commonly used convolutional kernels with dilated kernels to increase the receptive field.
To address these challenges, investigators have used generative adversarial networks (GANs) to preserve detailed information. A GAN extends a CNN by adding a discriminator network to distinguish between real and synthetic images [19]. Bi _et al_. [20] developed a multi-channel GAN to synthesize high-quality PET. Wang _et al_. [21] leveraged a conditional GAN model and an adversarial training strategy to recover _sPET_ images from _IPET_. A GAN-based model with feature matching and a task-specific perceptual loss was proposed by Ouyang _et al_. [22] to accurately yield _sPET_ from its _IPET_ counterpart. Unfortunately, these GAN-based approaches have difficulties in recovering high-dimensional details, e.g., contextual information, and clinically significant texture features, e.g., image intensity values. This is mainly attributed to the fact that they have not explored the spatial distribution correlations between the _sPET_ and _IPET_ images. Furthermore, existing GAN methods, when applied to PET images, tend to produce artifacts due to the use of a sequential up-sampling process. To compensate for the unsatisfactory synthesized PET results from the GAN model, Luo _et al_. [33] adopted a residual estimation module to predict the residual between the preliminarily synthesized PET and the real PET images. However, the residual estimation network only took the preliminary synthesized results as input, such that the quality of the residual mapping was heavily reliant on the preliminary synthesized PET. In addition, [33] proposed a 2D-based synthesis model that may not fully exploit the spatial dependencies and structural information along the three views.
In addition, the existing methods have not explored self-supervised pre-training to improve the generalizability of the synthesis model, which can facilitate domain adaptation by learning representations that are invariant to differences in imaging protocols, scanner types, and other acquisition variables.
In this study, we propose a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN) to recover _sPET_ images from corresponding _IPET_ images. Compared with the state-of-the-art methods, we make the following contributions:
1) A 3D input-involved adaptive residual estimation module, AE-Net, which takes the difference map between the preliminary _sPET_ and the _IPET_ as input, allowing the network to learn the residual mapping between the _sPET_ and the _IPET_. When compared with existing residual learning methods, our _IPET_-embedded rectification can potentially correct the artifacts introduced by the initial results.
2) A self-supervised pre-training strategy to initialize the encoder of the coarse synthesis generator with multiple upstream tasks to aid the downstream task - high-quality _sPET_ synthesis. Our proposed multiple upstream tasks allow for enhancement of the feature representation capability such that the context and structural information of the synthesized images can be iteratively reconstructed at a finer convolutional layer.
This work is an extension of our preliminary work where we introduced the ability to reconstruct high-quality PET from low-dose counterpart via a classification-guided generative adversarial network with super-resolution refinement (CG-SRGAN) [51]. We have made the following additional contributions to improve the overall performance and overcome its limitations: (a) when compared to CG-SRGAN, our new SS-AEGAN innovates by integrating an adaptive residual estimation mechanism - AE-Net and self-supervised pre-training heads; (b) Our proposed SS-AEGAN differs from CG-SRGAN in that it employs a new residual estimation learning approach to overcome the inherent dependency on input PET image quality. This approach can refine the synthesized output without relying heavily on the quality of the input PET images. Furthermore, we included a self-supervised pre-training step to enhance the generalizability of our model, thereby improving its effectiveness in synthesizing PET images using three different tracers across two scanners; and (c) we conducted additional experiments with total-body PET images acquired from the Siemens scanner to validate the cross-scanner generalizability of the proposed SS-AEGAN.
## II Related Work
Our work is closely related to three tasks in medical image analysis, which are (a) PET image synthesis, (b) residual estimation and (c) self-supervised pre-training.
### _A. PET Image Synthesis_
Fig. 1: Our proposed SS-AEGAN framework to synthesize high-quality _sPET_ from _IPET_.
PET image synthesis involves the reconstruction of high-quality PET images from low-dose PET data. U-Net-based structures are commonly used backbones. Xu _et al_. [22] proposed a modified 2.5D U-Net model which takes multi-slice PET as input to reconstruct _sPET_. Chen _et al_. modified 3D-UNet by adding pixel unshuffle and shuffle layers to the first encoder block and last decoder block to obtain high-quality PET, which won first place in the 2022 MICCAI challenge [23]. Inspired by the success of the GAN model [19], investigators have also attempted to adopt GANs for PET image synthesis. Wang _et al_. [21] proposed 3D-cGAN to generate _sPET_ by taking 3D low-dose PET as input. Wang _et al_. [24] stacked multiple 3D-cGANs together to build stack-GAN for high-quality PET synthesis, showing that multi-GAN structures can outperform a single 3D-cGAN. To explicitly encourage embedding the semantic information of PET images into the latent space, Cycle-GAN has been widely applied to _sPET_ synthesis [25][26]; it combines three types of loss (adversarial loss, cycle-consistency loss, and identity loss) to train two pairs of generators and discriminators.
In this study, we adopted a 3D-GAN-like structure as the backbone, which utilizes a modified 3D-UNet as the generator. This approach was chosen due to the strong capability of the modified 3D-UNet to capture both global and local features. Different from the existing methods, we employ a self-supervised pre-training strategy to enhance the feature generalization capability of the generator so that synthetic PET can retain more semantic and structural features.
### _Residual Estimation Mapping_
Residual learning [27] was initially incorporated into CNNs to optimize the training process. It demonstrated that adding residual blocks to CNNs can effectively address the degradation problem in various image processing tasks, e.g., image super-resolution [28] and image denoising [29][30]. Nie et al. [31] applied a long-term residual connection to the generator by adding a skip connection from the input to the final layer and then performing an element-wise addition for the 3T MRI to 7T MRI synthesis task. However, the above work leveraged residual learning to assist the training process and prevent gradients from vanishing. In addition, residual learning can also be used to identify the connections between the input image and the target image in the synthesis task. Wu _et al_. [32] developed a residual learning structure by employing convolution layers to fuse images, which improved cross-modality synthesis. Although this kind of skip-connection-based residual learning can bridge the gap between the input and the target, it still fails to learn the residual mapping between them. A multiplicative residual scheme has been leveraged for attenuation-corrected PET by setting the divided mapping between the input (uncorrected PET) and the target (corrected PET) as the learning objective [34]. AR-GAN [33], proposed by Luo _et al_., first introduced a separate residual estimation network that took the synthetic PET from the previous generator as input to predict the residual mapping between the _IPET_ and the _sPET_.
Our proposed method extends the definition of residual estimation mapping with: (1) when compared to the commonly used 2D-based approaches, we introduce a 3D-based residual estimation approach that ensures all the spatial and contextual information can be captured; and (2) we adopt the input guided residual estimation strategy to overcome the artifacts brought by the GAN structure that allows to further boost the PET synthesis outcomes while producing trustworthy results.
### _Self-supervised Pre-training_
Self-supervised learning aims to achieve supervised feature learning where the tasks for the supervision are produced by the data itself. There are various self-supervised learning strategies for medical images. One is the prediction of relative
Fig. 3: The self-supervised pre-training strategy for 3D-AEGAN. The _IPET_ with DRF4 to DRF100 are pre-processed by random rotation and mask operation. The augmented data are sent into the Pixel-Net encoder for the up-stream tasks: DRL classification, rotation angle prediction, contrastive coding, and self-restoration to learn more comprehensive content image features.
Fig. 2: Two residual learning mechanisms are distinguished from the input of residual estimator and target residual mapping. (a) proposed by [33] in which input is the first-stage synthetic _sPET_ with target residual mapping of first stage _sPET_ to real _sPET_. (b) proposed _IPET_ involved residual learning method, where input is a residual between _IPET_ and the first stage _sPET_ with target residual mapping of _IPET_ to real _sPET_.
positions (PR) between patches [35] which is motivated by the intrinsic position relations among the divided parts of an object of interest. Another example of a PR task is image rotation prediction [40]. Because the PR method is based on patches that do not learn global context representation, it can only provide limited improvements for the tasks that require global context such as classification. Image context recovery based on self-supervised learning shows better feature representation ability. The idea is to train CNNs to 'inpaint' missing information in the images with randomly removed patches [36][37]. Recently, contrastive coding has been adopted in self-supervised pre-training (SSP) which learns high-dimensional shared features by maximizing the mutual information from the encoded representation of positive image pairs [38][39]. Inspired by [41], three sub-tasks, including rotation prediction, contrastive coding, and image inpainting, drive our SSP method. The pre-trained head is embedded with the encoder of the generator to boost feature extraction capability for the subsequent PET synthesis task.
## III Methods
The framework of our proposed SS-AEGAN is shown in **Fig.1**. It consists of three components: a) a synthesis network, Pixel-Net, to generate an initial prediction that closely resembles the actual _sPET_ image; b) a refinement network, AE-Net, to estimate the residual between _sPET_ and _IPET_ by taking the difference map between the first-stage result and _IPET_ as input; and c) a discriminator to judge the veracity of the refined PET. SS-AEGAN is trained in two stages: the first is self-supervised pre-training of the Pixel-Net encoder with four upstream tasks; the second is to train the whole synthesis model in an end-to-end manner. Specifically, after Pixel-Net first generates a preliminary estimated _sPET_ (P-_sPET_) from _IPET_, the adjacent AE-Net uses the residual map between the preliminary _sPET_ and _IPET_ as input to learn adaptive residual parameters. After that, the estimated residual is calculated by multiplying the output of AE-Net with the input residual map. As a result, the estimated residual and the preliminary Pixel-Net output are incorporated into the final refined _sPET_. Finally, the rectified synthetic/real _sPET_ image and the associated _IPET_ image are sent into the discriminator, which is then trained to distinguish between the actual and synthetic image pairs.
### _Coarse Generator Network - Pixel-Net_
Due to the strong ability to capture the texture and semantic features, U-Net [43] has become a commonly used network as an image synthesis generator. Therefore, a 3D U-Net-based network Pixel-Net is applied as a coarse generator to reconstruct _sPET_ from _IPET_ at the pixel level. The encoder is composed of five down-sampling blocks, and each of them adopts a 3x3x3 convolutional kernel with stride 2. The encoder blocks are introduced in the form of LeakyReLu - Convolutional layer - Batch Normalization (BN). The decoder structure also contains five up-sampling blocks with convolutional kernel 3x3x3 and stride 2. The decoder blocks are made up of three sequential parts: ReLu, a transposed convolutional layer, and BN. Note that, the first encoder block only contains one convolutional layer itself and the last block of both the encoder and decoder removes BN operation to preserve more original texture and structure details for better synthesis output. Besides, the skip connection of U-Net is also utilized in our Pixel-Net to efficiently replenish the low-dimensional information that could be lost during the up-sampling process between the encoder blocks and the related decoder blocks.
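For concreteness, a minimal PyTorch-style sketch of the Pixel-Net building blocks described above is given below; the channel widths, the LeakyReLU slope, and the transposed-convolution padding are our own assumptions, as they are not specified in the text.

```python
import torch.nn as nn

class EncoderBlock3D(nn.Module):
    """LeakyReLU -> 3x3x3 Conv (stride 2) -> BatchNorm, as described for Pixel-Net.
    The first encoder block and the last blocks omit activation/BN per the text."""
    def __init__(self, in_ch, out_ch, use_bn=True, use_act=True):
        super().__init__()
        layers = []
        if use_act:
            layers.append(nn.LeakyReLU(0.2, inplace=True))   # slope 0.2 is an assumption
        layers.append(nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1))
        if use_bn:
            layers.append(nn.BatchNorm3d(out_ch))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class DecoderBlock3D(nn.Module):
    """ReLU -> 3x3x3 transposed Conv (stride 2) -> BatchNorm."""
    def __init__(self, in_ch, out_ch, use_bn=True):
        super().__init__()
        layers = [nn.ReLU(inplace=True),
                  nn.ConvTranspose3d(in_ch, out_ch, kernel_size=3, stride=2,
                                     padding=1, output_padding=1)]
        if use_bn:
            layers.append(nn.BatchNorm3d(out_ch))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)
```

Five such blocks on each side, plus U-Net skip connections between corresponding encoder and decoder blocks, would complete the Pixel-Net backbone.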
### _Adaptive Residual Estimation - AE-Net_
Due to the limited synthesis ability of Pixel-Net, the preliminary _sPET_ is still different from the true _sPET_ in local texture and global structure. To address this problem, we propose an adaptive residual estimation network - AE-Net to refine the first-stage synthesis results from Pixel-Net, inspired by the AR-Net of [33]. **Fig.2** illustrates two residual mapping strategies: AR-Net [33] and the proposed AE-Net which also compares the input of the estimator and target residual mapping. AR-Net assumes that more realistic _sPET_ can be obtained by incorporating the residual to preliminary prediction so that the residual estimator takes preliminary prediction as input to generate a residual mapping between real _sPET_ and preliminary prediction shown in **Fig.2a**. However, our hypothesis is that residual mapping of preliminary synthetic _sPET_ and _IPET_ can better incorporate shared data distribution between _sPET_ and _IPET_. **Fig.2b** shows the residual estimation pipeline of AE-Net which takes the residual map between the prediction result of Pixel-Net and _IPET_ as input to generate a refined residual matrix as output. The target residual map is obtained by multiplying input with the refined output matrix, then the final reconstructed _sPET_ is the sum of the residual map and preliminary _sPET_. AE-Net is a symmetrical V-shaped network composed of eight encoder/decoder blocks with Conv - BN - Leaky-ReLU/ReLU components and all the convolutional layers apply 3x3x3 kernels. For the encoder section, the down-sampling is accomplished by Max-pooling operations with a stride of 2. The decoder part uses transposed convolutions with a 3x3x3 filter of stride 2 for up-sampling.
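The following sketch summarizes the two-stage forward pass implied by this design. The sign convention of the difference map and the final composition (here following the form used in the adversarial loss, Eq. (8), i.e., adding the estimated residual to the _IPET_) are assumptions where the text leaves room for interpretation.

```python
def refine_with_aenet(pixel_net, ae_net, lpet):
    """Two-stage forward pass sketch for SS-AEGAN (notation of Eqs. (6)-(8)).

    pixel_net : coarse generator P, lPET -> preliminary sPET
    ae_net    : residual estimator R, difference map -> adaptive parameter matrix
    lpet      : low-dose PET volume V_L, shape (B, 1, D, H, W)
    """
    p_spet = pixel_net(lpet)                  # preliminary sPET, P(V_L)
    r_tilde = p_spet - lpet                   # difference map fed to AE-Net (sign assumed)
    residual_params = ae_net(r_tilde)         # adaptive residual parameter matrix
    est_residual = residual_params * r_tilde  # element-wise product = estimated residual
    refined_spet = lpet + est_residual        # composition as in Eq. (8) (assumption)
    return p_spet, r_tilde, est_residual, refined_spet
```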
### _Discriminator_
Pixel-Net and AE-Net jointly operate as the generator in the GAN model. Another crucial component of the GAN, the discriminator network, seeks to differentiate between the synthesized _sPET_ and the target _sPET_. The discriminator takes either the fake image pair of synthesized _sPET_ and _IPET_ or the real image pair of _sPET_ and _IPET_ as input and determines whether the input is real or fake. As the generator and discriminator engage in adversarial learning, the synthesis results of Pixel-Net and AE-Net are improved, and the discriminator is pushed in the opposite direction to identify the real/fake image pairs. The discriminator contains five blocks in the form of Conv - LeakyReLU - BN, with the last block's activation function replaced by a sigmoid.
### _Self-supervised pre-training strategy_
To maximize the feature representation ability, we applied a self-supervised pre-training strategy on the encoder of Pixel-Net with four upstream tasks. Dose reduction level (DRL)
classification, rotation prediction, and contrastive coding are designed to learn global features, e.g., anatomical information. The PET restoration task, on the other hand, aims to learn context-level features by inpainting perturbed images. As illustrated in **Fig.3**, mixed _IPET_s are randomly augmented to produce sub-patches with two perspectives, which are then projected into the latent embedding space by the Pixel-Net encoder in parallel. For the upstream tasks, the encoder is coupled to four sub-branch heads. Specifically, for the DRL classification task, the sub-branch head predicts the DRF label of the input _IPET_ by projecting multi-scale features through a sequence of linear layers. Rotation angle classification is also utilized for representation learning, with the objective of predicting the rotation angle of the augmented _IPET_; the high-dimensional feature representation extracted from the Pixel-Net encoder is projected to a four-dimensional vector by an Identity-Linear operation head. To encode more of the underlying shared information of the high-dimensional features and eliminate low-level noise, a contrastive predictive coding (CPC) head is used to maximally preserve the mutual information of perturbed 3D patches from the same _IPET_. The CPC branch has a similar structure to the rotation branch, but with a 512-channel output instead. Anatomical patterns such as organ shape, edge information, and texture are also expected to be captured by the encoded features. By integrating a decoder with the Pixel-Net encoder, the self-restoration branch is employed to learn anatomically related representations with the goal of inpainting cut-out _IPET_ patches. The structure of the self-restoration branch is consistent with that of the proposed Pixel-Net.
### _Objective Functions_
The data flow of our proposed SS-AEGAN is from Pixel-Net to AE-Net, then to discriminator, each of which would be optimized by a loss function. The self-supervised pre-training strategy is applied before SS-AEGAN is trained in an end-to-end manner.
#### a) Self-Supervised Pre-training for Pixel-Net Encoder
_IPET_ from the training datasets is divided into three classes according to the dose reduction level: \(k_{1}\), _IPET_ with DRF 4 and DRF 10; \(k_{2}\), _IPET_ with DRF 20 and DRF 50; \(k_{3}\), _IPET_ with DRF 100. The category prediction \(\hat{y}\) is produced by the DRL classification branch, which is trained with a cross-entropy penalty between the ground truth \(y\) and the prediction:
\[L_{classification}=-\sum_{k_{i}}^{K}y^{(k_{i})}\log\hat{y}^{(k_{i})} \tag{1}\]
During SSP, the input _IPET_ is augmented by a random rotation operation with a rotation angle of 0\({}^{\circ}\), 90\({}^{\circ}\), 180\({}^{\circ}\), or 270\({}^{\circ}\). The rotation prediction branch projects the encoded features f into a prediction probability \(\hat{p}\) for each class c through the rotation head. A multi-class cross-entropy loss is used to regulate the training process as follows:
\[L_{Rotation}=-\sum_{c}^{C}y^{c}\log\hat{p}^{c} \tag{2}\]
where \(y\) indicates the real rotation angle of the input instance.
The contrastive predictive coding (CPC) branch is expected to be fruitful for learning the high-level shared information between augmented patches from the same source data. The encoder \(G_{enc}\) maps an input volume \(x_{i}\) into a latent representation \(z_{i}=G_{enc}(x_{i})\).
Further projecting \(z_{i}\) into a latent context space with the CPC head \(G_{Cpc}\) results in the contrastive coding representation \(c_{i}=G_{Cpc}(z_{i})\). The mutual information between a pair of representations is measured by the cosine similarity (CS). The CPC loss is used to maximize the mutual information of positive pairs \(c_{i}\) and \(c_{j}\), which are augmented from the same input, and minimize the mutual information of negative pairs \(c_{i}\) and \(c_{k}\), which come from different views. Overall, the CPC loss is defined as:
\[L_{CPC}=-\log\frac{\exp\left(\frac{CS\big{(}c_{i},c_{j}\big{)}}{\sigma}\right)} {\sum_{k}^{2N}I_{\Omega}\exp\left(\frac{CS(c_{i},c_{k})}{\sigma}\right)} \tag{3}\]
where \(\sigma\) is the normalization scale and \(I_{\Omega}\) is the indicator function with sample space \(\Omega=\{k\neq i\}\). N denotes training batch size.
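Eq. (3) is essentially an NT-Xent-style objective; a minimal PyTorch sketch is given below, where the temperature value and the batching convention (the positive of sample \(i\) being its other augmented view) are assumptions.

```python
import torch
import torch.nn.functional as F

def cpc_loss(z1, z2, sigma=0.1):
    """Contrastive loss in the spirit of Eq. (3).

    z1, z2 : (N, d) projections of two augmented views of the same N volumes.
    sigma  : temperature ("normalization scale"); 0.1 is an assumed value.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)       # 2N x d, unit norm
    sim = z @ z.t() / sigma                                   # cosine similarities / sigma
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                # exclude k == i
    # the positive of sample i is its other augmented view: i <-> i + n
    pos_idx = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, pos_idx)                      # -log softmax at the positive
```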
The self-restoration branch is used to restore the masked patches. During augmentation, a cut-out operator \(\psi\) is applied to the input 3D _IPET_ volume \(\nu\) to obtain perturbed patches \(\hat{\nu}=\psi(\nu)\). The cut-out operator \(\psi\) includes randomly dropping out 30% of the volume, local shuffling, and out-painting, as proposed by [44, 45]. The self-restoration branch is optimized by minimizing the L1 loss between the input volume and the restored output:
\[L_{Ress}=\frac{1}{N}\sum_{i}^{N}\|\nu_{i}-\mathcal{F}(\hat{\nu}_{i})\|_{1} \tag{4}\]
where N is batch size and \(\mathcal{F}\) represents the self-restoration process.
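A simplified sketch of the cut-out perturbation \(\psi\) is shown below; it only implements random block dropout and local shuffling (the out-painting operation of [44, 45] is omitted), and the block counts and sizes are illustrative rather than the settings used in the paper.

```python
import numpy as np

def cutout_perturb(volume, n_drop_blocks=20, block_size=16,
                   n_shuffle_blocks=8, shuffle_size=8, rng=None):
    """Simplified cut-out operator psi for one 3D lPET patch (D, H, W)."""
    rng = np.random.default_rng() if rng is None else rng
    v = volume.copy()
    dims = v.shape

    def random_corner(size):
        # pick a random corner so the block (possibly clipped) fits in the volume
        return tuple(rng.integers(0, max(s - size, 1)) for s in dims)

    # random dropout: zero out a number of cubic blocks
    for _ in range(n_drop_blocks):
        z, y, x = random_corner(block_size)
        v[z:z+block_size, y:y+block_size, x:x+block_size] = 0.0

    # local shuffling: permute the voxels inside a few small patches
    for _ in range(n_shuffle_blocks):
        z, y, x = random_corner(shuffle_size)
        patch = v[z:z+shuffle_size, y:y+shuffle_size, x:x+shuffle_size]
        v[z:z+shuffle_size, y:y+shuffle_size, x:x+shuffle_size] = \
            rng.permutation(patch.ravel()).reshape(patch.shape)
    return v
```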
The above four branch heads are integrated with a shared encoder to be optimized by the total self-supervised pre-training loss function:
\[L_{SSP}=\lambda_{1}*L_{classification}+\lambda_{2}*L_{Rotation}\] \[+\lambda_{3}*L_{CPC}+\lambda_{4}*L_{Ress} \tag{5}\]
#### b) Loss Function for 3D-AEGAN
We modify the objective function beyond the standard adversarial loss of a GAN to incorporate voxel-wise content loss along with image-wise loss, to ensure spatial alignment of the enhanced full dose images with the ground truth.
Specifically, the input low-dose PET \(V_{L}\) will be sent into the first-stage generator Pixel-Net \(P\) to generate preliminary synthesis result \(P(V_{L})\). To enforce the Pixel-Net to predict results aligned with real _sPET_ at the pixel level, image content loss is introduced, which is formulated as follows:
\[\mathcal{L}_{content}=\mathbb{E}_{V_{L}\sim p_{L}}\left[\|V_{S}-P(V_{L})\|_{1}\right] \tag{6}\]
The target residual \(r\) is obtained by elementwise subtracting the real _sPET_\(V_{S}\) from input _IPET_\(V_{L}\). The input of AE-Net \(R\) is the difference map \(\tilde{r}\) between preliminary result \(P(V_{L})\) and \(V_{L}\)
and the output is the adaptive residual parameter matrix \(R(\bar{r})\). The estimated residual map is then produced by performing an element-wise multiplication of the input residual with the residual parameter matrix. As the regularization term, an L1 loss is used to penalize the residual error, which is defined as follows:
\[\mathcal{L}_{residual}=E_{\bar{r}\sim p_{\bar{r}}}[\|r-R(\bar{r})\ast\bar{r} \|_{1}] \tag{7}\]
The estimated residual and corresponding _IPET_ are incorporated to generate the final synthesis results. To further improve image quality, we applied an additional adversarial objective function, which was defined as follows:
\[\mathcal{L}_{adv}=\mathbb{E}_{V_{L}\sim p_{L},\,V_{S}\sim p_{S}}\left[(D(V_{L},V_{S})-1)^{2}\right]+\mathbb{E}_{V_{L}\sim p_{L}}\left[D\big{(}V_{L},R(\bar{r})\ast\bar{r}+V_{L}\big{)}^{2}\right] \tag{8}\]
The final objective function of our proposed 3D-AEGAN consists of three types of loss function: content loss (\(\mathcal{L}_{content}\) ), residual loss (\(\mathcal{L}_{residual}\)) and adversarial loss (\(\mathcal{L}_{adv}\)). Thus, the overall loss function was described as:
\[\mathcal{L}_{total}=\lambda_{5}\mathcal{L}_{content}+\lambda_{6}\mathcal{L}_{ residual}+\lambda_{7}\mathcal{L}_{adv} \tag{9}\]
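Putting Eqs. (6)-(9) together, a generator-side loss could be assembled as in the sketch below. The discriminator interface, the sign convention of the target residual (chosen so that _IPET_ plus residual approximates _sPET_), and the use of only the generator part of Eq. (8) are assumptions.

```python
import torch
import torch.nn.functional as F

def aegan_generator_loss(p_spet, refined_spet, spet, lpet, r_tilde, residual_params,
                         discriminator, lambdas=(300.0, 10.0, 1.0)):
    """Generator-side objective of 3D-AEGAN, combining Eqs. (6), (7) and the
    generator part of Eq. (8), weighted as in Eq. (9). Tensors: (B, 1, D, H, W).
    `discriminator` is assumed to take the (lPET, sPET) pair as two arguments."""
    lam_content, lam_residual, lam_adv = lambdas   # 300, 10, 1 per Sec. III.F

    # Eq. (6): voxel-wise content loss between preliminary sPET and real sPET
    l_content = F.l1_loss(p_spet, spet)

    # Eq. (7): residual loss; target residual assumed to map lPET to real sPET
    r_target = spet - lpet
    l_residual = F.l1_loss(residual_params * r_tilde, r_target)

    # Eq. (8), generator part: the refined pair should be judged "real" (label 1)
    d_fake = discriminator(lpet, refined_spet)
    l_adv = torch.mean((d_fake - 1.0) ** 2)

    return lam_content * l_content + lam_residual * l_residual + lam_adv * l_adv
```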
### _Implementation Details_
Due to the differences between the two PET scanners in spatial resolution and data distribution, we trained and tested the two datasets separately. Specifically, each dataset was randomly divided into training, validation, and test sets with a ratio of 0.8 : 0.1 : 0.1. We used overlapping patches from _IPET_ and _sPET_ to reduce computational costs. For supervised learning, _IPET_ and _sPET_ images were randomly cropped into patches of \(256\times 256\times 16\). All the PET scans were converted into SUV values to normalize the images. The final recovered images were obtained by merging the overlapping patches.
Self-supervised pre-training was applied on the encoder of the Pixel-Net with four sub-branches by using a batch size of 4 and AdamW optimization [46]. The hyperparameters \(\lambda_{1-4}\) in Eq. (5) were set to 1 empirically.
The 3D-AEGAN was trained using a batch size of 4 with the Adam optimizer [47]. We empirically set \(\lambda_{5}=300\), \(\lambda_{6}=10\), and \(\lambda_{7}=1\) for the hyperparameters defined in Eq. (9); these values were fixed in the subsequent tests. We trained the proposed method for 100 epochs. The learning rate was initially set to 2e-4 and was then decreased by a factor of 0.1 with a patience of 5 epochs. To avoid overfitting, an early stopping strategy was applied to terminate the training process when the learning rate dropped below 2e-6. All the experiments were conducted on an 11GB NVIDIA GeForce RTX 2080Ti GPU with the PyTorch framework.
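Reading the schedule above as a reduce-on-plateau policy, a minimal PyTorch training-setup sketch is as follows (optimizer arguments beyond the learning rate are library defaults and therefore assumptions):

```python
import torch

def build_training(generator, discriminator, lr=2e-4, min_lr=2e-6):
    """Optimizers, LR schedule, and early-stopping check matching the reported
    settings: Adam, initial LR 2e-4, reduce by a factor of 0.1 with patience 5,
    and stop once the LR falls below 2e-6."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    sched_g = torch.optim.lr_scheduler.ReduceLROnPlateau(opt_g, factor=0.1, patience=5)

    def should_stop():
        return opt_g.param_groups[0]['lr'] < min_lr

    return opt_g, opt_d, sched_g, should_stop
```

In use, `sched_g.step(val_loss)` would be called once per epoch on the validation loss, and the training loop exits as soon as `should_stop()` returns True.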
## IV Experimental Results
### _Datasets Description_
We evaluated our method with the Ultra-Low Dose Imaging Challenge [23] dataset. We used the datasets released for the first-round challenge, which consist of 398 studies: 117 studies were acquired from the Siemens Biograph Vision Quadra scanner and 281 studies from the United Imaging uEXPLORER scanner. All data were acquired in list mode, allowing the rebinning of data to simulate different acquisition times. Each simulated low-count acquisition, corresponding to _IPET_ with a certain dose reduction factor (DRF), was reconstructed from the counts of a reduced time window resampled at the middle of the acquisition. _IPET_ images were provided with DRFs of 4, 10, 20, 50, and 100, as well as full-dose images. All these _IPET_ were produced by subsampling a portion of the full scan, such that they are aligned with the full-dose PET. The original Siemens PET scan size is 440 \(\times\) 440 \(\times\) 644 with a voxel spacing of 1.65 \(\times\) 1.65 \(\times\) 1.65 \(mm^{3}\), and the final acquired uEXPLORER PET image has a size of 360 \(\times\) 360 \(\times\) 673 with a voxel spacing of 1.667 \(\times\) 1.667 \(\times\) 2.886 \(mm^{3}\).
### _Evaluation Metrics_
To assess PET synthesis performance, we adopted three evaluation metrics: Normalized root mean squared error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measurement (SSIM). The higher the DRF of the _IPET_, the more challenging it is for the models to recover it to the quality of the _sPET_. Along with the evaluations on individual datasets with DRFs, an overall evaluation score was also measured where different weights were applied to each low-dose PET at different DRFs, according to:
\[Score_{avg}=w_{1}\ast score_{\mathit{DRF100}}+w_{2}\ast score_{\mathit{DRF50}}+w_{3}\ast score_{\mathit{DRF20}}+w_{4}\ast score_{\mathit{DRF10}}+w_{5}\ast score_{\mathit{DRF4}} \tag{10}\]
where the _score_ can be either of the evaluation metrics. \(w_{1}\), \(w_{2}\), \(w_{3}\), \(w_{4}\) and \(w_{5}\) represent 35%, 25%, 20%, 15% and 5% respectively.
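For reference, the metrics and the weighted overall score of Eq. (10) can be computed as in the following sketch; the peak value used for PSNR and the range normalization used for NRMSE are common conventions and may differ from the challenge's exact definitions.

```python
import numpy as np

def psnr(pred, target):
    """PSNR in dB, with the peak taken as the maximum of the target volume."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(target.max() ** 2 / mse)

def nrmse(pred, target):
    """RMSE normalized by the target's intensity range, expressed in percent."""
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return 100.0 * rmse / (target.max() - target.min())

def overall_score(scores_by_drf):
    """Weighted average of Eq. (10); scores_by_drf maps DRF -> metric value."""
    weights = {100: 0.35, 50: 0.25, 20: 0.20, 10: 0.15, 4: 0.05}
    return sum(weights[d] * scores_by_drf[d] for d in weights)
```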
### _Influence of Generalized Model_
The challenge dataset contains five individual _IPET_ sub-datasets with certain dose reduction factors (DRF = 4, 10, 20, 50, and 100). We employed two ways of training. Firstly, for the individual model, the network was trained with a paired set of images at the standard dose and a given DRF. The trained individual models were later tested only on the corresponding DRF test datasets. Secondly, the generalized model was trained by mixing the image pairs of different DRFs. Xue _et al._ [42] argued that combining training images from multiple DRF levels can be viewed as a data augmentation technique, which has proved useful in other applications, and showed that the generalized model outperformed the individual models on cross-scanner or cross-tracer applications. However, only one combination of DRF datasets, from DRF 4 to DRF 20, was explored in their study.
To investigate the optimal generalization capability, we combined different DRF images to train the generalized models for each DRF PET. The combination strategy is based on the proximity principle since the data distributions from the neighboring DRF are the closest. Specifically, generalized models are trained on four sub-datasets: (a) DRF 4 to DRF 20, (b) DRF 10 to DRF 50, (c) DRF 10 to DRF 100, and (d) DRF 4 to DRF 100.
The quantitative results of the four generalized models and the individual model are illustrated in **Table I**. As expected, the generalized models outperformed the individual model on all DRFs. We note that the optimal dataset combination varied for different DRFs. Specifically, for _IPET_ with DRF 100, the training dataset containing _IPET_ with DRF 10 to 100 achieved the overall best performance, with 53.079 dB PSNR and 0.288% NRMSE. The generalized model trained on DRF 10 to 50 resulted in the best PSNR and NRMSE with DRF 20 as the test set, with 56.752 dB and 0.188% respectively, and achieved the optimal test results on DRF 50 with 54.765 dB PSNR and 0.235% NRMSE. Regarding the test performance on DRF 10, the generalized model trained on _IPET_ data from DRF 10 to DRF 50 showed a slight advantage in PSNR and SSIM over the model trained on DRF 4 to DRF 20, with improvements of 0.115 dB and 0.003, respectively; however, its NRMSE was found to be 0.023% higher. Overall, the training dataset consisting of DRF 4 to DRF 20 gave the best overall results on DRF 10 testing, as well as on DRF 4, with 60.344 dB PSNR and 0.123% NRMSE.
### _Comparison Results_
#### a) Quantitative Results on Siemens Dataset
We extensively compared our proposed SS-AEGAN with state-of-the-art medical image synthesis methods, including the top two methods on the Ultra-Low Dose challenge [23] leaderboard: SF-UNet and IBRB. Our preliminary work was ranked third place on the challenge leaderboard. Another five advanced benchmarks were also included in our comparison. **Table II** shows the quantitative synthesis results of all synthesis models on the Siemens dataset. Our SS-AEGAN outperformed the other methods from DRF 4 to DRF 100 in all three evaluation metrics. Only AR-GAN, IBRB, and SF-UNet improved the quality over the baseline _IPET_ on all DRF datasets. SS-AEGAN surpassed AR-GAN, IBRB, and SF-UNet by 1.522dB, 1.062dB, and 0.896dB in average PSNR score. A distinct NRMSE drop can be observed on DRF 100, from 0.857% to 0.288%, with an optimal average NRMSE score of 0.170%. SS-AEGAN demonstrates moderate superiority on SSIM, increasing the value from 0.979 to 0.998 on the DRF 20 dataset with an overall best SSIM score of 0.9973.
Visual comparison results from Siemens test dataset with DRF 100 are shown in **Fig.4** (upper part) where the first row presents the transverse view of brain, the second row presents the coronal view of body, and the third row displays the transverse view of liver region. We observed that the proposed SS-AEGAN produced optimal qualitative results in visual comparison. From the second row and the third row of **Fig.4** (upper part), we can see that the synthetic _sPET_ of the proposed SS-AEGAN exhibits a high level of similarity to the real _sPET_, in terms of heterogeneity of liver, image contrast,
and overall structure. From the region indicated by the red circle in the first row, SS-AEGAN can recover more detailed information than other methods.
#### b) Quantitative Results on uEXPLORER Dataset
We also note that SS-AEGAN outperformed the other methods on the PSNR and NRMSE measurements. Specifically, SS-AEGAN shows noticeable improvements over the commonly used 3D-UNet model, raising the overall PSNR score from 50.182dB to 57.367dB and dropping the average NRMSE from 0.744% to 0.330%. Although SF-UNet slightly outperforms SS-AEGAN in overall SSIM by 0.004, the proposed method shows clear advantages on the other two indicators, increasing PSNR by 0.509dB and decreasing NRMSE from 0.189% to 0.175%.
**Table II** and **Table III** show that the _IPET_ images with DRF 50 and DRF 100 from uEXPLORER exhibit lower PSNR values (47.206dB and 42.490dB) and higher NRMSE values (0.597 and 1.402) compared to the corresponding _IPET_ images from the Siemens dataset (48.069dB and 44.484dB for PSNR; 0.558 and 0.857 for NRMSE). Despite the more challenging situation posed by the uEXPLORER dataset, SS-AEGAN recovered _IPET_ with DRF 50 and DRF 100 from the uEXPLORER dataset to 54.024dB and 52.013dB, with an overall PSNR score of 57.367dB, which is comparable to the quality of the synthesized _sPET_ derived from the Siemens dataset. Although the second-best method, SF-UNet, achieved relatively consistent performance on the uEXPLORER and Siemens datasets with DRF 50 (54.488dB and 53.377dB) and DRF 100 (52.578dB and 51.032dB), its cross-scanner performance gap is larger than that of the proposed SS-AEGAN.
**Fig.4** (lower part) shows the example of visual comparison results acquired from the uEXPLORER test set. We observe that our method outperformed all other comparison methods regarding enhancing image contrast, shown as the texture of liver in the third row of **Fig.4** (lower part) and preserving structural information e.g., spine and organs shown in the second row of **Fig.4** (lower part). Based on the regions highlighted by the red circle in **Fig.4** (lower part), it becomes apparent that the synthetic images produced by SS-AEGAN closely resembled the real _sPET_, exhibiting a higher degree of preservation of detailed information when compared to other synthesized results.
To verify the significance of the observed improvements of proposed SS-AEGAN, we conducted paired t-test between rival results and our results on both datasets. As reported in the **Table S1** and **S2**, most of the p-values are less than 0.05, indicating statistical significance in the performance improvement achieved by our method. Details are illustrated in the supplementary materials.
### _Cross-scanner and Cross-tracer Generalizability_
To validate the cross-scanner generalizability of the proposed SS-AEGAN, a series of experiments were conducted in which the model was trained with data from one scanner and then applied to data from another scanner, shown in **Table IV** where D1 denotes uEXPLORER and D2 denotes Siemens.
The PSNR values are generally higher when the training and testing datasets are the same (D1-D1 and D2-D2), indicating that the model performs better when trained and tested on data from the same scanner. However, even when the training and testing datasets are different (D1-D2 and D2-D1), the PSNR decreases by less than 1dB for DRF 4 to DRF 10 and by less than 2dB for DRF 50 to DRF 100, suggesting that the model is capable of generalizing to new scanners.
The uEXPLORER dataset has three different tracers: FDG (259 cases), DOTA (7 cases), and 68-Ga (15 cases). DOTA and 68-Ga were used only in the test stage. The diverse-tracer dataset (uEXPLORER) and the single-tracer dataset (Siemens) achieved comparable performance on the three metrics, as illustrated in **Table II** and **Table III**. Cross-tracer visualization examples are shown in **Fig.5**; all three tracer types consistently show that the synthesized _sPET_ images of DRF 4 to DRF 20 closely resemble the true _sPET_ images, especially in high-uptake regions. Despite the poor image quality of the input _IPET_, which inevitably leads to some missing information in the synthetic results of DRF 50 and DRF 100, SS-AEGAN is still able to recover most of the structures and high-uptake regions for all three tracer types.
### _Ablation Study_
To evaluate the effectiveness of individual components of our SS-AEGAN, we conducted multiple ablation studies on self-supervised pre-training strategy and 3D-AEGAN module with DRF 100.
_a) Synthesis Network - 3D-AEGAN:_
To investigate how each component of 3D-AEGAN improves the synthetic image quality, we decoupled the 3D-AEGAN into (a) baseline, synthesis results of Pixel-Net; (b) baseline with adaptive residual estimation network (\(\text{Pix}+\text{AE}\)); (c) baseline with discriminator (\(\text{Pix}+\text{Dis}\)); and (d) baseline with AE-Net and discriminator (\(\text{Pix}+\text{AE}+\text{Dis}\)).
As shown in the first row and second row in **Table V**, the baseline Pixel-Net improved _IPET_ of uEXPLORER from 42.490 dB to 46.019 dB on PSNR; increased SSIM from 0.970 to 0.990 and dropped NRMSE from 1.402% to 0.475%. For _IPET_ of Siemens, Pixel-Net increased image quality by 4.394 dB on PSNR, 0.044 on SSIM, and 0.378% NRMSE correction.
To verify the residual mapping ability of the proposed AE-Net, we compared the results between baseline with/without AE-Net. The quantitative results are shown in the second and third rows of **Table V**. The AE-Net improved the baseline Pixel-Net by 3.156 dB, 0.001, and 0.114% in PSNR, SSIM, and NRMSE respectively on DRF 100 from uEXPLORER. Similarly, AE-Net also shows its advantages on Siemens datasets by improving the synthetic results of Pixel-Net by 2.045 dB, 0.001, and 0.166% in PSNR, SSIM, and NRMSE. It indicated that the proposed AE-Net was effective for high-quality PET synthesis, especially in spatial information recovery.
To further assess the performance of AE-Net, we compared the results between the baseline with AE-Net and the baseline with discriminator, shown as the third row and fourth row of two sub-tables in **Table V**. The baseline with the AE-Net outperformed the baseline with the discriminator by 0.861dB and 1.306 dB in PSNR on uEXPLORER and Siemens respectively. Similarly, AE-Net with baseline noticeably surpassed baseline with discriminator on the pixel-wise correction and decreased NRMSE from 0.441% to 0.361% on uEXPLORER and from 0.443% to 0.313% on Siemens. It indicates that the proposed AE-Net is a powerful tool for high-quality image synthesis, especially for very-low-dose PET images e.g., DRF 100.
By adding a discriminator into the baseline + AE-Net, PSNR improved to 51.288dB from 49.175dB and NRMSE dropped to 0.352% on uEXPLORER; PSNR increased to 52.316dB and NRMSE decreased to 0.301% on Siemens. Making a synthesis model into a GAN-similar structure showed the advantages of structural level recovery by increasing SSIM to 0.992 with the uEXPLORER scanner and to 0.994 with the Siemens scanner.
_b) Efficacy of Self-Supervised Pre-training:_
As shown in **Fig.8**, the SSP strategy yields its largest gain on the DRF 100 test, raising PSNR from 51.288dB to 52.013dB for uEXPLORER and from 52.316dB to 53.079dB for Siemens. The second largest gap appears at DRF 50, where the SSP strategy improved the test results by 0.716dB and 0.678dB for uEXPLORER and Siemens, respectively. The test results of DRF 20 and DRF 10 also demonstrate substantial improvements, as the SSP strategy increased PSNR by 0.653 dB and 0.443 dB for uEXPLORER and by 0.541 dB and 0.452 dB for Siemens. Even with DRF 4, a minor performance gain from SSP can still be observed on both scanners.
_c) Efficacy of Self-Supervised Up-stream Tasks:_
To investigate the impact of individual upstream tasks in the self-supervised pre-training strategy, we conducted multiple experiments as shown in **Table VI**.
Among the individual tasks, self-restoration-guided pre-training achieved the best performance, improving PSNR by 0.464 dB and NRMSE by 0.01%. When all the upstream tasks are employed in the SSP, the optimal test results are obtained, with a PSNR improvement of 0.725dB.
### _Clinical Assessment on Liver ROIs_
To evaluate the robustness of the synthesized results, we measured the homogeneity of a region of interest (ROI) in the liver. Such ROI measurements can be used to quantify image quality, as sections of the liver are expected to be homogeneous [53]. We manually annotated spherical ROIs with a diameter of 20\(\pm\)1 mm within lesion-free and homogeneous sections of the right liver lobe [52]. Our annotation process avoided sections of the liver that include prominent blood vessels and partial volume effects. Fig.S1 illustrates a case of this annotation process. We conducted an analysis involving both the rival methods and our method, in which we measured SUVmax and SUVmean. We calculated the accuracy of SUVmax and SUVmean within the ROIs in reference to the standard-dose PET, quantifying this accuracy in terms of percentage error. This evaluation was conducted using the uEXPLORER dataset with DRF 100 among all 28 patient studies. These data were selected because the uEXPLORER dataset has more cases than the 11 cases of the Siemens dataset and DRF 100 represents the most challenging setting.
The percentage errors for SUVmax and SUVmean are visually presented in Fig.6. Notably, our SS-AEGAN achieved the lowest percentage error in terms of SUVmax. Additionally, our method ranks as the second-best in SUVmean. These results underscore the robustness of our method in comparison to rival methods.
### _Model Variability Assessment_
To validate the stability of our model performance, we implemented a 5-fold cross-validation during training and
Fig. 8: Ablation study between using self-supervised pre-training (SSP) strategy and training from scratch strategy tested on (a) uEXPLORER dataset with DRF 100 to DRF 4 and (b) Siemens dataset with DRF 100 to DRF 4.
Fig.6: Percentage error of SUVmax (a) and SUVmean (b) of uEXPLORER test dataset with DRF100.
Fig.7: Quantitative comparison of 5-fold cross-validation results on PSNR (a) and NRMSE (b); a significant difference compared with the proposed SS-AEGAN is indicated by * (p \(<\) 0.005).
validation. Individual tests were conducted on a separate test dataset to ensure a fair comparison. We used DRF 4 and DRF 100 from the uEXPLORER dataset to represent the lowest dose and highest dose reduction factors, respectively. The baseline model 3D-UNet was used as the comparison method. Quantitative results are presented in Fig.7 in terms of PSNR and NRMSE, demonstrating that our SS-AEGAN exhibits lower standard deviations, and the observed superior performance is statistically significant, as indicated by the low p-values associated with the results.
## V Discussion
Our main findings are that: (1) our SS-AEGAN consistently outperformed the state-of-the-art methods, in particular, for DRF 100, where the image characteristics suffered from loss of structure information and usually presented low signal-to-noise ratio; (2) we identified that our method has promising generalizability through cross-scanner, cross-tracer and cross model analysis; (3) self-supervised pre-training increases discrimination power of the derived features; and (4) input-involved residual estimation can effectively narrow down the difference between synthesis _sPET_ and _IPET_.
**Table II** and **Table III** show that our method achieved the best results across the two scanners. The improvement of 3D-GAN over 3D-UNet is likely attributed to the use of adversarial learning; the discriminator allows the model to distinguish between synthesized and real images, enabling it to generate higher-quality images than the 3D-UNet structure. The further improvement of 3D-CycleGAN is attributed to the cycle-consistency loss, which encourages the generator to produce output images that can be mapped back to the original input domain. StackGAN achieved better performance than CycleGAN; this was attributed to its multi-stage architecture, which generates images at multiple resolutions and incorporates a conditioning auxiliary variable at each stage. This approach allows StackGAN to generate high-quality images with greater anatomical detail. However, StackGAN still suffers from feature representation limitations in PET image generation due to the reuse of the same network structure in each stage of the generator. AR-GAN effectively resolves this issue by using a residual estimation module as the second-stage component to refine the synthetic output from the first stage. Both SF-UNet and IBRB achieved competitive performance when compared to our proposed method. However, both trained a single-stage model from scratch, which may limit feature representation in generating fine-grained details and in recovering complex textures, e.g., tumors with inhomogeneous textures. In contrast, our method SS-AEGAN can minimize these limitations by adopting self-supervised pre-training together with a second-stage residual estimation module, AE-Net.
We also investigated the number of trainable parameters across all comparison methods. Fig.S2 and Fig.S3 present the comparisons. SS-AEGAN has the third-lowest number of parameters yet outperformed all comparison methods in both PSNR and NRMSE, even those that used three times more parameters.
For the cross-scanner and cross-tracer evaluation, the results in **Table IV** indicate that the SS-AEGAN has promising generalizability across different scanners. We suggest that the model's generalizability is mainly attributed to the adopted self-supervised pre-training strategy (SSP). First, using the self-supervised method that involves inpainting (such as self-restoration) is expected to improve a model's ability to handle noisy or incomplete data during inference. Contrastive coding can further aid a model to learn useful and robust features that are generalizable across different datasets or tasks.
Compared with models trained from scratch, the SSP model shows overall better synthesis performance on all DRFs. The proposed four upstream self-supervised pre-training tasks boost the feature representation by learning multidimensional information, and the individual influence of each task can be observed in **Table VI**. The classification task involves training the model to predict the dose reduction level of an input image. This task can encourage the model to learn more abstract and higher-level features that are invariant to changes in appearance or context. In the context of PET image synthesis, we identified that this process could help the trained model to better capture the underlying characteristics of PET images that are relevant to the synthesis task.
The generalizability of the proposed method is also reflected in the integrability of the AE-Net and SSP components; both can be easily incorporated into any synthesis generator and are likely to boost the synthesis results. We conducted an additional ablation study by combining the proposed AE-Net and SSP with a commonly used generator, 3D-GAN, with the results shown in Table VII. The outcomes illustrate the effectiveness of each individual component in improving synthesis outcomes.
The proposed AE-GAN assisted in high-quality PET synthesis, as shown in **Table V**. Compared to the current residual estimation method AR-GAN [33], the advantages of AE-GAN were as follows. The previous residual estimation method used the synthetic PET as input to predict the difference map between the synthetic PET and _sPET_. However, due to domain misalignment between the synthetic PET and _sPET_, the synthetic PET alone may not provide all the information needed to estimate the residual. In contrast, we replaced this single input with the difference map between _IPET_ and the synthetic PET, mimicking the residual between low-dose PET and standard-dose PET. The input-involved residual estimation method compensates for incomplete information in the synthetic image and bridges the domain gap between _IPET_ and _sPET_ in a more direct manner. Further, when compared to the 2D method in [33], AE-GAN with its 3D adaptive residual estimation is able to capture spatial dependencies and structural information along the three views.
In this study, we mainly focused on introducing the difference in residual map for reconstructing the standard-dose PET images. Therefore, a standard additive residual scheme was used. In our future work, we will compare different residual schemes, such as the multiplicative residual scheme proposed by Guo _et al_. [34], to further evaluate the performance of the proposed method.
In addition, by leveraging recent advancements in AI-driven reconstruction methods [9]-[11] that directly reconstruct high-quality PET images from low-dose sinograms, we aim to explore incorporating the end-to-end structure of our model, which inherently possesses the capability to take sinograms as input, into the domain of high-quality PET reconstruction from low-dose sinograms.
## Acknowledgment
All authors declare that they have no known conflicts of interest in terms of competing financial interests or personal relationships that could have an influence or are relevant to the work reported in this paper.
## VI Conclusion
In this paper, we designed a self-supervised pre-trained adaptive residual estimation-based generative adversarial network (SS-AEGAN) for high-quality standard-dose PET synthesis from low-dose PET. To enhance the model generalizability and feature representation, a self-supervised pre-training is introduced with four up-stream tasks to assist high-quality _sPET_ synthesis. Moreover, a novel _IPET_-involved residual estimation module is proposed to further narrow the distribution misalignment between _sPET_ and _IPET_. Experimental results with a large public benchmark dataset demonstrated that our method surpassed the current state-of-the-art methods. As part of our future work, we plan to incorporate prior knowledge e.g., anatomy derived from CT, and MRI to further improve the quality and sensitivity of the synthesized PET images.
|
2303.15468
|
Epistemic Injustice in Technology and Policy Design: Lessons from New
York City's Heat Complaints System
|
This paper brings attention to epistemic injustice, an issue that has not
received much attention in the design of technology and policy. Epistemic
injustices occur when individuals are treated unfairly or harmed specifically
in relation to their role as knowers or possessors of knowledge. Drawing on the
case of making heat complaints in New York City, this paper illustrates how
both technological and policy interventions that address epistemic injustice
can fail or even exacerbate the situations for certain social groups, and
individuals within them. In bringing this case to the workshop, this paper
hopes to provide another generative and critical dimension that can be utilised
to create better technologies and policies, especially when they deal with
diverse and broad range of social groups
|
Mohsin Yousufi, Charlotte Alexander, Nassim Parvin
|
2023-03-25T04:14:00Z
|
http://arxiv.org/abs/2303.15468v1
|
# Epistemic Injustice in Technology and Policy Design: Lessons from New York City's Heat Complaints System
###### Abstract.
This paper brings attention to epistemic injustice, an issue that has not received much attention in the design of technology and policy. Epistemic injustices occur when individuals are treated unfairly or harmed specifically in relation to their role as knowers or possessors of knowledge. Drawing on the case of making heat complaints in New York City, this paper illustrates how both technological and policy interventions that address epistemic injustice can fail or even exacerbate the situations for certain social groups, and individuals within them. In bringing this case to the workshop, this paper hopes to provide another generative and critical dimension that can be utilised to create better technologies and policies, especially when they deal with a diverse and broad range of social groups.
Key words and phrases: Credibility Boosters, Epistemic Injustice, Housing Justice, Civic Technologies, Complaints, Home
Recognizing the ubiquity and magnitude of the problem, the non-profit Heat Seek developed a technological intervention: an IoT temperature sensor that documents heat violations in a home. The sensors have been highly effective at combating this lack of heat and forcing landlords to turn up the heat. This intervention, currently applied to 58 buildings (Hamburg et al., 2017), has helped tenants avoid Housing Court altogether and, according to Heat Seek's annual reports, made the landlords more cautious about cutting off heat (Hamburg et al., 2017). By allowing the tenants to document the heating violations in their homes, Heat Seek has made the tenants more credible.
Why is Heat Seek so effective at holding landlords accountable? We argue that what Heat Seek does, by means of providing quantitative data, is give tenants a "credibility boost", thus making their complaint legitimate and legible in the eyes of attorneys, landlords, and the Court. This occurs because, inherently, the situation of an unresolved, unheard heating complaint stems not only from institutional mechanics (Brocker et al., 2016) but from epistemic injustice. In the following sections, the paper will briefly explore the notion of epistemic injustice as theorized by Miranda Fricker. By following the process of a heat complaint, it will then illustrate how the existing policy and legal frameworks discount the tenants' knowledge. The paper will then use Heat Seek to show how a technology can respond to such epistemic injustices and yet not necessarily resolve them. We close with a brief discussion of how considerations of epistemic injustice and justice are relevant to the discourse on technology and policy, and how they can inform better practices for designers.
## 2. Epistemic Injustices
Philosopher Miranda Fricker defines epistemic injustice as "a wrong done to someone specifically in their capacity as a knower" (Fricker, 2016). Fricker further identifies two types of epistemic injustice: testimonial injustice and hermeneutical injustice, both of which are particularly relevant for our case. Testimonial injustices occur when a person's testimony is not trusted or is given less credibility due to social biases or prejudices against the person or the social group they belong to. In other words, the person does not seem sufficiently credible; they suffer from a credibility deficit. Hermeneutical injustice occurs when a person is unable to communicate their experience because they lack the shared vocabulary to express it. For instance, a non-native speaker might lack sufficiently technical or legal vocabulary in English to persuasively articulate their complaint.
Along with Fricker, others such as Jose Medina, Gaile Pohlhaus, Michael Sullivan, and Ian James Kidd have documented how epistemic injustices occur in various situations, experiences, and areas of life such as medicine, law, and education, thereby privileging the knowledges of certain groups over others (Fricker, 2016; Kidd, 2017; Kidd, 2017). These injustices are a critical limitation to creating a more equitable and just society in which we can account for the experiences of those that are different from us. Within the HCI community, there is limited work on how technologies can perpetuate or even account for epistemic injustices. This paper attempts to introduce epistemic injustice as a significant concern and an area of inquiry that is relevant for HCI, Law, and Policy.
For our case, epistemic injustice explains both how and why the tenants' heating complaints do not get resolved. Through a series of interviews with attorneys and Heat Seek, together with a review of the legal literature, we documented the process of making a heat complaint in New York. We show how the tenants' testimonies are not believed at various points while making the heating complaint. Instead, the tenants are actively disregarded because they suffer from a credibility deficit which partly stems from their social standing. While inadequate heat is a pervasive problem across all five boroughs of the city, some neighborhoods, such as the Bronx, are disproportionately affected each year. These are the places where marginalized and minority communities live, and the complaints from these areas make up the bulk of the heating complaints (Kidd, 2017).
## 3. Heating complaint
Each year, the Department of Housing Preservation and Development (HPD) receives in excess of 150,000 heating complaints (Hung et al., 2017). Most of these complaints are closed quickly (Hung et al., 2017). For instance, we found that once HPD has received a complaint, it will follow up with the landlord and close the complaint based on the landlord's word, regardless of whether the heating has been restored. Here, the landlord's word is assigned a higher epistemic value than the tenant's complaint, i.e., the tenant suffers from a credibility deficit. Similarly, a review of the court documents illustrates how the court itself discounts certain instances of the tenant's experience because they are not specific enough (Hung et al., 2017).
Once HPD receives the complaint, and it has not been closed based on the landlord's responses, HPD will send a building inspector to assess the legitimacy of the complaint. If there is no heat, the inspector can issue a heating violation that can then be used as a basis for a court proceeding. Unfortunately, getting this violation is also a difficult task. For instance, before the building inspector visits the apartment, they have to inform the landlord but not the tenant. Landlords tend to turn up the heating for the impending visit, only to turn it off once the inspector has left, thus avoiding the violation (and also costing the city money). Also, as the tenant is not usually informed of the visit, they have to accommodate this surprise visit in their daily routines, which is especially strenuous for people who work multiple jobs, perform caretaking duties, or are differently-abled. Even when the inspector is able to access the apartment, they might not issue the heating violation if the outside temperature is not low enough on that specific day, meaning that if the inspector visits on an unusually hot day or time, there will be no violation, regardless of how much time the tenant has spent without heat. Often, the tenant's testimony is systemically disregarded in favor of the landlord's during the inspection process, simply because they are "tenants" - a social and economic class (Hung et al., 2017). We can trace such instances of tenants being subjected to testimonial and hermeneutical injustice all the way to the workings of the Housing Courts.
Susana Blankely, a community organizer in New York, documents the tenants' experience in a housing court. Here, the tenants are frequently cornered by the landlord's attorney, are misinformed about their cases, or are even completely dismissed (Ballall et al., 2017). Most tenants have to navigate this process alone, without any help from an attorney, because they usually are unable to afford one. Being asked to conduct a fairly complex legal process without any experience or guidance is an example of hermeneutical injustice. Proving any effect in a court of law requires "specificity of dates and the relative severity" of the inadequate heat (Hung et al., 2017). There are specific rules of evidence that establish whether the evidence is reliable and verifiable. Thus, unable to prove the lack of heat - an inherently embodied and physical experience - partly due to a lack of documentation and partly due to a limited understanding of the rules of evidence, the tenants suffer from both testimonial and hermeneutical injustices. There are other such instances where the tenant is constantly and intentionally wronged in their capacity as a knower, simply due to their identity as a tenant, a person of color, a specific gender, or even age. Policy solutions such as HPD's protocols for handling complaints and inspections, or even the temperature limit itself, do not account for the credibility differentials between individuals and social groups. This creates a situation which fails the people that need help the most. Heat Seek's response and its success are further evidence of epistemic injustice.
## 4. Heat Seek
Heat Seek's temperature sensor is a simple piece of technology. It is a temperature sensor connected to the internet that collects hourly readings of the inside temperature, pulls the outside temperature from the internet, documents the date and time, and calculates whether the inside temperature is in violation of the legal limit. Tenants, and their attorneys, can access the logs and use them to monitor compliance with the temperature limits. This simple tool has been extremely effective in making sure landlords keep the heat on at least at the minimum limits. Based on our interviews and news reports, Heat Seek has helped tenants who had been without heat for years and for whom every other recourse had failed.
Examining it through the lens of epistemic injustice, we can see why Heat Seek is so effective. By providing the tenants with quantitative, tangible data in the form of temperature logs, Heat Seek has made the tenants more credible. Instead of relying solely on the testimonies of tenants against the landlord, the court and attorneys can now also assess the heating log produced by a third party. Heat Seek actively positions itself as an "objective" and "neutral" party to the situation [7]. This lends Heat Seek - and by proxy, tenants - increased credibility when making heat complaints. All data, as reported by Heat Seek and corroborated by interviews with attorneys, points to the fact that tenants are taken more seriously when they rely on Heat Seek's data. Heat Seek has also partnered with community organizations and legal aid clinics to provide the sensors to the people who can benefit the most from them.
Before Heat Seek, tenants making a heat complaint were asked by their attorneys to build a temperature log [12]. Interestingly, the epistemic move becomes clear when one considers that the data from Heat Seek is by all means identical to the data that a tenant would fill in their own log. Both of these logs have the same data points (see Figure 1), differing only in the frequency of the data. However, the manual logs were not nearly as effective as the logs from the Heat Seek sensor in getting the heat restored. The manual logs seem to possess a lower epistemic value because the tenant who produced them suffers from a credibility deficit, unlike Heat Seek, which is a "legitimate", "credible", and neutral third party. While on the surface it might appear as if the tenant's epistemic value has increased through the credibility boost from the presence of Heat Seek, a deeper look reveals that their epistemic standing remains the same; only by relying on the credibility of a "technofix" can the tenants receive a desired outcome, and those without it carry on with the same fate. This is not to criticize how Heat Seek operates, but to illustrate the severity of epistemic injustice,
Figure 1: The manual log (left) vs the Heat Seek log (right)
that even interventions that provide credibility boosts cannot entirely alleviate the credibility deficits, and that the boosts might even be temporary and ad hoc.
## 5. Discussion & Conclusion
Analyzing Heat Seek through the lens of epistemic injustice illustrates the critical and generative capacity that examinations of epistemic injustice offer for design and policy interventions. By examining the case of making a heat complaint in New York from the perspective of epistemic injustice, one can identify the points and interactions where the process fails, such as when the complaints are closed based on the landlords' word or when the inspectors do not inform the tenants of an inspection. These failures provide an opportunity to design interventions such as Heat Seek. Understanding Heat Seek as an attempt to bring about epistemic justice allows us to examine how the technology operates, its effectiveness, and how to account for its shortcomings in a different manner. This perspective emphasizes how the knowledge of certain groups and individuals, when not accounted for, can create harmful effects in the everyday lived experience of the people, and how even socially conscious interventions can have a limited effect.
It is important to understand that this paper is not arguing that epistemic injustice is the only concern that should be taken into consideration when designing societal-level technologies. Instead, it is attempting to add epistemic injustice to the critical lens of a technologist or a policy maker. Even with the heat complaint, the issues do not arise solely from epistemic inequalities. Instead, there are other structural and societal factors at play that contribute to this problem, such as landlords wanting to push tenants out, the limited design of the Courts, and the housing shortage in urban New York. Interventions that take into account a more holistic view of the problem are going to fare better than those that do not. Heat Seek as a response also considers this scale of the problem, understanding that the tenant's inability to get heat restored is also due to the lack of resources and help available to them. As a result, they actively partner with community organizations, legal
Figure 2. Heat Seek’s sensor placed on a wall. The tamper-evident tape is visible in the image on the right. Image Courtesy of Heat Seek
aid clinics, and tenant associations to get the sensors to the tenants who would benefit the most from them.
Civic technologies, and technologies in general, have the potential to trigger policy changes. For instance, Heat Seek may seem like a small project (limited to 58 buildings out of the hundreds of thousands in New York), but it has actually enabled the City to create a temperature sensor program [16, 18]. Extending considerations of epistemic injustice to this situation, we might ask whether introducing sensors to all apartments is effective or even feasible. Do we run the danger of stratifying access to justice through means of technology, so that those without the sensors are _always_ dismissed? Instead, should we not focus on improving the infrastructures in ways that respect and respond to the epistemic positions of the individuals within them? Epistemic positions are critical not just for technology but also for policy-based solutions, and perhaps more so, because policies are themselves a hermeneutical resource.
This paper has presented the case of heating complaints as a way to invite attention to the issues of epistemic injustice prevalent around us in many forms. It seeks to engage in conversations around how these epistemic injustices are perpetuated and how they can be remedied by technologies and policies. We argue that a key part of designing better technologies and policies is to consider an individual as a unique knower and to account for the limits of their knowledge. We hope that the participants at the workshop critically examine, reflect on, and apply the sensibilities of epistemic injustice in their own practices.
|
2305.11580
|
Approximate Distance Sensitivity Oracles in Subquadratic Space
|
An $f$-edge fault-tolerant distance sensitive oracle ($f$-DSO) with stretch
$\sigma \ge 1$ is a data structure that preprocesses a given undirected,
unweighted graph $G$ with $n$ vertices and $m$ edges, and a positive integer
$f$. When queried with a pair of vertices $s, t$ and a set $F$ of at most $f$
edges, it returns a $\sigma$-approximation of the $s$-$t$-distance in $G-F$.
We study $f$-DSOs that take subquadratic space. Thorup and Zwick [JACM 2005]
showed that this is only possible for $\sigma \ge 3$. We present, for any
constant $f \ge 1$ and $\alpha \in (0, \frac{1}{2})$, and any $\varepsilon >
0$, a randomized $f$-DSO with stretch $ 3 + \varepsilon$ that w.h.p. takes
$\widetilde{O}(n^{2-\frac{\alpha}{f+1}}) \cdot O(\log n/\varepsilon)^{f+2}$
space and has an $O(n^\alpha/\varepsilon^2)$ query time. The time to build the
oracle is $\widetilde{O}(mn^{2-\frac{\alpha}{f+1}}) \cdot O(\log
n/\varepsilon)^{f+1}$. We also give an improved construction for graphs with
diameter at most $D$. For any positive integer $k$, we devise an $f$-DSO with
stretch $2k-1$ that w.h.p. takes $O(D^{f+o(1)} n^{1+1/k})$ space and has
$\widetilde{O}(D^{o(1)})$ query time, with a preprocessing time of
$O(D^{f+o(1)} mn^{1/k})$.
Chechik, Cohen, Fiat, and Kaplan [SODA 2017] devised an $f$-DSO with stretch
$1{+}\varepsilon$ and preprocessing time $O(n^{5+o(1)}/\varepsilon^f)$, albeit
with a super-quadratic space requirement. We show how to reduce their
preprocessing time to $O(mn^{2+o(1)}/\varepsilon^f)$.
|
Davide Bilò, Shiri Chechik, Keerti Choudhary, Sarel Cohen, Tobias Friedrich, Simon Krogmann, Martin Schirneck
|
2023-05-19T10:40:25Z
|
http://arxiv.org/abs/2305.11580v4
|
# Approximate Distance Sensitivity Oracles in Subquadratic Space+
###### Abstract
An \(f\)_-edge fault-tolerant distance sensitive oracle_ (\(f\)-DSO) with stretch \(\sigma\geqslant 1\) is a data structure that preprocesses a given undirected, unweighted graph \(G\) with \(n\) vertices and \(m\) edges, and a positive integer \(f\). When queried with a pair of vertices \(s,t\) and a set \(F\) of at most \(f\) edges, it returns a \(\sigma\)-approximation of the \(s\)-\(t\)-distance in \(G-F\).
We study \(f\)-DSOs that take subquadratic space. Thorup and Zwick [JACM 2005] showed that this is only possible for \(\sigma\geqslant 3\). We present, for any constant \(f\geqslant 1\) and \(\alpha\in(0,\frac{1}{2})\), and any \(\varepsilon>0\), an \(f\)-DSO with stretch \(3+\varepsilon\) that takes \(\widetilde{O}(n^{2-\frac{\alpha}{f+1}}/\varepsilon)\cdot O(\log n/\varepsilon )^{f+1}\) space and has an \(O(n^{\alpha}/\varepsilon^{2})\) query time. We also give an improved construction for graphs with diameter at most \(D\). For any constant \(k\), we devise an \(f\)-DSO with stretch \(2k-1\) that takes \(O(D^{f+o(1)}n^{1+1/k})\) space and has \(\widetilde{O}(D^{o(1)})\) query time, with a preprocessing time of \(O(D^{f+o(1)}mn^{1/k})\).
Chechik, Cohen, Fiat, and Kaplan [SODA 2017] devised an \(f\)-DSO with stretch \(1+\varepsilon\) and preprocessing time \(O_{\varepsilon}(n^{5+o(1)})\), albeit with a super-quadratic space requirement. We show how to reduce their preprocessing time to \(O_{\varepsilon}(mn^{2+o(1)})\).
## 1 Introduction
_Distance Oracles_ (DOs) are fundamental data structures that store information about the distances of an input graph \(G=(V,E)\).1 These oracles are used in several applications where one cannot afford to store the entire input, but still wants to quickly retrieve the graph distances upon query. Therefore, DOs should provide reasonable trade-offs between space consumption, query time, and _stretch_, that is, the quality of the estimated distance.
Footnote 1: Throughout, we assume the graph \(G\) to be undirected and unweighted. We use \(n\) for the number of vertices and \(m\) for the number of edges.
We are interested in the design of DOs that additionally can tolerate multiple failures of edges in \(G\). An _\(f\)-edge fault-tolerant distance sensitivity oracle_ (\(f\)-DSO) is able to report an estimate \(\widehat{d}_{G-F}(s,t)\) of the distance \(d_{G-F}(s,t)\) between \(s\) and \(t\) in the graph \(G-F\), where \(F\subseteq E\) is a set of at most \(f\) failing edges, when queried with the triple \((s,t,F)\). The parameter \(f\) is the _sensitivity_ of the DSO. We say that the _stretch_ of the \(f\)-DSO is \(\sigma\geqslant 1\) if \(d_{G-F}(s,t)\leqslant\widehat{d}_{G-F}(s,t)\leqslant\sigma\cdot d_{G-F}(s,t)\) holds for every query \((s,t,F)\).
Several \(f\)-DSOs with different size-stretch-time trade-offs have been proposed in the last decades, some of which can only deal with a very small number \(f\in\{1,2\}\) of failures [6, 8, 9, 13, 18, 22, 23, 24, 27, 28, 34]. In the following, we focus on \(f\)-DSOs that deal with multiple failures \(f\geqslant 3\). The Monte Carlo \(f\)-DSO of Weimann and Yuster [37] computes exact distances w.h.p.2 and gives adjustable trade-offs depending on some parameter \(\alpha\in(0,1)\). More precisely, the \(f\)-DSO can be built in \(\widetilde{O}(mn^{2-\alpha})\) time, has a query time of \(\widetilde{O}(n^{2-2(1-\alpha)/f})\), and uses \(\widetilde{O}(n^{3-\alpha})\) space.3 The \(f\)-DSO of Duan and Ren [25] requires \(O(fn^{4})\) space, returns exact distances in \(f^{O(f)}\) query time, but the preprocessing algorithm that builds it takes \(n^{\Omega(f)}\) time. The \(f\)-DSO of Chechik, Cohen, Fiat, and Kaplan [19] can handle up to \(f=o(\log n/\log\log n)\) failures but has a stretch of \(1+\varepsilon\), for any constant \(\varepsilon>0\). In turn, the oracle is more compact, requiring \(O_{\varepsilon}(n^{2+o(1)}\log W)\) space, where \(W\) is the weight of the heaviest edge of \(G\), has query time \(O_{\varepsilon}(f^{5}\log n\log\log W)\), and can be built in \(O_{\varepsilon}(n^{5+o(1)}\log W)\) preprocessing time. Note that the aforementioned \(f\)-DSOs all have a super-quadratic space requirement, that is, they take up _more_ space than the original input graph, which is prohibitive in settings where we cannot even afford to store \(G\). The \(f\)-DSO of Chechik, Langberg, Peleg, and Roditty [21] addresses this issue with a space requirement of \(O(fkn^{1+1/k}\log(nW))\), where \(k\geqslant 1\) is an integer parameter. Their data structure has a fast query time of \(\widetilde{O}(|F|\log\log d_{G-F}(s,t))\) but guarantees only a stretch of \((8k-2)(f+1)\), that is, depending on the sensitivity \(f\).
Footnote 2: An event occurs _with high probability_ (w.h.p.) if it has probability at least \(1-n^{-c}\) for some constant \(c>0\).
Footnote 3: The space is measured in the number of machine words on \(O(\log n)\) bits. For a function \(g\) of the input and parameters, we use \(\widetilde{O}(g)\) to denote \(O(g\cdot\mathsf{polylog}(n))\).
Another way to provide approximate pairwise replacement distances under edge failures is that of fault-tolerant spanners [31]. An _(\(f\)-edge) fault-tolerant \(\sigma\)-spanner_ is a subgraph \(H\) of \(G\) such that \(d_{H-F}(s,t)\leqslant\sigma\cdot d_{G-F}(s,t)\), for every triple \((s,t,F)\), with \(s,t\in V\) and \(F\subseteq E\), \(|F|\leqslant f\). There is a simple algorithm by Chechik, Langberg, Peleg, and Roditty [20] that computes, for any positive integer \(k\), a fault-tolerant (\(2k{-}1\))-spanner with \(O(fn^{1+1/k})\) edges. Constructions by Bodwin, Dinitz, and Robelle [15, 16] recently reduced the size to \(f^{1/2}n^{1+1/k}\cdot\mathsf{poly}(k)\) for even \(k\), and \(f^{1/2-1/(2k)}n^{1+1/k}\cdot\mathsf{poly}(k)\) for odd \(k\). They also showed an almost matching lower bound of \(\Omega(f^{1/2-1/(2k)}n^{1+1/k}+fn)\) for \(k>2\), and \(\Omega(f^{1/2}n^{3/2})\) for \(k=2\), assuming the Erdos girth conjecture [26]. The space is also the main problem with this approach as it translates to a high query time. Currently, the most efficient way to retrieve the approximate distance between a given pair of vertices is to compute the single-source distance from one of them in time that is at least linear in the size of the spanner.
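The query mechanism alluded to at the end of this paragraph is simple to state explicitly. The following Python fragment is a minimal, illustrative sketch (using networkx; our own choice, not part of any cited construction): given a precomputed fault-tolerant spanner `H`, it deletes the failed edges and runs one shortest-path computation, which is why the query time is at least linear in the size of the spanner.

```python
import networkx as nx

def spanner_query(H, s, t, F):
    """Approximate the s-t distance under failures F using a precomputed
    fault-tolerant spanner H: drop the failed edges, then run one BFS."""
    H2 = H.copy()
    H2.remove_edges_from([e for e in F if H2.has_edge(*e)])
    try:
        # unweighted shortest path; time is linear in the size of the spanner
        return nx.shortest_path_length(H2, s, t)
    except nx.NetworkXNoPath:
        return float("inf")
```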
All the results above for multiple failures either require \(\Omega(n^{2})\) space, have a stretch depending on \(f\), or superlinear query time. If one wants a truly constant stretch and fast query time simultaneously, one currently has to pay \(\Omega(n^{2})\) space. Thorup and Zwick [36] showed that, even when not supporting a single failure, breaking the quadratic barrier is impossible for directed graphs; and for undirected graphs this requires a stretch of at least \(3\). In this paper, we discuss the case of unweighted graphs and constant sensitivity. We give a subquadratic-space DSO with near-optimal stretch \(3+\varepsilon\) and an arbitrarily small polynomial query time.
**Theorem 1**.: _Let \(f\geqslant 2\) be a positive integer and \(0<\alpha<\nicefrac{{1}}{{2}}\) a constant. For any undirected, unweighted graph \(G\) with unique shortest paths and any \(\varepsilon>0\), there is a \((3+\varepsilon)\)-approximate \(f\)-DSO for \(G\) that takes space \(\widetilde{O}(n^{2-\frac{\alpha}{f+1}}/\varepsilon)\cdot O(\log n/\varepsilon) ^{f+1}\), has query time \(O(n^{\alpha}/\varepsilon^{2})\), and preprocessing time \(\widetilde{O}(n^{2-\frac{\alpha}{f+1}}(\frac{m}{\varepsilon}+\frac{1}{ \varepsilon^{2}}))\cdot O(\log n/\varepsilon)^{f}\)._
Very recently, Bilò, Choudhary, Cohen, Friedrich, Krogmann, and Schirneck [11] addressed the same problem for weighted graphs and \(f=o(\log n/\log\log n)\). For any integer \(k\geqslant 2\) and constant \(0<\alpha<1\), their construction achieves a stretch of \(2k-1\) with space \(O(n^{1+\frac{1}{k}+\alpha+o(1)})\) and a \(\widetilde{O}(n^{1+\frac{1}{k}-\frac{\alpha}{k(f+1)}})\) query time. So in comparison, their space requirement is smaller for the price of a query time that is always at least linear (albeit smaller than running a single-source shortest path algorithm on a spanner). The construction in [11] also differentiates between long and short paths, which is common for fault-tolerant data structures, and employs the distance oracle of Thorup and Zwick [36]. Besides that, they use techniques that are different from the ones presented here.
The assumption in Theorem 1 of shortest paths being unique in the base graph \(G\) can be achieved by very slightly perturbing the edge weights of the input (keeping the characteristics of an essentially unweighted graph). For unweighted graphs, this results in weighted graphs where every edge weight is very close to \(1\), which is a sufficient alternative condition for all the places in the paper where we assume that the graph is unweighted. Alternatively, we can compute a set of unique paths via _lexicographic perturbation_ [17] in time \(O(mn+n^{2}\log^{2}n)\). To obtain Theorem 1, we develop several new techniques. For the remainder of this section, we highlight the novelties. A more detailed overview of our construction can be found in Section 2.
**Tree Sampling for Short Paths.** It is a common approach in the design of fault-tolerant data structures to first give a solution for short paths and then combine them into one for all distances, see [18, 30, 27, 28, 34, 37]. We also focus first on \(f\)-DSOs for short paths. Let \(L\) be the cut-off parameter.4 We say a path is _short_ if it has at most \(L\) edges. An \(f\)-DSO for short paths only needs to report the correct answer for a query \((s,t,F)\) if \(G-F\) contains a shortest path from \(s\) to \(t\) with at most \(L\) edges. Designing such an oracle with good query-space-preprocessing trade-offs is the first step towards improving general \(f\)-DSOs. Let \(d_{G-F}^{\leq L}(s,t)\) be the minimum length over all \(s\)-\(t\)-paths in \(G-F\) with at most \(L\) edges; if there are none, then \(d_{G-F}^{\leq L}(s,t)=+\infty\). Note that \(d_{G-F}^{\leq L}(s,t)=+\infty\) may hold for pairs \((s,t)\) that are connected in \(G-F\).
Footnote 4: The cut-off point will eventually turn out to be \(L=n^{\alpha/(f+1)}\), where \(\alpha\in(0,\frac{1}{2})\) is the parameter from Theorem1.
**Theorem 2**.: _Let \(f,k\) be positive integers. There exists a data structure that, when given an undirected, unweighted graph \(G=(V,E)\) and a positive integer \(L\) (possibly dependent on \(n\) and \(m\)), preprocesses \(G\) and answers queries \((s,t,F)\) for vertices \(s,t\in V\) and sets of edges \(F\subseteq E\) with \(|F|\leqslant f\). W.h.p. over all queries, the returned value \(\widehat{d^{\leq L}}(s,t,F)\) satisfies \(d_{G-F}(s,t)\leqslant\widehat{d^{\leq L}}(s,t,F)\leqslant(2k-1)\cdot d_{G-F}^{\leq L}(s,t)\). The data structure takes space \(\widetilde{O}(L^{f+o(1)}n^{1+1/k})\), has query time \(\widetilde{O}(L^{o(1)})\), and preprocessing time \(\widetilde{O}(L^{f+o(1)}mn^{1/k})\)._
We compare Theorem 2 with previous work on \(f\)-DSOs for short paths. Weimann and Yuster [37] presented a construction with \(\widetilde{O}(L^{f}mn)\) preprocessing time, \(\widetilde{O}(L^{f}n^{2})\) space, and \(\widetilde{O}(L^{f})\) query time. It laid the foundation for many subsequent works, see [5, 14, 12, 30, 34]. When using the fault-tolerant trees described in [19, Appendix A], one can reduce the query time of the oracle to \(O(f^{2})\). However, storing all of these fault-tolerant trees still requires \(\Omega(L^{f}n^{2})\) space. For small enough \(L\), sub-quadratic space suffices for our data structure, while still providing a better query time than [37].
In order to prove Theorem 2, we extend the sampling technique from [37]. It consists of first constructing \(\widetilde{O}(L^{f})\) copies of \(G\) and then, in each one, removing edges with probability \(\nicefrac{{1}}{{L}}\). One can show that w.h.p. each short replacement path survives in one of the copies, where a _replacement path_ is the respective shortest path after at most \(f\) edge failures. Instead of having all those graphs be independent of each other, we develop hierarchical tree sampling. This allows us to quickly find the copies that are relevant for a given query, reducing the query time to \(\widetilde{O}(L^{o(1)})\). We further sparsify the resulting graphs for a better space complexity.
From Theorem 2, we immediately get an \(f\)-DSO for graphs with bounded diameter. Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [2] proved that, for undirected, unweighted graphs \(G\), any shortest path in \(G-F\) is a concatenation of up to \(|F|+1\) shortest paths in \(G\). If \(G\) has diameter at most \(D\) and \(|F|\leqslant f\), the diameter of \(G-F\) is thus bounded by \((f{+}1)D\). This gives the following corollary.
**Corollary 3**.: _Let \(f\) and \(k\) be positive integers. There exists a \((2k{-}1)\)-approximate \(f\)-DSO for undirected, unweighted graphs with diameter bounded by \(D\), that takes space \(\widetilde{O}(D^{f{+}o(1)}\,n^{1{+}1/k})\), has query time \(\widetilde{O}(D^{o(1)})\), and preprocessing time \(\widetilde{O}(D^{f{+}o(1)}\,mn^{1/k})\)._
**Fault-Tolerant Trees with Granularity.** We employ fault-tolerant trees5 (FT-trees) introduced by Chechik et al. [19] to combine the solutions for short paths. Those are trees in which every node is associated with a path in a subgraph \(G-A\), where \(A\subseteq E\) is a set of edges whose size may be much larger than the sensitivity \(f\). Each path is partitioned into segments whose sizes increase exponentially towards the middle. This is done to encode the paths more space-efficiently than edge by edge. We have to take some additional compression steps to fit them in subquadratic space. For example, instead of building a tree \(FT(s,t)\) for every pair of vertices \(s,t\), we only do so if one of them is from a set of randomly selected _pivots_. But even this gives only a sub-linear query time. To improve it further to \(\widetilde{O}_{\varepsilon}(n^{\alpha})\) for any constant \(\alpha\in(0,\frac{1}{2})\), we generalize the FT-trees by adding what we call _granularity_ \(\lambda\geqslant 0\).6 That means the first and last \(\lambda\) edges of each path form their own segments and do not fall into the regime of exponential increase. The original construction in [19] corresponds to granularity 0. Intuitively, the larger the value of \(\lambda\), the better the fault-tolerant tree \(FT_{\lambda}(u,v)\) with granularity \(\lambda\) approximates the shortest distance from \(u\) to \(v\) in \(G-F\), but the larger the size of each node of the tree becomes.
Footnote 5: FT-trees are not related to the tree sampling mentioned before.
Footnote 6: In the proof of Theorem 1, we set \(\lambda=\varepsilon L/c\), for an ad-hoc constant \(c>1\).
The idea to answer a query \((s,t,F)\) is to scan balls of a certain radius around \(s\) and \(t\) in \(G-F\) for pivots and query the respective FT-tree together with the oracle for short paths in Theorem2. W.h.p. one of the pivots hits the replacement path from \(s\) to \(t\) ensuring that this gives (an approximation of) the right distance. The bottleneck is the case when there are too many vertices in the vicinity of both \(s\) and \(t\) since then these balls also receive many pivots. Instead, we sample a second type of much more scarce pivots, which are used to hit the dense neighborhoods. In that case, we can find a scarce pivot \(b_{s}\) near \(s\) and a scarce pivot \(b_{t}\) near \(t\), but we can no longer assume that they hit the sought replacement path. The fault-tolerant tree \(FT_{\lambda}(b_{s},b_{t})\) with
granularity \(\lambda\), however, allows us to get a good approximation, as long as the starting points \(b_{s}\) and \(b_{t}\) are at distance at most \(\lambda\) from the real endpoints.
The trees \(FT_{\lambda}(b_{s},b_{t})\) are much larger than their classical counterparts \(FT(s,t)\). This is compensated by the fact that we require much fewer of those. We verify that several of the key lemmas from [19] transfer to fault-tolerant trees with granularity \(\lambda>0\).
**Efficient Computation of Expaths.** Since fault-tolerant trees are crucial for our work, we revisit the approach used in [19] to construct them (with granularity 0). It turns out that their algorithm can be improved. The preprocessing in [19] invokes many calls to all-pairs shortest path computations (APSP) in different subgraphs \(G-F\), each of which is associated with a node of the fault-tolerant trees. They also invoke \(O(n)\) calls to Dijkstra's algorithm on suitable dense graphs with \(O(fn^{2})\) edges. We prove that many of those APSP calls can be avoided by instead re-using the distances in the original graph \(G\), which can be obtained by a single APSP computation. More precisely, the paths associated with the nodes of the fault-tolerant trees (later referred to as \((2f+1)\)-expaths) are the concatenation of \(O(f\log(nW))\) original shortest paths. The distances in \(G\) can be integrated into a single Dijkstra run on a specially built graph with \(\widetilde{O}(fm)\) edges to compute such an expath in time \(\widetilde{O}(fm)\). This technique implies an improved preprocessing time for our own subquadratic \(f\)-DSO. Moreover, when plugged into the preprocessing algorithm in [19], it improves the overall time complexity from \(O_{\varepsilon}(fn^{5+o(1)})\) to \(O_{\varepsilon}(fmn^{2+o(1)})\).
**Theorem 4**.: _Let \(G\) be an undirected weighted graph with maximum edge weight \(W=\mathsf{poly}(n)\) and unique shortest paths. For any positive integer \(f=o(\log n/\log\log n)\) and \(\varepsilon\geqslant 1/nW\), there exists a \((1+\varepsilon)\)-approximate \(f\)-DSO for \(G\) that takes \(\widetilde{O}(fn^{2})\cdot O\Big{(}\frac{\log(nW)}{\varepsilon}\Big{)}^{f}=O(\varepsilon^{-f})\cdot n^{2+o(1)}\) space, has query time \(O(f^{5}\log n)\), and preprocessing time \(\widetilde{O}(fmn^{2})\cdot O\Big{(}\frac{\log(nW)}{\varepsilon}\Big{)}^{f}=O(\varepsilon^{-f})\cdot mn^{2+o(1)}\)._
**Open Problems.** As an open question, we ask whether one can further improve the query time from \(\widetilde{O}_{\varepsilon}(n^{\alpha})\) to poly-logarithmic in \(n\) and \(\nicefrac{{1}}{{\varepsilon}}\) while keeping the space truly subquadratic. The converse open problem is to further reduce the space without affecting the query time. Finally, we can currently only handle unweighted graphs where the length of the path corresponds to the number of edges. Some of the sampling-based ideas break down if long paths can consist of only a few heavy edges. For all the open problems the bottleneck is the case of long paths. For short distances, our \(f\)-DSO has asymptotically almost optimal size and very low query time that can easily be adapted to the weighted case.
## 2 Overview
**Fault-tolerant Trees.** Our distance sensitivity oracle is built on the concept of fault-tolerant trees [19]. This is a data structure that reports, for a fixed pair of vertices \(s,t\in V\) and any set \(F\subseteq E\) of up to \(f\) edge failures, the replacement distance \(d_{G-F}(s,t)\). Consider a shortest path \(P\) from \(s\) to \(t\) in the original graph \(G\). FT-trees draw from the fact that only failures on \(P\) can influence the distance from \(s\) to \(t\). In its simplest form, the tree \(FT(s,t)\) consists of a root node that stores the path \(P\) and the distance \(d(s,t)=|P|\). It has a child for each edge \(e\in E(P)\) which in turn holds a shortest \(s\)-\(t\)-path in \(G-e\). Iterating this construction until depth \(f\) ensures that all relevant failure sets for the pair \((s,t)\) are covered. If some set of edge failures disconnect the two vertices, this is represented by a leaf node that does not store any path. Let \(P_{\nu}\) denote the path in some node \(\nu\). Given a failure set \(F\), the algorithm checks in each node \(\nu\) starting with the root whether it is a leaf or \(F\cap E(P_{\nu})=\emptyset\), with the latter meaning that the path \(P_{\nu}\) exists in \(G-F\). If so, its length \(|P_{\nu}|\) is reported; otherwise, the search recurses on the child node corresponding to an
(arbitrary) edge \(e\in F\cap E(P_{\nu})\). Let \(FT(s,t,F)\) be the reported distance. It is equal to \(d_{G-F}(s,t)\) and the query time is \(O(f^{2})\) since at most \(f\)+1 vertices are visited and computing the intersection takes time \(O(f)\).
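To make the construction and the descent concrete, the following Python sketch implements the simplest, exact form of \(FT(s,t)\) just described (no segmentation, no expaths). The networkx-based representation and the function names are our own choices for illustration, not those of [19], and the sketch assumes \(|F|\leqslant f\).

```python
import networkx as nx

def build_ft_tree(G, s, t, f):
    """Node of FT(s, t): a shortest s-t path in G minus the edges failed on
    the way down from the root, plus one child per edge of that path."""
    try:
        path = nx.shortest_path(G, s, t)               # list of vertices
    except nx.NetworkXNoPath:
        return {"path": None, "children": {}}          # leaf: s and t disconnected
    node = {"path": path, "children": {}}
    if f == 0:
        return node
    for u, v in zip(path, path[1:]):                   # one child per path edge
        H = G.copy()
        H.remove_edge(u, v)
        node["children"][frozenset((u, v))] = build_ft_tree(H, s, t, f - 1)
    return node

def query_ft_tree(root, F):
    """Descend from the root as long as the stored path intersects F."""
    node = root
    while True:
        if node["path"] is None:
            return float("inf")                        # s and t disconnected under F
        path_edges = {frozenset(e) for e in zip(node["path"], node["path"][1:])}
        hit = path_edges & {frozenset(e) for e in F}
        if not hit:
            return len(node["path"]) - 1               # path avoids F: report its length
        node = node["children"][next(iter(hit))]       # recurse on one failed edge

# toy example: replacement distance from 0 to 2 on a 5-cycle with edge {1, 2} failed
G = nx.cycle_graph(5)
tree = build_ft_tree(G, 0, 2, f=2)
print(query_ft_tree(tree, [(1, 2)]))                   # prints 3
```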
The problem is, these trees are huge. Preprocessing them for all pairs of vertices takes total space \(O(n^{f+3})\). The main technical contribution of [19] is to reduce the space without sacrificing too much of their performance, that is, the stretch of the reported distance and the query time. In the first step, the number of vertices in the tree is decreased by introducing an approximation parameter \(\varepsilon>0\). Each path \(P_{\nu}\) is split into \(O(\log n/\varepsilon)\)_segments_. Now node \(\nu\) only has a child for each segment and the search procedure recursing on that child corresponds to failing the whole segment instead of only a single edge. This reduces the total size of all trees to \(O(n^{3}\,(c\,\frac{\log n}{\varepsilon})^{f})\) for some constant \(c>0\). However, it leads to some inaccuracies in the answer of the tree. The failed segments may contain edges that are actually present in \(G-F\) and thus the path \(P_{\nu^{*}}\) stored in the last visited node \(\nu^{*}\) may take unnecessary detours. It is proven in [19] that \(FT(s,t,F)=|P_{\nu^{*}}|=d_{G-F}(s,t)\) is correct if all failing edges are "far away"7 from the true replacement path \(P(s,t,F)\) in \(G-F\), where the required safety distance depends on the distance \(d_{G-F}(s,t)\). To also answer queries for which this condition is violated, they consult multiple FT-trees. An auxiliary graph \(H^{F}\) is constructed on the endpoints \(V(F)\) of all failing edges, that is, \(V(H^{F})=\{s,t\}\cup V(F)\). For each pair of vertices \(u,v\in V(H^{F})\), the edge \(\{u,v\}\) is weighted with the reported distance \(FT(u,v,F)\). While not all edge weights may be the correct \(u\)-\(v\)-replacement distance, the distance of \(s\) and \(t\) in \(H^{F}\) can be shown to be a (1+\(\varepsilon\))-approximation of \(d_{G-F}(s,t)\). The idea is that, when going from \(s\) to \(t\), one can always find a next vertex in \(V(H^{F})\) that is not too far off the shortest path and such that the subpath to that vertex is "far away" from all failures. Computing the weights for \(H^{F}\) increases the query time to \(O(f^{4})\).
Footnote 7: More formally, a path \(P\) being “far away” from \(F\) means that, for every vertex \(x\) on \(P\) except for \(s\) and \(t\) and every endpoint \(y\) of a failing edge in \(F\), the distance from \(x\) to \(y\) is more than \(\frac{\varepsilon}{9}\cdot\min(|P[s,x]|,\,|P[x,t]|)\), see Definition 9.
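The query procedure built on top of the trees can be summarized in a few lines. The sketch below (again networkx-based and purely illustrative) takes a stand-in `ft_query(u, v, F)` for the segmented FT-tree lookup, builds the auxiliary graph \(H^{F}\) on \(\{s,t\}\cup V(F)\) with the reported distances as edge weights, and returns the \(s\)-\(t\) distance in \(H^{F}\).

```python
import itertools
import networkx as nx

def query_auxiliary_graph(s, t, F, ft_query):
    """Combine per-pair FT-tree answers into an estimate of d_{G-F}(s, t).

    ft_query(u, v, F) is assumed to return the (possibly overestimated)
    u-v distance reported by the tree FT(u, v) under the failure set F.
    """
    nodes = {s, t} | {x for e in F for x in e}        # s, t, and endpoints of failures
    H = nx.Graph()
    H.add_nodes_from(nodes)
    for u, v in itertools.combinations(nodes, 2):
        w = ft_query(u, v, F)
        if w < float("inf"):
            H.add_edge(u, v, weight=w)
    try:
        return nx.dijkstra_path_length(H, s, t)       # s-t distance in H^F
    except nx.NetworkXNoPath:
        return float("inf")

# illustrative usage with the exact trees from the previous sketch:
# est = query_auxiliary_graph(s, t, F,
#         lambda u, v, FF: query_ft_tree(build_ft_tree(G, u, v, len(FF)), FF))
```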
The next step is more involved and is concerned with the size of the nodes in the FT-trees. Originally, each of them stores all edges of a path in (a subgraph of) \(G\) and therefore may take \(O(n)\) space. Afek et al. [2] showed that every shortest path in \(G-F\), for \(|F|\leqslant f\), is \(f\)-_decomposable_, that is, a concatenation of at most \(f\) shortest paths in \(G\). Chechik et al. [19] extend this notion to so-called expaths. For a positive integer \(\ell\), a path is said to be an \(\ell\)-_expath_ if it is the concatenation of \((2\log_{2}(n)+1)\)\(\ell\)-decomposable paths such that the \(i\)th \(\ell\)-decomposable path has length at most \(\min\{2^{i},2^{2\log_{2}(n)-i}\}\). Consider a node \(\nu\) in the tree \(FT(u,v)\). Instead of storing the shortest \(u\)-\(v\)-path \(P_{\nu}\) edge by edge, one would like to represent it by the endpoints of the constituting shortest paths (in \(G\)) and edges. However, the collection \(A_{\nu}\) of edges in all segments that were failed while descending from the root to \(\nu\) may be much larger than \(f\) and \(P_{\nu}\) may not be \(f\)-decomposable. Instead, the node \(\nu\) now holds the shortest (2\(f\)+1)-expath from \(u\) to \(v\) in \(G-A_{\nu}\). It can be represented by \(O(f\log n)\) endpoints, bringing the total space of the trees to \(O(fn^{2}(\log n)(c\,\frac{\log n}{\varepsilon})^{f})\). It is described in [19] how to navigate the new representation to obtain a (1+\(\varepsilon\))-approximation of \(d_{G-F}(s,t)\) in time \(O(f^{5}\log n)\).
In this work, we advance the space reduction further into the subquadratic regime. Recall that \(L\) is the number of edges up to which a path is called short. When sampling a set \(B\) of \(\widetilde{O}_{\varepsilon}(n/L)\) _pivots_ uniformly at random, then w.h.p. every long replacement path contains a pivot. Restricting the FT-trees \(FT(u,v)\) to only those pairs \(u,v\) for which at least one vertex is in \(B\) brings the total number of trees to \(o(n^{2})\). Unfortunately, it deprives us of the replacement distances for pairs that are joined by a short path.
**Short Paths.** To make up for this deficit, we design an approximate \(f\)-DSO for vertex pairs with short replacement paths. We extend a technique by Weimann and Yuster [37] from exact
to approximate distances while also reducing the required space and query time. When sampling \(\widetilde{O}(L^{f})\) spanning subgraphs of \(G\) by, in each one, removing any edge independently with probability \(\sfrac{1}{L}\), it is shown in [37] that w.h.p. for each set \(F\) of at most \(f\) edges and each pair of vertices connected by a short path in \(G-F\), there are \(\widetilde{O}(1)\) subgraphs that contain the path but none of \(F\). Such a collection of graphs is called an \((L,f)\)-_replacement path covering_ (RPC) [30]. For any two vertices \(s\) and \(t\) that have a replacement path on at most \(L\) edges, the minimum \(s\)-\(t\)-distance of the \(\widetilde{O}(1)\) suitable graphs of the RPC is the correct replacement distance.
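As a point of reference for the modifications described next, the flat sampling of [37] can be sketched as follows (Python/networkx, illustrative only; the number of sampled subgraphs is a placeholder rather than the exact bound from [37]): each subgraph drops every edge independently with probability \(1/L\), and a query returns the minimum \(s\)-\(t\) distance over the sampled subgraphs that avoid all of \(F\).

```python
import math
import random
import networkx as nx

def sample_rpc(G, L, f, c=3):
    """Flat (L, f)-replacement path covering in the style of [37]: every
    sampled subgraph keeps each edge independently with probability 1 - 1/L."""
    reps = int(c * (L ** f) * math.log(G.number_of_nodes() + 1)) + 1   # placeholder count
    graphs = []
    for _ in range(reps):
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        H.add_edges_from(e for e in G.edges() if random.random() > 1.0 / L)
        graphs.append(H)
    return graphs

def rpc_query(graphs, s, t, F):
    """Minimum s-t distance over the sampled subgraphs that avoid all of F."""
    best = float("inf")
    for H in graphs:
        if any(H.has_edge(u, v) for u, v in F):
            continue                         # subgraph still contains a failing edge
        if nx.has_path(H, s, t):
            best = min(best, nx.shortest_path_length(H, s, t))
    return best
```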
We cannot use that approach directly in subquadratic space. The subgraphs have total size \(\Omega(L^{f}m)\), which is already too large if \(G\) is dense. Also, it is expensive to find the correct members of the RPC for a given query. In [37], the solution was to go over all graphs and explicitly check whether they have the set \(F\) removed, dominating the query time (for short paths). Karthik and Parter [30] derandomized this construction and thereby reduced the time needed to find the correct subgraphs to \(\widetilde{O}(L)\). Both approaches break down in subquadratic space, since we cannot even store all edges of the graphs. However, we are only seeking _approximate_ replacement distances. We exploit this fact in a new way of constructing approximate \((L,f)\)-replacement path coverings. We do so by turning the sampling technique upside down and combining it with the distance oracle of Thorup and Zwick [36].
Instead of sampling the subgraphs directly by removing edges, we construct them in a hierarchical manner by _adding_ connections. We build a tree8 in which each node is associated with a subset of the edges of \(G\); this set stands for the "missing" edges. We start with the full edge set \(E\) in the root, that is, the graph in the root is empty. The height of the tree is \(h\) and each node has \(L^{f/h}\) children. The associated set of a child node contains any edge of its parent with probability \(L^{-1/h}\). This corresponds to adding any missing edge with probability \(1-L^{-1/h}\). Knowing the missing edges upfront benefits the query algorithm. At each node starting with the root, if we were to expand all children in which all failures of \(F\) are missing, we would find the suitable subgraphs. The hierarchical sampling creates some dependencies among the subgraphs associated with the leaves of the tree, while the graphs in [37] were independent. We tackle this issue by always recursing only on one child node and therefore querying a single leaf. We repeat the process in several independent trees in order to amplify the success probability. We prove that there exists a constant \(c>0\) such that \(\widetilde{O}(c^{h})\) trees together ensure the property we need from an \((L,f)\)-replacement path covering w.h.p. Optimizing the height \(h\) gives an \(\widetilde{O}(L^{o(1)})\) query time (assuming constant \(f\)).
Footnote 8: Again, sampling trees and fault-tolerant trees are not related.
The main challenge is to bring down the size of this construction by reducing the number of edges in the graphs associated with the nodes of the trees. Thorup and Zwick [36] devised, for any positive integer \(k\), a \((2k{-}1)\)-approximate distance oracle together with a compatible spanner of size \(O(kn^{1+1/k})\), i.e., the stretched distance returned by the oracle is the length of a shortest path in the spanner. Therefore, we can use the oracles in the leaves of the trees to report distances, giving a low query time, and employ the spanners as proxies for the graphs associated with the intermediate nodes. However, for this to work, we have to carefully tweak the computation of the spanners and interleave it with the sampling process in order to not blow up the size too much (or the stretch).
**Long Paths.** We return to the fault-tolerant trees. By the use of the pivots, we reduced the required number of trees to \(\widetilde{O}_{\varepsilon}(n^{2}/L)\). But even in the most compact version of FT-trees, this is not enough to reach subquadratic space altogether. The issue is with the representation of expaths as a sequence of \(O(f\log n)\) components, each of which is implicitly represented by its two endpoints. In [19] this was implemented by storing the original graph distance \(d(x,y)\) and the predecessor \(\operatorname{pred}(x,y)\) of \(y\) on the shortest \(x\)-\(y\)-path for _all_ pairs \(x,y\). This information is used to
expand the implicit representation of an expath when needed. However, the space is again \(\Omega(n^{2})\). The key observation to overcome this is that, in our case, we do not need to encode arbitrary expaths but only those with a particular structure, e.g., at least one endpoint is a pivot. This allows us to forgo the need for a quadratic database of all distances.
We also devise a new procedure to obtain an approximation of \(d_{G-F}(s,t)\) by combining the values from the FT-trees with the \(f\)-DSO for short paths. Recall that we build one FT-tree for each pair of vertices \((u,v)\) where \(u\) or \(v\) are pivots. The main open issue is to find the weight of the edge \(\{u,v\}\) in the auxiliary graph \(H^{F}\) (see above) if neither \(u\) nor \(v\) are pivots and they also do not have a short path between them in \(G-F\). Then, w.h.p. at least one pivot \(b\) hits the \(L\)-edge prefix of that replacement path. Therefore, it is sufficient to estimate its length as the sum of an approximation for \(d_{G-F}^{\leq L}(u,b)\) via the \(f\)-DSO for short paths, and an approximation for \(d_{G-F}(b,v)\) via the FT-trees. However, since we do not know the right pivot \(b\), we have to scan all of them. We prove that this results in a stretch of \(3+\varepsilon\) and a sublinear query time.
While this is already faster than all previous works (for a stretch independent of \(f\)), it is still not very efficient. In Section 6, we improve the query time to \(O_{\varepsilon}(n^{\alpha})\) for any constant \(0<\alpha<1/2\). We provide an efficient way to check whether the number of pivots in \(B\) that are close to \(u\) and \(v\) in \(G-F\) is below the threshold value of \(L^{f-1}\) and, if so, find them all. If only a small number of pivots are around \(u\) (or \(v\)), we can afford to scan them as described above.
The complementary case of many pivots around both endpoints is solved by precomputing a set of \(\widetilde{O}_{\varepsilon}(n/L^{f})\)_new pivots_, much fewer than before, and generalizing the FT-trees to granularity \(\lambda>0\). This ensures that, in any node \(\nu\), the first and last \(\lambda\) edges of the corresponding path \(P_{\nu}\) each form their own segment. High granularity thus makes the generalized trees much larger. For comparison, the maximum granularity \(\lambda=n\) would unwind _all_ of the efforts taken in [19] to reduce their size, as summarized at the beginning of this section. We can still fit the trees in subquadratic space by building \(FT_{\lambda}(b,b^{\prime})\) only for pairs \(b,b^{\prime}\) of new pivots.
The \(u\)-\(v\)-distance in \(G-F\) in the case of many _original_ pivots around \(u\) and \(v\) is approximated as follows. We compute two _new_ pivots \(b_{u},b_{v}\), with \(b_{u}\) close to \(u\) in \(G-F\) and \(b_{v}\) close to \(v\). The approximate length of the shortest path from \(u\) to \(v\) in \(G-F\) is computed by the overall sum of (_i_) an approximation of the distance from \(u\) to \(b_{u}\) in \(G-F\), (_ii_) an approximation of the distance from \(b_{u}\) to \(b_{v}\) in \(G-F\) computed by querying \(FT_{\lambda}(b_{u},b_{v})\), and (_iii_) an approximation of the distance from \(b_{v}\) to \(v\) in \(G-F\). We make sure to have a granularity \(\lambda\leqslant L\) so that we can obtain the terms (_i_) and (_iii_) from our \(f\)-DSO for short paths.
## 3 Preliminaries
We let \(G=(V,E)\) denote the undirected and unweighted base graph with \(n\) vertices and \(m\) edges. We tacitly assume \(m=\Omega(n)\). For any undirected (multi-)graph \(H\), which may differ from the input \(G\), we denote by \(V(H)\) and \(E(H)\) the set of its vertices and edges, respectively. Let \(P\) be a path in \(H\) from a vertex \(s\in V(H)\) to \(t\in V(H)\); we say that \(P\) is an _\(s\)-\(t\)-path_ in \(H\). We denote by \(|P|=|E(P)|\) the _length_ of \(P\). For vertices \(u,v\in V(P)\), we let \(P[u..v]\) denote the subpath of \(P\) from \(u\) to \(v\). Let \(P=(u_{1},\ldots,u_{i})\) and \(Q=(v_{1},\ldots,v_{j})\) be two paths in \(H\). Their _concatenation_ is \(P\circ Q=(u_{1},\ldots,u_{i},v_{1},\ldots,v_{j})\), which is well-defined if \(u_{i}=v_{1}\) or \(\{u_{i},v_{1}\}\in E(H)\). For \(s,t\in V(H)\), the _distance_ \(d_{H}(s,t)\) is the minimum length of any \(s\)-\(t\)-path in \(H\); if \(s\) and \(t\) are disconnected, we set \(d_{H}(s,t)=+\infty\). When talking about the base graph \(G\), we drop the subscripts.
A _spanning_ subgraph of a graph \(H\) is one with the same vertex set as \(H\) but possibly any subset of its edges. This should not be confused with a spanner. A _spanner of stretch_\(\sigma\geqslant 1\), or _\(\sigma\)-spanner_, is a spanning subgraph \(S\subseteq H\) such that additionally for any two vertices \(s,t\in V(S)=V(H)\), it
holds that \(d_{H}(s,t)\leqslant d_{S}(s,t)\leqslant\sigma\cdot d_{H}(s,t)\). A _distance oracle_ (DO) for \(H\) is a data structure that reports, upon query \((s,t)\), the distance \(d_{H}(s,t)\). It has _stretch_\(\sigma\geqslant 1\), or is \(\sigma\)_-approximate_, if the reported value \(\widehat{d}(s,t)\) satisfies \(d_{H}(s,t)\leqslant\widehat{d}(s,t)\leqslant\sigma\cdot d_{H}(s,t)\) for any admissible query.
For a set \(F\subseteq E\) of edges, let \(G{-}F\) be the graph obtained from \(G\) by removing all edges in \(F\). For any two \(s,t\in V\), a _replacement path_\(P(s,t,F)\) is a shortest path from \(s\) to \(t\) in \(G{-}F\). Its length \(d_{G-F}(s,t)\) is the _replacement distance_. Let \(L\) be a positive integer. We call a path in (a subgraph of) \(G\)_short_ if it has at most \(L\) edges, and _long_ otherwise. Let \(d_{G-F}^{\leq L}(s,t)\) be the minimum length of any short \(s\)-\(t\)-paths in \(G-F\), or \(+\infty\) if no such path exists.
For a positive integer \(f\), an \(f\)_-distance sensitivity oracle_ (DSO) answers queries \((s,t,F)\) with \(|F|\leqslant f\) with the replacement distance \(d_{G-F}(s,t)\). The stretch of a DSO is defined as for DOs. The maximum number \(f\) of supported failures is called the _sensitivity_. We measure the space complexity of any data structure in the number of \(O(\log n)\)-bit machine words. The size of the input graph \(G\) does not count against the space, unless it is stored explicitly.
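To make the query semantics of a DSO concrete, the following minimal Python sketch (the helper name is ours, not part of any referenced construction) computes \(d_{G-F}(s,t)\), and with the optional cutoff \(L\) also \(d_{G-F}^{\leq L}(s,t)\), by a plain BFS on \(G-F\). A DSO has to reproduce or approximate these values without rescanning the graph at query time.

```
from collections import deque

def replacement_distance(adj, s, t, F, L=None):
    """BFS in G-F. adj maps a vertex to the set of its neighbours (undirected);
    F is a set of failing edges given as frozensets {u, v}.
    Returns d_{G-F}(s, t), or d^{<=L}_{G-F}(s, t) if a cutoff L is given."""
    if s == t:
        return 0
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if L is not None and dist[u] == L:   # short paths only: do not extend further
            continue
        for v in adj[u]:
            if frozenset((u, v)) in F or v in dist:
                continue
            dist[v] = dist[u] + 1
            if v == t:
                return dist[v]
            queue.append(v)
    return float("inf")

# toy example: failing the edge {2, 3} disconnects t = 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(replacement_distance(adj, 0, 3, {frozenset((2, 3))}))   # inf
print(replacement_distance(adj, 0, 2, {frozenset((0, 2))}))   # 2, via vertex 1
```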
## 4 Handling Short Paths
We develop here our \((2k{-}1)\)-approximate solution for short replacement paths, which will in turn be used for the general distance sensitivity oracle. To do so, we first review (and slightly modify) the distance oracle and spanner in [36] to an extent that is needed to present our construction.
### The Distance Oracle and Spanner of Thorup and Zwick
For any positive integer \(k\),9 Thorup and Zwick [36] devised a DO that is computable in time \(\widetilde{O}(kmn^{1/k})\), has size \(O(kn^{1+1/k})\), query time \(O(k)\), and a stretch of \(2k-1\). Their stretch-space tradeoff is essentially optimal for sufficiently dense graphs, assuming the Erdős girth conjecture [36]. For sparse graphs, better constructions are known [1, 32, 33], including subquadratic-space DOs with a stretch less than 2 [3, 4].
Footnote 9: In principle, \(k\) could depend on \(n\) or \(m\), but for \(k=\Omega(\log n)\) we do not get further space improvements. We assume \(k\) to be a constant in this work.
We first review the Thorup and Zwick construction before discussing our changes. First, a family of vertex subsets \(V=X_{0}\supseteq X_{1}\supseteq\cdots\supseteq X_{k-1}\supseteq X_{k}=\emptyset\) is computed. Each \(X_{i}\) is obtained by sampling the elements of \(X_{i-1}\) independently with probability \(n^{-1/k}\). We keep this family fixed and apply the construction to a variety of subgraphs of \(G\).
Let \(H\) be such a subgraph for which the oracle needs to be computed. For any \(v\in V\) and \(0\leqslant i<k\), let \(p_{i,H}(v)\) be the closest vertex10 to \(v\) in \(X_{i}\) in the graph \(H\); ties are broken in favor of the vertex with smaller label. The distances from \(v\) to all elements in
Footnote 10: We have \(p_{i,H}(v)=v\) for all \(i\) small enough so that \(X_{i}\) still contains \(v\).
\[X_{i,H}(v)=\{x\in X_{i}\mid d_{H}(v,x)<\min_{y\in X_{i+1}}d_{H}(v,y)\}\cup\{p_{ i,H}(v)\}\]
are stored in a hash table. In other words, \(X_{i,H}(v)\) contains those vertices of \(X_{i}\backslash X_{i+1}\) that are closer to \(v\) than any vertex of \(X_{i+1}\). Note that the set \(X_{i,H}(v)\) and vertices \(p_{i,H}(v)\) may differ for the various subgraphs of \(G\). This completes the construction of the DO for \(H\).
The oracle is accompanied by a \((2k{-}1)\)-spanner with \(O(kn^{1+1/k})\) edges. It stores all those edges of \(H\) that lie on a shortest path between \(v\) and a vertex in \(\bigcup_{0\leqslant i<k}X_{i,H}(v)\), again ties between shortest paths are broken using the edge labels.
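As a concrete (unweighted) reference for the objects just defined, the following Python sketch computes, for every vertex \(v\), the pivots \(p_{i,H}(v)\) and the union of the bunches \(X_{i,H}(v)\) by one BFS per vertex. It only illustrates the definitions and is not the \(\widetilde{O}(kmn^{1/k})\)-time construction of [36]; all function names are ours, and vertex labels are assumed to be comparable (e.g. integers) so that the stated tie-breaking applies.

```
from collections import deque

def bfs(adj, src):
    dist = {src: 0}
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                dq.append(w)
    return dist

def bunches(adj, levels):
    """levels = [X_0, X_1, ..., X_k] with X_0 = V and X_k the empty set.
    Returns pivot[(i, v)] = p_{i,H}(v) and bunch[v] = union over i of X_{i,H}(v),
    each bunch vertex mapped to its distance from v."""
    k = len(levels) - 1
    pivot, bunch = {}, {v: {} for v in adj}
    for v in adj:
        dist = bfs(adj, v)
        # closest vertex of each level, ties broken towards the smaller label
        closest = []
        for X in levels:
            cand = [(dist.get(x, float("inf")), x) for x in X]
            closest.append(min(cand) if cand else (float("inf"), None))
        for i in range(k):
            d_next = closest[i + 1][0]          # distance from v to X_{i+1}
            p = closest[i][1]
            pivot[(i, v)] = p
            for x in levels[i]:
                if dist.get(x, float("inf")) < d_next:
                    bunch[v][x] = dist[x]
            if p is not None:
                bunch[v][p] = dist.get(p, float("inf"))
    return pivot, bunch
```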
```
\(\widehat{d}\leftarrow\infty\)
for \(i=0\) to \(k-1\) do
    if \(p_{i}(s)\in\bigcup_{j=0}^{k-1}X_{j,H}(t)\) then
        \(\widehat{d}\leftarrow\min\{\widehat{d},\ d_{H}(s,p_{i}(s))+d_{H}(p_{i}(s),t)\}\)
    if \(p_{i}(t)\in\bigcup_{j=0}^{k-1}X_{j,H}(s)\) then
        \(\widehat{d}\leftarrow\min\{\widehat{d},\ d_{H}(t,p_{i}(t))+d_{H}(p_{i}(t),s)\}\)
return \(\widehat{d}\)
```
**Algorithm 2** Modified query algorithm of the distance oracle for the pair \((s,t)\).
Algorithm 1 shows how the oracle handles the query \((s,t)\). The returned distance can be shown to overestimate \(d_{H}(s,t)\) by at most a factor \(2k{-}1\). We instead use a slightly modified version as presented in Algorithm 2. Observe that the estimate \(\widehat{d}\) produced by our version is at most the value returned by the original one and at least the actual distance between \(s\) and \(t\). Further, as before, for any \(s\) and \(t\), the path corresponding to the new estimate is a concatenation of at most two original shortest paths in \(H\). The interconnecting vertex is either \(p_{i,H}(s)\) or \(p_{i,H}(t)\) for some \(i\); we denote it as \(u_{s,t,H}\), and the \((2k{-}1)\)-approximate shortest path as \(P_{s,t,H}\). The reason why we adapt the query algorithm is a crucial inheritance property.
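The modified query is then a handful of dictionary lookups. The sketch below runs on top of the illustrative `pivot`/`bunch` structures from the snippet above; it takes the minimum over all candidate interconnecting vertices \(p_{i}(s)\) and \(p_{i}(t)\), exactly as in Algorithm 2, and hence never exceeds the original estimate.

```
def tz_query_modified(pivot, bunch, k, s, t):
    """Modified Thorup-Zwick query: try the pivot of every level of s against
    the bunches of t, and vice versa; report the best resulting estimate."""
    best = float("inf")
    for i in range(k):
        for a, b in ((s, t), (t, s)):
            p = pivot.get((i, a))
            if p is not None and p in bunch[b]:
                best = min(best, bunch[a][p] + bunch[b][p])
    return best
```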
**Lemma 5** (Inheritance property).: _Let \(H\subseteq G^{\prime}\subseteq G\) be two spanning subgraphs of \(G\), \(s,t\in V\) two vertices, and \(P_{s,t,G^{\prime}}\) the approximate shortest path underlying the value returned by the (modified) distance oracle for \(G^{\prime}\). If \(P_{s,t,G^{\prime}}\) also exists in \(H\), then \(P_{s,t,H}=P_{s,t,G^{\prime}}\). Moreover, the oracle for \(H\) returns \(|P_{s,t,G^{\prime}}|\) upon query \((s,t)\)._
Proof.: Recall that \(P_{s,t,G^{\prime}}\) is a concatenation of two shortest paths in \(G^{\prime}\), say, \(P(s,u)\) and \(P(u,t)\), where \(u=u_{s,t,G^{\prime}}\) is the interconnecting vertex in \(\bigcup_{j<k}X_{j,G^{\prime}}(s)\cup\bigcup_{j<k}X_{j,G^{\prime}}(t)\) that minimizes the sum of distances \(d_{G^{\prime}}(s,u)+d_{G^{\prime}}(u,t)\). Without loss of generality, we have \(u=p_{i,G^{\prime}}(s)\) for some \(0\leqslant i<k\); otherwise, we swap the roles of \(s\) and \(t\). Let \(0\leqslant j<k\) be such that \(u\in X_{j,G^{\prime}}(t)\).
For any spanning subgraph \(H\subseteq G^{\prime}\) that contains the path \(P_{s,t,G^{\prime}}\), it holds that \(u=p_{i,H}(s)\) and \(u\in X_{j,H}(t)\). Here, we use that the tie-breaking for the \(p_{i,H}(s)\) does not depend on the edge set of
\(H\). Moreover, the shortest \(s\)-\(u\)-path and \(u\)-\(t\)-path in the spanner for \(H\) are the same as in \(G\), that is, \(P(s,u)\) and \(P(u,t)\). As a result, we have \(u=u_{s,t,H}\) and \(P_{s,t,G^{\prime}}=P_{s,t,H}\). The second assertion of the lemma follows from \(d_{H}(s,u)=|P(s,u)|\) and \(d_{H}(u,t)=|P(u,t)|\).
### Tree Sampling
We present our fault-tolerant oracle construction for short paths. Recall that a path in \(G\) is short if it has at most \(L\) edges, and that \(d_{G-F}^{\leq L}(s,t)\) is the minimum distance over short \(s\)-\(t\)-paths in \(G-F\). Note that, while we assume \(f\) and \(k\) to be constants, \(L\) may depend on \(m\) and \(n\). We prove Theorem 2 in the remainder of the section.
We first compute the vertex sets \(X_{0},\ldots,X_{k}\). Define \(h=\sqrt{f\ln L}\), \(K=\lceil((2k{-}1)L)^{f/h}\rceil\), \(p=K^{-1/f}\), and \(I=C\cdot 11^{h}\ln n\) for some sufficiently large constant \(C>0\) (independent of \(f\) and \(k\)). We build \(I\) rooted trees \(T_{1},\ldots,T_{I}\), each of height \(h\), such that any internal node has \(K\) children. For the following description, we fix some tree \(T_{i}\) and use \(x\) to denote a node in \(T_{i}\). Let \(y\) be the parent of \(x\) in case \(x\) is not the root. We associate with each \(x\) a subset of edges \(A_{x}\subseteq E\) and a spanning subgraph \(S_{x}\subseteq G\) in recursive fashion. For the root of \(T_{i}\), set \(A_{x}=E\); otherwise \(A_{x}\) is obtained by selecting each edge of \(A_{y}\) independently with probability \(p\). The random choices here and everywhere else are made independently of all other choices.
Let \(r\) be the depth of \(x\) in \(T_{i}\) (where the root has depth \(r=0\)). Define \(J_{r}=4\cdot K^{h-r}\) for \(r<h\), and \(J_{h}=1\). The graph \(S_{x}\) is constructed in \(J_{r}\) rounds. In each round, we sample a subset \(A\subseteq A_{x}\) by independently selecting each edge with probability \(p^{h-r}\). We then compute the Thorup-Zwick spanner of \(S_{y}-A\) using the family \(X_{0},\ldots,X_{k}\). Slightly abusing notation, if \(x\) is the root, we define \(S_{y}=G\) here. We set \(S_{x}\) to be the union of all those spanners. Note that, for a leaf \(x\) at depth \(r=h\), we have \(A=A_{x}\) with probability \(1\), so indeed only \(J_{h}=1\) iteration is needed.
For each node, we store a dictionary of the edge set \(E(S_{x})\) and (except for the root) \(A_{x}\cap E(S_{y})\). We use the static construction of Hagerup, Bro Miltersen, and Pagh [29] that, for a set \(M\), has space \(O(|M|)\), preprocessing time \(\widetilde{O}(|M|)\), and query time \(O(1)\). For each leaf of a tree, we store the (modified) distance oracle \(D_{x}\). At depth \(0\leqslant r\leqslant h\), the tree \(T_{i}\) has \(K^{r}\) nodes. The largest dictionary at depth \(r\) is for \(A_{x}\cap E(S_{y})\) of size \(O(J_{r-1}\cdot kn^{1+1/k})=O(K^{h-r+1}n^{1+1/k})\) (using that \(k\) is constant). Due to \(K=O((2k{-}1)^{f/h}L^{f/h})\) and \(h=\sqrt{f\ln L}\), we have \(K^{h+1}=O(L^{f+o(1)})\) (using that \(f\) is constant as well). In total, our data structure requires \(O(I\cdot h\cdot K^{h+1}n^{1+1/k})=\widetilde{O}(L^{f+o(1)}n^{1+1/k})\) space and can be preprocessed in time \(O(I\cdot h\cdot K^{h+1}(kmn^{1/k}+kn^{1+1/k}))=\widetilde{O}(L^{f+o(1)}mn^{1/k})\).
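The following Python sketch mirrors the construction of a single sampling tree. It is purely structural: `spanner` is a stand-in for the Thorup–Zwick spanner routine with the fixed family \(X_{0},\ldots,X_{k}\), edges are assumed to be hashable objects (e.g. frozensets), and the loops are exponential in the parameters, exactly as in the analysis, so the code is meant to be read or run only with tiny toy values.

```
import math, random

def build_sampling_tree(E, spanner, f, k, L):
    """One tree T_i. E: edge set of G. spanner(edges) returns the edge set of
    the Thorup-Zwick spanner of the graph consisting of exactly these edges."""
    h = max(1, round(math.sqrt(f * math.log(L))))
    K = math.ceil(((2 * k - 1) * L) ** (f / h))
    p = K ** (-1 / f)

    def make_node(A_parent, S_parent, depth):
        # A_x: the "missing" edges of this node, sampled from the parent's set
        A = list(E) if depth == 0 else [e for e in A_parent if random.random() < p]
        rounds = 1 if depth == h else 4 * K ** (h - depth)
        S = set()
        for _ in range(rounds):
            # sample the edges that stay missing in this round ...
            missing = {e for e in A if random.random() < p ** (h - depth)}
            # ... and add the spanner of the parent graph without them
            S |= set(spanner(set(S_parent) - missing))
        node = {"A_in_parent": set(A) & set(S_parent), "S": S, "children": []}
        if depth < h:
            node["children"] = [make_node(A, S, depth + 1) for _ in range(K)]
        return node

    return make_node(E, set(E), 0)   # the "parent" of the root is G itself
```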
### Query Algorithm
Algorithm 3 presents the query algorithm to report approximate distances. Fix a query \((s,t,F)\) where \(s,t\in V\) are two vertices and \(F\subseteq E\) is a set of at most \(f\) edges. For each of the \(I\) trees, we start at the root and recurse on an arbitrary child, computed in the inner for-loop, that satisfies \(F\cap E(S_{y})\subseteq A_{x}\), where \(y\) is the parent of \(x\). Note that the set \(A_{x}\) is not stored as it may be too large. (We have \(|A_{x}|=m\) in the root.) The test is equivalent to \(F\cap E(S_{y})\subseteq A_{x}\cap E(S_{y})\) and can be performed in time \(O(f)\) using the stored dictionaries. If at some point no child satisfies the condition, the algorithm resumes with the next tree. Once a leaf \(y\) is reached, we query the associated (modified) distance oracle \(D_{y}\) with the pair \((s,t)\). Finally, the algorithm returns the minimum of all oracle answers.
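A sketch of this descent on the tree structure from above is given next (Algorithm 3 itself is not reproduced here). The leaf's distance oracle \(D_{y}\) is replaced by a BFS on the stored spanner edges, which is only a functional stand-in: the real oracle answers in time \(O(k)\). Failures are assumed to be passed as the same hashable edge objects used during construction.

```
from collections import deque

def dist_in_edge_set(edges, s, t):
    """BFS stand-in for the leaf's distance oracle D_x over the spanner S_x."""
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    dist, dq = {s: 0}, deque([s])
    while dq:
        u = dq.popleft()
        if u == t:
            return dist[u]
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                dq.append(w)
    return float("inf")

def query_short(trees, s, t, F):
    """Descend every sampling tree along a child whose missing edges contain all
    failures present in the parent's spanner; query the leaf that is reached."""
    best = float("inf")
    for root in trees:
        node = root
        while node["children"]:
            nxt = next((c for c in node["children"]
                        if all(e in c["A_in_parent"] for e in F if e in node["S"])),
                       None)
            if nxt is None:          # no admissible child: skip this tree
                break
            node = nxt
        else:                        # reached a leaf; every transition satisfied the test
            best = min(best, dist_in_edge_set(node["S"], s, t))
    return best
```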
Setting \(h=\sqrt{f\ln L}\), we get \(I=\widetilde{O}(11^{\sqrt{f\ln L}})=\widetilde{O}(L^{o(1)})\) sampling trees; the number of children per node is \(K=((2k{-}1)L)^{\sqrt{f}/\sqrt{\ln L}}=\widetilde{O}(L^{o(1)})\). The total query time is \(I\cdot O(fhK+k)=\widetilde{O}(L^{o(1)})\).
We are left to prove correctness. That means, we claim that w.h.p. the returned estimate is at least as large as the replacement distance \(d(s,t,F)\) and, if \(s\) and \(t\) are joined by a short path in
\(G-F\), then this estimate is also at most \((2k{-}1)d_{G-F}^{\leq L}(s,t)\). Consider the Thorup-Zwick spanner for \(G-F\) and in it the approximate shortest path \(P_{s,t,G-F}\) (as defined ahead of Lemma 5). If \(s\) and \(t\) have a short path in \(G-F\), then \(P_{s,t,G-F}\) has at most \((2k{-}1)L\) edges.
Let \(x\) be a node at depth \(r\) in the tree \(T_{i}\) and let \(S_{y}\) be the spanner associated to its parent (or \(S_{y}=G\) if \(x\) is the root). We say \(x\) is _well-behaved_ if it satisfies the following three properties.
1. \(F\cap E(S_{y})\subseteq A_{x}\).
2. Either \(x\) is a root or \(|E(P_{s,t,G-F})\cap A_{x}|<K^{\frac{h-r}{f}}\).
3. The path \(P_{s,t,G-F}\) is contained in \(S_{x}\).
Our query algorithm follows a path from the root to a leaf node such that at each node Property 1 is satisfied. We show in the following lemma that any child \(x\) of a well-behaved node \(y\) that fulfills Property 1 is itself well-behaved with constant probability.
**Lemma 6**.: _The following statements hold for any non-leaf node \(y\) in the tree \(T_{i}\)._
1. _If_ \(y\) _satisfies_ Property 1_, then with probability at least_ \(1-\frac{1}{e}\) _there exists a child of_ \(y\) _that satisfies_ Property 1_._
2. _If_ \(y\) _satisfies_ Property 2_, then any child of_ \(y\) _satisfies_ Property 2 _with probability at least_ \(\frac{1}{4}\)_._
3. _If_ \(y\) _is well-behaved and a child_ \(x\) _of_ \(y\) _satisfies_ Properties 1 _and 2, then the probability of_ \(x\) _being well-behaved is at least_ \(1-\frac{1}{e}\)_._
_The root of \(T_{i}\) is well-behaved with probability at least \(1-\frac{1}{e}\)._
Proof.: Assume that \(F\cap S_{y}\subseteq A_{y}\) holds, that is, \(y\) satisfies Property 1, and let \(x_{1},\ldots,x_{K}\) be the child nodes of \(y\). Each edge of \(A_{y}\) is sampled for \(A_{x}\) with probability \(p\). The probability that there
exists some child \(x_{j}\) satisfying \(F\cap S_{y}\subseteq A_{x_{j}}\) is therefore
\[1-\prod_{j=1}^{K}\mathrm{P}\Big{[}F\cap S_{y}\nsubseteq A_{x_{j}}\Big{]}=1-\prod_{j=1}^{K}\Big{(}1-p^{|F\cap S_{y}|}\Big{)}=1-\left(1-\frac{1}{K^{|F\cap S_{y}|/f}}\right)^{K}\geqslant 1-\left(1-\frac{1}{K}\right)^{K}\geqslant 1-\frac{1}{e}.\]
For the second statement, let \(r{-}1\) be the depth of \(y\) in \(T_{i}\). Recall that the path \(P=P_{s,t,G-F}\) has at most \((2k{-}1)L\) edges. By our assumption of \(y\) satisfying Property 2, at most \(K^{\frac{h-r+1}{f}}\) of those are in \(A_{y}\). Let \(x\) be a child of \(y\). We first analyze the case that \(x\) is a leaf, that is, we have \(r=h\).
\[\mathrm{P}\Big{[}E(P)\cap A_{x}=\emptyset\Big{]}=(1-p)^{|E(P)\cap A_{y}|}= \left(1-\frac{1}{K^{1/f}}\right)^{|E(P)\cap A_{y}|}\geqslant\left(1-\frac{1}{ K^{1/f}}\right)^{K^{1/f}}\geqslant\frac{1}{4}.\]
Now suppose \(r<h\). Observe that the random variable \(X=|E(P)\cap A_{x}|\) follows the binomial distribution with parameters \(|E(P)\cap A_{y}|\) and \(p\). In particular, it holds that \(\mathrm{E}[X]\leqslant pK^{\frac{h-r+1}{f}}=K^{\frac{h-r}{f}}\). By the central limit theorem, we have \(\mathrm{P}\big{[}X\geqslant K^{\frac{h-r}{f}}\big{]}\leqslant\mathrm{P}\big{[}X\geqslant\mathrm{E}[X]\big{]}\leqslant\frac{3}{4}\). In both cases, we see that \(x\) satisfies Property 2 with probability at least \(\frac{1}{4}\).
We now turn to the third clause of the lemma. Suppose \(y\) is well-behaved and its child node \(x\) fulfills Properties 1 and 2. If \(x\) is a leaf, it is well-behaved deterministically. Indeed, in this case, the subgraph \(S_{x}\) is just the Thorup-Zwick spanner for \(S_{y}-A_{x}\). Property 1 for \(x\) means that \(S_{y}-A_{x}\) does not contain edges of \(F\). Likewise, \(x\) satisfying Property 2 with \(r=h\), together with \(y\) satisfying Property 3, shows that the path \(P\) is contained in \(S_{y}-A_{x}\). The inheritance property (Lemma 5) applied to \(G^{\prime}=G-F\) and \(H=S_{y}-A_{x}\) gives that \(P\) is indeed contained in \(S_{x}\).
For \(r<h\), this argument goes through only with a certain probability. Recall that the graph \(S_{x}\) is obtained in \(J_{r}=4K^{h-r}\) iterations, wherein, in each iteration, a subset \(A\subseteq A_{x}\) is sampled by selecting each edge with probability \(p^{h-r}\), and the spanner \(H_{A}\) of \(S_{y}{-}A\) is computed. \(S_{x}\) is the union of all \(4K^{h-r}\) graphs \(H_{A}\). We estimate the probability that the path \(P\) exists in \(S_{y}-A\) and no failing edge of \(F\) is in \(S_{y}{-}A\), which, by inheritance to \(H_{A}\) and taking the union, will imply that \(P\) lies in \(S_{x}\).
We first claim that \(\mathrm{P}[F\cap(E(S_{y})\backslash A)=\emptyset]=p^{|F\cap E(S_{y})|\cdot(h-r)}\). To see this, note that Property 1 holding for \(x\) means that \(F\cap S_{y}\subseteq A_{x}\). No failure from \(F\) is in \(S_{y}{-}A\) if and only if all the edges of \(F\cap S_{y}\) are chosen for \(A\). Our second claim is \(\mathrm{P}[E(P)\subseteq E(S_{y})\backslash A]=(1-p^{h-r})^{|E(P)\cap A_{x}|}\). It holds that \(E(P)\subseteq E(S_{y})\) since \(y\) is well-behaved (Property 3). Thus, \(E(P)\subseteq E(S_{y})\backslash A\) is true if and only if none of the edges in \(E(P)\cap A_{x}\) are selected in \(A\).
Using the independence of the events and Property 2 of the node \(x\), we arrive at
\[\mathrm{P}\Big{[}F\cap(E(S_{y})\backslash A)=\emptyset \,\wedge\,E(P)\subseteq E(S_{y})\backslash A\Big{]}=p^{|F\cap E(S_{y })|\cdot(h-r)}\cdot(1-p^{h-r})^{|E(P)\cap A_{x}|}\\ \geqslant p^{f(h-r)}\cdot\left(1-p^{h-r}\right)^{K^{\frac{h-r}{f}} }=\frac{1}{K^{h-r}}\cdot\left(1-\frac{1}{K^{\frac{h-r}{f}}}\right)^{K^{\frac{ h-r}{f}}}\geqslant\frac{1}{4\cdot K^{h-r}}.\]
Iterating this \(J_{r}\) times gives
\[\mathrm{P}\big{[}E(P)\subseteq E(S_{x})\big{]}\geqslant 1-\left(1-\frac{1}{4K^{h -r}}\right)^{J_{r}}=1-\left(1-\frac{1}{4K^{h-r}}\right)^{4K^{h-r}}\geqslant 1- \frac{1}{e}.\]
The assertion about the root follows as the previous one by observing that, for the purpose of this proof, the original graph \(G\) is the "parent" of the root in that we have \(A_{x}=E\) and \(S_{y}=G\).
The next lemma shows that the distance oracle computed for a well-behaved leaf reports a \((2k{-}1)\)-approximation of the distance in \(G-F\) for short paths.
**Lemma 7**.: _Let \(s,t\in V\) be two vertices and \(F\subseteq E\) a set of at most \(f\) edges. Let further \(x\) be a leaf in \(T_{i}\) and \(D_{x}\) be the (modified) distance oracle associated with \(x\). If \(x\) satisfies Property 1 with respect to \(F\), then \(D_{x}(s,t)\geqslant d(s,t,F)\). Moreover, if \(x\) is well-behaved with respect to the approximate shortest path \(P_{s,t,G-F}\), then \(D_{x}(s,t)\leqslant(2k{-}1)\,d(s,t,F)\)._
Proof.: As \(x\) is a leaf node, \(S_{x}\) is the spanner of the graph \(S_{y}-A_{x}\) and \(D_{x}\) reports the distances in \(S_{x}\). By Property 1, we have \(F\cap E(S_{y})\subseteq A_{x}\), whence \(S_{x}\subseteq S_{y}-A_{x}\subseteq G-F\). This implies that \(D_{x}(s,t)=d_{S_{x}}(s,t)\geqslant d_{G-F}(s,t)=d(s,t,F)\). If \(x\) is moreover well-behaved then, by Property 3, the path \(P_{s,t,G-F}\) lies in \(S_{x}\) and thus, by inheritance, \(D_{x}(s,t)\leqslant|P_{s,t,G-F}|\leqslant(2k{-}1)\cdot d(s,t,F)\).
Our algorithm only ever queries leaves that fulfill Property 1; it therefore never underestimates the distance \(d(s,t,F)\). Now additionally assume that \(s\) and \(t\) are connected in \(G-F\) via a path with at most \(L\) edges. To complete the proof of Theorem 2, we need to show that, under this condition and with high probability over all queries, our algorithm queries at least one well-behaved leaf. If there is a short \(s\)-\(t\)-path in \(G-F\), then \(P_{s,t,G-F}\) has at most \((2k-1)L\) edges. Lemma 6 shows that the root of each tree \(T_{i}\), for \(1\leqslant i\leqslant I\), is well-behaved with probability \(1-\frac{1}{e}\), and that in each stage the query algorithm finds a well-behaved child node with constant probability. More precisely, we arrive at a well-behaved leaf with probability at least \((1-\frac{1}{e})\cdot\left((1-\frac{1}{e})^{2}\frac{1}{4}\right)^{h}\geqslant \frac{1}{2}\cdot 11^{-h}\). Since there are \(I=C\cdot 11^{h}\ln n\) independent trees, the query algorithm fails for any fixed query with probability at most \((1-\frac{1}{2\cdot 11^{h}})^{I}\leqslant n^{-C/2}\). We choose the constant \(C>0\) large enough to ensure a high success probability over all \(O(n^{2}m^{f})=O(n^{2+2f})\) possible queries.
## 5 Sublinear Query Time for Long Paths
Let \(0<\alpha<1/2\) be a constant, while the approximation parameter \(\varepsilon>0\) may depend on \(m\) and \(n\). As a warm-up, we construct a distance sensitivity oracle with the same stretch and space as in Theorem 1, but only a sublinear query time of the form \(O_{\varepsilon}(n^{1-g(\alpha,f)})\), for some function \(g\). In Section 6, we then show how to reduce the query time to \(\widetilde{O}_{\varepsilon}(n^{\alpha})\). The intermediate solution serves to highlight many of the key ideas needed to implement the classical FT-trees in subquadratic space, but does not yet involve the granularity \(\lambda\). Recall that we assume that, for every two vertices \(u\) and \(v\) of \(G\), there is a unique shortest path from \(u\) to \(v\) in \(G\). Since the short replacement paths are handled by Theorem 2, we focus on long paths. The structure of this section is as follows. We first describe the interface of an abstract data structure \(\mathit{FT}\) and show how to use it to get a \((3{+}\varepsilon)\)-approximation of the replacement distances. We then implement \(\mathit{FT}\) using FT-trees.
**Lemma 8**.: _Let \(f\) be a positive integer and \(0<\alpha<1/2\) a constant. For any undirected, unweighted graph with unique shortest paths and any \(\varepsilon>0\), there exists a \((3{+}\varepsilon)\)-approximate \(f\)-DSO that takes space \(\widetilde{O}(n^{2-\alpha/(f+1)})\cdot O(\log n/\varepsilon)^{f+1}\), has query time \(n^{1-\frac{\alpha}{f+1}+o(1)}/\varepsilon\), and preprocessing time \(\widetilde{O}(n^{2-\alpha/(f+1)}(m+1/\varepsilon))\cdot O(\log n/\varepsilon)^ {f}\)._
### Trapezoids and Expaths
For the interface of \(\mathit{FT}\), we need a bit of terminology from the work by Chechik et al. [19]. Recall the high-level description of the original FT-trees in Section 2. We now make precise what we mean by all failures in \(F\) being "far away" from a given path. Let \(0<\varepsilon<3\); moreover, we assume it to
be bounded away from 3. (Recall that \(\varepsilon\) may depend on the input.) We use \(V(F)\) for the set of endpoints of failing edges.
**Definition 9** (\(\frac{\varepsilon}{9}\)-trapezoid).: Let \(F\subseteq E\) be a set of edges, \(u,v\in V\), and \(P\) a \(u\)-\(v\)-path in \(G-F\). The _\(\frac{\varepsilon}{9}\)-trapezoid_ around \(P\) in \(G-F\) is
\[\operatorname{tr}_{G-F}^{\varepsilon/9}(P)=\{\,z\in V\backslash\{u,v\}\mid \exists y\in V(P)\colon d_{G-F}(y,z)\leqslant\frac{\varepsilon}{9}\cdot\min( |P[u..y]|,|P[y..v]|)\,\}.\]
\(P\) is _far away11_ from \(F\) if it exists in \(G-F\) and \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap V(F)=\emptyset\).
Footnote 11: Definition 9 relaxes the notion of “far away” compared to [19] in that we allow the case \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap\{s,t\}\neq\emptyset\) if \(s,t\notin V(F)\). This makes the definition independent of the vertices \(s\) and \(t\) in the query. The proof of Lemma 10 remains the same using a vertex \(z\in V(F)\) instead of \(z\in V(H^{F})=V(F)\cup\{s,t\}\).
The endpoints \(u,v\) of \(P\) are removed from the trapezoid to exclude trivialities when applying it to paths between vertices contained in the failing edges. Finally, note that, due to \(\varepsilon/9<1\), the distance from \(u\) to any vertex in the trapezoid is strictly smaller than \(d_{G-F}(u,v)\) (by symmetry, this also holds for \(v\)). The idea is that either the path \(P\) is already far away from all failures, or we can reach our destination via a vertex \(z\in\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap V(F)\) such that the shortest \(u\)-\(z\)-path in \(G-F\) is far away from \(F\) and only a slight detour. An illustration is given in Figure 1.
**Lemma 10** (Lemma 2.6 in [19]).: _Let \(u,v\in V(F)\cup\{s,t\}\) be endpoints of failing edges or query vertices and \(P=P(u,v,F)\) their replacement path. If \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap V(F)\neq\emptyset\), then there are vertices \(x\in\{u,v\}\), \(y\in V(P)\), and \(z\in\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap V(F)\) satisfying the following statements._
1. \(|P[x..y]|\leqslant|P|/2\)_;_
2. \(d_{G-F}(y,z)\leqslant\frac{\varepsilon}{9}\cdot d_{G-F}(x,y)\)_;_
3. \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P[x..y]\circ P(y,z,F))\cap V(F)=\emptyset\)_._
_Thus, the path \(P[x..y]\circ P(y,z,F)\) is far away from \(F\) and has length at most \((1+\frac{\varepsilon}{9})\cdot d_{G-F}(x,y)\)._
Figure 1: A visualization of the trapezoid \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\) in Lemma 10 for the case \(u=x\). The vertices \(u,v\) are endpoints of failing edges in \(F\) or the query vertices \(s\) or \(t\); they are not part of \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\). Vertex \(y\) lies on the path \(P\) and vertex \(z\) is in \(V(F)\). The replacement path from \(y\) to \(z\) has length at most \(\frac{\varepsilon}{9}\,d_{G-F}(u,y)\). The smaller trapezoid around \(P[u..y]\circ P(y,z,F)\) (red dashed line) does not contain any vertex from \(V(F)\).
We now turn to expaths. Afek et al. [2] showed that shortest paths in \(G-F\) are \(f\)-_decomposable_, that is, each of them is obtained by concatenating at most \(f+1\) shortest paths in \(G\) (for weighted \(G\) those shortest paths may be interleaved with up to \(f\) edges). One would like to represent replacement paths by the \(O(f)\) endpoints of those shortest paths (and edges), but during the construction of the FT-trees much more than \(f\) edges may fail, so this is not directly possible. We will see that expaths offer a suitable alternative.
**Definition 11** (\(\ell\)-decomposable path).: Let \(A\subseteq E\) be a set of edges and \(\ell\) a positive integer. An \(\ell\)_-decomposable_ path in \(G-A\) is a concatenation of at most \(\ell+1\) shortest paths of \(G\).
**Definition 12** (\(\ell\)-expath).: Let \(A\subseteq E\) be a set of edges and \(\ell\) a positive integer. An \(\ell\)_-expath_ in \(G-A\) is a concatenation of \((2\log_{2}(n)+1)\)\(\ell\)-decomposable paths such that, for every \(0\leqslant i\leqslant 2\log_{2}n\), the length of the \(i\)-th path is at most \(\min(2^{i},2^{2\log_{2}(n)-i})\).
Since \(n-1\) is an upper bound on the diameter of any connected subgraph of \(G\), the middle level \(i=\log_{2}n\) is large enough to accommodate any (decomposable) path. Levels may be empty. Therefore, for any \(\ell^{\prime}\geqslant\ell\), an \(\ell\)-decomposable path is also both \(\ell^{\prime}\)-decomposable and an \(\ell^{\prime}\)-expath. Also, an arbitrary subpath of an \(\ell\)-decomposable path (respectively, \(\ell\)-expath) is again \(\ell\)-decomposable (respectively, an \(\ell\)-expath). This gives the following intuition for why it is good enough to work with expaths. Suppose some replacement path \(P(u,v,F)\) survives in \(G{-}A\), even though \(A\supseteq F\) may be much larger than \(F\); then the shortest \(u\)-\(v\)-path in \(G-A\) is indeed \(P(u,v,F)\) and thus \(f\)-decomposable. The length of the shortest \((2f{+}1)\)-expath between \(u\) and \(v\) in \(G{-}A\) is the actual replacement distance \(|P(u,v,F)|=d_{G-F}(u,v)\). The reason for the choice \(\ell=2f+1\) will become apparent in the proof of Lemma 13. The difficulties of working merely with \((2f{+}1)\)-_decomposable_ paths are described in Lemma 18.
Finally, we define a set \(B\) of special vertices of \(G\) that we call _pivots_. Recall that we are mainly interested in paths with more than \(L\) edges. Suppose \(L=\omega(\log n)\). We construct the set \(B\) by sampling any vertex from \(V\) independently with probability \(C^{\prime}f\log_{2}(n)/L\) for some sufficiently large constant \(C^{\prime}>0\). With high probability, we have \(|B|=\widetilde{O}(n/L)\), and any replacement path with more than \(L/2\) edges in any of the graphs \(G-F\) with \(|F|\leqslant f\) contains a pivot, as can be seen by standard Chernoff bounds; see, e.g., [27, 35, 37].
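A direct implementation of the pivot sampling is only a few lines; the constant \(C^{\prime}=4\) below is an arbitrary illustrative choice, and the Chernoff-based hitting guarantee is a property of the parameters, not of this code.

```
import math, random

def sample_pivots(vertices, L, f, C_prime=4):
    """Every vertex becomes a pivot independently with prob. C'*f*log2(n)/L, so
    that w.h.p. each replacement path with more than L/2 edges contains a pivot."""
    n = len(vertices)
    prob = min(1.0, C_prime * f * math.log2(n) / L)
    return {v for v in vertices if random.random() < prob}
```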
**Interface of Data Structure _FT_.** For a positive integer \(\ell\) and vertices \(u,v\in V\), define \(d_{\varepsilon/9}^{(\ell)}(u,v,F)\) to be the minimum length over all \(\ell\)-decomposable paths between \(u\) and \(v\) in \(G-F\) that are far away from \(F\). If there are no such paths, we set \(d_{\varepsilon/9}^{(\ell)}(u,v,F)=+\infty\). The data structure _FT_ can only be queried with triples \((u,v,F)\) for which \(u\) or \(v\) is a pivot in \(B\). Its returned value satisfies \(d_{G-F}(u,v)\leqslant FT(u,v,F)\leqslant 3\cdot d_{\varepsilon/9}^{(2f+1)}(u,v,F)\). We let \(q_{FT}\) denote its query time.
### Querying the Distance Sensitivity Oracle
We show how to use the black box _FT_ to get a \((3\!+\!\varepsilon)\)-approximate \(f\)-DSO. Fix a query \((s,t,F)\) that we want to answer on the top level. Let \(u,v\in V\) be any two vertices. Recall that we use \(d_{G-F}^{\leqslant L}(u,v)\) for the minimum length over all short \(u\)-\(v\)-paths in the graph \(G-F\), and \(\widehat{d^{\leqslant L}}(u,v,F)\) for its \((2k\!-\!1)\)-approximation by the \(f\)-DSO for short paths described in Theorem 2. We instantiate that oracle with \(k=2\). The time to obtain the estimate is \(\widetilde{O}(L^{o(1)})\).
To answer \((s,t,F)\), we build the complete graph \(H^{F}\) on the vertex set \(V(H^{F})=\{s,t\}\cup V(F)\) and assign weights to its edges. For a pair \(\{u,v\}\in\binom{V(H^{F})}{2}\), let \(w_{H^{F}}(u,v)\) denote the weight of the edge \(\{u,v\}\). Since \(G\) is undirected, \(w_{H^{F}}(\cdot,\cdot)\) is symmetric. We allow possibly infinite edge weights instead of removing the respective edge in order to simplify notation. If \(u\) or \(v\) is a pivot, we set \(w_{H^{F}}(u,v)\) to the minimum of \(\widehat{d^{\leqslant L}}(u,v,F)\) and \(FT(u,v,F)\). Otherwise, if \(\{u,v\}\cap B=\emptyset\), we set it to the minimum of \(\widehat{d^{\leqslant L}}(u,v,F)\) and
\[w_{H^{F}}^{\prime}(u,v)=\min_{b\in B}\left\{FT(u,b,F)+FT(b,v,F)\right\}.\]
The eventual answer to the query \((s,t,F)\) is the distance \(d_{H^{F}}(s,t)\).
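Put together, answering a query amounts to assembling the tiny weighted graph \(H^{F}\) and running Dijkstra on it. The sketch below assumes two callables, `short_oracle(u, v, F)` for the \((2k{-}1)\)-approximation \(\widehat{d^{\leqslant L}}(u,v,F)\) and `FT(u, v, F)` for the fault-tolerant trees (defined whenever \(u\) or \(v\) is a pivot); both names are ours.

```
import heapq
import itertools

def answer_query(s, t, F, B, short_oracle, FT):
    """Return d_{H^F}(s, t) for the weighted clique H^F on {s, t} and V(F)."""
    nodes = {s, t} | {x for e in F for x in e}
    w = {}
    for u, v in itertools.combinations(nodes, 2):
        est = short_oracle(u, v, F)
        if u in B or v in B:
            est = min(est, FT(u, v, F))
        else:
            est = min(est, min((FT(u, b, F) + FT(b, v, F) for b in B),
                               default=float("inf")))
        w[(u, v)] = w[(v, u)] = est
    # Dijkstra on H^F, which has only O(f^2) edges
    dist = {x: float("inf") for x in nodes}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in nodes:
            if v != u and d + w[(u, v)] < dist[v]:
                dist[v] = d + w[(u, v)]
                heapq.heappush(pq, (dist[v], v))
    return dist[t]
```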
**Lemma 13**.: _With high probability over all queries, the query time is \(\widetilde{O}(L^{o(1)}+\frac{n}{L}\cdot q_{FT})\) and it holds that \(d_{G-F}(s,t)\leqslant d_{H^{F}}(s,t)\leqslant(3+\varepsilon)\,d_{G-F}(s,t)\)._
Proof.: The graph \(H^{F}\) has \(O(f^{2})=O(1)\) edges, and assigning a weight takes \(\widetilde{O}(L^{o(1)}+|B|\cdot q_{FT})\) per edge. The distance from \(s\) to \(t\) can be computed using Dijkstra's algorithm in time \(O(f^{2})\).
We prove the seemingly stronger assertion that for each pair \(u,v\in V(H^{F})\), we have \(d_{G-F}(u,v)\leqslant d_{H^{F}}(u,v)\leqslant(3+\varepsilon)\,d_{G-F}(u,v)\). The first inequality is immediate from the fact that the values \(\widehat{d^{\leqslant L}}(u,v,F)\), \(FT(u,v,F)\), and \(FT(u,b,F)+FT(b,v,F)\) for any \(b\in B\) are all at least \(d_{G-F}(u,v)\).
We prove the second inequality by induction over \(d_{G-F}\). The case \(u=v\) is trivial. Assume the inequality holds for all pairs of vertices with replacement distance strictly smaller than \(d_{G-F}(u,v)\). We distinguish three cases. In the first case, the (unique) replacement path \(P=P(u,v,F)\) has at most \(L\) edges. Theorem 2 then implies
\[d_{H^{F}}(u,v)\leqslant w_{H^{F}}(u,v)\leqslant\widehat{d^{\leqslant L}}(u,v,F)\leqslant 3\cdot d_{G-F}^{\leqslant L}(u,v)=3\cdot|P|,\]
which is \(3\,d_{G-F}(u,v)\) as \(P\) is a replacement path.
If the path \(P\) is long instead, it contains a pivot \(b\in B\) w.h.p. (possibly \(u=b\) or \(v=b\)). For the second case, assume \(P\) has more than \(L\) edges and is far away from all failures in \(F\). Note that then the subpaths \(P[u..b]\) and \(P[b..v]\) are the replacement paths for their respective endpoints, and therefore both \(f\)-decomposable (and also \((2f+1)\)-decomposable). Moreover, they are far away from all failures as their trapezoids are subsets of \(\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\). It holds that
\[d_{H^{F}}(u,v) \leqslant w_{H^{F}}(u,v)\leqslant FT(u,b,F)+FT(b,v,F)\] \[\leqslant 3\cdot d_{\varepsilon/9}^{(2f+1)}(u,b,F)+3\cdot d_{ \varepsilon/9}^{(2f+1)}(b,v,F)\] \[=3\cdot|P[u..b]|+3\cdot|P[b..v]|=3\cdot d_{G-F}(u,v).\]
Finally, for the third case suppose the replacement path \(P\) is long but _not_ far away from \(F\). Lemma 10 states the existence of three vertices \(x\in\{u,v\}\), \(y\in V(P)\), and \(z\in\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap V(F)\) such that \(d_{G-F}(z,y)\leqslant\frac{\varepsilon}{9}\cdot d_{G-F}(x,y)\). The path \(P^{\prime}=P[x..y]\circ P(y,z,F)\) is far away from all failures and has length at most \((1+\frac{\varepsilon}{9})\cdot d_{G-F}(x,y)\). In the remainder, we assume \(x=u\); the argument for \(x=v\) is symmetric. If the concatenation \(P^{\prime}\) has at most \(L\) edges, we get
\[w_{H^{F}}(u,z)\leqslant\widehat{d^{\leqslant L}}(u,z,F)\leqslant 3\,|P^{\prime}| \leqslant 3\left(1+\frac{\varepsilon}{9}\right)\,d_{G-F}(u,y)=\left(3+\frac{ \varepsilon}{3}\right)\,d_{G-F}(u,y).\]
Note that we do mean \(d_{G-F}(u,y)\) here and not \(d_{G-F}(u,z)\).
The subpath \(P[u..y]\) is in fact the unique replacement path \(P(u,y,F)\). So, if \(P^{\prime}\) has more than \(L\) edges, one of its subpaths \(P[u..y]\) or \(P(y,z,F)\) has more than \(L/2\) edges. Thus, there exists a pivot \(b\) on \(P^{\prime}\). Here, we actually use the uniqueness of shortest paths in \(G\) since replacing, say, \(P[u..y]\) with another shortest \(u\)-\(y\)-path in \(G-F\) to ensure a pivot may result in a concatenation that is no longer far away from all failures. Similar to the second case, we arrive at
\[w_{H^{F}}(u,z) \leqslant FT(u,b,F)+FT(b,z,F)\] \[\leqslant 3\cdot d_{\varepsilon/9}^{(2f+1)}(u,b,F)+3\cdot d_{ \varepsilon/9}^{(2f+1)}(b,z,F)\] \[\leqslant 3\cdot|P^{\prime}[u..b]|+3\cdot|P^{\prime}[b..z]|=3|P^{ \prime}|\leqslant\left(3+\frac{\varepsilon}{3}\right)d_{G-F}(u,y).\]
It is important that \(FT\) approximates \(d_{\varepsilon/9}^{(2f+1)}\) since \(P^{\prime}\) may not be \(f\)-decomposable. As the concatenation of two \(f\)-decomposable paths, \(P^{\prime}\) is \((2f+1)\)-decomposable; so are \(P^{\prime}[u..b]\) and \(P^{\prime}[b..z]\).
Now that we have an upper bound on \(w_{H^{F}}(u,z)\), we can conclude the third case. Since \(\frac{\varepsilon}{9}<1\) and \(z\in\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\) (where \(P\) is the \(u\)-\(v\)-replacement path), the distance \(d_{G-F}(z,v)\) is strictly smaller than \(d_{G-F}(u,v)\). By induction, \(d_{H^{F}}(z,v)\leqslant(3+\varepsilon)\cdot d_{G-F}(z,v)\). Recall that vertex \(y\) lies on \(P\), whence \(d_{G-F}(u,y)+d_{G-F}(y,v)=d_{G-F}(u,v)\). Due to \(\varepsilon\leqslant 3\), we have \((2+\frac{\varepsilon}{3})\frac{\varepsilon}{9}\leqslant\frac{\varepsilon}{3}\). Also, recall that \(d_{G-F}(z,y)\leqslant\frac{\varepsilon}{9}\,d_{G-F}(u,y)\) by the definition of \(z\) and \(x=u\). Putting everything together, we estimate the \(u\)-\(v\)-distance in the graph \(H^{F}\).
\[d_{H^{F}}(u,v)\leqslant w_{H^{F}}(u,z)+d_{H^{F}}(z,v)\leqslant\left(3+\frac{\varepsilon}{3}\right)d_{G-F}(u,y)+(3+\varepsilon)\,d_{G-F}(z,v)\] \[\leqslant 3\left(\left(1+\frac{\varepsilon}{9}\right)d_{G-F}(u,y)+\left(1+\frac{\varepsilon}{3}\right)\left(d_{G-F}(z,y)+d_{G-F}(y,v)\right)\right)\] \[\leqslant 3\left(\left(1+\frac{\varepsilon}{9}\right)d_{G-F}(u,y)+\left(1+\frac{\varepsilon}{3}\right)\left(\frac{\varepsilon}{9}d_{G-F}(u,y)+d_{G-F}(y,v)\right)\right)\] \[=3\left(d_{G-F}(u,y)+d_{G-F}(y,v)+\left(2+\frac{\varepsilon}{3}\right)\frac{\varepsilon}{9}\,d_{G-F}(u,y)+\frac{\varepsilon}{3}\,d_{G-F}(y,v)\right)\] \[\leqslant 3\left(d_{G-F}(u,v)+\frac{\varepsilon}{3}\,d_{G-F}(u,y)+\frac{\varepsilon}{3}\,d_{G-F}(y,v)\right)\] \[=3\left(1+\frac{\varepsilon}{3}\right)d_{G-F}(u,v)=(3+\varepsilon)\,d_{G-F}(u,v).\qed\]
### Fault-Tolerant Trees
We now describe the implementation of the \(FT\) data structure via fault-tolerant trees. We compute all-pairs shortest distances in the original graph \(G\) (if required, with slightly perturbed edge weights for unique shortest paths), and, for each pivot \(b\in B\), a shortest path tree of \(G\) rooted in \(b\) in \(\widetilde{O}(mn)\) time. We turn each of those trees into a data structure that reports the lowest common ancestor (LCA) in constant time with the algorithm of Bender and Farach-Colton [7]. This takes time and space \(O(|B|n)=\widetilde{O}(n^{2}/L)\) w.h.p.
We also assume that we have access to a procedure that, given any set \(A\subseteq E\) of edges (which may have much more than \(f\) elements) and a pair of vertices \(u,v\in V\), computes the shortest \((2f+1)\)-expath between \(u\) and \(v\) in \(G{-}A\). This expath is labeled with its structure, that is, (a) the start and endpoints of the \(2\log_{2}(n)+1\) constituting \((2f+1)\)-decomposable subpaths, and (b) inside each decomposable path the start and endpoint of the constituting shortest paths (and possibly interleaving edges). The explanation of how to achieve this in time \(\widetilde{O}(fm)\) is deferred to Section 7. This is also the key ingredient of the proof of Theorem 4.
We build the FT-trees only for pairs of vertices \((u,b)\) for which \(b\in B\) is a pivot. On a high level, \(FT(u,b)\) is a tree of depth \(f\) that stores in each node the shortest \((2f+1)\)-expath between \(u\) and \(b\) in some graph \(G{-}A\). We first describe the information that we hold in a single node \(\nu\). Let \(P_{\nu}\) be the stored expath. It is partitioned first into segments and those into parts. To define the segments, we need the notion of netpoints.
**Definition 14** (Path netpoints).: Let \(P=(u=v_{1},\ldots,v_{\ell}=b)\) be a path. Define \(p_{\mathrm{left}}\) to be all vertices \(v_{j},v_{j+1}\in V(P)\) such that \(|P[u..v_{j}]|<(1+\frac{\varepsilon}{36})^{i}\leqslant|P[u..v_{j+1}]|\) for some integer \(i\geqslant 0\). Analogously, let \(p_{\mathrm{right}}\) be all vertices \(v_{j},v_{j-1}\in V(P)\) such that \(|P[v_{j}..b]|<(1+\frac{\varepsilon}{36})^{i}\leqslant|P[v_{j-1}..b]|\) for some \(i\). The _netpoints_ of \(P\) are all vertices in \(p_{\mathrm{left}}\cup p_{\mathrm{right}}\cup\{u,b\}\).
A _segment_ of the path \(P\) is the subpath between consecutive netpoints. For an edge \(e\in E(P)\), let \(\operatorname{seg}(e,P)\) denote the segment of \(P\) containing \(e\). The netpoints cut \(P\) into segments of exponentially increasing length, with \(1+\frac{\varepsilon}{36}\) being the base of the exponential. However, since we do this from _both_ ends, the segments do not get too large. We make this precise in Lemma 17 below.
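A small sketch of how the netpoints and segments of a given (unweighted) path can be computed from Definition 14 is given below; the thresholds are the powers of \(1+\varepsilon/36\), taken once from each end of the path. The function names are ours.

```
def netpoints(P, eps):
    """Indices of the netpoints of the path P = [v_0, ..., v_len] (unweighted,
    so |P[u..v_j]| = j). Whenever a prefix or suffix length crosses a power of
    (1 + eps/36), both incident vertices become netpoints (Definition 14)."""
    n_edges = len(P) - 1
    base = 1 + eps / 36                      # requires eps > 0
    thresholds, x = [], 1.0
    while x <= n_edges:
        thresholds.append(x)
        x *= base
    marks = {0, n_edges}                     # the endpoints u and b
    for j in range(n_edges):
        if any(j < x <= j + 1 for x in thresholds):           # prefix crossing
            marks.update((j, j + 1))
        back = n_edges - j - 1                                 # suffix length at v_{j+1}
        if any(back < x <= back + 1 for x in thresholds):      # suffix crossing
            marks.update((j, j + 1))
    return sorted(marks)

def segments(P, eps):
    """Cut P into its segments between consecutive netpoints."""
    idx = netpoints(P, eps)
    return [P[idx[i]:idx[i + 1] + 1] for i in range(len(idx) - 1)]
```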
The segments are further subdivided into parts. An expath \(P\) consists of decomposable subpaths, which in turn consist of shortest paths (and interleaving edges) in \(G\), but they may not be aligned with the segments. To avoid this, we define a _part_ of \(P\) to be a maximal subpath that is completely contained in a shortest path of a \((2f{+}1)\)-decomposable subpath and also does not cross netpoints. We can find all parts by a linear scan over the labels of the expath given by the procedure mentioned above. Note that each part is a shortest path/edge in \(G\). By the assumption that shortest paths are unique, it is enough to represent a part by its two endpoints. With each part \([v,w]\), for \(v,w\in V(P)\), we store pointers to the closest netpoint before \(v\) and after \(w\) (potentially \(v\) and \(w\) themselves in case they are netpoints), as well as the original graph distance \(d(v,w)\). If the part is long, i.e., if it contains more than \(L\) edges, we mark that and additionally store a pivot \(p\in B\) that lies in that part. Here, the case \(p=b\) is possible if the respective part lies at the end of the expath \(P\), that is, if \(w=b\).
We now describe the whole FT-tree \(FT(u,b)\) recursively. In some node \(\nu\), let \(A_{\nu}\) be the set of all edges that were failed in the path from the root to \(\nu\); with \(A_{\nu}=\emptyset\) in the root itself. We compute the shortest \((2f{+}1)\)-expath \(P_{\nu}\) in \(G-A_{\nu}\) and store the information for all its parts. For each of its segments \(S\), we create a child node \(\mu\) in which we set \(A_{\mu}=A_{\nu}\cup E(S)\). That means, the transition from a parent to a child corresponds to failing the _whole segment_. Note that the sets \(A_{\nu}\) are only used during preprocessing and never actually stored. We continue the recursive construction until depth \(f\) is reached; if in a node \(\nu\) the vertices \(u\) and \(b\) become disconnected, we mark this as a leaf node not storing any path. We build one FT-tree for each pair of (distinct) vertices in \(V\times B\) and additionally store the LCA data structure for each pivot.
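The recursion is summarized by the following sketch. `shortest_expath(u, b, A)` is a placeholder for the routine deferred to Section 7, returning the shortest \((2f{+}1)\)-expath from \(u\) to \(b\) in \(G-A\) as a vertex list (or `None` if the two are disconnected), and `segments` is the helper from the netpoint sketch above; the per-part bookkeeping (stored distances, pivots of long parts, pointers to bounding netpoints) is omitted here.

```
def build_ft_tree(u, b, f, eps, shortest_expath):
    """Recursive construction of FT(u, b): each child of a node corresponds to
    failing one whole segment of the node's stored expath."""
    def build(A, depth):
        P = shortest_expath(u, b, A)
        node = {"path": P,
                "length": None if P is None else len(P) - 1,
                "segments": [], "children": []}
        if P is None or depth == f:
            return node                       # dead leaf or maximum depth reached
        node["segments"] = segments(P, eps)   # cut P at its netpoints
        for seg in node["segments"]:
            seg_edges = {frozenset(e) for e in zip(seg, seg[1:])}
            node["children"].append(build(A | seg_edges, depth + 1))
        return node
    return build(frozenset(), 0)
```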
The number of segments of any simple path in a subgraph of \(G\) is at most \(2\log_{1+\frac{\varepsilon}{36}}(n)+1\). Therefore, there exists a constant \(c>0\) such that the maximum number of segments of one path is at most \(c\log_{2}(n)/\varepsilon\). This is an upper bound on the degree of any node, so there are at most \((c\log_{2}(n)/\varepsilon)^{f}\) nodes in each tree. Moreover, a \((2f{+}1)\)-expath consists of \(O(f\log n)\) shortest paths. So there are \(O(f\log^{2}n/\varepsilon)\) parts in one node, for each of which we store a constant number of machine words. In summary, all FT-trees and LCA structures together take space
\[|B|n\cdot O\!\left(\frac{f\log^{2}n}{\varepsilon}\right)\cdot O\!\left(\frac{ \log n}{\varepsilon}\right)^{f}+O(|B|n)=\widetilde{O}\!\left(\frac{n^{2}}{L} \right)\cdot O\!\left(\frac{\log n}{\varepsilon}\right)^{f+1}.\]
The time spent in each node is dominated by computing the \((2f{+}1)\)-expath. The total time to precompute \(FT\) is \(|B|n\cdot\widetilde{O}\!\left(fm+\frac{1}{\varepsilon}\right)\cdot O\!\left( \frac{\log n}{\varepsilon}\right)^{f}+O(|B|n)=\widetilde{O}\!\left(\frac{n^{2 }}{L}\left(m+\frac{1}{\varepsilon}\right)\right)\cdot O\!\left(\frac{\log n}{ \varepsilon}\right)^{f}\).
### Querying the Data Structure _FT_
We used in Lemma 13 that the value \(FT(u,b,F)\) is between \(d_{G-F}(u,b)\) and \(3\,d_{\varepsilon/9}^{(2f{+}1)}(u,b,F)\), three times the minimum length of a \((2f{+}1)\)-decomposable path between \(u\) and \(b\) in \(G{-}F\) that is far away from all failures in \(F\). We now show how to compute such a value.
The main challenge when traversing the FT-tree is to utilize the little information that is stored in a node \(\nu\) to solve the following problem. Either find the segment \(\operatorname{seg}(e,P_{\nu})\) for some failing edge \(e\in F\) or verify that \(F\cap E(P_{\nu})=\emptyset\). The original solution in [19] was to compare for each shortest path/interleaving edge \([v,w]\) on \(P_{\nu}\) and edge \(e=\{x,y\}\in F\) whether the minimum of \(d(v,x)+w(x,y)+d(y,w)\) and \(d(v,y)+w(x,y)+d(x,w)\) is equal to \(d(v,w)\). If so, \(e\) must lie on the
shortest path \(P_{\nu}[v..w]\). Finding the corresponding segment amounts to computing the two bounding netpoints. The problem is that this approach requires storing all \(\Omega(n^{2})\) original graph distances in \(G\), which we cannot afford. We first prove that we can get a weaker guarantee with our setup.
**Lemma 15**.: _Let \(\nu\) be a node of \(FT(u,b)\). There exists an algorithm that either verifies that there is a path between \(u\) and \(b\) in \(G-F\) of length at most \(3|P_{\nu}|\) or finds the segment \(\operatorname{seg}(e,P_{\nu})\) for some \(e\in F\cap E(P_{\nu})\). The computation time is \(\widetilde{O}(L^{o(1)}/\varepsilon)\)._
Proof.: Note that one of the alternatives must occur, for if \(F\cap E(P_{\nu})=\emptyset\), then \(P_{\nu}\) exists in \(G-F\). Consider a part \([v,w]\) of \(P_{\nu}\). If it has more than \(L\) edges, then we stored a pivot \(p\) in \([v,w]\). More precisely, \([v,w]\) is the concatenation of the unique shortest path between \(v\) and \(p\) and the one between \(p\) and \(w\) in \(G\). We have access to a shortest path tree rooted in \(p\). So, for each edge \(e=\{x,y\}\in F\), we can check with a constant number of LCA queries involving \(p\), \(v\), \(w\), \(x\), and \(y\) whether edge \(e\) is in that concatenation in time \(O(f)\) per part. If all checks fail, we have \(d_{G-F}(v,w)=d(v,w)=|P_{\nu}[v..w]|\).
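The constant-time ancestry test behind these LCA queries can be made explicit as follows. The sketch replaces the LCA structure of [7] by Euler-tour intervals of the shortest-path tree rooted at the pivot \(p\), which supports the same check ("does the tree edge \(\{x,y\}\) lie on the path from \(v\), respectively \(w\), up to the root \(p\)?") with two integer comparisons; all names are ours, and failures are assumed to be given as vertex pairs.

```
def euler_intervals(root, children):
    """Entry/exit times of a DFS over the shortest-path tree rooted at a pivot."""
    tin, tout, t, stack = {}, {}, 0, [(root, False)]
    while stack:
        u, done = stack.pop()
        if done:
            tout[u] = t
        else:
            tin[u] = t
            stack.append((u, True))
            stack.extend((c, False) for c in children.get(u, ()))
        t += 1
    return tin, tout

def edge_on_root_path(x, y, v, parent, tin, tout):
    """Does the tree edge {x, y} lie on the path from v up to the root?"""
    def is_anc(a, b):                      # a is an ancestor of b (or a == b)
        return tin[a] <= tin[b] and tout[b] <= tout[a]
    return any(parent.get(lo) == hi and is_anc(lo, v)
               for lo, hi in ((x, y), (y, x)))

def part_contains_failure(v, w, p, F, parent, tin, tout):
    """Is some failing edge on the part [v, w] = P(v, p) concatenated with P(p, w)?"""
    return any(edge_on_root_path(x, y, v, parent, tin, tout) or
               edge_on_root_path(x, y, w, parent, tin, tout)
               for (x, y) in F)
```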
If \([v,w]\) is short, the oracle from Theorem 2 is queried with the triple \((v,w,F)\). That oracle was preprocessed anyway and answers in time \(\widetilde{O}(L^{o(1)})\). The return value \(\widehat{d^{\leqslant L}}(v,w,F)\) is compared with the original distance \(d(v,w)\) that was stored with the part. If the former is more than 3 times the latter, it must be that \(d_{G-F}(v,w)>d(v,w)\), so the part contains some edge of \(F\).
We either find a part that has a failing edge in total time \(\widetilde{O}(L^{o(1)}\cdot f\,\frac{\log^{2}n}{\varepsilon})=\widetilde{O}(L ^{o(1)}/\varepsilon)\) or verify that \(d_{G-F}(v,w)\leqslant 3\cdot d(v,w)\) holds for _all_ parts. In the latter case, swapping each part \([v,w]\) by its replacement path \(P(v,w,F)\) shows the existence of a path in \(G-F\) of length at most \(3|P_{\nu}|=\sum_{[v,w]}3d(v,w)\).
Finally, let \([v,w]\) be a part for which we determined that it contains a failing edge. The query algorithm does not need to know which edges are in \(E([v,w])\cap F\) since for all of them \([v,w]\) is completely contained in the segment \(\operatorname{seg}(e,P_{\nu})\). It is thus enough to find the last netpoint on the subpath \(P_{\nu}[u..v]\) and the first on \(P_{\nu}[w..b]\) by following the pointers.
We use the lemma to compute \(FT(u,b,F)\). The tree traversal starts at the root. Once it enters a node \(\nu\), it checks whether there is a path in \(G-F\) of length at most \(3|P_{\nu}|\). If so, this length is returned. Otherwise, the algorithm obtains a segment \(\operatorname{seg}(e,P_{\nu})\) for some \(e\in F\cap E(P_{\nu})\) and recurses on the corresponding child. Once a leaf \(\nu^{*}\) is encountered, the length \(|P_{\nu^{*}}|\) is returned; or \(+\infty\) if the leaf does not store a path. This takes total time \(q_{FT}=\widetilde{O}(L^{o(1)}/\varepsilon)\) since at most \(f+1=O(1)\) nodes are visited. The main argument for the correctness of this procedure is to show that if a \((2f+1)\)-expath \(P\) in \(G-F\) is far away from all failures, it survives in \(G-A_{\nu^{*}}\).
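The descent can be phrased compactly on top of the FT-tree sketch from the previous subsection; `check_node(node, F)` stands for the procedure of Lemma 15 and is assumed to return either `('ok', length)` with a certified length of at most \(3|P_{\nu}|\) of some \(u\)-\(b\)-path in \(G-F\), or `('fail', i)` with the index of a segment of \(P_{\nu}\) containing a failing edge.

```
def query_ft_tree(root, F, check_node):
    """Compute FT(u, b, F) by walking down FT(u, b); at most f+1 nodes are visited."""
    node = root
    while True:
        if node["path"] is None:          # u and b disconnected in G - A_node
            return float("inf")
        if not node["children"]:          # leaf at depth f: report |P_node|
            return node["length"]
        status, val = check_node(node, F)
        if status == "ok":                # a path of length <= 3*|P_node| survives
            return val
        node = node["children"][val]      # fail the whole segment and recurse
```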
**Lemma 16**.: _Let \(P\) be the shortest \((2f+1)\)-decomposable path between \(u\) and \(b\) in \(G-F\) that is far away from all failures in \(F\). Let \(\nu^{*}\) be the node of \(FT(u,b)\) in which a value is returned when queried with \(F\), and let \(A_{\nu^{*}}\) be the set of edges that were failed from the root to \(\nu^{*}\). Then, \(P\) exists in the graph \(G-A_{\nu^{*}}\). Moreover, it holds that \(d_{G-F}(u,b)\leqslant FT(u,b,F)\leqslant 3\cdot d_{\varepsilon/9}^{(2f+1)}(u,b,F)\)._
We need the following two lemmas for the proof. The first one states that the segments of a path are not too long, or even only contain a single edge. The second lemma verifies a certain prefix optimality of expaths. This is the crucial property that decomposable paths are lacking. For some edge set \(A\subseteq E\), let \(d^{(\ell)}(u,v,A)\) be the length of the shortest \(\ell\)-_decomposable_ path in \(G-A\). Compared to \(d_{\varepsilon/9}^{(\ell)}(u,v,F)\), this definition allows for larger failure sets and drops the requirement of the path being far away from the failures.
**Lemma 17** (Lemma 3.2 in [19]).: _Let \(u\in V\) and \(b\in B\), \(P\) be any path between \(u\) and \(b\), \(e\in E(P)\), and \(y\) a vertex of the edge \(e\). Then, \(E(\operatorname{seg}(e,P))=\{e\}\) or \(|\operatorname{seg}(e,P)|\leqslant\frac{\varepsilon}{36}\min(|P[u..y]|,|P[y..b]|)\)._
**Lemma 18** (Lemma 3.1 in [19]).: _Let \(u\in V\) and \(b\in B\), \(A\subseteq E\) a set of edges, \(\ell\) a positive integer, and \(P\) the shortest \(\ell\)-expath between \(u\) and \(b\) in \(G-A\). Then, for every \(y\in V(P)\), \(|P[u..y]|\leqslant 4\cdot d^{(\ell)}(u,y,A)\) and \(|P[y..b]|\leqslant 4\cdot d^{(\ell)}(y,b,A)\) both hold._
Proof of Lemma 16.: The second assertion is an easy consequence of the first. \(P\) is the shortest (\(2f\)+1)-decomposable \(u\)-\(b\)-path in \(G-F\) that is far away from all failures in \(F\). If \(P\) also exists in \(G-A_{\nu^{*}}\), then \(|P_{\nu^{*}}|\leqslant|P|\) by the definition of \(P_{\nu^{*}}\) as the shortest (\(2f\)+1)-expath between \(u\) and \(b\) in \(G-A_{\nu^{*}}\) and \(P\) being (\(2f\)+1)-decomposable (and thus a (\(2f\)+1)-expath). The query algorithm guarantees \(FT(u,b,F)\leqslant 3\,|P_{\nu^{*}}|\leqslant 3\,|P|=3\cdot d_{\varepsilon/9}^{(2f+1)}(u,b,F)\). It is clear that we never underestimate the true distance \(d_{G-F}(u,b)\).
We show the existence of the path \(P\) in \(G-A_{\nu}\) for _every_ visited node \(\nu\) by induction over the parent-child transitions of the tree traversal. It is true for the root where \(A_{\nu}=\emptyset\). When going from \(\nu\) to a child, \(A_{\nu}\) gets increased by the edges \(E(\operatorname{seg}(e_{F},P_{\nu}))\) of a segment for some \(e_{F}\in F\cap E(P_{\nu})\). It is enough to prove that \(P\) does not contain an edge of \(\operatorname{seg}(e_{F},P_{\nu})\). Intuitively, we argue that the segments are too short for their removal to influence a path far away from \(F\).
The claim is immediate if \(E(\operatorname{seg}(e_{F},P_{\nu}))=\{e_{F}\}\), because \(P\) exists in \(G-F\). For the remainder, suppose \(\operatorname{seg}(e_{F},P_{\nu})\) consists of more than one edge. To reach a contradiction, assume \(e_{P}\in E(P)\cap E(\operatorname{seg}(e_{F},P_{\nu}))\) is an edge in the intersection. If \(\operatorname{seg}(e_{F},P_{\nu})\) contains multiple edges from \(F\), we let \(e_{F}\) be the one closest to \(e_{P}\). This ensures that the subpath of \(P_{\nu}\) between the closest vertices in \(e_{F}\) and \(e_{P}\) does not contain any other failing edges. More formally, there are vertices \(y\in e_{P}\) and \(z\in e_{F}\) such that neither \(y\) nor \(z\) are the endpoints \(u\) or \(b\) and the subpath \(P_{\nu}[y..z]\) lies entirely both in \(\operatorname{seg}(e_{F},P_{\nu})\) and the graph \(G-F\).
\[|\operatorname{seg}(e_{F},P_{\nu})|\geqslant|P_{\nu}[y..z]|\geqslant d_{G-F}(y,z)>\frac{\varepsilon}{9}\min(|P[u..y]|,|P[y..b]|).\]
On the other hand, we combine Lemmas 17 and 18, the edge \(e_{P}\) lying both on \(P\) and \(P_{\nu}\), and \(P_{\nu}\) (with its subpaths) being a (\(2f\)+1)-expath to arrive at the chain of inequalities below; the last step uses that \(P\) exists in \(G-A_{\nu}\) by the induction hypothesis and that subpaths of (\(2f\)+1)-decomposable paths are again (\(2f\)+1)-decomposable.
\[|\operatorname{seg}(e_{F},P_{\nu})| \leqslant\frac{\varepsilon}{36}\min(|P_{\nu}[u..y]|,|P_{\nu}[y..b]|)\leqslant\frac{\varepsilon}{36}\cdot\min\Bigl{(}4\cdot d^{(2f+1)}(u,y,A_{\nu}),\ 4\cdot d^{(2f+1)}(y,b,A_{\nu})\Bigr{)}\] \[=\frac{\varepsilon}{9}\cdot\min\Bigl{(}d^{(2f+1)}(u,y,A_{\nu}),\ d^{(2f+1)}(y,b,A_{\nu})\Bigr{)}\leqslant\frac{\varepsilon}{9}\cdot\min(|P[u..y]|,|P[y..b]|).\qed\]
### Proof of Lemma 8
We derive the parameters of the \(f\)-DSO with sublinear query time. The preprocessing consists of two main parts. First, the oracle for short paths is computable in time \(\widetilde{O}(L^{f+o(1)}m\sqrt{n})\) (Theorem 2). Secondly, \(FT\) has preprocessing time \(\widetilde{O}((n^{2}/L)(m+1/\varepsilon))\cdot O(\log n/\varepsilon)^{f}\), assuming that we can compute expaths in time \(\widetilde{O}(fm)\). We set \(L=n^{\alpha/(f+1)}\) for a constant \(0<\alpha<1/2\). The total preprocessing time is dominated by the FT-trees, giving a total of \(\widetilde{O}(n^{2-\frac{\alpha}{f+1}}(m+1/\varepsilon))\cdot O(\log n/\varepsilon)^{f}\). By Lemma 13 with \(q_{FT}=\widetilde{O}(L^{o(1)}/\varepsilon)\), the query time of the resulting oracle is \(\widetilde{O}(n/\varepsilon L^{1-o(1)})=n^{1-\frac{\alpha}{f+1}+o(1)}/\varepsilon\). The data structure from Theorem 2 requires space \(\widetilde{O}(L^{f+o(1)}n^{3/2})\), and the FT-trees take \(\widetilde{O}(n^{2}/L)\cdot O(\log n/\varepsilon)^{f+1}\) space. Inserting our choice of \(L\) gives \(n^{\frac{f}{f+1}\alpha+\frac{3}{2}+o(1)}+\widetilde{O}\Bigl{(}n^{2-\frac{\alpha}{f+1}}\Bigr{)}\cdot O\Bigl{(}\frac{\log n}{\varepsilon}\Bigr{)}^{f+1}\). Since \(\alpha<1/2\) is a constant, the second term dominates.
## 6 Reducing the Query Time
We now reduce the query time to \(O_{\varepsilon}(n^{\alpha})\). The bottleneck of the query answering is computing the (auxiliary) weight \(w^{\prime}_{H^{F}}(u,v)\) of the edge \(\{u,v\}\) in the graph \(H^{F}\), see the beginning of Section 5.2. Minimizing \(FT(u,b,F)+FT(b,v,F)\) over all pivots \(b\) takes linear time in \(|B|\). Let \(\lambda=\lambda(L,\varepsilon)\leqslant L\) be a parameter to be fixed later. We define \(\operatorname{ball}_{G-F}(x,\lambda)=\{z\in V\mid d_{G-F}(x,z)\leqslant\lambda\}\). If we had access to the graph \(G-F\) at query time, we could run breadth-first searches from \(u\) and from \(v\) to scan \(\operatorname{ball}_{G-F}(u,\lambda)\) and \(\operatorname{ball}_{G-F}(v,\lambda)\) of radius \(\lambda\), and only consider the pivots that are inside these balls. By carefully adapting the sampling probability of the pivots so that there are \(\widetilde{O}(n/\lambda)\) of them, we can ensure that at least one of them hits the shortest expath (replacement path) from \(u\) to \(v\); more details are given below. The problem is that these balls may still contain too many pivots. In the worst case, we have, say, \(\operatorname{ball}_{G-F}(u,\lambda)\cap B=B\), degenerating again to scanning _all_ pivots. Furthermore, we cannot even afford to store all balls as there are \(\Omega(nm^{f})\) different ones, a ball for each pair \((x,F)\). Finally, the assumption of access to \(G-F\) itself is problematic in the subquadratic-space regime.
To handle all these issues, we consider two cases when computing \(w^{\prime}_{H^{F}}(u,v)\). That of _sparse balls_, where at least one of \(\operatorname{ball}_{G-F}(u,\lambda)\) and \(\operatorname{ball}_{G-F}(v,\lambda)\) contains at most \(L^{f}\) vertices, and the case of _dense balls_, where both sets contain more than \(L^{f}\) vertices.
### The Case of Sparse Balls
Consider the same setup as in Section 5.1, only that the pivots for \(B\) are now sampled with probability \(C^{\prime\prime}f\log_{2}(n)/\lambda\) for some \(C^{\prime\prime}>0\). By making the constant \(C^{\prime\prime}\) slightly larger than \(C^{\prime}\) in the original sampling probability (see the end of Section 5.1), we ensure that w.h.p. every path that is a _concatenation_ of at most two replacement paths and has more than \(\lambda\) edges contains a pivot. (Previously, we only had this for ordinary replacement paths with at least \(L/2\) edges.) Note that all statements from Section 5 except for the space, preprocessing and query time in Lemma 8 remain true. Further, observe that in the case of sparse balls, w.h.p. there are \(\widetilde{O}(L^{f}/\lambda)\) pivots in \(\operatorname{ball}_{G-F}(u,\lambda)\) or in \(\operatorname{ball}_{G-F}(v,\lambda)\). In this case, it is sufficient to scan those in the same way as we did above. The only issue is that we do not have access to \(\operatorname{ball}_{G-F}(u,\lambda)\) at query time, so we precompute a proxy.
Let \(G_{1},\ldots,G_{\kappa}\) be all the subgraphs of \(G\) in the leaves of the sampling trees introduced in Section 4.2. Recall that they form an \((L,f)\)-replacement path covering w.h.p. During preprocessing, we compute and store the sets \(B_{G_{i}}(x,\lambda)=B\cap\operatorname{ball}_{G_{i}}(x,\lambda)\) for all the sparse balls \(\operatorname{ball}_{G_{i}}(x,\lambda)\), that is, if \(|\operatorname{ball}_{G_{i}}(x,\lambda)|\leqslant L^{f}\). Otherwise, we store a marker that \(\operatorname{ball}_{G_{i}}(x,\lambda)\) is dense. As \(\kappa=L^{f+o(1)}\) and w.h.p. \(|B_{G_{i}}(x,\lambda)|=\widetilde{O}(L^{f}/\lambda)\) for sparse balls, storing all of these sets requires \(\widetilde{O}(nL^{2f+o(1)}/\lambda)\) space. One can compute \(B_{G_{i}}(x,\lambda)\) by running Dijkstra from \(x\) in \(G_{i}\) until at most \(L^{f}\) vertices are discovered in time \(\widetilde{O}(L^{2f})\). In total, this takes \(\widetilde{O}(nL^{3f+o(1)})\) time.
Footnote 12: This marker is made more precise in Section 6.2.
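A minimal sketch of this precomputation, assuming each subgraph \(G_{i}\) is given as an adjacency dict with unit edge weights; the function name and the in-memory representation are illustrative only.

```python
from collections import deque

def ball_pivots(adj, x, lam, Lf, pivots):
    """Truncated BFS (unit edge weights) from x in one subgraph G_i.
    Returns B ∩ ball_{G_i}(x, lam) if the ball is sparse, and None as the
    dense-ball marker as soon as more than Lf vertices are discovered."""
    dist = {x: 0}
    queue = deque([x])
    while queue:
        v = queue.popleft()
        if dist[v] == lam:
            continue                       # do not expand beyond radius lam
        for u in adj.get(v, ()):
            if u not in dist:
                dist[u] = dist[v] + 1
                if len(dist) > Lf:
                    return None            # dense ball: only a marker is stored
                queue.append(u)
    return set(dist) & pivots              # the set B_{G_i}(x, lam)
```

During preprocessing this routine would be invoked once per pair \((G_{i},x)\); for dense balls only the marker (and, in Section 6.2, one associated new pivot) is kept.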
Suppose we want to compute the weight \(w_{H^{F}}(u,v)\) in the sparse balls case, meaning that there is an \(x\in\{u,v\}\) such that the true set \(\operatorname{ball}_{G-F}(x,\lambda)\) is sparse. If this holds for both \(u\) and \(v\), the choice of \(x\) is arbitrary. We use \(y\) to denote the remaining vertex in \(\{u,v\}\backslash\{x\}\). Let \(i_{1},\ldots,i_{r}\) be the indices of the graphs \(G_{i_{j}}\) that exclude \(F\) as computed by Algorithm 3. We showed in Section 4.2 that \(r=\widetilde{O}(L^{o(1)})\) and that the indices can be found in time proportional to their number. By definition of \(x\), all the proxies \(\operatorname{ball}_{G_{i_{j}}}(x,\lambda)\) for \(1\leqslant j\leqslant r\) are sparse as well. Departing from Section 5.2, we define the auxiliary weight as
\[w^{\prime}_{H^{F}}(u,v)=\min_{\begin{subarray}{c}1\leqslant j\leqslant r\\ b\in B_{G_{i_{j}}}(x,\lambda)\end{subarray}}\left(\widehat{d^{\leqslant L}}(x,b,F)+FT(b,y,F)\right). \tag{1}\]
The (actual) weight is \(w_{H^{F}}(u,v)=\min(w^{\prime}_{H^{F}}(u,v),\widetilde{d^{\leqslant L}}(u,v,F))\). Its computation takes time \(\widetilde{O}(L^{f+o(1)}/\varepsilon\lambda)\) as there are \(\widetilde{O}(L^{o(1)})\) balls, each with \(\widetilde{O}(L^{f}/\lambda)\) pivots, the values \(\widetilde{d^{\leqslant L}}\) can be evaluated in time \(L^{o(1)}\) (Theorem 2), and we navigate through \(\widetilde{O}(L^{f}/\lambda)\) FT-trees with a query time of \(q_{FT}=\widetilde{O}(L^{o(1)}/\varepsilon)\) each.
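In code, the weight computation of this sparse-ball case could look as follows; `short_dist`, `ft_query`, `ball_pivots_of`, and `relevant_indices` are assumed black-box interfaces for \(\widehat{d^{\leqslant L}}\), the FT-tree query, the precomputed sets \(B_{G_{i}}(x,\lambda)\), and the output of Algorithm 3, respectively.

```python
def sparse_ball_weight(u, v, x, y, F, relevant_indices, ball_pivots_of,
                       short_dist, ft_query):
    """Sketch of w_{H^F}(u, v) in the sparse-ball case, following Equation (1):
    scan the pivots in the (sparse) precomputed balls around x in the graphs
    G_{i_1}, ..., G_{i_r}, combine the short-distance oracle with FT-tree
    queries, and finally take the minimum with the direct estimate for (u, v)."""
    best = short_dist(u, v, F)                 # the d^{<=L}(u, v, F) term
    for i in relevant_indices:
        for b in ball_pivots_of(i, x):         # B_{G_i}(x, lam), sparse by assumption
            best = min(best, short_dist(x, b, F) + ft_query(b, y, F))
    return best
```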
Recall the proof of the (3+\(\varepsilon\))-approximation in Lemma 13. Clearly, if the replacement path \(P(u,v,F)\) is short, then \(d_{H^{F}}(u,v)\leqslant 3\cdot d_{G-F}(u,v)\) still holds, the argument was independent of \(w^{\prime}_{H^{F}}(u,v)\). We make the next step in recovering what we called the "second case" for sparse balls. The proof of the following lemma motivates the transition from \(\widetilde{O}(n/L)\) to \(\widetilde{O}(n/\lambda)\) pivots.
**Lemma 19**.: _Let \(u,v\in V\) be such that \(|\mathrm{ball}_{G-F}(u,\lambda)|\leqslant L^{f}\) or \(|\mathrm{ball}_{G-F}(v,\lambda)|\leqslant L^{f}\), and the replacement path \(P(u,v,F)\) is long and far away from all failures in \(F\). Then, with high probability \(w_{H^{F}}(u,v)\leqslant 3\cdot d_{G-F}(u,v)\) holds._
Proof.: Let \(P=P(u,v,F)\). Without loss of generality, we assume \(\mathrm{ball}_{G-F}(u,\lambda)\) is sparse, the other case is symmetric. Note that \(P\) has at least \(L\geqslant\lambda\) edges. Let \(u^{\prime}\in V(P)\) be the vertex on \(P\) at distance exactly \(\lambda\) from \(u\). There exists a (regular) pivot \(b^{*}\) on \(P[u..u^{\prime}]\) w.h.p. Here, we used the adapted sampling probability for the set \(B\) in Section 6. Note that the pivot is in \(B_{G-F}(u,\lambda)=B\cap\mathrm{ball}_{G-F}(u,\lambda)\). The graphs \(G_{1},\ldots,G_{\kappa}\) are an \((L,f)\)-replacement path covering, and Algorithm 3 finds the right indices \(i_{1},\ldots,i_{r}\). Equation (1) thus gives
\[w_{H^{F}}(u,v)\leqslant w^{\prime}_{H^{F}}(u,v)=\min_{\begin{subarray}{c}1\leqslant j\leqslant r\\ b\in B_{G_{i_{j}}}(u,\lambda)\end{subarray}}\left(\widehat{d^{\leqslant L}}(u,b,F)+FT(b,v,F)\right)\leqslant\widehat{d^{\leqslant L}}(u,b^{*},F)+FT(b^{*},v,F).\]
Recall that _FT_ approximates the length \(d^{(2f+1)}_{\varepsilon/9}\) of the shortest \((2f+1)\)-decomposable path that is far away from all failures. As in the proof of Lemma 13, since \(P\) is far away from all failures, \(P[b^{*}..v]\) is \((2f+1)\)-decomposable and far away itself. It holds that
\[w^{\prime}_{H^{F}}(u,v) \leqslant 3\cdot d_{G-F}(u,b^{*})+3\cdot d^{(2f+1)}_{\varepsilon/9}( b^{*},v,F)\] \[=3|P[u..b^{*}]|+3|P[b^{*}..v]|=3|P|=3\,d_{G-F}(u,v).\qed\]
### The Case of Dense Balls
To transfer the proof of Lemma 13, \(w_{H^{F}}(u,v)\leqslant 3\,d_{G-F}(u,v)\) would have to be true also if both \(\mathrm{ball}_{G-F}(u,\lambda)\) and \(\mathrm{ball}_{G-F}(v,\lambda)\) are dense and \(P(u,v,F)\) is far away from all failures. If that were our only concern, Equation (1) would ensure that. However, the query time is \(\Omega(n/\lambda)\) since a dense ball may contain too many pivots. We provide a more efficient query algorithm which, however, only gives a \((3+\delta)\)-approximation for a small \(\delta>0\) (Lemma 24). Therefore, we have to adapt the proof of Lemma 13.
Our changes to the construction are twofold. We define a set \(\mathcal{B}\) of _new_ pivots, polynomially sparser than \(B\), by sampling each vertex independently with probability \(C^{\prime}f\log_{2}(n)/\lambda L^{f-1}\). By a Chernoff bound and \(\lambda L^{f-1}\leqslant L^{f}\), it holds that w.h.p. \(|\mathcal{B}|=\widetilde{O}(n/\lambda L^{f-1})\) and all sets \(\mathrm{ball}_{G-F}(x,\lambda)\) with \(|\mathrm{ball}_{G-F}(x,\lambda)|>L^{f}\) contain a new pivot. We build an FT-tree with granularity \(\lambda\) for each pair in \(\mathcal{B}^{2}\).
**FT-Trees with Granularity.** Given two new pivots \(b_{u},b_{v}\in\mathcal{B}\), let \(FT_{\lambda}(b_{u},b_{v})\) be the _fault-tolerant tree of \(b_{u}\) and \(b_{v}\) with granularity \(\lambda\)_. Granularity affects the netpoints, segments and expaths.
**Definition 20** (Path netpoints with granularity \(\lambda\)).: Let \(P=(b_{u}=v_{1},v_{2},\ldots,v_{\ell}=b_{v})\) be a path. If \(|P|\leqslant 2\lambda\), then the _netpoints of \(P\) with granularity \(\lambda\)_ are all vertices \(V(P)\) of the path. Otherwise,
define \(p_{\mathrm{left}}\) to be all vertices \(v_{j},v_{j+1}\in V(P)\) with \(\lambda\leqslant j\leqslant\ell-\lambda\) such that \(|P[v_{\lambda}..v_{j}]|<(1+\frac{\varepsilon}{36})^{i}\leqslant|P[v_{\lambda}..v_{j+1}]|\) for some integer \(i\geqslant 0\). Analogously, let \(p_{\mathrm{right}}\) be all vertices \(v_{j},v_{j-1}\in V(P)\) such that \(|P[v_{j}..v_{\ell-\lambda}]|<(1+\frac{\varepsilon}{36})^{i}\leqslant|P[v_{j-1}..v_{\ell-\lambda}]|\) for some \(i\). The _netpoints of \(P\) with granularity \(\lambda\)_ are all vertices in \(\{v_{0},\ldots,v_{\lambda}\}\cup p_{\mathrm{left}}\cup p_{\mathrm{right}}\cup\{v_{\ell-\lambda},\ldots,v_{\ell}\}\).
For \(\lambda=0\), this is the same as Definition 14. Similarly as before, we denote by \(\operatorname{seg}_{\lambda}(e,P)\), for \(e\in E(P)\), the _segment_ with respect to the new netpoints that contains \(e\). Any path has \(O(\lambda)+O(\log_{1+\varepsilon}n)=O(\lambda)+O(\log n/\varepsilon)\) netpoints with granularity \(\lambda\) and thus as many segments. The number of nodes per tree is now \((O(\lambda)+O(\log n/\varepsilon))^{f}=O(\lambda^{f})+O(\log n/\varepsilon)^{f}\). In summary, the crucial change is that the first and last \(\lambda\) edges are each in their own segment and the segment lengths increase exponentially only in the middle part.
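The sketch below computes the netpoint positions of an unweighted path with granularity \(\lambda\) along the lines of Definition 20; positions index the vertices \(0,\ldots,|P|\) along the path, and the helper `crosses_power` as well as the exact indexing are illustrative assumptions rather than part of the construction.

```python
def crosses_power(lo, hi, base):
    """True iff some power base**i (integer i >= 0) lies in the interval (lo, hi]."""
    p = 1.0
    while p <= lo:
        p *= base
    return p <= hi

def netpoints_with_granularity(num_edges, lam, eps):
    """Vertex positions 0..num_edges that act as netpoints with granularity lam.
    Short paths (at most 2*lam edges) have every vertex as a netpoint; otherwise
    the first and last lam positions are netpoints and, in the middle part, the
    pairs of positions straddling a threshold (1 + eps/36)**i are added."""
    last = num_edges
    if last <= 2 * lam:
        return set(range(last + 1))
    pts = set(range(lam + 1)) | set(range(last - lam, last + 1))
    base = 1 + eps / 36
    for j in range(lam, last - lam):
        # prefix lengths measured from position lam (the 'left' thresholds)
        if crosses_power(j - lam, j + 1 - lam, base):
            pts.update((j, j + 1))
        # suffix lengths measured towards position last - lam (the 'right' thresholds)
        if crosses_power((last - lam) - (j + 1), (last - lam) - j, base):
            pts.update((j, j + 1))
    return pts
```

Since consecutive thresholds grow by a factor \(1+\varepsilon/36\), the middle part contributes only \(O(\log_{1+\varepsilon}n)\) netpoints, matching the count stated above.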
**Definition 21** (\(\ell\)-expath with granularity \(\lambda\)).: Let \(A\subseteq E\) be a set of edges and \(\ell\) a positive integer. An \(\ell\)_-expath with granularity \(\lambda\)_ in \(G-A\) is a path \(P_{a}\circ P_{b}\circ P_{c}\) such that \(P_{a}\) and \(P_{c}\) contain at most \(\lambda\) edges each, while \(P_{b}\) is a concatenation of \(2\log_{2}(n){+}1\) many \(\ell\)-decomposable paths such that, for every \(0\leqslant i\leqslant 2\log_{2}n\), the length of the \(i\)-th \(\ell\)-decomposable path is at most \(\min(2^{i},2^{2\log_{2}(n)-i})\).
The _parts_ of an \(\ell\)-expath with granularity \(\lambda\) are defined as before. In each node \(\nu\) of \(FT_{\lambda}(b_{u},b_{v})\), we store the shortest \((2f{+}1)\)-expath \(P_{\nu}\) with granularity \(\lambda\) from \(b_{u}\) to \(b_{v}\) in \(G_{\nu}\). Note that \(P_{\nu}\) now has \(O(f\log(n)\cdot(\lambda+\log(n)/\varepsilon))\) many parts.
**Space and Preprocessing Time.** Recall the analysis at the end of Section 5.3, and also that, compared to it, we changed the size of \(|B|\) to \(\widetilde{O}(n/\lambda)\) in Section 6.1. The number of nodes in \(FT_{\lambda}(b_{u},b_{v})\) is \(O(\lambda^{f})+O(\log n/\varepsilon)^{f}\) and we only need \(|\mathcal{B}|^{2}=\widetilde{O}(n^{2}/\lambda^{2}L^{2f-2})\) new trees. With \(\lambda\leqslant L\), this makes for \(\widetilde{O}\Big{(}\frac{n^{2}}{L^{f}}\Big{)}+\widetilde{O}\Big{(}\frac{n^{2}}{\lambda^{2}L^{2f-2}}\Big{)}\,O\Big{(}\frac{\log n}{\varepsilon}\Big{)}^{f}\) nodes in all new trees. This is less than the \(\widetilde{O}(n^{2}/\lambda)\cdot O(\log n/\varepsilon)^{f}\) we had for the original FT-trees (that we still need to preprocess). The more efficient expath computation transfers to expaths with granularity, see Section 7.3. We can compute such a path in asymptotically the same time \(\widetilde{O}(fm)=\widetilde{O}(m)\). So the preprocessing time of the new trees is dominated by the one for the old trees. Also, the additional \(\widetilde{O}(nL^{3f+o(1)})\) term for the sparse/dense balls will turn out to be negligible. More importantly, for the total size of the new trees the number of nodes gets multiplied by \(O(\lambda+f\log^{2}(n)/\varepsilon)\), proportional to the number of parts. The result turns out to be
\[\widetilde{O}\Bigg{(}\frac{n^{2}}{L^{f-1}}\Bigg{)}+\widetilde{O}\Bigg{(}\frac{n^{2}}{\lambda L^{2f-2}}\Bigg{)}\cdot O\Bigg{(}\frac{\log n}{\varepsilon}\Bigg{)}^{f}+\widetilde{O}\Bigg{(}\frac{n^{2}}{\varepsilon L^{f}}\Bigg{)}+\widetilde{O}\Bigg{(}\frac{n^{2}}{\lambda^{2}L^{2f-2}}\Bigg{)}\cdot O\Bigg{(}\frac{\log n}{\varepsilon}\Bigg{)}^{f+1}.\]
With \(f\geqslant 2\) (see Theorem 1), all the terms are at most the \(\widetilde{O}(n^{2}/\lambda)\cdot O(\log n/\varepsilon)^{f+1}\) for the old FT-trees. Again, the \(\widetilde{O}(nL^{2f+o(1)}/\lambda)\) space to store the regular pivots in the sparse balls is irrelevant.
A straightforward generalization of Lemma 15 shows that evaluating \(FT_{\lambda}(b_{u},b_{v})\) with query set \(F\) takes time \(\widetilde{O}(L^{o(1)}(\lambda+1/\varepsilon))\).
**Computing \(w^{\prime}_{H^{F}}(u,v)\).** Let again \(G_{1},\ldots,G_{\kappa}\) be the graphs in the leaves of the sampling trees (Section 4.2). For every \(G_{i}\) and vertex \(x\in V\) for which \(|\mathrm{ball}_{G_{i}}(x,\lambda)|>L^{f}\), we said we store a marker. More precisely, we associate with \((G_{i},x)\) a _single_ new pivot \(b_{x}\in\mathcal{B}\cap\mathrm{ball}_{G_{i}}(x,\lambda)\). As before, let \(i_{1},\ldots,i_{r}\) be the indices of graphs \(G_{i_{j}}\) that are relevant for the query \((u,v,F)\). Even if \(\mathrm{ball}_{G-F}(u,\lambda)\) and \(\mathrm{ball}_{G-F}(v,\lambda)\) are dense, it might be that all the \(\mathrm{ball}_{G_{i_{j}}}(u,\lambda)\) are sparse or all \(\mathrm{ball}_{G_{i_{j}}}(v,\lambda)\) are sparse. If so, we compute \(w^{\prime}_{H^{F}}(u,v)\) (and in turn \(w_{H^{F}}(u,v)\)) via Equation (1). Otherwise, there are indices \(i_{u},i_{v}\in\{i_{1},\ldots,i_{r}\}\) such that both \(|\mathrm{ball}_{G_{i_{u}}}(u,\lambda)|>L^{f}\) and \(|\mathrm{ball}_{G_{i_{v}}}(v,\lambda)|>L^{f}\). If there are multiple such indices, the choice is arbitrary. Let \(b_{u}\in\mathcal{B}\cap\mathrm{ball}_{G_{i_{u}}}(u,\lambda)\) and let \(b_{v}\in\mathcal{B}\cap\mathrm{ball}_{G_{i_{v}}}(v,\lambda)\)
be the stored new pivots. We define the auxiliary weight as
\[w^{\prime}_{H^{F}}(u,v)=FT_{\lambda}(b_{u},b_{v},F)+2\lambda. \tag{2}\]
This takes time \(\widetilde{O}(L^{o(1)}(\lambda+1/\varepsilon))\), much less than with sparse balls.
### Approximation Guarantee
Towards Theorem 1, we have already seen that the space and preprocessing time are dominated by the original FT-trees, when accounting for the slightly larger \(|B|=\widetilde{O}(n/\lambda)\). We also argued the query time. The plan to prove the approximation guarantee is the same as in Section 5, which crucially involved Lemma 13. We already discussed how to transfer its "first case", as well as the "second case" if both \(\operatorname{ball}_{G-F}(u,\lambda)\) and \(\operatorname{ball}_{G-F}(v,\lambda)\) are sparse. The "third case" is handled by Lemma 10 together with an induction. Due to the restricted space, we focus here on the "second case" also if the balls are dense, and how to adapt Lemma 13. As a first step, we generalize Lemmas 17 and 18 to FT-trees with granularity \(\lambda>0\).
**Lemma 22**.: _Let \(b_{u},b_{v}\in\mathcal{B}\), \(P\) be any path between \(b_{u}\) and \(b_{v}\), \(e\in E(P)\), and \(y\in e\) a vertex of that edge. Then, \(E(\operatorname{seg}_{\lambda}(e,P))=\{e\}\) or \(|\operatorname{seg}_{\lambda}(e,P)|\leqslant\frac{\varepsilon}{36}\left(\min(|P[b_{u}..y]|,|P[y..b_{v}]|)-\lambda\right)\)._
Proof.: If \(|P|\leqslant 2\lambda\) then \(\operatorname{seg}_{\lambda}(e,P)=\{e\}\) for every edge \(e\) of \(P\) by definition. We can thus assume that \(P\) has more than \(2\lambda\) edges. Let \(u^{\prime}\) and \(v^{\prime}\) be the two vertices of \(P\) such that \(|P[b_{u}..u^{\prime}]|=\lambda\) and \(|P[v^{\prime}..b_{v}]|=\lambda\), respectively. Let \(e\) be an edge of \(P\) such that \(\operatorname{seg}_{\lambda}(e,P)\supsetneq\{e\}\) and \(y\in e\). Note that \(y\) must lie on \(P[u^{\prime}..v^{\prime}]\) since \(e\) is not among the first or last \(\lambda\) edges of \(P\). It is enough to show
\[|\operatorname{seg}_{\lambda}(e,P)|\leqslant\frac{\varepsilon}{36}\cdot\min(| P[u^{\prime}..y]|,|P[y..v^{\prime}]|)\]
since \(\min(|P[u^{\prime}..y]|,|P[y..v^{\prime}]|)=\min(|P[b_{u}..y]|,|P[y..b_{v}]|)-\lambda\).
Let \(z\) be the vertex of the edge \(e\) that is not \(y\). First, assume that \(y\) is closer to \(u^{\prime}\) along \(P\) than \(z\), that is, \(|P[u^{\prime}..y]|<|P[u^{\prime}..z]|\). Let \(i\) be the maximal integer such that \((1+\varepsilon/36)^{i}\leqslant|P[u^{\prime}..y]|\), whence \((1+\varepsilon/36)^{i+1}>|P[u^{\prime}..y]|\). Since \(\operatorname{seg}_{\lambda}(e,P)\) contains more edges than just \(e\), the endpoints \(y,z\) cannot both be netpoints of \(P\) with granularity \(\lambda\). Therefore, we even have \((1+\varepsilon/36)^{i+1}>|P[u^{\prime}..z]|\), which gives
\[|\operatorname{seg}_{\lambda}(e,P)|\leqslant\left(1+\frac{\varepsilon}{36} \right)^{i+1}-\left(1+\frac{\varepsilon}{36}\right)^{i}=\frac{\varepsilon}{36} \cdot\left(1+\frac{\varepsilon}{36}\right)^{i}\leqslant\frac{\varepsilon}{36} \cdot|P[u^{\prime}..y]|.\]
Symmetrically, we can show that \(|\operatorname{seg}_{\lambda}(e,P)|\leqslant\varepsilon/36\cdot|P[z..v^{ \prime}]|\), from which it follows that \(|\operatorname{seg}_{\lambda}(e,P)|<\varepsilon/36|P[y..v^{\prime}]|\) since we assumed \(|P[u^{\prime}..y]|<|P[u^{\prime}..z]|\), whence \(|P[z..v^{\prime}]|<|P[y..v^{\prime}]|\).
If conversely \(|P[u^{\prime}..y]|>|P[u^{\prime}..z]|\) holds (and thus \(|P[z..v^{\prime}]|>|P[y..v^{\prime}]|\)), then the exact same argument as above where \(y\) and \(z\) switch places shows \(|\operatorname{seg}_{\lambda}(e,P)|\leqslant\varepsilon/36\cdot|P[u^{\prime}.. z]|<\varepsilon/36\cdot|P[u^{\prime}..y]|\) and \(|\operatorname{seg}_{\lambda}(e,P)|\leqslant\varepsilon/36\cdot|P[y..v^{ \prime}]|\). In summary, we get \(|\operatorname{seg}_{\lambda}(e,P)|\leqslant\varepsilon/36\cdot\min(|P[u^{ \prime}..y]|,|P[y..v^{\prime}]|)\) in both cases.
Recall that \(d^{(\ell)}(u,v,A)\), for \(A\subseteq E\), is the length of the shortest \(\ell\)-decomposable path in \(G-A\).
**Lemma 23**.: _Let \(u,v\in V\) be two vertices, \(A\subseteq E\) a set of edges, and \(b_{u}\in\mathcal{B}\cap\operatorname{ball}_{G-A}(u,\lambda)\) and \(b_{v}\in\mathcal{B}\cap\operatorname{ball}_{G-A}(v,\lambda)\). Let further \(\ell\) be a positive integer, and \(P\) the shortest \(\ell\)-expath with granularity \(\lambda\) between \(b_{u}\) and \(b_{v}\) in \(G-A\). Then, for every \(y\in V(P)\) with \(|P[b_{u}..y]|,|P[y..b_{v}]|>\lambda\), it holds that \(|P[b_{u}..y]|\leqslant 4\cdot d^{(\ell)}(u,y,A)+\lambda\) and \(|P[y..b_{v}]|\leqslant 4\cdot d^{(\ell)}(y,v,A)+\lambda\)._
Proof.: We only show that \(|P[b_{u}..y]|,|P[y..b_{v}]|>\lambda\) implies \(|P[b_{u}..y]|\leqslant 4d^{(\ell)}(u,y,A)+\lambda\), the proof of the other inequality is symmetric. Let \(P^{(\ell)}_{G-A}(u,y)\) be the shortest \(\ell\)-decomposable \(u\)-\(y\)-path in \(G-A\), that is, \(d^{(\ell)}(u,y,A)=|P^{(\ell)}_{G-A}(u,y)|\). For the sake of contradiction, assume \(|P[b_{u}..y]|-\lambda>4|P^{(\ell)}_{G-A}(u,y)|\). Let \(P_{b_{u}}\) be the shortest path in \(G-A\) from \(b_{u}\) to \(u\). Since \(b_{u}\in\mathcal{B}\cap\operatorname{ball}_{G-A}(u,\lambda)\), it holds that \(|P_{b_{u}}|\leqslant\lambda\).
We first prove that \(P^{\prime}=P_{b_{u}}\circ P^{(\ell)}_{G-A}[u..y]\circ P[y..b_{v}]\) is an \(\ell\)-expath with granularity \(\lambda\). Let \(P=P_{a}\circ P_{b}\circ P_{c}\) be the constituting decomposition of \(P\) as an expath with granularity. That means \(P_{a}\) and \(P_{c}\) contain at most \(\lambda\) edges each, while \(P_{b}\) is the concatenation \(P_{0}\circ\ldots\circ P_{2\log_{2}n}\) of \(2\log_{2}(n)+1\) many \(\ell\)-decomposable paths such that \(|P_{i}|\leqslant\min(2^{i},2^{2\log_{2}n-i})\). To show that \(P^{\prime}\) is an \(\ell\)-expath with granularity \(\lambda\), we define \(\ell\)-decomposable paths \(P^{\prime}_{0},\ldots,P^{\prime}_{2\log n}\) in \(G-A\) such that \(|P^{\prime}_{i}|\leqslant\min(2^{i},2^{2\log_{2}n-i})\) and \(P^{\prime}\) is the concatenation of \(P_{b_{u}}\circ P^{\prime}_{0}\circ\ldots\circ P^{\prime}_{2\log n}\circ P_{c}\).
We have \(|P[b_{u}..y]|>\lambda\) and thus \(|P[b_{u}..y]|-|P_{a}|\geqslant 1\). Let \(j_{0}=\lfloor\log_{2}(|P[b_{u}..y]|-|P_{a}|)\rfloor-1\). Be aware that \(j_{0}=-1\) is possible. Since \(|P_{i}|\leqslant 2^{i}\), we have
\[|P_{a}|+\sum_{i=0}^{j_{0}}|P_{i}|<|P_{a}|+2^{j_{0}+1}\leqslant|P_{a}|+|P[b_{u}..y]|-|P_{a}|=|P[b_{u}..y]|.\]
This implies that either \(y\) is contained in a subpath \(P_{j_{1}}\) of \(P_{b}\) for some \(j_{1}>j_{0}\) or that \(y\) is a vertex of \(P_{c}\). The latter case is impossible as \(|P_{c}|\leqslant\lambda\) while \(|P[y..b_{v}]|>\lambda\). So \(y\) is indeed on \(P_{j_{1}}\).
We are now ready to define the new subpaths \(P^{\prime}_{0},P^{\prime}_{1},\ldots,P^{\prime}_{2\log_{2}n}\). For every \(0\leqslant i<j_{0}\), we define \(P^{\prime}_{i}\) as the empty path, and set \(P^{\prime}_{j_{0}}=P^{(\ell)}_{G-A}(u,y)\). For every \(j_{0}<i<j_{1}\), we define \(P^{\prime}_{i}\) as the empty path again, and \(P^{\prime}_{j_{1}}\) is the suffix of \(P_{j_{1}}\) starting at \(y\). Finally, for every \(i>j_{1}\), we set \(P^{\prime}_{i}=P_{i}\). Clearly all the \(P^{\prime}_{i}\) are \(\ell\)-decomposable paths in \(G-A\). The only index where the length bound is possibly in doubt is \(j_{0}\). We need to prove that \(|P^{(\ell)}_{G-A}(u,y)|\leqslant\min(2^{j_{0}},2^{2\log_{2}n-j_{0}})\). Note that \(j_{0}<\log_{2}n\) as otherwise \(|P[b_{u}..y]|-|P_{a}|>n\), thus
\[\min(2^{j_{0}},2^{2\log_{2}n-j_{0}})=2^{j_{0}}=2^{\lfloor\log_{2}(|P[b_{u}..y]|-|P_{a}|)\rfloor-1}\\ \geqslant\frac{|P[b_{u}..y]|-|P_{a}|}{4}\geqslant\frac{|P[b_{u}..y]|-\lambda}{4}>|P^{(\ell)}_{G-A}(u,y)|.\]
The last step uses the assumption \(|P[b_{u}..y]|-\lambda>4|P^{(\ell)}_{G-A}(u,y)|\).
We have established that \(P^{\prime}=P_{b_{u}}\circ P^{(\ell)}_{G-A}[u..y]\circ P[y..b_{v}]\) is some \(\ell\)-expath with granularity \(\lambda\) from \(b_{u}\) to \(b_{v}\) in \(G-A\). However, its length is
\[|P^{\prime}| =|P_{b_{u}}|+|P^{(\ell)}_{G-A}(u,y)|+|P[y..b_{v}]|<\lambda+\frac{|P [b_{u}..y]|-\lambda}{4}+|P[y..b_{v}]|\] \[=\frac{3\lambda+|P[b_{u}..y]|}{4}+|P[y..b_{v}]|<|P[b_{u}..y]|+|P[y..b _{v}]|=|P|,\]
where the last proper inequality follows from \(|P[b_{u}..y]|>\lambda\). This is a contradiction to \(P\) being the shortest \(\ell\)-expath with granularity \(\lambda\) from \(b_{u}\) to \(b_{v}\) in \(G-A\).
We use the results to show that also Lemma 16 transfers to non-vanishing granularity, but with a slight loss in the approximation. Again, \(d^{(2f+1)}_{\varepsilon/9}(u,v,F)\) is the length of the shortest \((2f{+}1)\)-decomposable \(u\)-\(v\)-path in \(G-F\) that is _far away_ from all failures.
**Lemma 24**.: _Define \(\delta=8\lambda/L\). Let \(u,v\in V\) be such that both \(|\mathrm{ball}_{G-F}(u,\lambda)|,|\mathrm{ball}_{G-F}(v,\lambda)|>L^{f}\), and \(b_{u},b_{v}\in\mathcal{B}\) the associated new pivots. Let \(P\) be any \((2f{+}1)\)-decomposable path between \(u\) and \(v\) in \(G-F\) that is far away from \(F\). Then, \(d_{G-F}(u,v)\leqslant FT_{\lambda}(b_{u},b_{v},F)+2\lambda\leqslant 3|P|+\delta L\). Moreover, if the shortest \((2f{+}1)\)-decomposable path between \(u\) and \(v\) in \(G-F\) that is far away from \(F\) has more than \(L\) edges, then \(FT_{\lambda}(b_{u},b_{v},F)+2\lambda\leqslant(3+\delta)\cdot d_{\varepsilon/9}^{(2f{+}1)}(u,v,F)\)._
Proof.: We prove the survival of \(P\) all the way to the output node \(\nu^{*}\) of \(FT_{\lambda}(b_{u},b_{v})\) when queried with set \(F\), as in Lemma 16. We have to take care of the fact that \(P\) and \(P_{\nu^{*}}\) may have different endpoints. In fact, we argue about a longer path. Let \(P(b_{u},u,F)\) be the replacement path between \(b_{u}\) and \(u\) in \(G{-}F\). \(P(b_{u},u,F)\) has at most \(\lambda\) edges by the choice \(b_{u}\in\operatorname{ball}_{G-F}(u,\lambda)\), same for \(P(v,b_{v},F)\). Also \(P\) is \((2f{+}1)\)-decomposable, thus \(Q=P(b_{u},u,F)\circ P\circ P(v,b_{v},F)\) is a \((2f{+}1)\)-expath with granularity \(\lambda\). We argue by induction that \(Q\) exists in the graph \(G-A_{\nu}\) for every visited node \(\nu\). This is clear for the root. For a non-output node \(\nu\neq\nu^{*}\), let \(\nu^{\prime}\) be its visited child.
To reach a contradiction, assume \(Q\) does not exist in \(G-A_{\nu^{\prime}}\). Thus, there is a segment of \(P_{\nu}\) that contains both a failing edge of \(F\) and an edge of \(Q\). Without loss of generality, we choose \(e_{F}\in F\) and \(e_{Q}\in E(Q)\) such that both \(e_{F}\) and \(e_{Q}\) are in \(P_{\nu}\) and the subpath of \(P_{\nu}\) containing both edges contains no other failing edge. Let \(y\in e_{Q}\) be the endpoint closer to \(e_{F}\) along \(P_{\nu}\), and let \(z\in e_{F}\) be the endpoint closer to \(e_{Q}\). The subpath \(P_{\nu}[y..z]\) is entirely in \(G-F\).
It must be that \(e_{F}\neq e_{Q}\) as \(Q\) lies in \(G-F\). Segments with more than one edge only appear in the middle part of the stored expath, so \(|P_{\nu}[b_{u}..y]|,|P_{\nu}[y..b_{v}]|>\lambda\). By Lemmas 22 and 23, this means
\[|\operatorname{seg}_{\lambda}(e_{F},P_{\nu})| \leqslant\frac{\varepsilon}{36}\Big{(}\min\bigl{(}|P_{\nu}[b_{u}..y]|,|P_{\nu}[y..b_{v}]|\bigr{)}-\lambda\Big{)}\] \[\leqslant\frac{\varepsilon}{36}\Big{(}\min\bigl{(}4\,d^{(2f{+}1)}(u,y,A_{\nu})+\lambda,4\,d^{(2f{+}1)}(y,v,A_{\nu})+\lambda\bigr{)}-\lambda\Big{)}\] \[=\frac{\varepsilon}{9}\min\bigl{(}d^{(2f{+}1)}(u,y,A_{\nu}),d^{(2f{+}1)}(y,v,A_{\nu})\bigr{)}.\]
Subpaths of expaths with granularity are again expaths with granularity (the components \(P_{a},P_{c}\) in Definition 21 can be empty). The subpath \(P_{\nu}[b_{u}..y]\) (resp. \(P_{\nu}[y..b_{v}]\)) is the _shortest_ \((2f{+}1)\)-expath with granularity \(\lambda\) from \(b_{u}\) to \(y\) (from \(y\) to \(b_{v}\)) in \(G-A_{\nu}\). By induction \(Q\) exists in \(G-A_{\nu}\). \(Q[b_{u}..y]\) (resp. \(Q[y..b_{v}]\)) is _some_ \((2f{+}1)\)-expath. Together with \(|P_{\nu}[b_{u}..y]|,|P_{\nu}[y..b_{v}]|>\lambda\), this shows that \(y\) must also lie in the middle part of \(Q\), that is, in \(P\). Moreover \(P\) is some \((2f{+}1)\)-decomposable path from \(u\) to \(v\) in \(G-A_{\nu}\). In other words, \(d^{(2f{+}1)}(u,y,A_{\nu})\leqslant|P[u..y]|\) and \(d^{(2f{+}1)}(y,v,A_{\nu})\leqslant|P[y..v]|\).
Since \(P\) is far away from \(e_{F}\), we get the contradiction
\[|\operatorname{seg}_{\lambda}(e_{F},P_{\nu})|\geqslant|P_{\nu}[y..z]|\geqslant d_{G-F}(y,z)>\frac{\varepsilon}{9}\min(|P[u..y]|,|P[y..v]|).\]
For the approximation, we focus on proving
\[d_{G-F}(u,v)\leqslant FT_{\lambda}(b_{u},b_{v},F)+2\lambda\leqslant(3{+} \delta)\cdot d_{\varepsilon/9}^{(2f{+}1)}(u,v,F)\]
with \(\delta=8\lambda/L\) if \(P\) is the _shortest_ \((2f{+}1)\)-decomposable path from \(u\) to \(v\) in \(G-F\) and has more than \(L\) edges; in particular, if \(|P|=d_{\varepsilon/9}^{(2f{+}1)}(u,v,F)\). The other claim is established in passing.
Recall that \(FT_{\lambda}(b_{u},b_{v},F)\leqslant 3|P_{\nu^{*}}|\) for the output node \(\nu^{*}\) and that the returned value is the length of an actual path in \(G-F\), so \(FT_{\lambda}(b_{u},b_{v},F)\geqslant d_{G-F}(b_{u},b_{v})\). Together with \(d_{G-F}(u,b_{u}),d_{G-F}(b_{v},v)\leqslant\lambda\) and the triangle inequality, it holds that \(FT_{\lambda}(b_{u},b_{v},F)+2\lambda\geqslant d_{G-F}(b_{u},b_{v})+d_{G-F}(u,b_{u})+d_{G-F}(b_{v},v)\geqslant d_{G-F}(u,v)\). We have seen that \(Q\) survives until \(\nu^{*}\) and that \(P_{\nu^{*}}\) is not longer than \(Q\). In summary,
\[FT_{\lambda}(b_{u},b_{v},F)+2\lambda\leqslant 3|Q|+2\lambda \leqslant 3(|P|+2\lambda)+2\lambda\\ \leqslant 3|P|+8\lambda=3|P|+\delta L<(3+\delta)\,d_{\varepsilon/9}^{ (2f{+}1)}(u,v,F).\qed\]
Before formally proving the \(3+\varepsilon\) stretch of the new query algorithm, we sketch the necessary changes to Lemma 13. Recall that we assume \(\varepsilon>0\) to be bounded away from \(3\), thus \(\Delta=3-\varepsilon>0\) is a constant. We define \(\lambda=\frac{\Delta}{96}\varepsilon L\), which in turn implies \(\delta=\frac{\Delta}{12}\varepsilon\). As it turns out, any \(\delta\leqslant\frac{3-\varepsilon}{9+\varepsilon}\varepsilon\) would do as this ensures \(\delta+(6+\delta+\varepsilon)\frac{\varepsilon}{9}\leqslant\varepsilon\). In Lemma 13 we had \(w_{H^{F}}(u,v)\leqslant 3\,d_{G-F}(u,v)\) if the path was short ("first case") or long but far away from all failures ("second case"). We now only have the weaker inequality \(w_{H^{F}}(u,v)\leqslant(3{+}\delta)\,d_{G-F}(u,v)\) due to the dense ball case. For the "third case", we are going to use the \(x\)-\(y\)-\(z\)-argument of Lemma 10 again. A similar reasoning as before gives \(w_{H^{F}}(u,z)\leqslant(3{+}\delta)(1{+}\frac{\varepsilon}{9})d_{G-F}(u,y)\) (instead of \((3{+}\frac{\varepsilon}{3})\,d_{G-F}(u,y)\)). The crucial part is to carefully adapt the chain of inequalities to show that also this slightly higher factor gives the desired stretch of \(3+\varepsilon\).
**Lemma 25**.: _With the changes made in Section6, and when setting \(\lambda=\frac{3-\varepsilon}{96}\varepsilon L\) and \(\delta=\frac{8\lambda}{L}\), the inequalities \(d_{G-F}(s,t)\leqslant d_{H^{F}}(s,t)\leqslant(3{+}\varepsilon)\,d_{G-F}(s,t)\) hold with high probability over all queries._
Proof.: The structure of the proof is like the one for Lemma 13. Recall the graph \(H^{F}\) for a given query \((s,t,F)\). It has an edge for every pair in \(\binom{V(F)\cup\{s,t\}}{2}\) and the weight \(w_{H^{F}}(u,v)\) is the minimum of \(\widehat{d^{\leqslant L}}(u,v,F)\) and \(w^{\prime}_{H^{F}}(u,v)\), where the computation of the latter depends on whether we are in the sparse ball case or the dense ball case. The details are given in Equations (1) and (2).
The first inequality \(d_{G-F}(s,t)\leqslant d_{H^{F}}(s,t)\) again follows from the fact that the oracle calls used to compute \(w_{H^{F}}(u,v)\) never underestimate the true replacement distance \(d_{G-F}(u,v)\). The second one is implied by \(d_{H^{F}}(u,v)\leqslant(3{+}\varepsilon)\,d_{G-F}(u,v)\) holding for all pairs, which we show by induction over the replacement distance. Consider a pair \(u,v\in V(H^{F})\) and assume the statement holds for all pairs of vertices with smaller distance. We distinguish the same three cases as before, beginning with the replacement path \(P=P(u,v,F)\) having at most \(L\) edges. The same argument as before, using Theorem 2, gives \(d_{H^{F}}(u,v)\leqslant 3\,d_{G-F}^{\leqslant L}(u,v)=3\,d_{G-F}(u,v)\leqslant(3+\varepsilon)\,d_{G-F}(u,v)\) w.h.p.
In the second case, \(P\) is long and far away from all failures. If \(\operatorname{ball}_{G-F}(u,\lambda)\) or \(\operatorname{ball}_{G-F}(v,\lambda)\) contains at most \(L^{f}\) vertices, then Lemma 19 also shows that \(d_{H^{F}}(u,v)\leqslant w_{H^{F}}(u,v)\leqslant 3\cdot d_{G-F}(u,v)\). For the subcase that \(|\operatorname{ball}_{G-F}(u,\lambda)|,|\operatorname{ball}_{G-F}(v,\lambda)|>L^{f}\), recall that \(\delta=8\lambda/L=\frac{3-\varepsilon}{12}\varepsilon\) and that \(d_{\varepsilon/9}^{(2f+1)}(u,v,F)\) is the length of the shortest \((2f{+}1)\)-decomposable \(u\)-\(v\)-path in \(G-F\) that is far away from \(F\). The replacement path \(P\) is \(f\)-decomposable and therefore also \((2f{+}1)\)-decomposable. It is far away by assumption, so we get \(d_{\varepsilon/9}^{(2f+1)}(u,v,F)=|P|=d_{G-F}(u,v)\). The second part of Lemma 24 now implies that \(d_{H^{F}}(u,v)\leqslant w_{H^{F}}^{\prime}(u,v)\leqslant(3+\delta)\cdot d_{G-F}(u,v)\leqslant(3+\varepsilon)\cdot d_{G-F}(u,v)\).
The main part of the proof consists in recovering the third case, where the replacement path \(P\) is long but not far away from \(F\). By Lemma 10, there are vertices \(x\in\{u,v\}\), \(y\!\in\!V(P)\), and \(z\!\in\!\operatorname{tr}_{G-F}^{\varepsilon/9}(P)\cap V(F)\) such that \(d_{G-F}(z,y)\leqslant\frac{\varepsilon}{9}\cdot d_{G-F}(x,y)\) and the path \(P^{\prime}=P[x..y]\circ P(y,z,F)\) is far away. \(P(y,z,F)\) denotes the shortest path from \(y\) to \(z\) in \(G-F\). The length of \(P^{\prime}\) is bounded from above by \((1+\frac{\varepsilon}{9})\cdot d_{G-F}(x,y)\).
We again assume \(x=u\) for simplicity. Just as before, if \(P^{\prime}\) has at most \(L\) edges, we have \(w_{H^{F}}(u,z)\leqslant\widehat{d^{\leqslant L}}(u,z,F)\leqslant 3|P^{\prime}| \leqslant 3(1+\frac{\varepsilon}{9})\,d_{G-F}(u,y)\). If \(P^{\prime}\) has more than \(L\) edges, we have to distinguish the sparse ball and dense ball subcases again. First, note that \(P[u..y]\) is a subpath of the replacement path \(P=P(u,v,F)\), so it is itself the unique replacement path \(P(u,y,F)\). Therefore, \(P^{\prime}=P[u..y]\circ P(y,z,F)\) and _all its subpaths_ are a concatenation of at most two replacement paths. Moreover, since replacement paths are \(f\)-decomposable, all subpaths of \(P^{\prime}\) are \((2f{+}1)\)-decomposable. Finally, recall that all subpaths are far away from all failures in \(F\).
For the sparse balls case, let \(a\in\{u,z\}\) be a vertex such that \(\operatorname{ball}_{G-F}(a,\lambda)\) is sparse. Consider the vertex \(a^{\prime}\) that is exactly \(\lambda\) steps away from \(a\) on the path \(P^{\prime}\). Then, \(P^{\prime}[a..a^{\prime}]\) is a concatenation of two replacement paths and has \(\lambda\) edges. We adjusted the sampling probability for \(B\) to ensure
that there is a (regular) pivot \(b^{*}\in B\cap\operatorname{ball}_{G-F}(a,\lambda)\) on \(P^{\prime}\). The subpath \(P^{\prime}[b^{*}..z]\) is \((2f{+}1)\)-decomposable and far away from all failures, so the exact same argument as in Lemma 19 gives \(w_{H^{F}}(u,z)\leqslant 3|P^{\prime}|\leqslant 3(1+\frac{\varepsilon}{9})\cdot d_{G-F}(u,y)\).
Regarding the dense ball case, the whole path \(P^{\prime}\) is \((2f{+}1)\)-decomposable. The first part of Lemma 24 together with \(L<|P^{\prime}|\) gives \(w_{H^{F}}(u,z)\leqslant 3|P^{\prime}|+\delta L<(3+\delta)|P^{\prime}|\leqslant(3+\delta)(1+\frac{\varepsilon}{9})\cdot d_{G-F}(u,y)\). In summary, we have \(w_{H^{F}}(u,z)\leqslant(3+\delta)(1+\frac{\varepsilon}{9})\cdot d_{G-F}(u,y)\) in all cases. Vertex \(z\) lies in the trapezoid associated with the path \(P=P(u,v,F)\), so \(d_{G-F}(z,v)<d_{G-F}(u,v)\). By induction, we get \(d_{H^{F}}(z,v)\leqslant(3+\varepsilon)d_{G-F}(z,v)\). Recall that our choices of \(\lambda\) and \(\delta\) imply \(\delta+(6+\delta+\varepsilon)\frac{\varepsilon}{9}\leqslant\varepsilon\). Combining all this, we arrive at
\[d_{H^{F}}(u,v) \leqslant w_{H^{F}}(u,z)+d_{H^{F}}(z,v)\leqslant(3{+}\delta) \Bigl{(}1{+}\frac{\varepsilon}{9}\Bigr{)}\,d_{G-F}(u,y)+(3{+}\varepsilon)d_{G -F}(z,v)\] \[=(3{+}\delta)\Bigl{(}1{+}\frac{\varepsilon}{9}\Bigr{)}\,d_{G-F}( u,y)+(3{+}\varepsilon)d_{G-F}(z,y)+(3{+}\varepsilon)d_{G-F}(y,v)\] \[\leqslant(3{+}\delta)\Bigl{(}1{+}\frac{\varepsilon}{9}\Bigr{)}\, d_{G-F}(u,y)+(3{+}\varepsilon)\frac{\varepsilon}{9}d_{G-F}(u,y)+(3{+} \varepsilon)d_{G-F}(y,v)\] \[=3d_{G-F}(u,y)+\delta\cdot d_{G-F}(u,y)+(6+\delta+\varepsilon) \frac{\varepsilon}{9}\cdot d_{G-F}(u,y)\,+(3{+}\varepsilon)\,d_{G-F}(y,v)\] \[\leqslant 3d_{G-F}(u,y)+\varepsilon\cdot d_{G-F}(u,y)+(3{+} \varepsilon)\,d_{G-F}(y,v)=(3{+}\varepsilon)\,d_{G-F}(u,v).\qed\]
We can now complete the proof of Theorem 1, which we restate below. It follows in the same fashion as Lemma 8 (see Section 5.5), but takes into account the changes made in this section. The main difference is the transition from \(L\) to \(\lambda\), giving an extra \(1/\varepsilon\) factor in the space and preprocessing time, and, of course, the improved query time.
**Theorem 1**.: _Let \(f\geqslant 2\) be a positive integer and \(0<\alpha<\nicefrac{{1}}{{2}}\) a constant. For any undirected, unweighted graph \(G\) with unique shortest paths and any \(\varepsilon>0\), there is a \((3{+}\varepsilon)\)-approximate \(f\)-DSO for \(G\) that takes space \(\widetilde{O}(n^{2-\frac{\alpha}{f+1}}/\varepsilon)\cdot O(\log n/\varepsilon) ^{f+1}\), has query time \(O(n^{\alpha}/\varepsilon^{2})\), and preprocessing time \(\widetilde{O}(n^{2-\frac{\alpha}{f+1}}(\frac{m}{\varepsilon}+\frac{1}{ \varepsilon^{2}}))\cdot O(\log n/\varepsilon)^{f}\)._
Proof.: The stretch of \(3+\varepsilon\) is treated in Lemma 25, requiring \(\lambda=O(\varepsilon L)\). The derivation of the other parameters of the theorem is very similar to the proof of Lemma 8 (see Section 5.5). The main difference is that now the vertices for \(B\) are sampled with probability \(\widetilde{O}(f/\lambda)=\widetilde{O}(1/\varepsilon L)\) (as opposed to \(\widetilde{O}(1/L)\) before). We again choose \(L=n^{\alpha/(f+1)}\).
The space requirement of the whole construction is dominated by the FT-trees with granularity in the dense ball case. Additionally, we have to store all the pivot sets \(B_{G_{i}}(x,\lambda)\) for graphs \(G_{i}\) of the \((L,f)\)-replacement path covering, giving an extra term of
\[\widetilde{O}\Biggl{(}\frac{nL^{2f+o(1)}}{\lambda}\Biggr{)}=\widetilde{O} \Biggl{(}\frac{nL^{2f-1+o(1)}}{\varepsilon}\Biggr{)}=\widetilde{O}\left(\frac{n ^{1+\frac{\alpha(2f-1+o(1))}{f+1}}}{\varepsilon}\right)=\frac{n^{1+\alpha\frac{2 f-1}{f+1}+o(1)}}{\varepsilon}.\]
The last transformation uses that \(\alpha\) and \(f\) are both constants. We claim that this additional term is not dominating. As discussed in Section 6.2, using the assumption \(f\geqslant 2\), we get the following space requirement for the new FT-trees,
\[\widetilde{O}\Biggl{(}\frac{n^{2}}{\lambda}\Biggr{)}\cdot O\biggl{(}\frac{ \log n}{\varepsilon}\Biggr{)}^{f+1}=\widetilde{O}\Biggl{(}\frac{n^{2-\frac{ \alpha}{f+1}}}{\varepsilon}\Biggr{)}\cdot O\biggl{(}\frac{\log n}{\varepsilon} \Biggr{)}^{f+1}\,.\]
Indeed, due to \(\alpha<\frac{1}{2}\leqslant\frac{f+1}{2f+o(1)}\), the latter part dominates.
The main effort when answering a query is computing the edge weights in the auxiliary graph \(H^{F}\) in the sparse ball case. There, we have to scan all pivots in \(B\cap\operatorname{ball}_{G-F}(x,\lambda)\). (The dense ball case only takes time \(\widetilde{O}(L^{o(1)}(\lambda+1/\varepsilon))=\widetilde{O}(\varepsilon L^{1+o(1)}+L^{o(1)}/\varepsilon)\).) Recall that \(H^{F}\) is of size \(O(f^{2})=O(1)\). The query time is
\[\widetilde{O}\!\left(\frac{L^{f+o(1)}}{\varepsilon\lambda}\right)=\widetilde{O}\!\left(\frac{L^{f-1+o(1)}}{\varepsilon^{2}}\right)=\widetilde{O}\!\left(\frac{n^{\frac{\alpha(f-1+o(1))}{f+1}}}{\varepsilon^{2}}\right)=\frac{n^{\alpha\frac{f-1}{f+1}+o(1)}}{\varepsilon^{2}}=O\!\left(\frac{n^{\alpha}}{\varepsilon^{2}}\right).\]
For the preprocessing time, we assume that we can compute an expath with granularity \(\lambda\) in time \(\widetilde{O}(fm+\lambda)=\widetilde{O}(m)\). Even though the FT-trees with granularity are much larger, we need fewer of them and it still takes longer to construct all the (regular) FT-trees. Compared to Lemma 8, we get an additional \(1/\varepsilon\) factor due to the transition from \(L\) to \(\lambda\), yielding \(\widetilde{O}((1/\varepsilon)\cdot n^{2-\frac{\alpha}{f+1}}(m+1/\varepsilon) )\cdot O(\log n/\varepsilon)^{f}=\widetilde{O}(n^{2-\frac{\alpha}{f+1}}(\frac {m}{\varepsilon}+\frac{1}{\varepsilon^{2}}))\cdot O(\log n/\varepsilon)^{f}\). Finally, it takes time \(\widetilde{O}(nL^{3f+o(1)})=n^{1+3\alpha\frac{f}{f+1}+o(1)}\) to prepare the sets \(B_{G_{i}}(x,\lambda)\). This is negligible compared to the \(n^{2-\frac{\alpha}{f+1}}\cdot\frac{m}{\varepsilon}=\Omega(n^{3-\frac{\alpha}{ f+1}})\) term.
## 7 Computing Shortest \((2f\!+\!1)\)-Expaths in \(\widetilde{O}(fm)\) Time
We finally turn to computing shortest \((2f\!+\!1)\)-expaths in \(\widetilde{O}(fm)\) time. We assume that we are given access to the all-pairs distances in the original graph \(G\). Since the latter data can be obtained in time \(\widetilde{O}(mn)\), this completes the proof of the preprocessing time in Lemma 8 and Theorem 1. It also allows us to improve the time needed to construct the (superquadratic-space) \(f\)-DSO with stretch \((1+\varepsilon)\) by Chechik et al. [19]. More precisely, the preprocessing time, which was \(O_{\varepsilon}(n^{5+o(1)})\)[19], is now reduced to \(O_{\varepsilon}(mn^{2+o(1)})\) (Theorem 4). We thus improve the complexity by a factor of \(n^{3}/m\).
In this section, we describe our algorithm to compute \((2f\!+\!1)\)-expaths in _weighted_ undirected graphs and with a sensitivity of up to \(f=o(\log n/\log\log n)\). Any edge \(\{a,b\}\in E\) carries a weight \(w(a,b)\) between \(1\) and some maximum weight \(W=\mathsf{poly}(n)\). The reason for this generalization is to fit the framework of [19]. The definition of graph distances is adjusted accordingly. This also has an effect on the definition of decomposable paths. For any non-negative integer \(\ell\), Afek et al. [2, Theorem 1] showed that in unweighted graphs after at most \(\ell\) edge failures shortest paths are the concatenation of up to \(\ell+1\) shortest paths in \(G\); if \(G\) is weighted, this changes to a concatenation of up to \(\ell+1\) shortest paths and \(\ell\) interleaving edges. We mean the latter whenever we speak of \(\ell\)-decomposable paths below. Let \(D=n\cdot W\) be an upper bound on the diameter of \(G\). An \(\ell\)-expath is now a concatenation of \(1+2\log_{2}D\) \(\ell\)-decomposable paths such that, for every \(0\leqslant i\leqslant 2\log_{2}D\), the \(i\)-th \(\ell\)-decomposable path has length at most \(\min(2^{i},2^{2\log_{2}(D)-i})\).
### Shortest \((2f\!+\!1)\)-Decomposable Paths
As a warm-up for our technique, we first describe how to compute \((2f\!+\!1)\)-decomposable paths efficiently using a modification of Dijkstra's algorithm. We later extend our approach to \((2f\!+\!1)\)-expaths, with or without granularity.
Let \(A\subseteq E\) be a set of edges in \(G\) and \(s,t\in V\) two vertices. We denote the shortest \(\ell\)-decomposable path between \(s\) and \(t\) in \(G-A\) by \(P^{(\ell)}(s,t,A)\), its length is \(d^{(\ell)}(s,t,A)\). Recall that \(d^{(\ell)}(s,t,A)\) may be larger than \(d_{G-A}(s,t)\) if \(|A|>\ell\).
**Lemma 26**.: _Given the original distances \(d_{G}(u,v)\) for all \(u,v\in V\) and an edge set \(A\subseteq E\), the distance \(d^{(2f+1)}(s,t,A)\) is computable in time \(\widetilde{O}(fm)\). Moreover, one can compute \(d^{(2f+1)}(s,v,A)\) for all vertices \(v\in V\) within the same time bound._
Proof.: We prove the lemma by induction over \(\ell\) running from \(0\) to \(2f{+}1\), computing \(d^{(\ell)}(s,v,A)\) for all targets \(v\in V\) in the \(\ell\)-th step. For the base case, note that \(d^{(0)}(s,v,A)=d_{G}(s,v)\) if the shortest \(s\)-\(v\)-path in \(G\) does not use an edge in \(A\) (that is, if it also exists in \(G-A\)); and \(d^{(0)}(s,v,A)=+\infty\) otherwise. We use a modified version of Dijkstra's algorithm in the graph \(G-A\) from the source \(s\). Let \(d^{\prime}(s,a)\) be the distance from \(s\) to some vertex \(a\) computed so far by our algorithm. During relaxation of an edge \(e=\{a,b\}\), we check if the current path is also the shortest path in \(G\) by testing if \(d^{\prime}(s,a)+w(a,b)=d_{G}(s,b)\), with the right-hand side being precomputed. If this fails, we do not decrease the key of vertex \(b\).
We now argue that \(d^{\prime}(s,v)=d^{(0)}(s,v,A)\). Note that if the shortest \(s\)-\(v\)-path in \(G\) also lies in \(G{-}A\), then all its edges are relaxed at one point and the corresponding checks in the modification succeed. Indeed, the last key of \(v\) in the priority queue (i.e., \(d^{\prime}(s,v)\)) then is \(d_{G}(s,v)=d^{(0)}(s,v,A)\). Otherwise, due to the uniqueness of shortest paths, _every_\(s\)-\(v\)-path in \(G-A\) has length larger than \(d_{G}(s,v)\). Therefore, the key of \(v\) is never decreased, it remains at \(+\infty\).
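A minimal sketch of this base case in Python, assuming unique shortest paths in \(G\), precomputed all-pairs distances `d_G`, and failed edges given as frozensets; all names are illustrative.

```python
import heapq

def base_case_distances(adj, s, A, d_G):
    """Modified Dijkstra for d^(0)(s, v, A): an edge {a, b} is only relaxed
    when the tentative value d'(s, a) + w(a, b) equals the precomputed
    distance d_G[s][b], i.e. the path found so far is a shortest path of G
    that also survives in G - A.
    adj : {v: [(u, w), ...]}, A : set of frozenset edges, d_G : dict of dicts."""
    dist = {v: float("inf") for v in adj}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, a = heapq.heappop(heap)
        if d > dist[a]:
            continue
        for b, w in adj[a]:
            if frozenset((a, b)) in A:
                continue                    # edge failed, not present in G - A
            if d + w != d_G[s][b]:
                continue                    # tentative path is not shortest in G
            if d + w < dist[b]:
                dist[b] = d + w
                heapq.heappush(heap, (d + w, b))
    return dist                             # dist[v] == d^(0)(s, v, A)
```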
For the induction step, we construct a new _directed_ graph \(G^{*}=(V^{*},E^{*})\) with \(V^{*}=\{s_{0}\}\cup V_{1}\cup V_{2}\), where \(V_{1}\) and \(V_{2}\) are two copies of \(V\). See Figure 2 for an example of this construction. For a given \(v\in V\), we denote by \(v_{1}\) and \(v_{2}\) the copies of \(v\) that are contained in \(V_{1}\) and \(V_{2}\), respectively. The set \(E^{*}\) contains the following edges.
* \((s_{0},v_{1})\) of weight \(w_{G^{*}}(s_{0},v_{1})=d^{(\ell-1)}(s,v,A)\) for every \(v\in V\) with \(d^{(\ell-1)}(s,v,A)\neq+\infty\);
* \((v_{1},v_{2})\) of weight \(w_{G^{*}}(v_{1},v_{2})=0\) for every \(v\in V\);
* \((u_{1},v_{2})\), \((v_{1},u_{2})\) both of weight \(w(u,v)\) for every edge \(\{u,v\}\in E\backslash A\);
* \((u_{2},v_{2})\), \((v_{2},u_{2})\) of weight \(w(u,v)\) for every \(\{u,v\}\in E\backslash A\).
We compute the values \(d^{\prime}(s_{0},v_{i})\) in \(G^{*}\), for \(i\in\{1,2\}\), with a similar Dijkstra modification as in the base case. The relaxation of any out-edge of \(s_{0}\) or any edge from \(V_{1}\) to \(V_{2}\) remains unchanged. For the relaxation of \(e=(a_{2},b_{2})\) in \(G^{*}[V_{2}]\), let \(w_{2}\) be the first vertex from \(V_{2}\) on the shortest path from \(s_{0}\) to \(a_{2}\) that we found. We check whether \(d^{\prime}(s_{0},a_{2})-d^{\prime}(s_{0},w_{2})+w_{G^{*}}(a_{2},b_{2})=d_{G}(w,b)\) holds in the original graph \(G\). If not, we do not decrease the key of \(b_{2}\). This makes sure that the subpath from \(w_{2}\) to \(b_{2}\) corresponds to the shortest \(w\)-\(b\)-path in \(G\) (not only in \(G-A\)). To have access to \(w_{2}\) in constant time, we store the _entry vertex_ \(w_{2}\) for each \(a_{2}\) at the time the key of \(a_{2}\) is decreased. If this happens using an outgoing edge from \(V_{1}\), then we set the entry vertex of \(a_{2}\) to \(a_{2}\) itself. Otherwise, we set it to be equal to the entry vertex of its predecessor.
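The construction of \(G^{*}\) for one induction step could be sketched as follows; the encoding of the layered vertices and the dictionary representation are assumptions for illustration, and running the entry-vertex-modified Dijkstra on the result is analogous to the base case above.

```python
def build_layered_graph(adj, A, d_prev):
    """Build the directed graph G* = ({s0} ∪ V1 ∪ V2, E*) of the induction
    step.  Vertices are encoded as ('s0',), (v, 1), and (v, 2).
    adj    : {v: [(u, w), ...]} adjacency of G
    A      : failed edges as frozensets
    d_prev : the distances d^(l-1)(s, v, A) from the previous step."""
    g = {('s0',): []}
    for v in adj:
        g[(v, 1)] = [((v, 2), 0)]                    # zero-weight edge v1 -> v2
        g[(v, 2)] = []
        if d_prev.get(v, float("inf")) < float("inf"):
            g[('s0',)].append(((v, 1), d_prev[v]))   # models an (l-1)-decomposable prefix
    for v in adj:
        for u, w in adj[v]:
            if frozenset((u, v)) in A:
                continue                             # only edges of E \ A are copied
            g[(v, 1)].append(((u, 2), w))            # cross edges V1 -> V2
            g[(v, 2)].append(((u, 2), w))            # edges inside V2 (both directions
                                                     # arise since adj lists u and v)
    return g
```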
The main part of the proof is to show that the computed distance \(d^{\prime}(s_{0},v_{2})\) is indeed \(d^{(\ell)}(s,v,A)\). This inductively implies that the algorithm eventually produces \(d^{(2f+1)}(s,v,A)\).
Figure 2: Example construction of the graph \(G^{*}\) (right) from \(G-A\) (left) for step \(\ell\) of the algorithm to compute shortest (\(2f{+}1\))-decomposable paths (Lemma 26).
Any path in \(G^{*}\) from \(s_{0}\) to some vertex \(v_{2}\in V_{2}\) has at least 3 vertices and its prefix has the form \((s_{0},u_{1},w_{2})\). By construction, we have \(w_{G^{*}}(s_{0},u_{1})=d^{(\ell-1)}(s,u,A)\), which corresponds to the shortest \((\ell-1)\)-decomposable path \(P^{(\ell-1)}(s,u,A)\) in \(G\). Next, note that there is no edge leaving \(G^{*}[V_{2}]\), so the rest of the path from \(w_{2}\) to \(v_{2}\) exclusively uses vertices from \(V_{2}\). Let \(P\) denote the corresponding path in \(G\) (meaning it uses the corresponding vertices from \(V\)). Our checks in the modification ensure that \(P\) is the shortest \(w\)-\(v\)-path in \(G\). Slightly abusing notation, let \(e=\{u,w\}\in E\backslash A\) be an edge in case \(u\neq w\); and \(e=\emptyset\) otherwise. In summary, the computed path through \(G^{*}\) corresponds to the path \(P_{\ell}=P^{(\ell-1)}(s,u,A)\circ e\circ P\) in \(G\). Since \(P^{(\ell-1)}(s,u,A)\) is \((\ell-1)\)-decomposable, and \(P\) is a shortest path, \(P_{\ell}\) is \(\ell\)-decomposable. It lies entirely in \(G-A\).
We now prove that \(P_{\ell}\) is also the _shortest_\(\ell\)-decomposable path from \(s\) to \(v\) in \(G-A\). To reach a contradiction, let \(Q_{\ell}\) be a strictly shorter \(\ell\)-decomposable \(s\)-\(v\)-path. \(Q_{\ell}\) is not \((\ell-1)\)-decomposable since otherwise the two-edge path \((s_{0},v_{1},v_{2})\) in \(G^{*}\) of length \(|Q_{\ell}|\) would be strictly shorter than \(P_{\ell}\). Our algorithm would have found that path instead (even without any modifications).
There exists a decomposition \(Q_{\ell}=Q_{\ell-1}\circ e^{\prime}\circ Q\), where \(Q_{\ell-1}\) is an \((\ell-1)\)-decomposable path in \(G-A\) ending in some vertex \(a\), \(e^{\prime}\) is either empty or a single edge \(\{a,b\}\in E\backslash A\), and \(Q\) is a shortest path in \(G\) from \(b\) to \(v\) that only uses edges from \(E\backslash A\). Let \(Q=(b,x^{(2)},\ldots,x^{(i)},v)\). Since \(d^{(\ell-1)}(s,a,A)\leqslant|Q_{\ell-1}|\), the path \((s_{0},a_{1},b_{2},x_{2}^{(2)},\ldots,x_{2}^{(i)},v_{2})\) through \(G^{*}\) has length at most \(|Q_{\ell}|<|P_{\ell}|\). It would have been preferred by our algorithm, a contradiction.
Concerning the runtime, we have \(O(f)\) steps. In each of them, we build the graph \(G^{*}=(V^{*},E^{*})\) with \(O(n)\) vertices and \(O(m)\) edges and run (the modified) Dijkstra's algorithm. It requires \(\widetilde{O}(m)\) time, so the overall time of our algorithm is \(\widetilde{O}(fm)\).
Unrolling the inductive computation of \((2f+1)\)-decomposable distances discussed above leads to a graph of \(2f+2\) layers as follows. As all the edges from \(s_{0}\) to \(v_{1}\in V_{1}\) in the graph \(G^{*}\) built for computing \(\ell\)-decomposable distances model \((\ell-1)\)-decomposable paths, we could substitute all these edges by the construction graph used to compute the \((\ell-1)\)-decomposable paths and proceed recursively. In this way, we obtain an _exploded graph_ with a source \(s_{0}\) and \(2f+2\) additional layers \(V_{0},V_{1},\ldots,V_{2f+1}\), where layer \(V_{\ell}\) models the graph \(G-A\) as above and is used to compute shortest \(\ell\)-decomposable distances. Figure 3 gives an overview.
Running the modified version of Dijkstra's algorithm in the exploded graph guarantees that subpaths entirely contained in one layer are also shortest paths in \(G\) (while only using edges from \(E\backslash A\)). The decomposable distances may be computed out of order; for example, we may compute the value \(d^{(\ell)}(s,v,A)\) before \(d^{(\ell-1)}(s,u,A)\) provided that \(d^{(\ell)}(s,v,A)\leqslant d^{(\ell-1)}(s,u,A)\). Notwithstanding, for each vertex \(v_{\ell}\in V_{\ell}\), the computed distance is \(d^{(\ell)}(s,v,A)\).
Figure 3: The exploded graph for computing shortest \(\ell\)-decomposable paths.

We can further modify Dijkstra's algorithm to limit the length of partial paths to some upper bound \(\delta_{i}\), by only allowing edges to be relaxed that do not increase the length of a subpath above that threshold. For this, we need to store information about the start of a subpath, e.g., the entry point into the \(i\)-th layer, for each of its descendant nodes. This can be propagated during edge relaxation as we did with \(w_{2}\) in the proof above. In the next subsection, we formally describe and combine these two ideas, exploded graphs and length restrictions, to compute \(\ell\)-expaths efficiently.
### Shortest \((2f{+}1)\)-Expaths
We now show how to compute the shortest \((2f{+}1)\)-expath in time \(\widetilde{O}(fm\log(nW))\), given access to all-pairs distances in \(G\). We define an \(i\)-partial \(\ell\)-expath as follows.
**Definition 27** (\(i\)-partial \(\ell\)-expath).: Let \(A\subseteq E\) be a set of edges and \(i,\ell\) non-negative integers. An \(i\)_-partial \(\ell\)-expath_ in \(G-A\) is a path that is a concatenation of \(i+1\) many \(\ell\)-decomposable paths in \(G-A\) such that, for every \(0\leqslant j\leqslant i\), the length of the \(j\)-th \(\ell\)-decomposable path is at most \(\delta_{j}=\min(2^{j},2^{2\log_{2}(nW)-j})\).
We write \(P^{i}=P_{0}\circ P_{1}\circ\ldots\circ P_{i}\) to refer to the constituting subpaths of an \(i\)-partial \(\ell\)-expath \(P^{i}\), i.e., \(P_{j}\) is an \(\ell\)-decomposable path and \(|P_{j}|\leqslant\delta_{j}\). Clearly, the definition of a \((2\log_{2}(nW))\)-partial \(\ell\)-expath coincides with that of an ordinary \(\ell\)-expath.
To compute the shortest \(\ell\)-expath from \(s\) to \(t\) in \(G-A\), we execute \(2\log_{2}(nW)\) phases. In the \(i\)-th phase, we assume that we already have the shortest \((i{-}1)\)-partial \(\ell\)-expath \(P^{i-1}\) from \(s\) to \(v\) in \(G-A\) for each \(v\in V\) and we extend it to the shortest \(i\)-partial \(\ell\)-expath \(P^{i}=P^{i-1}\circ P_{i}\) from \(s\) to \(v\) in \(G-A\), again, for each \(v\in V\).
Let \(d^{i,\ell}(s,v,A)\) denote the length of the shortest \(i\)-partial \(\ell\)-expath from \(s\) to \(v\) in \(G-A\). We define a directed graph \(G_{i}\) as follows. We set \(G_{i}=(V_{i},E_{i})\) with \(V_{i}=\{s^{*}\}\cup\bigcup_{j=0}^{\ell}V_{i,j}\), where all the \(V_{i,j}\) are copies of \(V\). In the remaining description, for every \(v\in V\) and every \(0\leqslant j\leqslant\ell\), we denote by \(v_{j}\) the copy of \(v\) contained in \(V_{i,j}\). The graph \(G_{i}\) has the following edges \(E_{i}\).
* \((s^{*},v_{0})\) of weight \(w_{G_{i}}(s^{*},v_{0})=d^{i-1,\ell}(s,v,A)\) for every \(v\in V\) with \(d^{i-1,\ell}(s,v,A)\neq+\infty\);
* \((v_{j-1},v_{j})\) of weight \(w_{G_{i}}(v_{j-1},v_{j})=0\) for every \(v\in V\) and every \(1\leqslant j\leqslant\ell\);
* \((u_{j-1},v_{j}),(v_{j-1},u_{j})\) both of weight \(w(u,v)\) for every edge \(\{u,v\}\in E\backslash A\) and every \(1\leqslant j\leqslant\ell\);
* \((u_{j},v_{j}),(v_{j},u_{j})\) of weight \(w(u,v)\) for every \(\{u,v\}\in E\backslash A\) and every \(0\leqslant j\leqslant\ell\).
Figure 4: Example construction of graph \(G_{i}\) (right) from \(G\) (left) for the algorithm to compute shortest \(i\)-partial \(\ell\)-expaths (Lemma 28). Each layer \(V_{i,j}\) is connected to its neighboring layers in the same way as the first two layers. If the red path is the shortest path from \(s_{0}\) to \(a_{\ell}\), then the entry nodes for \(a_{\ell}\) are \(w_{a_{\ell}}=b_{0}\) and \(x_{a_{\ell}}=a_{\ell}\).
The construction is visualized in Figure 4. Regarding the weights of the out-edges of \(s^{*}\) in the first graph \(G_{0}\), note that \(d^{-1,\ell}(s,v,A)\), by definition, is \(1\) if \(\{s,v\}\in E\backslash A\); and \(+\infty\) otherwise. Let \(d^{\prime}(s^{*},v_{j})\) be the values we compute in \(G_{i}\) by running Dijkstra's algorithm from the source \(s^{*}\) with the following modifications.
1. For each vertex \(v_{j}\in V_{i}\), we store _entry vertices_\(w_{v_{j}}\) and \(x_{v_{j}}\).
2. If DecreaseKey is called on \(v_{0}\in V_{i,0}\) upon relaxation of an edge \((s^{*},v_{0})\), \(w_{v_{0}}\) is set to \(v_{0}\).
3. If DecreaseKey is called on a vertex \(v_{j}\in V_{i,j}\) upon relaxation of an edge \((u_{j-1},v_{j})\) from the previous layer, then \(x_{v_{j}}\) is set to \(v_{j}\).
4. For all other calls of DecreaseKey, when relaxing edge \((u_{j},v_{j})\), set \(w_{v_{j}}=w_{u_{j}}\) and \(x_{v_{j}}=x_{u_{j}}\).
5. To relax an edge \((u_{j},v_{j})\), we require \(d^{\prime}(s^{*},u_{j})-d^{\prime}(s^{*},x_{u_{j}})+w_{G_{i}}(u_{j},v_{j})=d_{G}(x,v)\), where \(x\in V\) is the vertex corresponding to \(x_{u_{j}}\).
6. To relax an edge \((u_{j^{\prime}},v_{j})\), \(j^{\prime}\in\{j-1,j\}\), we require \(d^{\prime}(s^{*},u_{j^{\prime}})-d^{\prime}(s^{*},w_{u_{j^{\prime}}})+w(u_{j^{ \prime}},v_{j})\leqslant\delta_{i}\).
Finally, we set \(d^{i,\ell}(s,v,A)=d^{\prime}(s^{*},v_{\ell})\) for each \(v\in V\).
The modifications are such that \(w_{v_{j}}\) marks the entry point of the current shortest path from \(s^{*}\) to \(v_{j}\) into the graph induced by \(V_{i}\backslash\{s^{*}\}\), while \(x_{v_{j}}\) marks the entry point into the layer \(V_{i,j}\). Modification 5 further ensures that a path entirely contained within one layer corresponds to a shortest path in \(G\). This implies that a shortest path from \(s^{*}\) to a vertex \(v_{\ell}\in V_{i,\ell}\) corresponds to a composition of an \((i{-}1)\)-partial \(\ell\)-expath, via the edge \((s^{*},w_{v_{\ell}})\), and an \(\ell\)-decomposable path. Modification 6 enforces that the \(\ell\)-decomposable path we append has length bounded by \(\delta_{i}\).
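To make the procedure concrete, here is a minimal Python sketch of one phase. It assumes the graph is given as a nested dict, the failed edges \(A\) as a set of frozensets, and all-pairs distances of the original graph as a dict of dicts; the on-the-fly edge generation, the treatment of the entry vertices in layer 0, and the propagation of \(w\) across layer transitions are illustrative choices, not the authors' implementation.

```python
import heapq
from itertools import count
from math import inf

def partial_expath_phase(G, A, d_G, d_prev, ell, delta_i):
    """One phase of the shortest-partial-expath computation (sketch of Lemma 28).

    G       : dict {u: {v: weight}}, undirected graph (both directions stored)
    A       : set of frozenset({u, v}), the failed edges to avoid
    d_G     : dict of dicts with all-pairs distances in the ORIGINAL graph G
    d_prev  : dict {v: d^(i-1, ell)(s, v, A)}, the output of the previous phase
    ell     : highest layer index (layers 0, ..., ell)
    delta_i : length bound for the ell-decomposable path appended in this phase
    Returns {v: d^(i, ell)(s, v, A)}, read off the last layer.
    """
    SRC = ('s*',)                            # artificial source s*
    dist, w_ent, x_ent = {SRC: 0}, {}, {}
    done, tie = set(), count()
    pq = [(0, next(tie), SRC)]

    def out_edges(node):                     # edges of G_i, generated on the fly
        if node == SRC:                      # (s*, v_0) weighted by the previous phase
            for v, d in d_prev.items():
                if d < inf:
                    yield (v, 0), d
            return
        u, j = node
        if j < ell:
            yield (u, j + 1), 0              # zero-weight edge to the next layer
        for v, w in G[u].items():
            if frozenset((u, v)) in A:
                continue
            yield (v, j), w                  # edge inside layer j
            if j < ell:
                yield (v, j + 1), w          # edge into layer j + 1

    while pq:
        d_u, _, u_node = heapq.heappop(pq)
        if u_node in done:
            continue
        done.add(u_node)
        for v_node, w in out_edges(u_node):
            cand = d_u + w
            if cand >= dist.get(v_node, inf):
                continue
            if u_node == SRC:                # Modification 2 (x is set too, an assumption)
                new_w = new_x = v_node
            else:
                # Modification 6: the current ell-decomposable subpath stays <= delta_i
                if d_u - dist[w_ent[u_node]] + w > delta_i:
                    continue
                if u_node[1] == v_node[1]:   # within-layer edge
                    # Modification 5: within a layer only shortest paths of G are followed
                    x = x_ent[u_node][0]
                    if d_u - dist[x_ent[u_node]] + w != d_G[x][v_node[0]]:
                        continue
                    new_w, new_x = w_ent[u_node], x_ent[u_node]      # Modification 4
                else:                        # layer transition, Modification 3
                    new_w, new_x = w_ent[u_node], v_node
            dist[v_node], w_ent[v_node], x_ent[v_node] = cand, new_w, new_x
            heapq.heappush(pq, (cand, next(tie), v_node))

    return {v: dist.get((v, ell), inf) for v in G}
```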
**Lemma 28**.: _Given the original distances \(d_{G}(u,v)\) for all \(u,v\in V\), a set \(A\subseteq E\), and the distances \(d^{i-1,\ell}(s,v,A)\) for all \(v\in V\), the distances \(d^{i,\ell}(s,v,A)\) for all \(v\) are computable in total time \(\widetilde{O}(\ell m)\)._
Proof.: First, we show that, for an arbitrary vertex \(t\in V\) with \(d^{\prime}(s^{*},t_{\ell})\neq+\infty\), there exists an \(i\)-partial \(\ell\)-expath in \(G-A\) of length \(d^{\prime}(s^{*},t_{\ell})\). Consider the \(s^{*}\)-\(t_{\ell}\)-path \(Q\) through \(G_{i}\) computed by our algorithm. For each layer \(j\), let \(P_{i,j}\) be the path in \(G\) corresponding to the subpath of \(Q\) within \(V_{i,j}\). That means, for every \((u_{j},v_{j})\in E(Q)\), \(P_{i,j}\) contains the edge \(\{u,v\}\). Due to Modification 5, we only relax an edge \((u_{j},v_{j})\) if the length of the subpath from the current entry vertex \(x_{u_{j}}\) into the \(j\)-th layer, corresponding to some \(x\in V\), to \(v_{j}\) is equal to \(d_{G}(x,v)\). Thus, each \(P_{i,j}\) is a shortest path in \(G\).
Recall that we use the symbol \(P_{i}\) for the \((i{+}1)\)-st constituting \(\ell\)-decomposable subpath of the \(i\)-partial \(\ell\)-expath \(P^{i}\) we aim to construct. We define \(P_{i}\) by interleaving all the paths \(P_{i,j}\) with the edges corresponding to the layer transitions. In more detail, let \((u_{j},v_{j+1})\) be the edge leaving \(V_{i,j}\). Note that \(Q\) never returns to \(V_{i,j}\). We add the corresponding edge \(\{u,v\}\) between \(P_{i,j}\) and \(P_{i,j+1}\) if \(u\neq v\); otherwise, we concatenate \(P_{i,j}\) and \(P_{i,j+1}\) directly. Since \(P_{i}\) consists of \(\ell+1\) shortest paths in \(G\) possibly interleaved with single edges, \(P_{i}\) is indeed an \(\ell\)-decomposable path. By the definition of \(E_{i}\), path \(P_{i}\) exclusively uses edges from \(E\backslash A\).
The path \(P_{i}\) starts in the vertex \(w\in V\) corresponding to \(w_{t_{\ell}}\in V_{i,0}\), whence the length of \(P_{i}\) is exactly \(d^{\prime}(s^{*},t_{\ell})-d^{\prime}(s^{*},w_{t_{\ell}})\) as our transformation preserves edge weights and the edges \((v_{j-1},v_{j})\) between vertices corresponding to the same \(v\) have weight \(0\) in \(G_{i}\). By Modification 6, the length of \(P_{i}\) is bounded by \(\delta_{i}\) as otherwise the last edge would not have been relaxed. Let \(P^{i-1}\) be the \((i{-}1)\)-partial \(\ell\)-expath corresponding to the edge \((s^{*},w_{t_{\ell}})\) with length \(d^{i-1,\ell}(s,w,A)\). In summary, \(P^{i}=P^{i-1}\circ P_{i}\) is an \(i\)-partial \(\ell\)-expath that has length \(|Q|=d^{\prime}(s^{*},t_{\ell})\).
It remains to prove that \(P^{i}\) is the shortest such path in \(G-A\). Assume there is a shorter \(i\)-partial \(\ell\)-expath \(P^{\prime}=P_{0}^{\prime}\circ\ldots\circ P_{i}^{\prime}\). Let \(x^{\prime}\) be the first vertex of \(P_{i}^{\prime}\); then \(w_{G_{i}}(s^{*},x^{\prime}_{0})=d^{i-1,\ell}(s,x^{\prime},A)\leqslant w(P_{0}^{\prime}\circ\ldots\circ P_{i-1}^{\prime})\) as \(P_{0}^{\prime}\circ\ldots\circ P_{i-1}^{\prime}\) is an \((i{-}1)\)-partial \(\ell\)-expath from \(s\) to \(x^{\prime}\) in \(G-A\). Also, \(P_{i}^{\prime}\) is an \(\ell\)-decomposable path from \(x^{\prime}\) to \(t\) of length \(|P_{i}^{\prime}|\leqslant\delta_{i}\).
Let \(Q_{i}^{\prime}\) be the corresponding path through \(G_{i}\) from \(x_{0}^{\prime}\) to \(t_{\ell}\). Then the path \(Q^{\prime}=(s^{*},x_{0}^{\prime})\circ Q_{i}^{\prime}\) is an \(s^{*}\)-\(t_{\ell}\)-path in \(G_{i}\) that is shorter than the path \(Q\) that our algorithm found. Since Dijkstra's (original) algorithm is correct and \(Q_{i}^{\prime}\) has length \(|P_{i}^{\prime}|\leqslant\delta_{i}\), this can only happen due to Modification 5. During the computation, some edge \((u_{j},v_{j})\in E(Q_{i}^{\prime})\) satisfies \(d^{\prime}(s^{*},u_{j})-d^{\prime}(s^{*},x_{u_{j}})+w_{G_{i}}(u_{j},v_{j})>d_{G}(x,v)\), where \(x\) is the vertex corresponding to \(x_{u_{j}}\). Thus, the subpath of \(Q_{i}^{\prime}\) between \(x_{u_{j}}\) and \(v_{j}\) is entirely contained in \(V_{i,j}\) but not a shortest path in \(G\). This is a contradiction to \(P_{i}^{\prime}\) being \(\ell\)-decomposable. Since \(G_{i}\) has \(O(\ell n)\) vertices and \(O(\ell m)\) edges, the \(i\)-th phase of our modified algorithm runs in time \(\widetilde{O}(\ell m)\).
If desired, the \(i\)-partial \(\ell\)-expath computed by this procedure may be reconstructed by storing the parent of the relaxed vertex whenever DecreaseKey is called. Additionally, we can label the start and endpoints of the \(\ell\)-decomposable paths as well as the shortest paths within them, by inserting labels for the first vertex after \(s^{*}\) and when an edge transitions from one layer to the next.
Lemma 28 implies the following result that we frequently referenced in Sections 5 and 6.
Footnote 13: Recall that in Sections 5 and 6 the maximum weight is \(W=1\), whence the running time simplifies to \(\widetilde{O}(fm)\), and further to \(\widetilde{O}(m)\) as \(f\) is assumed to be constant.
**Corollary 29**.: _Given two vertices \(s,t\in V\), the original distances \(d_{G}(u,v)\) for all \(u,v\in V\), and a set of edges \(A\), the shortest \((2f{+}1)\)-expath from \(s\) to \(t\) in \(G-A\) is computable in time \(\widetilde{O}(fm\log(nW))\)._
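The phase loop behind this corollary can be sketched on top of `partial_expath_phase` from above. The phase count and the initialization of \(d^{-1,\ell}\) follow the description in the previous subsection; the function name and the convention `W=1` as a default are illustrative assumptions.

```python
from math import ceil, inf, log2

def shortest_expath_lengths(G, A, d_G, s, f, W=1):
    """Sketch of the phase loop behind Corollary 29: lengths of shortest (2f+1)-expaths
    from s in G - A, using `partial_expath_phase` from the previous sketch."""
    ell = 2 * f + 1
    phases = 2 * ceil(log2(len(G) * W))
    # d^(-1, ell)(s, v, A) as defined above: 1 for edges {s, v} in E \ A, +inf otherwise
    d_prev = {v: 1 if v in G[s] and frozenset((s, v)) not in A else inf for v in G}
    for i in range(phases + 1):              # phases i = 0, ..., 2*log2(nW)
        delta_i = min(2 ** i, 2 ** (phases - i))
        d_prev = partial_expath_phase(G, A, d_G, d_prev, ell, delta_i)
    return d_prev                            # d_prev[t] = length of the shortest (2f+1)-expath from s to t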
### Expaths with Granularity
It is also not hard to extend the efficient expath computation to positive granularity \(\lambda\) (Definition 21). The difference is that the path now may have a pre- and suffix of up to \(\lambda\) edges each. Recall that \(d_{G-A}^{\leq\lambda}(u,v)\) is the minimum length of paths between vertices \(u,v\) in \(G-A\) that have at most \(\lambda\) edges; or \(+\infty\) if no such path exists. We first prepare the distances \(d_{G-A}^{\leq\lambda}(s,v)\) and \(d_{G-A}^{\leq\lambda}(v,t)\) for all \(v\in V\) by running Dijkstra's algorithm from \(s\) and from \(t\), respectively. This takes time \(\widetilde{O}(m)\) since \(\lambda\leqslant n\), whence it does not affect the total computation time.
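For concreteness, one simple way to obtain the hop-bounded distances \(d_{G-A}^{\leq\lambda}\) is a \(\lambda\)-round Bellman-Ford pass, sketched below. This is only meant to make the quantity explicit; its \(O(\lambda m)\) running time is not the Dijkstra-based routine referred to in the text, and the function name is an assumption.

```python
from math import inf

def bounded_hop_distances(G, A, s, lam):
    """Illustrative sketch: d^{<=lam}(s, v) in G - A via lam rounds of Bellman-Ford."""
    dist = {v: inf for v in G}
    dist[s] = 0
    for _ in range(lam):                       # allow at most `lam` edges per path
        new_dist = dict(dist)
        for u in G:
            if dist[u] == inf:
                continue
            for v, w in G[u].items():
                if frozenset((u, v)) in A:
                    continue
                if dist[u] + w < new_dist[v]:
                    new_dist[v] = dist[u] + w
        dist = new_dist
    return dist
```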
Let \(G_{0},\ldots,G_{2\log n}\) be the graphs defined above, where \(G_{i}\) is used to compute the \(i\)-partial \(\ell\)-expath. We incorporate the prefix of \(\lambda\) edges in the Dijkstra run from \(s^{*}\) in \(G_{0}\). We set the weight of the edges \((s^{*},v_{0})\) for every \(v\in V\) to \(d_{G-A}^{\leq\lambda}(s,v)\) (and omit the edge in case \(d_{G-A}^{\leq\lambda}(s,v)=+\infty\)). For the suffix, we add a final node \(t^{*}\) after the last layer of the last graph \(G_{2\log n}\). The weight of the edges \((v_{\ell},t^{*})\) is \(d_{G-A}^{\leq\lambda}(v,t)\).
It is not difficult to see that the same algorithm for computing shortest \((2f{+}1)\)-expaths (without granularity) from Section 7.2 with these adaptations now computes shortest \((2f{+}1)\)-expaths with granularity \(\lambda\) in \(G-A\).
### Improved Preprocessing of the Distance Sensitivity Oracle of Chechik, Cohen, Fiat, and Kaplan
We now plug our expath computation into the preprocessing algorithm of the \(f\)-DSO in [19]. The sensitivity \(f\) can grow to \(o(\log n/\log\log n)\) in their setting and the underlying graph \(G\) is weighted with a polynomial maximum weight \(W=\mathsf{poly}(n)\). They use fault-tolerant trees \(FT(u,v)\) for all pairs of vertices \(u,v\in V\) incurring super-quadratic space. Every node \(\nu\) in an FT-tree is associated with a specific subgraph \(G_{\nu}\subseteq G\). To obtain the expath \(P_{\nu}\) from \(u\) to \(v\), all-pairs shortest paths in \(G_{\nu}\) are computed and then assembled in time \(\widetilde{O}(fn^{3}+n^{2}\log(nW)+n\log(nW)\log\log(nW))=\widetilde{O}(fn^{3})\) per node. With \(O(n^{2})\) FT-trees having \(O(\log(nW)/\varepsilon)^{f}\) nodes each, this makes for a preprocessing time of \(\widetilde{O}(fn^{5})\cdot O(\log(nW)/\varepsilon)^{f}=O(1/\varepsilon^{f})\cdot n^{5+o(1)}\).
We have shown above that APSP is only needed in the original graph \(G\) to obtain the expaths in all relevant subgraphs, taking only \(\widetilde{O}(mn)\) time. Our algorithm for expaths then reduces the time to construct one node of an FT-tree to \(\widetilde{O}(fm)\). In total, we obtain a preprocessing time of \(\widetilde{O}(mn)+\widetilde{O}(fmn^{2})\cdot O(\log(nW)/\varepsilon)^{f}=O(1 /\varepsilon^{f})\cdot mn^{2+o(1)}\). The stretch of \(1+\varepsilon\), space \(\widetilde{O}(fn^{2})\cdot O(\log(nW)/\varepsilon)^{f}\), and query time \(O(f^{5}\log n)\) of the DSO remain the same as in [19, Theorem 3.2]. This proves Theorem 4.
## Acknowledgements
The authors thank Merav Parter for raising the question of designing distance sensitivity oracles that require only subquadratic space.
This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement No. 803118 "The Power of Randomization in Uncertain Environments (UncertainENV)" and grant agreement No. 101019564 "The Design and Evaluation of Modern Fully Dynamic Data Structures (MoDynStruct)".
|
2301.03473
|
Correlated carrier dynamics in a superconducting van der Waals
heterostructure
|
The study of Berezinskii-Kosterlitz-Thouless transitions in clean, layered
two-dimensional superconductors promises to provide insight into a host of
novel phenomena like re-entrant vortex-dynamics, underlying unconventional
metallic phases, and topological superconductivity. In this letter, we report
the study of charge carrier dynamics in a novel 2-dimensional superconducting
van der Waals heterostructure comprising monolayer MoS2 and few-layer NbSe2 (15
nm). Using low-frequency conductance fluctuation spectroscopy, we show that the
superconducting transition in the system is percolative. We present a
phenomenological picture of different phases across the transition correlating
with the evaluated noise. The analysis of the higher-order statistics of
fluctuation reveals non-Gaussian components around the transition indicative of
long-range correlation in the system.
|
Prakiran Baidya, Vivas Bagwe, Pratap Raychaudhuri, Aveek Bid
|
2023-01-09T16:08:37Z
|
http://arxiv.org/abs/2301.03473v1
|
# Correlated carrier dynamics in a superconducting van der Waals heterostructure
###### Abstract
Study of Berezinskii-Kosterlitz-Thouless transitions in clean, layered two-dimensional superconductors promises to provide insight into a host of novel phenomena like re-entrant vortex-dynamics, underlying unconventional metallic phases, and topological superconductivity. In this letter, we report the study of charge carrier dynamics in a novel 2-dimensional superconducting van der Waals heterostructure comprising monolayer MoS\({}_{2}\) and few-layer NbSe\({}_{2}\) (\(\sim\) 15 nm). Using low-frequency conductance fluctuation spectroscopy, we show that the superconducting transition in the system is percolative. We present a phenomenological picture of different phases across the transition correlating with the evaluated noise. The analysis of the higher-order statistics of fluctuation reveals non-Gaussian components around the transition indicative of long-range correlation in the system.
## I Introduction
With the practical realization of graphene [1], the past decade has seen an extensive exploration of layered systems. The van der Waals heterostacking of these crystalline layered materials promises to exhibit parameter-driven exotic phenomena including topologically non-trivial states [2; 3; 4; 5] and strongly correlated phases [6; 7]. An example is atomically thin superconductors in the 'true' 2-dimensional (2D) limit. Contrary to the 3-dimensional superconductors, for a superconductor in the 2D limit, the transition occurs through the Berezinskii-Kosterlitz-Thouless (BKT) mechanism [8; 9]. Below a characteristic critical temperature \(T_{BKT}\), the vortices and antivortices are bound - the thermal unbinding of these pairs at \(T>T_{BKT}\) gives rise to the transition from the dissipationless to a finite resistive state in the system.
There are two distinct experimental strategies employed to identify a BKT transition as one approaches it from above - (i) measurement of the superfluid density \(n_{S}\) which is expected to go to zero discontinuously at the transition [11; 31] and (ii) extrapolation of the temperature dependence of the resistivity measured at \(T>T_{BKT}\) to lower \(T\) using the formalism developed by Halperin and Nelson [12]. The first technique of looking for discontinuity in the superfluid density as a signature of BKT physics does not work well for superconductors buried inside heterostructures. The second method fails for disordered superconductors due to two reasons: (i) the presence of impurities tends to broaden the transition [13; 14; 15] and (ii) the inhomogeneities change the value of the vortex-core energy from that predicted within the 2D XY model [16].
The study of carrier dynamics through resistance fluctuation spectroscopy has emerged as a powerful alternative probe to identify BKT transitions [17; 14]. Though this technique has been employed to probe the BKT physics in thin-film superconductors, the study of fluctuation statistics is not well explored in layered systems, specifically in van der Waals heterostructures. With increasing interest in such systems as platforms for realizing low dimensional superconductivity in the clean limit and topological superconductivity, there is an urgent need for a detailed study of such systems.
In a previous study, we have reported the observation of two-dimensional Ising superconductivity in a van der Waals heterostructure comprising single-layer MoS\({}_{2}\) (SL-MoS\({}_{2}\)) and bulk NbSe\({}_{2}\)[18]. We established that the reduced dimensionality comes from an effective thinning of NbSe\({}_{2}\) due to the coupling with the MoS\({}_{2}\) layer, making it a perfect example of a 'buried' van der Waals superconductor. Thus, the conductance fluctuation spectroscopy technique becomes very relevant to probe the BKT physics in this system.
In this letter, we report a detailed study of the carrier dynamics of this heterostructure through low-frequency conductance fluctuation spectroscopy around the BKT transition. Through systematic measurements, we establish that superconductivity has a percolative nature. We also find proof of correlated dynamics arising from long-range interaction of
vortices-antivortices near \(T_{BKT}\), establishing the universal BKT nature of the superconducting transition in this system.
## II Device fabrication
To fabricate the device, we mechanically exfoliated single-layer flakes of MoS\({}_{2}\) from a bulk crystal [18]. The thickness of the flake was confirmed from optical contrast and through Raman spectroscopy. The flake was then transferred onto gold contact probes pre-patterned on hBN substrates. Subsequently, a flake of NbSe\({}_{2}\) exfoliated inside a glove box (with oxygen and moisture levels maintained at less than one ppm) was transferred on top of the MoS\({}_{2}\) flake. The thickness of the NbSe\({}_{2}\) flake was estimated from its optical contrast to be \(\sim 15\) nm. Before extraction from inside the glove box, the heterostack of SL-MoS\({}_{2}\)/NbSe\({}_{2}\) was covered by a hBN flake of thickness \(\sim 30\) nm to protect it from environmental degradation. Subsequently, the stack was annealed at \(200^{\circ}\) C to increase the coupling between the layers.
## III Results and discussion
For the initial characterization of the superconducting properties of the heterostructures, electrical transport measurements were done using a DC current source and a nanovoltmeter in a four-probe configuration (Fig. 1(a)). The temperature dependence of resistance \(R\) shows a metallic behavior at high temperatures followed by a transition to a superconducting state (Fig. 1(c)) at \(T_{C}^{0}=6\) K. From the \(R\) versus \(T\) plot and from the non-linear current-voltage relations, we estimate \(T_{BKT}\) to be \(6.1\pm 0.1\) K (see Supplementary Information S1 for details; see also Ref. [18]).
We investigated the fluctuation statistics of the system around \(T_{BKT}\) using a 4-probe resistance fluctuation spectroscopy technique [20; 21; 35] - the details are discussed in Supplementary Information S2. Briefly, the device was current biased, and the voltage developed across it was pre-amplified and detected by a dual-phase lock-in amplifier (LIA). The demodulated output of the LIA was digitized at a sampling rate of 1024 points/s using a 16-bit analog-to-digital conversion card and transferred to the computer memory for further processing. The biasing current was always maintained at a value much smaller than the critical current of the superconductor. The acquired time series of resistance fluctuations were decimated and filtered digitally to eliminate aliasing and related digital artifacts. The power spectral density of the resistance fluctuations \(S_{R}\left(f\right)\) was then calculated over the frequency range 4 mHz-4 Hz.
Fig. 2(a) is a plot of the time traces of resistance fluctuations for our device measured at a few representative temperatures, \(T\). The fluctuations increase in amplitude with \(T\) approaching \(T_{BKT}\) from above. The corresponding \(S_{R}\left(f\right)\) were found to have a frequency dependence of the form \(S_{R}\left(f\right)\propto 1/f^{\alpha}\) (Fig. 2(b)). One can see that the power spectral density \(S_{R}\left(f\right)\) increases by several orders of magnitude with decreasing temperature, reflecting the increased resistance fluctuations observed in Fig. 2(a). Additionally, the value of the exponent \(\alpha\) for \(f<0.5\) Hz (the method of evaluating the exponent is discussed in Supplementary Information S3) increases monotonically from \(\sim 1\) at higher temperatures to \(\sim 2.4\) near \(T_{BKT}\) (Fig. 3(a)). There can be two possible reasons for this increase in \(\alpha\) - (i) a transition of the system across different vortex phases, e.g., from an ordered to a disordered regime [22] or (ii) fluctuations in the domain parameter of different phases across the transition range [23]. Discriminating between these two scenarios requires further analysis and is beyond the scope of the current letter.
The relative variance of resistance fluctuations (we refer to this as the noise) was evaluated by integrating \(S_{R}\left(f\right)\) over the bandwidth of measurement [20; 21; 35]:
\[\mathcal{R}=\frac{\left\langle\delta R^{2}\right\rangle}{\left\langle R^{2} \right\rangle}=\frac{1}{R^{2}}\int S_{R}(f)df. \tag{1}\]
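Numerically, Eq. (1) amounts to a single quadrature of the measured PSD. The following minimal sketch assumes `f` and `S_R` hold the frequency axis and background-corrected PSD, and `R_mean` the mean resistance at that temperature; the function and variable names are illustrative.

```python
import numpy as np

def relative_variance(f, S_R, R_mean):
    """Eq. (1): relative variance <dR^2>/<R^2>, obtained by integrating the PSD S_R(f)
    (ohm^2/Hz) over the measurement bandwidth and normalizing by the mean resistance."""
    return np.trapz(S_R, f) / R_mean**2
```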
Fig. 3(b) shows the plots of relative variance and the normalized resistance against \(T/T_{critical}\) (\(T/T_{BKT}\)) for the heterostructure region. We observe that \(\mathcal{R}\) in the normal state is \(\sim 10^{-10}\). This value is almost five orders of magnitude lower than that reported for a typical semiconducting TMD [24] attesting to the high quality of our heterostructure.
With decreasing \(T\), \(\mathcal{R}\) increases by nearly six orders of magnitude over a very narrow temperature window near \(T_{BKT}\). As we discuss later, this divergence in noise can be well explained in terms of a percolation network model of superconducting fluctuations [14]. Moreover, according to the percolation model, in the transition regime the system should follow the relation \(S_{R}\left(f\right)/R^{2}\propto R^{-l_{rs}}\), where \(l_{rs}\) is the percolation exponent, which takes the value \(\sim 0.9\) in the classical picture [36]. In our case, the exponent \(l_{rs}\) comes out to be \(0.89\pm 0.03\) (see Supplementary Information S4), establishing the percolative nature of the system. Notably, when compared with that of the pristine NbSe\({}_{2}\), the noise in the heterostructure is almost an order of magnitude higher around the respective transition temperatures (cf. Fig. 3(b)).
Before we proceed further, the effect of thermal fluctuations on the measured noise needs to be considered. \(dR/dT\) diverges close to the critical temperature for a superconductor. Consequently, any minor fluctuation in temperature can give rise to large resistance fluctuations near \(T_{BKT}\). To eliminate this trivial effect as the origin of the large resistance fluctuations seen in our device, we evaluated the relative contribution of temperature fluctuations to the noise using the relation \(\left[(dR/dT)\times(\delta T/R)\right]^{2}\). Here \(\delta T\) is the temperature fluctuation in the measurement system, which has been measured to be \(<5\) mK in our case. The evaluated value of this relative variance at \(T_{BKT}\) is \(\sim 10^{-7}\) (see Supplementary Information S5). This value is at least two orders of magnitude smaller than the measured \(\mathcal{R}\) near \(T_{BKT}\), thus ruling out any significant contribution of temperature fluctuations to the measured noise.
In Fig. 4 we present a phenomenological explanation of the effect of percolation dynamics on the resistance fluctuations in a 2D superconductor in terms of a percolation network model of superconducting fluctuations [14]. The squares below the plot show the microscopic status of the system schematically in terms of a superconducting-normal network in different \(T\)-regimes.
Region-I is a purely superconducting phase. On approaching \(T_{BKT}\) from below, small patches of dissipative (metallic) domains begin nucleating in the superconducting background (region-II). With increasing temperature, fluctuations in the superconducting order parameter result in the formation of a dynamic network of interconnected superconducting and normal (dissipative) regions [36]. This effect is especially severe in the case of 2D superconductors. The enhanced noise in this \(T\)-regime has two major components - (i) resistance fluctuations in dissipative regions; (ii) fluctuations in the number/size of the superconducting clusters [26; 27]. Beyond \(T=T_{BKT}\), the system crosses into region III, where the proportions of the superconducting and non-superconducting domains become almost equal. At this temperature (which we denote as \(T_{max}\)), the resistance of the device is nearly half of the normal state resistance, and the resistance fluctuation is at its maximum. The other boundary of region III comes at \(T_{BCS}\), which is at \(\sim 6.5\) K for the system. For \(T>T_{BCS}\), the fraction of the superconducting phase decreases sharply with increasing \(T\) till the entire system becomes dissipative. Consequent to this decrease in electronic phase segregation of
the system, the variance of resistance fluctuations decreases as the system approaches the metallic phase.
We turn now to the nature of the correlations between the fluctuating entities at different \(T\)-ranges. In 2D superconductors undergoing a BKT transition, the XY model predicts the fluctuations to be non-Gaussian around \(T_{BKT}\)[28]. These non-Gaussian resistance fluctuations have emerged as a unique signature of BKT physics and have successfully been used to discriminate between 2D and 3D superconductors [14; 17]. We quantify the non-Gaussianity of the resistance fluctuations through their 'Second spectrum,' which is the four-point correlation function of \(\delta R\), calculated over a frequency octave \(\left(f_{l},f_{h}\right)\). Being extremely sensitive to the presence of non-Gaussian components (NGC), this parameter is a highly effective tool to probe correlations in a system [29; 30]. To estimate the second spectrum, repeated measurements of the PSD, \(S_{R}\left(f\right)\), are made over a selected frequency range \(\left(f_{l},f_{h}\right)\). The power spectrum of this series over a frequency octave gives the second spectrum [29; 30]:
\[\begin{split} S_{R}^{f_{1}}\left(f_{2}\right)=\int_{0}^{\infty}\left\langle\delta R^{2}\left(t\right)\delta R^{2}\left(t+\tau\right)\right\rangle\cos\left(2\pi f_{2}\tau\right)d\tau,\\ \sigma^{\left(2\right)}=\int_{0}^{f_{h}-f_{l}}S_{R}^{f_{1}}\left(f_{2}\right)df_{2}\bigg{/}\bigg{[}\int_{f_{l}}^{f_{h}}S_{R}\left(f\right)df\bigg{]}^{2}\end{split} \tag{2}\]
Here \(f_{1}\) is the center frequency of the chosen octave and \(f_{2}\) is the spectral frequency. \(\sigma^{\left(2\right)}\) is the normalized second spectrum; it equals 3 for Gaussian fluctuations.
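A rough Python sketch of how \(\sigma^{(2)}\) can be estimated from a resistance-fluctuation record is given below. It splits the record into consecutive segments, integrates the periodogram of each segment over the chosen octave to obtain a time series of band powers, and then takes the power spectrum of that series, following Eq. (2); the segment length, the use of periodograms, and the function name are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy import signal

def second_spectrum(x, fs, f_l, f_h, nperseg=256):
    """Sketch of Eq. (2): normalized second spectrum sigma^(2) over the octave (f_l, f_h).
    Per the text, sigma^(2) ~ 3 signals purely Gaussian fluctuations."""
    nseg = len(x) // nperseg
    powers = []
    for k in range(nseg):
        seg = x[k * nperseg:(k + 1) * nperseg]
        f, S = signal.periodogram(seg, fs=fs)
        band = (f >= f_l) & (f <= f_h)
        powers.append(np.trapz(S[band], f[band]))   # band power of this segment
    powers = np.asarray(powers)
    fs2 = fs / nperseg                               # sampling rate of the band-power series
    f2, S2 = signal.periodogram(powers, fs=fs2)
    band2 = (f2 > 0) & (f2 <= f_h - f_l)
    num = np.trapz(S2[band2], f2[band2])             # numerator of Eq. (2)
    den = np.mean(powers) ** 2                       # [integrated first spectrum]^2
    return num / den
```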
Fig. 5 shows the plot of measured \(\sigma^{\left(2\right)}\) as a function of \(T/T_{BKT}\) for the heterostructure region. As can be observed, while decreasing the temperature, \(\sigma^{\left(2\right)}\) increases from a baseline value of \(\sim 3\) at higher \(T\) to \(\sim 30\) near \(T_{BKT}\) before decaying again to the Gaussian base value for \(T<T_{BKT}\). This enhancement of \(\sigma^{\left(2\right)}\) in the narrow window of \(T\) in Region-III establishes clearly the appearance of non-Gaussian resistance fluctuations near \(T=T_{BKT}\). In contrast, for the pristine NbSe\({}_{2}\) region \(\sigma^{\left(2\right)}\) remains at the baseline value of \(\sim 3\) (black solid circle in Fig. 5) throughout the temperature range around the transition indicating a Gaussian distribution of fluctuations as expected for a 3D superconductor.
Non-Gaussian fluctuations in superconductors can have different origins - (1) long-range correlation among the vortices, as has been observed in previous studies [17], (2) the dominance of percolation kinetics around the superconducting transition seen in inhomogeneous superconductors [14], and (3) dynamic current redistribution, which appears as a consequence of substantial transport inhomogeneity and large local resistivity fluctuations that translate to the necessary condition \(\delta R/R\gg 1\)[30]. The third cause can be immediately ruled out by noting that in our system \(\delta R/R\ll 1\). To discriminate between the remaining two scenarios, note that a comparison of the \(T\) dependence of \(\mathcal{R}\) and \(\sigma^{(2)}\) (Fig. 5) reveals that significant fluctuations in the resistance extend beyond the onset of the normal state (i.e. \(R/R_{N}=1\)) at \(T_{BCS}\sim 6.5\) K. In the low-temperature limit, they extend to \(T<T_{BKT}\). On the other hand, the deviation from the Gaussian value in \(\sigma^{(2)}\) stays confined in region III within \(T_{BKT}<T<T_{BCS}\). This indicates that the increase of \(\sigma^{(2)}\), which marks the existence of non-Gaussian components in the fluctuations, is a consequence of an independent process, one that is unlikely to be the percolation kinetics that dominates the spectrum of resistance fluctuations. This strongly suggests that the first scenario of correlated vortices is at play in inducing the non-Gaussian fluctuations in the system, as has been reported earlier for clean, homogeneous superconductors [17]. Further theoretical and experimental studies are essential to establish unequivocally if this is the case.
## IV Conclusion
In summary, we have studied the carrier dynamics of SL-MoS\({}_{2}\)/NbSe\({}_{2}\) heterostructures near the superconducting transition by probing the low-frequency conductance fluctuations of the system. The first spectrum (resistance noise) shows signatures of the percolative nature of the superconducting transition. We provide a phenomenological explanation of the different phase-space regions around the transition temperature in terms of a percolative microstructure picture and correlate the resulting fluctuations with it. Furthermore, we establish the presence of strong correlations in the system around \(T_{BKT}\), arising most probably from the interacting vortices, and thus establish that the superconducting transition in the system is of the universal BKT type.
Acknowledgments: The authors acknowledge device fabrication facilities in NNFC, CeNSE, IISc. A.B. acknowledges funding from SERB (No. HRR/2015/000017) and DST (No. DST/SJF/PSA01/2016-17)
Figure 1: (a) A schematic of the device structure. (b) False colour Differential Interference Contrast image of the device with different colors defining different layers of the heterostructure – SL-MoS\({}_{2}\)(green) and pristine NbSe\({}_{2}\) (orange) and overlap region (light-red). (c) Temperature dependence of the four-probe resistance of the heterostructure.
Figure 2: (a) Time series of resistance fluctuations of the heterostructure region at a few representative temperatures. (b) Plots of \(S_{R}\left(f\right)\) as function of frequency at the same values of \(T\) as in (a).
Figure 3: (a) Plot of exponent \(\alpha\) versus \(T/T_{BKT}\) for heterostructure. (b) Plots of the relative variance of resistance fluctuations \(\mathcal{R}\) for heterostructure (solid green circles) and for the pristine 3D NbSe\({}_{2}\) region (open orange triangles) as function of \(T/T_{critical}\). Here \(T/T_{critical}\) is \(T/T_{BKT}\) for the heterostructure and \(T/T_{c}^{0}\) for the pristine NbSe\({}_{2}\). On the right-axis are plotted the normalized resistance \(R/R_{N}\) for the heterostructure (red line).
Figure 4: (a) Plots of the variance of noise \(\left<\delta R^{2}\right>\) (left-axis, yellow filled circles) and of the normalized resistance (right-axis, black solid line) as a function of \(T/T_{BKT}\) for the heterostructure region. The color gradient indicates the transition from superconducting (blue) to normal state (red). (b) Schematics representing the microscopic status of the system in each electronic phase (for details see text). The color bar gives the value of \(R/R_{N}\) – blue represents the zero resistance i.e. superconducting state and red represents the normal state.
Figure 5: Plots of the variance of resistance fluctuations \(\left<\delta R^{2}\right>\) (filled yellow circles, left axis) and the normalized second spectrum \(\sigma^{(2)}\) for heterostructure (filled blue circles, right axis) and for pristine 3D NbSe\({}_{2}\) (filled black circles, right axis) as a function of \(T/T_{critical}\). Here \(T/T_{critical}\) is \(T/T_{BKT}\) for the heterostructure and \(T/T_{c}^{0}\) for the pristine NbSe\({}_{2}\). The color gradient has the same connotation as in Fig. 4. (for details see text).
## S1 Evaluation of BKT transition temperature
For the initial characterization of the superconducting properties of the heterostructure, electrical transport measurements were done using a DC current source and a nanovoltmeter in a four-probe configuration. As reported in our previous work, the superconductivity in the system is of 2D nature. We thus expect the observed superconducting transition to be of the Berezinskii-Kosterlitz-Thouless (BKT) type. For a 2D superconductor there exists a characteristic temperature, \(T_{BKT}\), below which a finite electric current can unbind the vortex-antivortex pairs in the system, giving rise to dissipation, which is reflected in the current-voltage characteristic as a non-linear behavior of the form \(V\propto I^{\gamma(T)}\) [31, 32]. \(\gamma\) is a temperature-dependent exponent that takes the value 3 at \(T=T_{BKT}\) and eventually goes to 1 in the normal ohmic state. Fig. S1(a) shows the zero field DC non-linear current-voltage characteristics. The values of the exponent \(\gamma\), evaluated through linear fitting of each curve within the marked region, are shown in Fig. S1(b). From this plot, we evaluate \(T_{BKT}\) to be 6.13 K. We also evaluated \(T_{BCS}\) (defined as the temperature where the onset of the transition occurs or, in other words, where the IV becomes linear, i.e., \(\gamma=1\)) to be 6.5 K.
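The exponent extraction described here is a straight-line fit on the log-log IV data. A minimal sketch follows; the function name and the assumption that the caller restricts the data to the fitting window are illustrative.

```python
import numpy as np

def iv_exponent(I, V):
    """Sketch: exponent gamma in V ~ I^gamma from a log-log fit of one IV curve,
    restricted by the caller to the marked fitting region. gamma = 3 marks T_BKT
    and gamma -> 1 in the normal, ohmic state."""
    gamma, _ = np.polyfit(np.log(I), np.log(V), 1)
    return gamma
```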
The BKT transition temperature can also be obtained from the resistance vs temperature plot as, near \(T_{BKT}\), the resistance takes the form \(R=R_{0}\exp[-b_{R}/(T-T_{BKT})^{1/2}]\) (where \(b_{R}\) gives the vortex-antivortex interaction strength) [31; 33; 34]. To evaluate \(T_{BKT}\), we reduced the formula to the form \(\left(d\ln R/dT\right)^{-2/3}=\left(2/b_{R}\right)^{2/3}\left(T-T_{BKT}\right)\). As shown in Fig. S1(c), the intercept of the plot of \(\left(d\ln R/dT\right)^{-2/3}\) vs \(T\) gives \(T_{BKT}\) to be 6.14 K.
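A compact sketch of this analysis is given below, assuming `T` and `R` are arrays restricted to the fitting window just above the transition; the function name is an assumption.

```python
import numpy as np

def t_bkt_from_RT(T, R):
    """Sketch of the analysis above: near T_BKT, R = R0*exp[-b_R/(T - T_BKT)^(1/2)],
    so (dlnR/dT)^(-2/3) is linear in T and vanishes at T = T_BKT."""
    dlnR_dT = np.gradient(np.log(R), T)
    y = dlnR_dT ** (-2.0 / 3.0)
    slope, intercept = np.polyfit(T, y, 1)
    return -intercept / slope            # temperature where (dlnR/dT)^(-2/3) -> 0
```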
## S2 Details of noise measurement technique
We investigated the fluctuation statistics of the system around the superconducting transition through an analysis of the zero-field, temperature-dependent resistance fluctuations acquired using an ac technique that allows us to measure the fluctuations from the system and the background simultaneously [35]. Fig. S2 is a schematic of the measurement setup. The sample was current-biased using the sine-wave output of a lock-in amplifier (SR830). A resistor \(R_{series}\) in series with the sample controls the current through it. The value of the excitation current was always maintained below the critical current of the superconductor.
The voltage developed across the sample was detected using the dual-phase lock-in amplifier coupled with a preamplifier (SR552). The excitation frequency of the current was kept at the eye of the noise figure of the preamplifier to minimize the contribution of the amplifier noise to the measured background noise. The time constant was set to 30 ms with a filter roll-off of 24 dB/octave - this subsequently determines the upper cutoff frequency of the power spectral density (PSD). The output of the LIA was digitized at a sampling rate of 1024 points/s using a 16-bit analog-to-digital conversion card and transferred to the computer memory for further processing. The in-phase channel (X-channel) picks up the excess noise from the sample as well as the background, whereas the quadrature channel (Y-channel) picks up only the fluctuations from the background. At every temperature, the time series of the resistance fluctuations was acquired for a duration of 30 minutes (\(\sim 1.8\times 10^{6}\) data points). These were subsequently decimated by a factor of 128 and digitally filtered to eliminate aliasing and related digital artifacts. These filtered time series were then used to calculate the power spectral density (PSD) over the specified frequency range. The PSD of the sample noise was finally obtained by subtracting the PSD of the Y-channel (background) fluctuations from that of the X-channel.
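As a rough illustration of this pipeline, the sketch below estimates the background-corrected PSD from synthetic stand-in data (random numbers in place of the lock-in output). The effective sampling rate of 8 Hz follows from the 1024 points/s acquisition and the 128x decimation described above; the Welch segment length, window defaults, and variable names are assumptions, not the actual analysis code.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1024 / 128                              # effective sampling rate after decimation (8 Hz)
n = int(30 * 60 * fs)                        # a 30-minute record, as in the experiment
dR_x = rng.normal(0.0, 1e-3, n)              # stand-in for the in-phase (sample + background) channel
dR_y = rng.normal(0.0, 5e-4, n)              # stand-in for the quadrature (background only) channel

f, S_x = signal.welch(dR_x, fs=fs, nperseg=4096)
_, S_y = signal.welch(dR_y, fs=fs, nperseg=4096)
S_R = S_x - S_y                              # background-corrected PSD S_R(f)
band = (f >= 4e-3) & (f <= 4.0)              # keep the reported 4 mHz - 4 Hz window
f_band, S_R_band = f[band], S_R[band]
```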
## S3 Evaluation of the exponent \(\alpha\) from \(S_{R}\left(f\right)\)
As mentioned in the main manuscript, the power spectral density \(S_{R}\left(f\right)\) has a frequency dependence \(S_{R}\left(f\right)\propto 1/f^{\alpha}\). To evaluate the exponent \(\alpha\), we plotted \(S_{R}\left(f\right)\) as a function of frequency \(f\) on a log-log scale, as shown in Fig. S3. The slope of these plots gives the value of \(\alpha\). As can be seen, the slope, i.e., \(\alpha\), is \(\sim 1\) at 8 K, where the system is in the normal state, whereas at 6.2 K, which is close to \(T_{BKT}\), the value becomes \(\sim 2.4\).
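A minimal sketch of this slope extraction is given below; the 0.5 Hz cutoff follows the main text, while the fitting routine and function name are illustrative choices.

```python
import numpy as np

def psd_exponent(f, S_R, f_max=0.5):
    """Sketch: fit S_R(f) ~ 1/f^alpha by a straight line in log-log space for
    f < f_max and return alpha as minus the slope."""
    sel = (f > 0) & (f < f_max)
    slope, _ = np.polyfit(np.log10(f[sel]), np.log10(S_R[sel]), 1)
    return -slope
```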
## S4 Classical picture of percolation
For a system having a percolative nature, the spectral density of the relative resistance fluctuations at a given frequency, \(S_{R}\left(f\right)/R^{2}\), grows as a power law of the decreasing resistance as the system approaches superconductivity, with a form given by [36]
\[\frac{S_{R}\left(f\right)}{R^{2}}\ \propto R^{-l_{rs}}\] (S1)
where \(l_{rs}\) is the percolation exponent, which takes the value \(\sim 0.9\) in the classical picture. As can be seen in Fig. S4, the percolation exponent for the heterostructure comes out to be \(0.89\pm 0.03\), which matches quite well with the classical percolation picture.
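The exponent \(l_{rs}\) follows from a log-log fit of Eq. (S1); a minimal sketch, assuming the PSD values are taken at a fixed frequency across temperatures in the transition regime (function and variable names are illustrative):

```python
import numpy as np

def percolation_exponent(R, S_R_over_R2):
    """Sketch of the fit to Eq. (S1): S_R(f)/R^2 ~ R^(-l_rs); l_rs is minus the slope
    of the log-log plot of the normalized PSD against the resistance."""
    slope, _ = np.polyfit(np.log10(R), np.log10(S_R_over_R2), 1)
    return -slope
```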
## S5 Contribution of temperature fluctuations to the measured noise
As mentioned in the main manuscript, we have evaluated the relative contribution of the temperature fluctuation of the measurement system to the measured noise. The temperature stability of a system depends mainly on the PID value of the temperature controller used in the experiment. We have fixed this PID value in such a way that we were able to have a temperature fluctuation, \(\delta T<5\) mK at all temperatures at which noise measurements were done.
In Fig. S5(a), we show a plot of \(T-T_{setpoint}\) versus time over a period of 30 minutes. This is the typical time for a single noise run. Here \(T_{setpoint}\) is the target temperature value (in this case, 12 K), and \(T(t)\) is the instantaneous value of temperature. From Fig. S5(a), the maximum fluctuation is about 3 mK, indicating that taking 5 mK as \(\delta T\) in our calculation is a safe choice. Fig. S5(b) shows the plots of the measured relative variance (olive solid circles) and that estimated from temperature fluctuations. One can see that near \(T_{BKT}\), the value of the relative variance of resistance fluctuations estimated from the temperature fluctuations is almost two orders of magnitude smaller than the measured relative variance of resistance fluctuations, \(\mathcal{R}\), of the heterojunction. This establishes that temperature fluctuations play a negligibly small role in the measured noise.
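A short sketch of this estimate is given below, assuming `T` and `R` hold the measured resistance-temperature curve and \(\delta T=5\) mK; the function name is an assumption.

```python
import numpy as np

def thermal_noise_estimate(T, R, delta_T=5e-3):
    """Sketch of the estimate used above: the relative variance expected from pure
    temperature fluctuations, [(dR/dT) * (delta_T / R)]^2, along the measured R(T) curve."""
    dR_dT = np.gradient(R, T)
    return (dR_dT * delta_T / R) ** 2
```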
## S6 Noise data from another device
Fig. S6 shows the relative variance of resistance fluctuations of another device, D2, having a structure identical to that of device D1, whose data were presented in the main text. As is evident from the plot, the data from D2 are very similar to those from D1 - they show a percolative transition near \(T_{BKT}\). Similar to device D1, for D2 we also observe an order of magnitude higher value of the relative variance for the heterostructure region in comparison to the pristine NbSe\({}_{2}\) section of the device near \(T_{BKT}\).
Fig. S7 shows the variance of the resistance fluctuations (olive solid circle, left axis) along with the normalized resistance, \(R/R_{N}\) (red line, right axis) of the heterostructure region measured for device D2 as a function of \(T/T_{BKT}\). As can be seen, the increased fluctuation extends beyond \(T_{BCS}\) into the normal state just as for D1 in the main text.
The deviation of the normalized second spectrum \(\sigma^{(2)}\) from the baseline value of 3 in Fig. S8 is confined within the region bounded by \(T_{BKT}\) and \(T_{BCS}\), suggesting that, as for device D1 in the main text, the non-Gaussian nature for device D2 also arises from the long-range correlations between the vortex-antivortex pairs around the transition. Moreover, as expected, we observed \(\sigma^{(2)}\) to be \(\sim 3\) for the pristine NbSe\({}_{2}\) region of device D2 throughout the temperature range, indicating the Gaussian nature of the fluctuations. This similarity in the evaluated results for the two different devices with similar heterostructures thus proves that the main observed phenomena are inherent to the system and not device specific.
Figure S7. Plots of the variance of noise \(\left<\delta R^{2}\right>\) (left-axis, olive solid circles) and of the normalized Resistance (right-axis, red solid line) as a function of \(T/T_{BKT}\) for the heterostructure region of device D2.
Figure S8. Plots of the variance of resistance fluctuations \(\left<\delta R^{2}\right>\) (solid olive circles, left axis) and the normalized second spectrum \(\sigma^{(2)}\) for heterostructure (solid red circles, right axis) and for pristine 3D NbSe\({}_{2}\) (solid blue circles, right axis) as a function of \(T/T_{critical}\) for device D2. Here \(T/T_{critical}\) is \(T/T_{BKT}\) for the heterostructure and \(T/T_{c}^{0}\) for the pristine NbSe\({}_{2}\)
|